Environmental Encyclopedia Third Edition Volume 1 A-M
Marci Bortman, Peter Brimblecombe, Mary Ann Cunningham, William P. Cunningham, and William Freedman, Editors
Environmental Encyclopedia Third Edition Volume 2 N-Z Historical Chronology U.S. Environmental Legislation Organizations General Index Marci Bortman, Peter Brimblecombe, Mary Ann Cunningham, William P. Cunningham, and William Freedman, Editors
Disclaimer: Some images in the original version of this book are not available for inclusion in the eBook.
Environmental Encyclopedia 3 Marci Bortman, Peter Brimblecombe, William Freedman, Mary Ann Cunningham, William P. Cunningham
Project Coordinator Jacqueline L. Longe
Editorial Systems Support Andrea Lopeman
Editorial Deirdre S. Blanchfield, Madeline Harris, Chris Jeryan, Kate Kretschmann, Mark Springer, Ryan Thomason
Permissions Shalice Shah-Caldwell
©2003 by Gale. Gale is an imprint of the Gale Group, Inc., a division of Thomson Learning, Inc.
For permission to use material from this product, submit your request via the Web at http://www.gale-edit.com/permissions, or you may download our Permissions Request form and submit your request by fax or mail to:
Permissions Department The Gale Group, Inc. 27500 Drake Road Farmington Hills, MI 48331-3535 Permissions hotline: 248-699-8006 or 800-877-4253, ext. 8006 Fax: 248-699-8074 or 800-762-4058 Since this page cannot legibly accommodate all copyright notices, the acknowledgments constitute an extension of the copyright notice.
Gale and Design™ and Thomson Learning™ are trademarks used herein under license. For more information, contact The Gale Group, Inc. 27500 Drake Road Farmington Hills, MI 48331-3535 Or visit our Internet site at http://www.gale.com ALL RIGHTS RESERVED No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, Web distribution, or information storage retrieval systems—without the written permission of the publisher.
Imaging and Multimedia Robert Duncan, Mary Grimes, Lezlie Light, Dan Newell, David Oblender, Christine O’Bryan, Kelly A. Quin
ISBN 0-7876-5486-8 (set), ISBN 0-7876-5487-6 (Vol. 1), ISBN 0-7876-5488-4 (Vol. 2), ISSN 1072-5083 Printed in the United States of America 10 9 8 7 6 5 4 3 2 1
Product Design Michelle DiMercurio, Tracey Rowens, Jennifer Wahi
Manufacturing Evi Seoud, Rita Wimberley
While every effort has been made to ensure the reliability of the information presented in this publication, The Gale Group, Inc. does not guarantee the accuracy of the data contained herein. The Gale Group, Inc. accepts no payment for listing, and inclusion in the publication of any organization, agency, institution, publication, service, or individual does not imply endorsement of the editors or publisher. Errors brought to the attention of the publisher and verified to the satisfaction of the publisher will be corrected in future editions.
CONTENTS
ADVISORY BOARD ............................................ xxi CONTRIBUTORS .............................................. xxiii HOW TO USE THIS BOOK ............................ xxvii INTRODUCTION .............................................. xxix VOLUME 1 (A-M): .............................................1 A Abbey, Edward Absorption Acclimation Accounting for nature Accuracy Acetone Acid and base Acid deposition Acid mine drainage Acid rain Acidification Activated sludge Acute effects Adams, Ansel Adaptation Adaptive management Adirondack Mountains Adsorption Aeration Aerobic Aerobic sludge digestion Aerobic/anaerobic systems Aerosol Aflatoxin African Wildlife Foundation Africanized bees Agency for Toxic Substances and Disease Registry Agent Orange Agglomeration Agricultural chemicals Agricultural environmental management
Agricultural pollution Agricultural Research Service Agricultural revolution Agricultural Stabilization and Conservation Service Agroecology Agroforestry AIDS Air and Waste Management Association Air pollution Air pollution control Air pollution index Air quality Air quality control region Air quality criteria Airshed Alar Alaska Highway Alaska National Interest Lands Conservation Act (1980) Albedo Algal bloom Algicide Allelopathy Allergen Alligator, American Alpha particle Alternative energy sources Aluminum Amazon basin Ambient air Amenity value American box turtle American Cetacean Society American Committee for International Conservation American Farmland Trust American Forests American Indian Environmental Office American Oceans Campaign American Wildlands Ames test Amoco Cadiz Amory, Cleveland
Anaerobic Anaerobic digestion Anemia Animal cancer tests Animal Legal Defense Fund Animal rights Animal waste Animal Welfare Institute Antarctic Treaty (1961) Antarctica Antarctica Project Anthrax Anthropogenic Antibiotic resistance Aquaculture Aquarium trade Aquatic chemistry Aquatic microbiology Aquatic toxicology Aquatic weed control Aquifer Aquifer depletion Aquifer restoration Arable land Aral Sea Arco, Idaho Arctic Council Arctic haze Arctic National Wildlife Refuge Arid Arid landscaping Army Corps of Engineers Arrhenius, Svante Arsenic Arsenic-treated lumber Artesian well Asbestos Asbestos removal Asbestosis Ashio, Japan Asian longhorn beetle Asian (Pacific) shore crab Asiatic black bear Assimilative capacity Francis of Assisi, St. Asthma Aswan High Dam Atmosphere Atmospheric (air) pollutants Atmospheric deposition Atmospheric inversion Atomic Energy Commission Atrazine Attainment area Audubon, John James
Australia Autecology Automobile Automobile emissions Autotroph Avalanche
B Bacillus thuringiensis Background radiation Bacon, Sir Francis Baghouse Balance of nature Bald eagle Barrier island Basel Convention Bass, Rick Bats Battery recycling Bay of Fundy Beach renourishment Beattie, Mollie Bellwether species Below Regulatory Concern Bennett, Hugh Hammond Benzene Benzo(a)pyrene Berry, Wendell Best available control technology Best management practices Best Practical Technology Beta particle Beyond Pesticides Bhopal, India Bikini atoll Bioaccumulation Bioaerosols Bioassay Bioassessment Biochemical oxygen demand Biodegradable Biodiversity Biofilms Biofiltration Biofouling Biogeochemistry Biogeography Biohydrometallurgy Bioindicator Biological community Biological fertility Biological methylation Biological Resources Division Bioluminescence
Biomagnification Biomass Biomass fuel Biome Biophilia Bioregional Project Bioregionalism Bioremediation Biosequence Biosphere Biosphere reserve Biotechnology Bioterrorism Biotic community Biotoxins Bioventing BirdLife International Birth defects Bison Black lung disease Black-footed ferret Blackout/brownout Blow-out Blue Angel Blue revolution (fish farming) Blue-baby syndrome Bookchin, Murray Borlaug, Norman E. Boston Harbor clean up Botanical garden Boulding, Kenneth Boundary Waters Canoe Area Brackish Bromine Bronchitis Brower, David Ross Brown, Lester R. Brown pelican Brown tree snake Browner, Carol Brownfields Brundtland, Gro Harlem Btu Budyko, Mikhail I. Buffer Bulk density Burden of proof Bureau of Land Management Bureau of Oceans and International Environmental and Scientific Affairs (OES) Bureau of Reclamation Buried soil Burroughs, John Bush meat/market Bycatch
Bycatch reduction devices
C Cadmium Cairo conference Calcareous soil Caldicott, Helen Caldwell, Lynton Keith California condor Callicott, John Baird Canadian Forest Service Canadian Parks Service Canadian Wildlife Service Cancer Captive propagation and reintroduction Carbon Carbon cycle Carbon dioxide Carbon emissions trading Carbon monoxide Carbon offsets (CO2-emission offsets) Carbon tax Carcinogen Carrying capacity Carson, Rachel Cash crop Catalytic converter Catskill watershed protection plan Center for Environmental Philosophy Center for Respect of Life and Environment Center for Rural Affairs Center for Science in the Public Interest Centers for Disease Control and Prevention Cesium 137 Chain reaction Chaparral Chelate Chelyabinsk, Russia Chemical bond Chemical oxygen demand Chemical spills Chemicals Chemosynthesis Chernobyl Nuclear Power Station Chesapeake Bay Child survival revolution Chimpanzees Chipko Andolan movement Chlordane Chlorinated hydrocarbons Chlorination Chlorine Chlorine monoxide Chlorofluorocarbons
Cholera Cholinesterase inhibitor Chromatography Chronic effects Cigarette smoke Citizen science Citizens for a Better Environment Clay minerals Clay-hard pan Clayoquot Sound Clean Air Act (1963, 1970, 1990) Clean coal technology Clean Water Act (1972, 1977, 1987) Clear-cutting Clements, Frederic E. Climate Climax (ecological) Clod Cloning Cloud chemistry Club of Rome C:N ratio Coal Coal bed methane Coal gasification Coal washing Coase theorem Coastal Society, The Coastal Zone Management Act (1972) Co-composting Coevolution Cogeneration Cold fusion Coliform bacteria Colorado River Combined Sewer Overflows Combustion Cometabolism Commensalism Commercial fishing Commission for Environmental Cooperation Commoner, Barry Communicable diseases Community ecology Compaction Comparative risk Competition Competitive exclusion Composting Comprehensive Environmental Response, Compensation, and Liability Act Computer disposal Condensation nuclei Congo River and basin Coniferous forest
Conservation Conservation biology Conservation easements Conservation International Conservation Reserve Program (CRP) Conservation tillage Consultative Group on International Agricultural Research Container deposit legislation Contaminated soil Contour plowing Convention on International Trade in Endangered Species of Wild Fauna and Flora (1975) Convention on Long-Range Transboundary Air Pollution (1979) Convention on the Conservation of Migratory Species of Wild Animals (1979) Convention on the Law of the Sea (1982) Convention on the Prevention of Marine Pollution by Dumping of Waste and Other Matter (1972) Convention on Wetlands of International Importance (1971) Conventional pollutant Copper Coprecipitation Coral bleaching Coral reef Corporate Average Fuel Economy standards Corrosion and material degradation Cost-benefit analysis Costle, Douglas M. Council on Environmental Quality Cousteau, Jacques-Yves Cousteau Society, The Coyote Criteria pollutant Critical habitat Crocodiles Cronon, William Cross-Florida Barge Canal Crutzen, Paul Cryptosporidium Cubatão, Brazil Cultural eutrophication Cuyahoga River Cyclone collector
D Dam removal Dams (environmental effects) Darling, Jay Norwood “Ding” Darwin, Charles Robert Dead zones Debt for nature swap Deciduous forest Decline spiral
Decomposers Decomposition Deep ecology Deep-well injection Defenders of Wildlife Defoliation Deforestation Delaney Clause Demographic transition Denitrification Deoxyribose nucleic acid Desalinization Desert Desert tortoise Desertification Design for disassembly Detergents Detoxification Detritivores Detritus Dew point Diazinon Dichlorodiphenyl-trichloroethane Dieback Die-off Dillard, Annie Dioxin Discharge Disposable diapers Dissolved oxygen Dissolved solids Dodo Dolphins Dominance Dose response Double-crested cormorants Douglas, Marjory Stoneman Drainage Dredging Drift nets Drinking-water supply Drip irrigation Drought Dry alkali injection Dry cask storage Dry cleaning Dry deposition Dryland farming Dubos, René Ducks Unlimited Ducktown, Tennessee Dunes and dune erosion Dust Bowl
E Earth Charter Earth Day Earth First! Earth Island Institute Earth Liberation Front Earth Pledge Foundation Earthquake Earthwatch Eastern European pollution Ebola Eco Mark Ecocide Ecofeminism Ecojustice Ecological consumers Ecological economics Ecological integrity Ecological productivity Ecological risk assessment Ecological Society of America Ecology EcoNet Economic growth and the environment Ecosophy Ecosystem Ecosystem health Ecosystem management Ecoterrorism Ecotone Ecotourism Ecotoxicology Ecotype Edaphic Edaphology Eelgrass Effluent Effluent tax EH Ehrlich, Paul El Niño Electric utilities Electromagnetic field Electron acceptor and donor Electrostatic precipitation Elemental analysis Elephants Elton, Charles Emergency Planning and Community Right-to-Know Act (1986) Emergent diseases (human) Emergent ecological diseases Emission Emission standards
Emphysema Endangered species Endangered Species Act (1973) Endemic species Endocrine disruptors Energy and the environment Energy conservation Energy efficiency Energy flow Energy path, hard vs. soft Energy policy Energy recovery Energy Reorganization Act (1973) Energy Research and Development Administration Energy taxes Enteric bacteria Environment Environment Canada Environmental accounting Environmental aesthetics Environmental auditing Environmental chemistry Environmental Defense Environmental degradation Environmental design Environmental dispute resolution Environmental economics Environmental education Environmental enforcement Environmental engineering Environmental estrogens Environmental ethics Environmental health Environmental history Environmental impact assessment Environmental Impact Statement Environmental law Environmental Law Institute Environmental liability Environmental literacy and ecocriticism Environmental monitoring Environmental Monitoring and Assessment Program Environmental policy Environmental Protection Agency Environmental racism Environmental refugees Environmental resources Environmental science Environmental stress Environmental Stress Index Environmental Working Group Environmentalism Environmentally Preferable Purchasing Environmentally responsible investing Enzyme
Ephemeral species Epidemiology Erodible Erosion Escherichia coli Essential fish habitat Estuary Ethanol Ethnobotany Eurasian milfoil European Union Eutectic Evapotranspiration Everglades Evolution Exclusive Economic Zone Exotic species Experimental Lakes Area Exponential growth Externality Extinction Exxon Valdez
F Family planning Famine Fauna Fecundity Federal Energy Regulatory Commission Federal Insecticide, Fungicide and Rodenticide Act (1972) Federal Land Policy and Management Act (1976) Federal Power Commission Feedlot runoff Feedlots Fertilizer Fibrosis Field capacity Filters Filtration Fire ants First World Fish and Wildlife Service Fish kills Fisheries and Oceans Canada Floatable debris Flooding Floodplain Flora Florida panther Flotation Flu pandemic Flue gas Flue-gas scrubbing Fluidized bed combustion
Fluoridation Fly ash Flyway Food additives Food and Drug Administration Food chain/web Food irradiation Food policy Food waste Food-borne diseases Foot and mouth disease Forbes, Stephen A. Forel, François-Alphonse Foreman, Dave Forest and Rangeland Renewable Resources Planning Act (1974) Forest decline Forest management Forest Service Fossey, Dian Fossil fuels Fossil water Four Corners Fox hunting Free riders Freon Fresh water ecology Friends of the Earth Frogs Frontier economy Frost heaving Fuel cells Fuel switching Fugitive emissions Fumigation Fund for Animals Fungi Fungicide Furans Future generations
G Gaia hypothesis Galápagos Islands Galdikas, Birute Game animal Game preserves Gamma ray Gandhi, Mohandas Karamchand Garbage Garbage Project Garbology Gasohol Gasoline
Gasoline tax Gastropods Gene bank Gene pool Genetic engineering Genetic resistance (or genetic tolerance) Genetically engineered organism Genetically modified organism Geodegradable Geographic information systems Geological Survey Georges Bank Geosphere Geothermal energy Giant panda Giardia Gibbons Gibbs, Lois Gill nets Glaciation Gleason, Henry A. Glen Canyon Dam Global Environment Monitoring System Global Forum Global ReLeaf Global Tomorrow Coalition Goiter Golf courses Good wood Goodall, Jane Gore Jr., Albert Gorillas Grand Staircase-Escalante National Monument Grasslands Grazing on public lands Great Barrier Reef Great Lakes Great Lakes Water Quality Agreement (1978) Great Smoky Mountains Green advertising and marketing Green belt/greenway Green Cross Green packaging Green plans Green politics Green products Green Seal Green taxes Greenhouse effect Greenhouse gases Greenpeace Greens Grinevald, Jacques Grizzly bear Groundwater
Groundwater monitoring Groundwater pollution Growth curve Growth limiting factors Guano Guinea worm eradication Gulf War syndrome Gullied land Gypsy moth
H Haagen-Smit, Arie Jan Habitat Habitat conservation plans Habitat fragmentation Haeckel, Ernst Half-life Halons Hanford Nuclear Reservation Hardin, Garrett Hawaiian Islands Hayes, Denis Hazard Ranking System Hazardous material Hazardous Materials Transportation Act (1975) Hazardous Substances Act (1960) Hazardous waste Hazardous waste site remediation Hazardous waste siting Haze Heat (stress) index Heavy metals and heavy metal poisoning Heavy metals precipitation Heilbroner, Robert L. Hells Canyon Henderson, Hazel Herbicide Heritage Conservation and Recreation Service Hetch Hetchy Reservoir Heterotroph High-grading (mining, forestry) High-level radioactive waste High-solids reactor Hiroshima, Japan Holistic approach Homeostasis Homestead Act (1862) Horizon Horseshoe crabs Household waste Hubbard Brook Experimental Forest Hudson River Human ecology Humane Society of the United States
Humanism Human-powered vehicles Humus Hunting and trapping Hurricane Hutchinson, George E. Hybrid vehicles Hydrocarbons Hydrochlorofluorocarbons Hydrogen Hydrogeology Hydrologic cycle Hydrology Hydroponics Hydrothermal vents
I Ice age Ice age refugia Impervious material Improvement cutting Inbreeding Incineration Indicator organism Indigenous peoples Indonesian forest fires Indoor air quality Industrial waste treatment Infiltration INFORM INFOTERRA (U.N. Environment Programme) Injection well Inoculate Integrated pest management Intergenerational justice Intergovernmental Panel on Climate Change Internalizing costs International Atomic Energy Agency International Cleaner Production Cooperative International Convention for the Regulation of Whaling (1946) International Geosphere-Biosphere Programme International Institute for Sustainable Development International Joint Commission International Primate Protection League International Register of Potentially Toxic Chemicals International Society for Environmental Ethics International trade in toxic waste International Voluntary Standards International Wildlife Coalition Intrinsic value Introduced species Iodine 131 Ion
Ion exchange Ionizing radiation Iron minerals Irrigation Island biogeography ISO 14000: International Environmental Management Standards Isotope Itai-itai disease IUCN—The World Conservation Union Ivory-billed woodpecker Izaak Walton League
J Jackson, Wes James Bay hydropower project Japanese logging
K Kaiparowits Plateau Kennedy Jr., Robert Kepone Kesterson National Wildlife Refuge Ketones Keystone species Kirtland’s warbler Krakatoa Krill Krutch, Joseph Wood Kudzu Kwashiorkor Kyoto Protocol/Treaty
L La Niña La Paz Agreement Lagoon Lake Baikal Lake Erie Lake Tahoe Lake Washington Land ethic Land Institute Land reform Land stewardship Land Stewardship Project Land trusts Land use Landfill Landscape ecology Landslide Land-use control Latency
Lawn treatment LD50 Leaching Lead Lead management Lead shot Leafy spurge League of Conservation Voters Leakey, Louis Leakey, Mary Leakey, Richard E. Leaking underground storage tank Leopold, Aldo Less developed countries Leukemia Lichens Life cycle assessment Limits to Growth (1972) and Beyond the Limits (1992) Limnology Lindeman, Raymond L. Liquid metal fast breeder reactor Liquified natural gas Lithology Littoral zone Loading Logging Logistic growth Lomborg, Bjørn Lopez, Barry Los Angeles Basin Love Canal Lovelock, Sir James Ephraim Lovins, Amory B. Lowest Achievable Emission Rate Low-head hydropower Low-level radioactive waste Lyell, Charles Lysimeter
M MacArthur, Robert Mad cow disease Madagascar Magnetic separation Malaria Male contraceptives Man and the Biosphere Program Manatees Mangrove swamp Marasmus Mariculture Marine ecology and biodiversity Marine Mammal Protection Act (1972) Marine pollution
Marine protection areas Marine Protection, Research and Sanctuaries Act (1972) Marine provinces Marsh, George Perkins Marshall, Robert Mass burn Mass extinction Mass spectrometry Mass transit Material Safety Data Sheets Materials balance approach Maximum permissible concentration McHarg, Ian McKibben, Bill Measurement and sensing Medical waste Mediterranean fruit fly Mediterranean Sea Megawatt Mendes, Chico Mercury Metabolism Metals, as contaminants Meteorology Methane Methane digester Methanol Methyl tertiary butyl ether Methylation Methylmercury seed dressings Mexico City, Mexico Microbes (microorganisms) Microbial pathogens Microclimate Migration Milankovitch weather cycles Minamata disease Mine spoil waste Mineral Leasing Act (1920) Mining, undersea Mirex Mission to Planet Earth (NASA) Mixing zones Modeling (computer applications) Molina, Mario Monarch butterfly Monkey-wrenching Mono Lake Monoculture Monsoon Montreal Protocol on Substances That Deplete the Ozone Layer (1987) More developed country Mortality Mount Pinatubo
Mount St. Helens Muir, John Mulch Multiple chemical sensitivity Multiple Use-Sustained Yield Act (1960) Multi-species management Municipal solid waste Municipal solid waste composting Mutagen Mutation Mutualism Mycorrhiza Mycotoxin
VOLUME 2 (N-Z): ..........................................931 N Nader, Ralph Naess, Arne Nagasaki, Japan National Academy of Sciences National Air Toxics Information Clearinghouse National Ambient Air Quality Standard National Audubon Society National Emission Standards for Hazardous Air Pollutants National Environmental Policy Act (1969) National Estuary Program National forest National Forest Management Act (1976) National Institute for Occupational Safety and Health National Institute for the Environment National Institute for Urban Wildlife National Institute of Environmental Health Sciences National lakeshore National Mining and Minerals Act (1970) National Oceanic and Atmospheric Administration (NOAA) National park National Park Service National Parks and Conservation Association National pollution discharge elimination system National Priorities List National Recycling Coalition National Research Council National seashore National Wildlife Federation National wildlife refuge Native landscaping Natural gas Natural resources Natural Resources Defense Council Nature Nature Conservancy, The Nearing, Scott Nekton Neoplasm
Neotropical migrants Neritic zone Neurotoxin Neutron Nevada Test Site New Madrid, Missouri New Source Performance Standard New York Bight Niche Nickel Nitrates and nitrites Nitrification Nitrogen Nitrogen cycle Nitrogen fixation Nitrogen oxides Nitrogen waste Nitrous oxide Noise pollution Nonattainment area Noncriteria pollutant Nondegradable pollutant Nongame wildlife Nongovernmental organization Nonpoint source Nonrenewable resources Non-timber forest products Non-Western environmental ethics No-observable-adverse-effect-level North North American Association for Environmental Education North American Free Trade Agreement North American Water and Power Alliance Northern spotted owl Not In My Backyard Nuclear fission Nuclear fusion Nuclear power Nuclear Regulatory Commission Nuclear test ban Nuclear weapons Nuclear winter Nucleic acid Nutrient
O Oak Ridge, Tennessee Occupational Safety and Health Act (1970) Occupational Safety and Health Administration Ocean Conservancy, The Ocean dumping Ocean Dumping Ban Act (1988) Ocean farming Ocean outfalls
Ocean thermal energy conversion Octane rating Odén, Svante Odor control Odum, Eugene P. Office of Civilian Radioactive Waste Management Office of Management and Budget Office of Surface Mining Off-road vehicles Ogallala Aquifer Oil drilling Oil embargo Oil Pollution Act (1990) Oil shale Oil spills Old-growth forest Oligotrophic Olmsted Sr., Frederick Law Open marsh water management Open system Opportunistic organism Orangutan Order of magnitude Oregon silverspot butterfly Organic gardening and farming Organic waste Organization of Petroleum Exporting Countries Organochloride Organophosphate Orr, David W. Osborn, Henry Fairfield Osmosis Our Common Future (Brundtland Report) Overburden Overfishing Overgrazing Overhunting Oxidation reduction reactions Oxidizing agent Ozonation Ozone Ozone layer depletion
P Paleoecology/paleolimnology Parasites Pareto optimality (Maximum social welfare) Parrots and parakeets Particulate Partnership for Pollution Prevention Parts per billion Parts per million Parts per trillion Passenger pigeon
Passive solar design Passmore, John A. Pathogen Patrick, Ruth Peat soils Peatlands Pedology Pelagic zone Pentachlorophenol People for the Ethical Treatment of Animals Peptides Percolation Peregrine falcon Perfluorooctane sulfonate Permaculture Permafrost Permanent retrievable storage Permeable Peroxyacetyl nitrate Persian Gulf War Persistent compound Persistent organic pollutants Pest Pesticide Pesticide Action Network Pesticide residue Pet trade Peterson, Roger Tory Petrochemical Petroleum Pfiesteria pH Phosphates Phosphorus Phosphorus removal Photic zone Photochemical reaction Photochemical smog Photodegradable plastic Photoperiod Photosynthesis Photovoltaic cell Phthalates Phytoplankton Phytoremediation Phytotoxicity Pinchot, Gifford Placer mining Plague Plankton Plant pathology Plasma Plastics Plate tectonics Plow pan
Plume Plutonium Poaching Point source Poisoning Pollination Pollution Pollution control Pollution control costs and benefits Pollution credits Pollution Prevention Act (1990) Polunin, Nicholas Polybrominated biphenyls Polychlorinated biphenyls Polycyclic aromatic hydrocarbons Polycyclic organic compounds Polystyrene Polyvinyl chloride Population biology Population Council Population growth Population Institute Porter, Eliot Furness Positional goods Postmodernism and environmental ethics Powell, John Wesley Power plants Prairie Prairie dog Precision Precycling Predator control Predator-prey interactions Prescribed burning President’s Council on Sustainable Development Price-Anderson Act (1957) Primary pollutant Primary productivity (Gross and net) Primary standards Prince William Sound Priority pollutant Privatization movement Probability Project Eco-School Propellants Public Health Service Public interest group Public land Public Lands Council Public trust Puget Sound/Georgia Basin International Task Force Pulp and paper mills Purple loosestrife
Q Quammen, David
R Rabbits in Australia Rachel Carson Council Radiation exposure Radiation sickness Radioactive decay Radioactive fallout Radioactive pollution Radioactive waste Radioactive waste management Radioactivity Radiocarbon dating Radioisotope Radiological emergency response team Radionuclides Radiotracer Radon Rails-to-Trails Conservancy Rain forest Rain shadow Rainforest Action Network Rangelands Raprenox (nitrogen scrubbing) Rare species Rathje, William Recharge zone Reclamation Record of decision Recreation Recyclables Recycling Red tide Redwoods Refuse-derived fuels Regan, Tom [Thomas Howard] Regulatory review Rehabilitation Reilly, William K. Relict species Religion and the environment Remediation Renew America Renewable energy Renewable Natural Resources Foundation Reserve Mining Corporation Reservoir Residence time Resilience Resistance (inertia) Resource Conservation and Recovery Act
Resource recovery Resources for the Future Respiration Respiratory diseases Restoration ecology Retention time Reuse Rhinoceroses Ribonucleic acid Richards, Ellen Henrietta Swallow Right-to-Act legislation Right-to-know Riparian land Riparian rights Risk analysis Risk assessment (public health) Risk assessors River basins River blindness River dolphins Rocky Flats nuclear plant Rocky Mountain Arsenal Rocky Mountain Institute Rodale Institute Rolston, Holmes Ronsard, Pierre Roosevelt, Theodore Roszak, Theodore Rowland, Frank Sherwood Rubber Ruckelshaus, William Runoff
S Safe Drinking Water Act (1974) Sagebrush Rebellion Sahel St. Lawrence Seaway Sale, Kirkpatrick Saline soil Salinity Salinization Salinization of soils Salmon Salt, Henry S. Salt (road) Salt water intrusion Sand dune ecology Sanitary sewer overflows Sanitation Santa Barbara oil spill Saprophyte (decomposer) Savanna Savannah River site
Save the Whales Save-the-Redwoods League Scarcity Scavenger Schistosomiasis Schumacher, Ernst Friedrich Schweitzer, Albert Scientists’ Committee on Problems of the Environment Scientists’ Institute for Public Information Scotch broom Scrubbers Sea level change Sea otter Sea Shepherd Conservation Society Sea turtles Seabed disposal Seabrook Nuclear Reactor Seals and sea lions Sears, Paul B. Seattle, Noah Secchi disk Second World Secondary recovery technique Secondary standards Sediment Sedimentation Seed bank Seepage Selection cutting Sense of place Septic tank Serengeti National Park Seveso, Italy Sewage treatment Shade-grown coffee and cacao Shadow pricing Shanty towns Sharks Shepard, Paul Shifting cultivation Shoreline armoring Sick Building Syndrome Sierra Club Silt Siltation Silver Bay Singer, Peter Sinkholes Site index Skidding Slash Slash and burn agriculture Sludge Sludge treatment and disposal Slurry
Small Quantity Generator Smart growth Smelter Smith, Robert Angus Smog Smoke Snail darter Snow leopard Snyder, Gary Social ecology Socially responsible investing Society for Conservation Biology Society of American Foresters Sociobiology Soil Soil and Water Conservation Society Soil compaction Soil conservation Soil Conservation Service Soil consistency Soil eluviation Soil illuviation Soil liner Soil loss tolerance Soil organic matter Soil profile Soil survey Soil texture Solar constant cycle Solar detoxification Solar energy Solar Energy Research, Development and Demonstration Act (1974) Solid waste Solid waste incineration Solid waste landfilling Solid waste recycling and recovery Solid waste volume reduction Solidification of hazardous materials Sonic boom Sorption Source separation South Spaceship Earth Spawning aggregations Special use permit Species Speciesism Spoil Stability Stack emissions Stakeholder analysis Statistics Steady-state economy Stegner, Wallace
Stochastic change Storage and transport of hazardous material Storm King Mountain Storm runoff Storm sewer Strategic lawsuits to intimidate public advocates Strategic minerals Stratification Stratosphere Stream channelization Stringfellow Acid Pits Strip-farming Strip mining Strontium 90 Student Environmental Action Coalition Styrene Submerged aquatic vegetation Subsidence Subsoil Succession Sudbury, Ontario Sulfate particles Sulfur cycle Sulfur dioxide Superconductivity Superfund Amendments and Reauthorization Act (1986) Surface mining Surface Mining Control and Reclamation Act (1977) Survivorship Suspended solid Sustainable agriculture Sustainable architecture Sustainable biosphere Sustainable development Sustainable forestry Swimming advisories Swordfish Symbiosis Synergism Synthetic fuels Systemic
T Taiga Tailings Tailings pond Takings Tall stacks Talloires Declaration Tansley, Arthur G. Tar sands Target species Taylor Grazing Act (1934) Tellico Dam
Temperate rain forest Tennessee Valley Authority Teratogen Terracing Territorial sea Territoriality Tetrachloroethylene Tetraethyl lead The Global 2000 Report Thermal plume Thermal pollution Thermal stratification (water) Thermocline Thermodynamics, Laws of Thermoplastics Thermosetting polymers Third World Third World pollution Thomas, Lee M. Thoreau, Henry David Three Gorges Dam Three Mile Island Nuclear Reactor Threshold dose Tidal power Tigers Tilth Timberline Times Beach Tipping fee Tobacco Toilets Tolerance level Toluene Topography Topsoil Tornado and cyclone Torrey Canyon Toxaphene Toxic substance Toxic Substances Control Act (1976) Toxics Release Inventory (EPA) Toxics use reduction legislation Toxins Trace element/micronutrient Trade in pollution permits Tragedy of the commons Trail Smelter arbitration Train, Russell E. Trans-Alaska pipeline Trans-Amazonian highway Transboundary pollution Transfer station Transmission lines Transpiration Transportation
Tributyl tin Trihalomethanes Trophic level Tropical rain forest Tropopause Troposphere Tsunamis Tundra Turbidity Turnover time Turtle excluder device 2,4,5-T 2,4-D
U Ultraviolet radiation Uncertainty in science, statistics Union of Concerned Scientists United Nations Commission on Sustainable Development United Nations Conference on the Human Environment (1972) United Nations Division for Sustainable Development United Nations Earth Summit (1992) United Nations Environment Programme Upwellings Uranium Urban contamination Urban design and planning Urban ecology Urban heat island Urban runoff Urban sprawl U.S. Department of Agriculture U.S. Department of Energy U.S. Department of Health and Human Services U.S. Department of the Interior U.S. Public Interest Research Group Used Oil Recycling Utilitarianism
V Vadose zone Valdez Principles Vapor recovery system Vascular plant Vector (mosquito) control Vegan Vegetarianism Vernadsky, Vladimir Victims’ compensation Vinyl chloride Virus Visibility Vogt, William
Volatile organic compound Volcano Vollenweider, Richard
W War, environmental effects of Waste exchange Waste Isolation Pilot Plant Waste management Waste reduction Waste stream Wastewater Water allocation Water conservation Water diversion projects Water Environment Federation Water hyacinth Water pollution Water quality Water quality standards Water reclamation Water resources Water rights Water table Water table draw-down Water treatment Waterkeeper Alliance Waterlogging Watershed Watershed management Watt, James Gaius Wave power Weather modification Weathering Wells Werbach, Adam Wet scrubber Wetlands Whale strandings Whales Whaling White, Gilbert White Jr., Lynn Townsend Whooping crane Wild and Scenic Rivers Act (1968) Wild river Wilderness Wilderness Act (1964) Wilderness Society Wilderness Study Area Wildfire Wildlife Wildlife management Wildlife refuge
Wildlife rehabilitation Wilson, Edward Osborne Wind energy Windscale (Sellafield) plutonium reactor Winter range Wise use movement Wolman, Abel Wolves Woodwell, George M. World Bank World Conservation Strategy World Resources Institute World Trade Organization World Wildlife Fund Worldwatch Institute Wurster, Charles
X X ray Xenobiotic Xylene
Y Yard waste Yellowstone National Park Yokkaichi asthma Yosemite National Park Yucca Mountain
Z Zebra mussel Zebras Zero discharge Zero population growth Zero risk Zone of saturation Zoo Zooplankton
HISTORICAL CHRONOLOGY .........................1555 ENVIRONMENTAL LEGISLATION IN THE UNITED STATES ........................................1561 ORGANIZATIONS ...........................................1567 GENERAL INDEX ...........................................1591
ADVISORY BOARD
A number of recognized experts in the library and environmental communities provided invaluable assistance in the formulation of this encyclopedia. Our panel of advisors helped us shape this publication into its final form, and we would like to express our sincere appreciation to them:
Dean Abrahamson: Hubert H. Humphrey Institute of Public Affairs, University of Minnesota, Minneapolis, Minnesota Maria Jankowska: Library, University of Idaho, Moscow, Idaho Terry Link: Library, Michigan State University, East Lansing, Michigan
Holmes Rolston: Department of Philosophy, Colorado State University, Fort Collins, Colorado Frederick W. Stoss: Science and Engineering Library, State University of New York—Buffalo, Buffalo, New York Hubert J. Thompson: Conrad Sulzer Regional Library, Chicago, Illinois
CONTRIBUTORS
Margaret Alic, Ph.D.: Freelance Writer, Eastsound, Washington William G. Ambrose Jr., Ph.D.: Department of Biology, East Carolina University, Greenville, North Carolina James L. Anderson, Ph.D.: Soil Science Department, University of Minnesota, St. Paul, Minnesota Monica Anderson: Freelance Writer, Hoffman Estates, Illinois Bill Asenjo M.S., CRC: Science Writer, Iowa City, Iowa Terence Ball, Ph.D.: Department of Political Science, University of Minnesota, Minneapolis, Minnesota Brian R. Barthel, Ph.D.: Department of Health, Leisure and Sports, The University of West Florida, Pensacola, Florida Stuart Batterman, Ph.D.: School of Public Health, University of Michigan, Ann Arbor, Michigan Eugene C. Beckham, Ph.D.: Department of Mathematics and Science, Northwood Institute, Midland, Michigan Milovan S. Beljin, Ph.D.: Department of Civil Engineering, University of Cincinnati, Cincinnati, Ohio Heather Bienvenue: Freelance Writer, Fremont, California Lawrence J. Biskowski, Ph.D.: Department of Political Science, University of Georgia, Athens, Georgia E. K. Black: University of Alberta, Edmonton, Alberta, Canada Paul R. Bloom, Ph.D.: Soil Science Department, University of Minnesota, St. Paul, Minnesota Gregory D. Boardman, Ph.D.: Department of Civil Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia Marci L. Bortman, Ph.D.: The Nature Conservancy, Huntington, New York Pat Bounds: Freelance Writer, Peter Brimblecombe, Ph.D.: School of Environmental Sciences, University of East Anglia, Norwich, United Kingdom
Kenneth N. Brooks, Ph.D.: College of Natural Resources, University of Minnesota, St. Paul, Minnesota Peggy Browning: Freelance Writer, Marie Bundy: Freelance Writer, Port Republic, Maryland Ted T. Cable, Ph.D.: Department of Horticulture, Forestry and Recreation Resources, Kansas State University, Manhattan, Kansas John Cairns Jr., Ph.D.: University Center for Environmental and Hazardous Materials Studies, Virginia Polytechnic Institute and State University, Blacksburg, Virginia Liane Clorfene Casten: Freelance Journalist, Evanston, Illinois Ann S. Causey: Prescott College, Prescott, Arizona Ann N. Clarke: Eckenfelder Inc., Nashville, Tennessee David Clarke: Freelance Journalist, Bethesda, Maryland Sally Cole-Misch: Freelance Writer, Bloomfield Hills, Michigan Gloria Cooksey, C.N.E.: Freelance Writer, Sacramento, California Edward J. Cooney: Patterson Associates, Inc., Chicago, Illinois Terence H. Cooper, Ph.D.: Soil Science Department, University of Minnesota, St. Paul, Minnesota Mark Crawford: Freelance Writer, Toronto, Ontario, Canada Neil Cumberlidge, Ph.D.: Department of Biology, Northern Michigan University, Marquette, Michigan John Cunningham: Freelance Writer, St. Paul, Minnesota Mary Ann Cunningham, Ph.D.: Department of Geology and Geography, Vassar College, Poughkeepsie, New York William P. Cunningham, Ph.D.: Department of Genetics and Cell Biology, University of Minnesota, St. Paul, Minnesota Richard K. Dagger, Ph.D.: Department of Political Science, Arizona State University, Tempe, Arizona
Tish Davidson, A.M.: Freelance Writer, Fremont, California Stephanie Dionne: Freelance Journalist, Ann Arbor, Michigan Frank M. D’Itri, Ph.D.: Institute of Water Research, Michigan State University, East Lansing, Michigan Teresa C. Donkin: Freelance Writer, Minneapolis, Minnesota David A. Duffus, Ph.D.: Department of Geography, University of Victoria, Victoria, British Columbia, Canada Douglas Dupler, M.A.: Freelance Writer, Boulder, Colorado Cathy M. Falk: Freelance Writer, Portland, Oregon L. Fleming Fallon Jr., M.D., Dr.P.H.: Associate Professor, Public Health, Bowling Green State University, Bowling Green, Ohio George M. Fell: Freelance Writer, Inver Grove Heights, Minnesota Gordon R. Finch, Ph.D.: Department of Civil Engineering, University of Alberta, Edmonton, Alberta, Canada Paula Anne Ford-Martin, M.A.: Wordcrafts, Warwick, Rhode Island Janie Franz: Freelance Writer, Grand Forks, North Dakota Bill Freedman, Ph.D.: School for Resource and Environmental Studies, Dalhousie University, Halifax, Nova Scotia, Canada Rebecca J. Frey, Ph.D.: Writer, Editor, and Editorial Consultant, New Haven, Connecticut Cynthia Fridgen, Ph.D.: Department of Resource Development, Michigan State University, East Lansing, Michigan Andrea Gacki: Freelance Writer, Bay City, Michigan Brian Geraghty: Ford Motor Company, Dearborn, Michigan Robert B. Giorgis, Jr.: Air Resources Board, Sacramento, California Debra Glidden: Freelance American Indian Investigative Journalist, Syracuse, New York Eville Gorham, Ph.D.: Department of Ecology, Evolution and Behavior, University of Minnesota, St. Paul, Minnesota Darrin Gunkel: Freelance Writer, Seattle, Washington Katherine Hauswirth: Freelance Writer, Roanoke, Virginia Malcolm T. Hepworth, Ph.D.: Department of Civil and Mineral Engineering, University of Minnesota, Minneapolis, Minnesota Richard A. Jeryan: Ford Motor Company, Dearborn, Michigan
Barbara J. Kanninen, Ph.D.: Hubert H. Humphrey Institute of Public Affairs, University of Minnesota, Minneapolis, Minnesota Christopher McGrory Klyza, Ph.D.: Department of Political Science, Middlebury College, Middlebury, Vermont John Korstad, Ph.D.: Department of Natural Science, Oral Roberts University, Tulsa, Oklahoma Monique LaBerge, Ph.D.: Research Associate, Department of Biochemistry and Biophysics, University of Pennsylvania, Philadelphia, Pennsylvania Royce Lambert, Ph.D.: Soil Science Department, California Polytechnic State University, San Luis Obispo, California William E. Larson, Ph.D.: Soil Science Department, University of Minnesota, St. Paul, Minnesota Ellen E. Link: Freelance Writer, Laingsburg, Michigan Sarah Lloyd: Freelance Writer, Cambria, Wisconsin James P. Lodge Jr.: Consultant in Atmospheric Chemistry, Boulder, Colorado William S. Lynn, Ph.D.: Department of Geography, University of Minnesota, Minneapolis, Minnesota Alair MacLean: Environmental Editor, OMB Watch, Washington, DC Alfred A. Marcus, Ph.D.: Carlson School of Management, University of Minnesota, Minneapolis, Minnesota Gregory McCann: Freelance Writer, Freeland, Michigan Cathryn McCue: Freelance Journalist, Roanoke, Virginia Jennifer L. McGrath: Freelance Writer, South Bend, Indiana Robert G. McKinnell, Ph.D.: Department of Genetics and Cell Biology, University of Minnesota, St. Paul, Minnesota Mary McNulty: Freelance Writer, Illinois Nathan H. Meleen, Ph.D.: Engineering and Physics Department, Oral Roberts University, Tulsa, Oklahoma Liz Meszaros: Freelance Writer, Lakewood, Ohio Muthena Naseri: Moorpark College, Moorpark, California David E. Newton: Instructional Horizons, Inc., San Francisco, California B. R. Niederlehner, Ph.D.: University Center for Environmental and Hazardous Materials Studies, Virginia Polytechnic Institute and State University, Blacksburg, Virginia Robert D. Norris: Eckenfelder Inc., Nashville, Tennessee Teresa G. Norris, R.N.: Medical Writer, Ute Park, New Mexico Karen Oberhauser, Ph.D.: University of Minnesota, St. Paul, Minnesota Stephanie Ocko: Freelance Journalist, Brookline, Massachusetts
Kristin Palm: Freelance Writer, Royal Oak, Michigan James W. Patterson: Patterson Associates, Inc., Chicago, Illinois Paul Phifer, Ph.D.: Freelance Writer, Portland, Oregon Jeffrey L. Pintenich: Eckenfelder Inc., Nashville, Tennessee Douglas C. Pratt, Ph.D.: University of Minnesota: Department of Plant Biology, Scandia, Minnesota Jeremy Pratt: Institute for Human Ecology, Santa Rosa, California Klaus Puettman: University of Minnesota, St. Paul, Minnesota Stephen J. Randtke: Department of Civil Engineering, University of Kansas, Lawrence, Kansas Lewis G. Regenstein: Author and Environmental Writer, Atlanta, Georgia Linda Rehkopf: Freelance Writer, Marietta, Georgia Paul E. Renaud, Ph.D.: Department of Biology, East Carolina University, Greenville, North Carolina Marike Rijsberman: Freelance Writer, Chicago, Illinois L. Carol Ritchie: Environmental Journalist, Arlington, Virginia Linda M. Ross: Freelance Writer, Ferndale, Michigan Joan Schonbeck: Medical Writer, Nursing, Massachusetts Department of Mental Health, Marlborough, Massachusetts Mark W. Seeley: Department of Soil Science, University of Minnesota, St. Paul, Minnesota Kim Sharp, M.Ln.: Freelance Writer, Richmond, Texas James H. Shaw, Ph.D.: Department of Zoology, Oklahoma State University, Stillwater, Oklahoma Laurel Sheppard: Freelance Writer, Columbus, Ohio Judith Sims, M.S.: Utah Water Research Laboratory, Utah State University, Logan, Utah Genevieve Slomski, Ph.D.: Freelance Writer, New Britain, Connecticut Douglas Smith: Freelance Writer, Dorchester, Massachusetts
Lawrence H. Smith, Ph.D.: Department of Agronomy and Plant Genetics, University of Minnesota, St. Paul, Minnesota Jane E. Spear: Freelance Writer, Canton, Ohio Carol Steinfeld: Freelance Writer, Concord, Massachusetts Paulette L. Stenzel, Ph.D.: Eli Broad College of Business, Michigan State University, East Lansing, Michigan Les Stone: Freelance Writer, Ann Arbor, Michigan Max Strieb: Freelance Writer, Huntington, New York Amy Strumolo: Freelance Writer, Beverly Hills, Michigan Edward Sucoff, Ph.D.: Department of Forestry Resources, University of Minnesota, St. Paul, Minnesota Deborah L. Swackhammer, Ph.D.: School of Public Health, University of Minnesota, Minneapolis, Minnesota Liz Swain: Freelance Writer, San Diego, California Ronald D. Taskey, Ph.D.: Soil Science Department, California Polytechnic State University, San Luis Obispo, California Mary Jane Tenerelli, M.S.: Freelance Writer, East Northport, New York Usha Vedagiri: IT Corporation, Edison, New Jersey Donald A. Villeneuve, Ph.D.: Ventura College, Ventura, California Nikola Vrtis: Freelance Writer, Kentwood, Michigan Eugene R. Wahl: Freelance Writer, Coon Rapids, Minnesota Terry Watkins: Indianapolis, Indiana Ken R. Wells: Freelance Writer, Laguna Hills, California Roderick T. White Jr.: Freelance Writer, Atlanta, Georgia T. Anderson White, Ph.D.: University of Minnesota, St. Paul, Minnesota Kevin Wolf: Freelance Writer, Minneapolis, Minnesota Angela Woodward: Freelance Writer, Madison, Wisconsin Gerald L. Young, Ph.D.: Program in Environmental Science and Regional Planning, Washington State University, Pullman, Washington
HOW TO USE THIS BOOK
The third edition of Environmental Encyclopedia has been designed with ready reference in mind.
• Straight alphabetical arrangement of topics allows users to locate information quickly.
• Bold-faced terms within entries direct the reader to related articles.
• Contact information is given for each organization profiled in the book.
• Cross-references at the end of entries alert readers to related entries not specifically mentioned in the body of the text.
• The Resources sections direct readers to additional sources of information on a topic.
• Three appendices provide the reader with a chronology of environmental events, a summary of environmental legislation, and a succinct alphabetical list of environmental organizations.
• A comprehensive general index guides readers to all topics mentioned in the text.
INTRODUCTION
Welcome to the third edition of the Gale Environmental Encyclopedia! Those of us involved in the writing and production of this book hope you will find the material here interesting and useful. As you might imagine, choosing what to include and what to exclude from this collection has been challenging. Almost everything has some environmental significance, so our task has been to select a limited number of topics we think are of greatest importance in understanding our environment and our relation to it. Undoubtedly, we have neglected some topics that interest you and included some you may consider irrelevant, but we hope that overall you will find this new edition helpful and worthwhile.

The word environment is derived from the French environ, which means to “encircle” or “surround.” Thus, our environment can be defined as the physical, chemical, and biological world that envelops us, as well as the complex of social and cultural conditions affecting an individual or community. This broad definition includes both the natural world and the “built” or technological environment, as well as the cultural and social contexts that shape human lives. You will see that we have used this comprehensive meaning in choosing the articles and definitions contained in this volume.

Among the central concerns of environmental science are:
• how did the natural world on which we depend come to be as it is, and how does it work?
• what have we done and what are we now doing to our environment—both for good and ill?
• what can we do to ensure a sustainable future for ourselves, future generations, and the other species of organisms on which—although we may not be aware of it—our lives depend?

The articles in this volume attempt to answer those questions from a variety of different perspectives. Historically, environmentalism is rooted in natural history, a search for beauty and meaning in nature. Modern environmental science expands this concern, drawing on
almost every area of human knowledge including social sciences, humanities, and the physical sciences. Its strongest roots, however, are in ecology, the study of interrelationships among and between organisms and their physical or nonliving environment. A particular strength of the ecological approach is that it studies systems holistically; that is, it looks at interconnections that make the whole greater than the mere sum of its parts. You will find many of those interconnections reflected in this book. Although the entries are presented individually so that you can find topics easily, you will notice that many refer to other topics that, in turn, can lead you on through the book if you have time to follow their trail. This series of linkages reflects the multilevel associations in environmental issues. As our world becomes increasingly interrelated economically, socially, and technologically, we find ever more evidence that our global environment is also highly interconnected.

In 2002, the world population reached about 6.2 billion people, more than triple what it had been a century earlier. Although the rate of population growth is slowing—having dropped from 2.0% per year in 1970 to 1.2% in 2002—we are still adding about 200,000 people per day, or about 75 million per year. Demographers predict that the world population will reach 8 or 9 billion before stabilizing sometime around the middle of this century. Whether natural resources can support so many humans is a question of great concern. In preparation for the third global summit in South Africa, the United Nations released several reports in 2002 outlining the current state of our environment.

Perhaps the greatest environmental concern as we move into the twenty-first century is the growing evidence that human activities are causing global climate change. Burning of fossil fuels in power plants, vehicles, factories, and homes releases carbon dioxide into the atmosphere. Burning forests and crop residues, increasing cultivation of paddy rice, raising billions of ruminant animals, and other human activities also add to the rapidly growing concentrations of heat-trapping gases in the atmosphere. Global temperatures have begun
to rise, having increased by about 1°F (0.6°C) in the second half of the twentieth century. Meteorologists predict that over the next 50 years, the average world temperature is likely to increase somewhere between 2.7 and 11°F (1.5 and 6.1°C). That may not seem like a very large change, but the difference between current average temperatures and the last ice age, when glaciers covered much of North America, was only about 10°F (5°C).

Abundant evidence is already available that our climate is changing. The twentieth century was the warmest in the last 1,000 years; the 1990s were the warmest decade, and 2002 was the single warmest year of the past millennium. Glaciers are disappearing on every continent. More than half the world’s population depends on rivers fed by alpine glaciers for their drinking water. Loss of those glaciers could exacerbate water supply problems in areas where water is already scarce. The United Nations estimates that 1.1 billion people—one-sixth of the world population—now lack access to clean water. In 25 years, about two-thirds of all humans will live in water-stressed countries where supplies are inadequate to meet demand. Spring is now occurring about a week earlier and fall is coming about a week later over much of the northern hemisphere. This helps some species, but is changing migration patterns and home territories for others. In 2002, early melting of ice floes in Canada’s Gulf of St. Lawrence apparently drowned nearly all of the 200,000 to 300,000 harp seal pups normally born there. Lack of sea ice is also preventing polar bears from hunting seals. Environment Canada reports that polar bears around Hudson Bay are losing weight and decreasing in number because of poor hunting conditions. In 2002, a chunk of ice about the size of Rhode Island broke off the Larsen B ice shelf on the Antarctic Peninsula. As glacial ice melts, ocean levels are rising, threatening coastal ecosystems and cities around the world.

After global climate change, perhaps the next greatest environmental concern for most biologists is the worldwide loss of biological diversity. Taxonomists warn that one-fourth of the world’s species could face extinction in the next 30 years. Habitat destruction, pollution, introduction of exotic species, and excessive harvesting of commercially important species all contribute to species losses. Millions of species—most of which have never even been named by science, let alone examined for potential usefulness in medicine, agriculture, science, or industry—may disappear in the next century as a result of our actions. We know little about the biological roles of these organisms in their ecosystems, and their loss could result in an ecological tragedy.

Ecological economists have tried to put a price on the goods and services provided by natural ecosystems. Although many ecological processes aren’t traded in the marketplace,
we depend on the natural world to do many things for us like purifying water, cleansing air, and detoxifying our wastes. How much would it cost if we had to do all this ourselves? The estimated annual value of all ecological goods and services provided by nature is calculated to be worth at least $33 trillion, or about twice the annual GNPs of all national economies in the world. The most valuable ecosystems in terms of biological processes are wetlands and coastal estuaries because of their high level of biodiversity and their central role in many biogeochemical cycles.

Already there are signs that we are exhausting our supplies of fertile soil, clean water, energy, and biodiversity that are essential for life. Furthermore, pollutants released into the air and water, along with increasing amounts of toxic and hazardous wastes created by our industrial society, threaten to damage the ecological life support systems on which all organisms—including humans—depend. Even without additional population growth, we may need to drastically rethink our patterns of production and disposal of materials if we are to maintain a habitable environment for ourselves and our descendants.

An important lesson to be learned from many environmental crises is that solving one problem often creates another. Chlorofluorocarbons, for instance, were once lauded as a wonderful discovery because they replaced toxic or explosive chemicals then in use as refrigerants and solvents. No one anticipated that CFCs might damage stratospheric ozone that protects us from dangerous ultraviolet radiation. Similarly, the building of tall smokestacks on power plants and smelters lessened local air pollution, but spread acid rain over broad areas of the countryside. Because of our lack of scientific understanding of complex systems, we are continually subjected to surprises. How to plan for “unknown unknowns” is an increasing challenge as our world becomes more tightly interconnected and our ability to adjust to mistakes decreases.

Not all is discouraging, however, in the field of environmental science. Although many problems beset us, there are also encouraging signs of progress. Some dramatic successes have occurred in wildlife restoration and habitat protection programs, for instance. The United Nations reports that protected areas have increased five-fold over the past 30 years to nearly 5 million square miles. World forest losses have slowed, especially in Asia, where deforestation rates slowed from 8% in the 1980s to less than 1% in the 1990s. Forested areas have actually increased in many developed countries, providing wildlife habitat, removal of excess carbon dioxide, and sustainable yields of forest products.

In spite of dire warnings in the 1960s that growing human populations would soon overshoot the earth’s carrying capacity and result in massive famines, food supplies have more than kept up with population growth. There is
more than enough food to provide a healthy diet for everyone now living, although inequitable distribution leaves about 800 million with an inadequate diet. Improved health care, sanitation, and nutrition have extended life expectancies around the world from 40 years, on average, a century ago, to 65 years now. Public health campaigns have eradicated smallpox and nearly eliminated polio. Other terrible diseases have emerged, however, most notably acquired immunodeficiency syndrome (AIDS), which is now the fourth most common cause of death worldwide. Forty million people are now infected with HIV—70% of them in sub-Saharan Africa—and health experts warn that unsanitary blood donation practices and spreading drug use in Asia may result in tens of millions more AIDS deaths in the next few decades. In developed countries, air and water pollution have decreased significantly over the past 30 years. In 2002, the Environmental Protection Agency declared that Denver—which once was infamous as one of the most polluted cities in the United States—is the first major city to meet all the agency’s standards for eliminating air pollution. At about the same time, the EPA announced that 91% of all monitored river miles in the United States met the water quality goals set in the 1985 Clean Water Act. Pollution-sensitive species like mayflies have returned to the upper Mississippi River, and in Britain, salmon are being caught in the Thames River after being absent for more than two centuries. Conditions aren’t as good, however, in many other countries. In most of Latin America, Africa, and Asia, less than 2% of municipal sewage is given even primary treatment before being dumped into rivers, lakes, or the ocean. In South Asia, a 2-mile (3-km) thick layer of smog covers the entire Indian sub-continent for much of the year. This cloud blocks sunlight and appears to be changing the climate, bringing drought to Pakistan and Central Asia, and shifting monsoon winds that caused disastrous floods in 2002 in Nepal, Bangladesh, and eastern India that forced 25 million people from their homes and killed at least 1,000 people. Nobel laureate Paul Crutzen estimates that two million deaths each year in India alone can be attributed to air pollution effects. After several decades of struggle, a worldwide ban on the “dirty dozen” most dangerous persistent organic pollutants (POPs) was agreed to in 2000. Elimination of compounds such as DDT, aldrin, dieldrin, mirex, toxaphene, polychlorinated biphenyls, and dioxins has allowed the recovery of several wildlife species including bald eagles, peregrine falcons, and brown pelicans. Still, other toxic synthetic chemicals such as polybrominated diphenyl ethers, chromated copper arsenate, perfluorooctane sulfonate, and atrazine are now being found accumulating in food chains far from any place where they have been used.
Solutions for many of our pollution problems can be found in improved technology, greater personal responsibility, or better environmental management. The question is often whether we have the political will to enforce pollution control programs and whether we are willing to sacrifice short-term convenience and affluence for long-term ecological stability. We in the richer countries of the world have become accustomed to a highly consumptive lifestyle. Ecologists estimate that humans directly use, destroy, co-opt, or alter almost 40% of terrestrial plant productivity, with unknown consequences for the biosphere. Whether we will be willing to leave some resources for other species and future generations is a central question of environmental policy. One way to extend resources is to increase efficiency and recycling of the items we use. Automobiles have already been designed, for example, that get more than 100 mi/gal (42 km/l) of diesel fuel and are completely recyclable when they reach the end of their designed life. Although recycling rates in the United States have increased in recent years, we could probably double our current rate with very little sacrifice in economics or convenience. Renewable energy sources such as solar or wind power are making encouraging progress. Wind already is cheaper than any other power source except coal in many localities. Solar energy is making it possible for many of the two billion people in the world who don’t have access to electricity to enjoy some of the benefits of modern technology. Worldwide, the amount of installed wind energy capacity more than doubled between 1998 and 2002. Germany is on course to obtain 20% of its energy from renewables by 2010. Together, wind, solar, biomass and other forms of renewable energy have the potential to provide thousands of times as much energy as all humans use now. There is no reason for us to continue to depend on fossil fuels for the majority of our energy supply. One of the widely advocated ways to reduce poverty and make resources available to all is sustainable development. A commonly used definition of this term is given in Our Common Future, the report of the World Commission on Environment and Development (generally called the Brundtland Commission after the prime minister of Norway, who chaired it), which described sustainable development as “meeting the needs of the present without compromising the ability of future generations to meet their own needs.” This implies improving health, education, and equality of opportunity, as well as ensuring political and civil rights through jobs and programs based on sustaining the ecological base, living on renewable resources rather than nonrenewable ones, and living within the carrying capacity of supporting ecological systems. Several important ethical considerations are embedded in environmental questions. One of these is intergenerational
justice: what responsibilities do we have to leave resources and a habitable planet for future generations? Is our profligate use of fossil fuels, for example, justified by the fact that we have the technology to extract fossil fuels and enjoy their benefits? Will human lives in the future be impoverished by the fact that we have used up most of the easily available oil, gas, and coal? Author and social critic Wendell Berry suggests that our consumption of these resources constitutes a theft of the birthright and livelihood of posterity. Philosopher John Rawls advocates a “just savings principle” in which members of each generation may consume no more than their fair share of scarce resources. How many generations are we obliged to plan for, and what is our “fair share”? It is possible that our use of resources now—inefficient and wasteful as it may be—represents an investment that will benefit future generations. The first computers, for instance, were huge, clumsy instruments that filled whole rooms with expensive vacuum tubes and consumed inordinate amounts of electricity. Critics complained that it was a waste of time and resources to build these enormous machines to do a few simple calculations. And yet if this technology had been suppressed in its infancy, the world would be much poorer today. Now nanotechnology promises to make machines and tools in infinitesimal sizes that use minuscule amounts of materials and energy to carry out valuable functions. The question remains whether future generations will be glad that we embarked on the current scientific and technological revolution or whether they will wish that we had maintained a simple agrarian, Arcadian way of life. Another ethical consideration inherent in many environmental issues is whether we have obligations or responsibilities to other species or to Earth as a whole. An anthropocentric (human-centered) view holds that humans have rightful dominion over the earth and that our interests and well-being take precedence over all other considerations. Many environmentalists criticize this perspective, considering it arrogant and destructive. Biocentric (life-centered) philosophies argue that all living organisms have inherent values and rights by virtue of mere existence, whether or not
they are of any use to us. In this view, we have a responsibility to leave space and resources to enable other species to survive and to live as naturally as possible. This duty extends to making reparations or special efforts to encourage the recovery of endangered species that are threatened with extinction due to human activities. Some environmentalists claim that we should adopt an ecocentric (ecologically centered) outlook that respects and values nonliving entities such as rocks, rivers, mountains—even whole ecosystems—as well as other living organisms. In this view, we have no right to break up a rock, dam a free-flowing river, or reshape a landscape simply because it benefits us. More importantly, we should conserve and maintain the major ecological processes that sustain life and make our world habitable. Others argue that our existing institutions and understandings, while they may need improvement and reform, have provided us with many advantages and amenities. Our lives are considerably better in many ways than those of our ancient ancestors, whose lives were, in the words of British philosopher Thomas Hobbes, “nasty, brutish, and short.” Although science and technology have introduced many problems, they have provided answers and possible alternatives as well. It may be that we are at a major turning point in human history. Current generations are in a unique position to address the environmental issues described in this encyclopedia. For the first time, we now have the resources, motivation, and knowledge to protect our environment and to build a sustainable future for ourselves and our children. Until recently, we didn’t have these opportunities, or there was not enough clear evidence to inspire people to change their behavior and invest in environmental protection; now the need is obvious to nearly everyone. Unfortunately, this also may be the last opportunity to act before our problems become irreversible. We hope that an interest in preserving and protecting our common environment is one reason that you are reading this encyclopedia and that you will find information here to help you in that quest. [William P. Cunningham, Managing Editor]
A
Edward Paul Abbey (1927–1989) American environmentalist and writer Novelist, essayist, white-water rafter, and self-described “desert rat,” Abbey wrote of the wonders and beauty of the American West that was fast disappearing in the name of “development” and “progress.” Often angry, frequently funny, and sometimes lyrical, Abbey recreated for his readers a region that was unique in the world. The American West was perhaps the last place where solitary selves could discover and reflect on their connections with wild things and with their fellow human beings. Abbey was born in Home, Pennsylvania, in 1927. He received his B.A. from the University of New Mexico in 1951. After earning his master’s degree in 1956, he joined the National Park Service, where he served as park ranger and fire fighter. He later taught writing at the University of Arizona. Abbey’s books and essays, such as Desert Solitaire (1968) and Down the River (1982), had their angrier fictional counterparts—most notably, The Monkey Wrench Gang (1975) and Hayduke Lives! (1990)—in which he gave voice to his outrage over the destruction of deserts and rivers by dam-builders and developers of all sorts. In The Monkey Wrench Gang Abbey weaves a tale of three “ecoteurs” who defend the wild West by destroying the means and machines of development—dams, bulldozers, logging trucks—which would otherwise reduce forests to lumber and raging rivers to irrigation channels. This aspect of Abbey’s work inspired some radical environmentalists, including Dave Foreman and other members of Earth First!, to practice “monkey-wrenching” or “ecotage” to slow or stop such environmentally destructive practices as strip mining, the clear-cutting of old-growth forests on public land, and the damming of wild rivers for flood control, hydroelectric power, and what Abbey termed “industrial tourism.” Although Abbey’s description and defense of such tactics has been widely condemned by many mainstream environmental groups, he remains a revered figure
among many who believe that gradualist tactics have not succeeded in slowing, much less stopping, the destruction of North American wilderness. Abbey is unique among environmental writers in having an oceangoing ship named after him. One of the vessels in the fleet of the militant Sea Shepherd Conservation Society, the Edward Abbey, rams and disables whaling and drift-net fishing vessels operating illegally in international waters. Abbey would have welcomed the tribute and, as a white-water rafter and canoeist, would no doubt have enjoyed the irony. Abbey died on March 14, 1989. He is buried in a desert in the southwestern United States. [Terence Ball]
RESOURCES BOOKS Abbey, E. Desert Solitaire. New York: McGraw-Hill, 1968. ———. Down the River. Boston: Little, Brown, 1982. ———. Hayduke Lives! Boston: Little, Brown, 1990. ———. The Monkey Wrench Gang. Philadelphia: Lippincott, 1975. Berry, W. “A Few Words in Favor of Edward Abbey.” In What Are People For? San Francisco: North Point Press, 1991. Bowden, C. “Goodbye, Old Desert Rat.” In The Sonoran Desert. New York: Abrams, 1992. Manes, C. Green Rage: Radical Environmentalism and the Unmaking of Civilization. Boston: Little, Brown, 1990.
Absorption Absorption, or more generally “sorption,” is the process by which one material (the sorbent) takes up and retains another (the sorbate) to form a homogeneous concentration at equilibrium. The general term “sorption” covers both absorption, the uptake of one substance into the bulk of another, and adsorption, the adhesion of gas molecules, dissolved substances, or liquids to the surfaces of solids with which they are in contact. In soils, three types of mechanisms, often working together, constitute sorption. They can be grouped into physical sorption,
chemisorption, and penetration into the solid mineral phase. Physical sorption (also known as adsorption) involves the attachment of the sorbent and sorbate through weak atomic and molecular forces. Chemisorption involves chemical bonds similar to those that hold atoms together in a molecule. Electrostatic forces operate to bond minerals via ion exchange, such as the replacement of sodium, magnesium, potassium, and aluminum cations (+) as exchangeable bases in acid (-) soils. While cation (positive ion) exchange is the dominant exchange process occurring in soils, some soils have the ability to retain anions (negative ions) such as nitrate, chloride, and, to a larger extent, oxides of sulfur. Absorption and Wastewater Treatment In on-site wastewater treatment, the soil absorption field is the land area where the wastewater from the septic tank is spread into the soil. One of the most common types of soil absorption field has porous plastic pipe extending away from the distribution box in a series of two or more parallel trenches, usually 1.5–2 ft (46–61 cm) wide. In conventional, below-ground systems, the trenches are 1.5–2 ft deep. Some absorption fields must be placed at a shallower depth than this to compensate for some limiting soil condition, such as a hardpan or high water table. In some cases they may even be placed partially or entirely in fill material that has been brought to the lot from elsewhere. The porous pipe that carries wastewater from the distribution box into the absorption field is surrounded by gravel that fills the trench to within a foot or so of the ground surface. The gravel is covered by fabric material or building paper to prevent plugging. Another type of drainfield consists of pipes that extend away from the distribution box, not in trenches but in a single, gravel-filled bed that has several such porous pipes in it. As with trenches, the gravel in a bed is covered by fabric or other porous material. Usually the wastewater flows gradually downward into the gravel-filled trenches or bed. In some instances, such as when the septic tank is lower than the drainfield, the wastewater must be pumped into the drainfield. Whether gravity flow or pumping is used, wastewater must be evenly distributed throughout the drainfield. It is important to ensure that the drainfield is installed with care to keep the porous pipe level, or at a very gradual downward slope away from the distribution box or pump chamber, according to specifications stipulated by public health officials. Soil beneath the gravel-filled trenches or bed must be permeable so that wastewater and air can move through it and come in contact with each other. Good aeration is necessary to ensure that the proper chemical and microbiological processes will be occurring in the soil to cleanse the percolating wastewater of contaminants. A well-aerated soil also ensures slow travel and good contact between wastewater and soil.
How Common Are Septic Systems with Soil Absorption Systems? According to the 1990 U.S. Census, there are about 24.7 million households in the United States that use septic tank systems or cesspools (holes or pits for receiving sewage) for wastewater treatment. This figure represents roughly 24% of the total households included in the census. According to a review of local health department information by the National Small Flows Clearinghouse, 94% of participating health departments allow or permit the use of septic tank and soil absorption systems. Those that do not allow septic systems have sewer lines available to all residents. The total volume of waste disposed of through septic systems is more than one trillion gallons (3.8 trillion l) per year, according to a study conducted by the U.S. Environmental Protection Agency’s Office of Technology Assessment, and virtually all of that waste is discharged directly to the subsurface, which affects groundwater quality. [Carol Steinfeld]
RESOURCES BOOKS Elliott, L. F., and F. J. Stevenson. Soils for the Management of Wastes and Waste Waters. Madison, WI: Soil Science Society of America, 1977.
OTHER Fact Sheet SL-59, a series of the Soil and Water Science Department, Florida Cooperative Extension Service, Institute of Food and Agricultural Sciences, University of Florida. February 1993.
Acaricide see Pesticide
Acceptable risk see Risk analysis
Acclimation Acclimation is the process by which an organism adjusts to a change in its environment. It generally refers to the ability of living things to adjust to changes in climate, and the adjustment usually occurs within a short time of the change. Scientists distinguish between acclimation and acclimatization because the latter adjustment is made under natural conditions, when the organism is subject to the full range of changing environmental factors. Acclimation, however, refers to a change in response to only one environmental factor, under laboratory conditions.
In an acclimation experiment, adult frogs (Rana temporaria) maintained in the laboratory at a temperature of either 50°F (10°C) or 86°F (30°C) were tested in an environment of 32°F (0°C). It was found that the group maintained at the higher temperature was inactive at freezing. The group maintained at 50°F (10°C), however, was active at the lower temperature; it had acclimated to the lower temperature. Acclimation and acclimatization can have profound effects upon behavior, inducing shifts in preferences and in mode of life. The golden hamster (Mesocricetus auratus) prepares for hibernation when the environmental temperature drops below 59°F (15°C). Temperature preference tests in the laboratory show that the hamsters develop a marked preference for cold environmental temperatures during the pre-hibernation period. Following arousal from a simulated period of hibernation, the situation is reversed, and the hamsters actively prefer the warmer environments. An acclimated microorganism is any microorganism that is able to adapt to environmental changes such as a change in temperature or a change in the quantity of oxygen or other gases. Many organisms that live in environments with seasonal changes in temperature make physiological adjustments that permit them to continue to function normally, even though their environmental temperature goes through a definite annual temperature cycle. Acclimatization usually involves a number of interacting physiological processes. For example, in acclimatizing to high altitudes, the first response of human beings is to increase their breathing rate. After about 40 hours, changes have occurred in the oxygen-carrying capacity of the blood, which makes it more efficient in extracting oxygen at high altitudes. As this occurs, the breathing rate returns to normal. [Linda Rehkopf]
RESOURCES BOOKS Ford, M. J. The Changing Climate: Responses of the Natural Fauna and Flora. Boston: G. Allen and Unwin, 1982. McFarland, D., ed. The Oxford Companion to Animal Behavior. Oxford, England: Oxford University Press, 1981. Stress Responses in Plants: Adaptation and Acclimation Mechanisms. New York: Wiley-Liss, 1990.
Accounting for nature A new approach to national income accounting in which the degradation and depletion of natural resource stocks and environmental amenities are explicitly included in the calculation of net national product (NNP). NNP is equal to gross national product (GNP) minus capital depreciation, and GNP is equal to the value of all final goods and services produced in a nation in a particular year. It is recognized that natural resources are economic assets that generate income, and that just as the depreciation of buildings and capital equipment is treated as an economic cost and subtracted from GNP to get NNP, the depreciation of natural capital should also be subtracted when calculating NNP. In addition, expenditures on environmental protection, which at present are included in GNP and NNP, are considered defensive expenditures in accounting for nature and should not be included in either GNP or NNP.
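Written out, the adjustment described above is simple arithmetic. In the sketch below, D_k denotes depreciation of built capital, D_n depletion and degradation of natural capital, and E_d defensive environmental expenditures; the symbols are illustrative notation rather than terms used in this entry:

$$\mathrm{NNP}_{green} = \mathrm{GNP} - D_k - D_n - E_d$$

Conventional accounting stops after subtracting D_k; accounting for nature subtracts the two additional terms as well.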
Accuracy Accuracy is the closeness of an experimental measurement to the “true value” (i.e., actual or specified value) of a measured quantity. A “true value” can be determined by an experienced analytical scientist who performs repeated analyses of a sample of known purity and/or concentration using reliable, well-tested methods. Measurement is inexact, and the magnitude of that inexactness is referred to as the error. Error is inherent in measurement and is a result of such factors as the precision of the measuring tools, their proper adjustment, the method used, and the competency of the analytical scientist. Statistical methods are used to evaluate accuracy by predicting the likelihood that a result varies from the “true value.” The analysis of probable error is also used to examine the suitability of methods or equipment used to obtain, portray, and utilize an acceptable result. Highly accurate data can be difficult to obtain and costly to produce. However, some applications require only lower levels of accuracy that are still adequate for the particular study.
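One standard statistic for expressing the closeness described above is relative error; the formula below is the usual textbook expression, not one quoted from this entry:

$$E_{rel} = \frac{\lvert x_{measured} - x_{true} \rvert}{\lvert x_{true} \rvert} \times 100\%$$

For example, a measurement of 9.8 units against a true value of 10.0 units has a relative error of 2%.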
[Judith L. Sims]
RESOURCES BOOKS Jaisingh, Lloyd R. Statistics for the Utterly Confused. New York: McGraw-Hill Professional, 2000. Salkind, Neil J. Statistics for People Who (Think They) Hate Statistics. Thousand Oaks, CA: Sage Publications, 2000.
Acetone
Acetone (C3H6O) is a colorless liquid that is used as a solvent in products such as nail polish and paint, and in the manufacture of other chemicals such as plastics and fibers. It is a naturally occurring compound that is found in plants and is released during the metabolism of fat in the body. It is also found in volcanic gases, and is manufactured by the chemical industry. Acetone is also found in the atmosphere
as an oxidation product of both natural and anthropogenic volatile organic compounds (VOCs). It has a strong
smell and taste, and is soluble in water. Acetone evaporates much more readily than water, and it is highly flammable. Because it is so volatile, the acetone manufacturing process results in a large percentage of the compound entering the atmosphere. Ingesting acetone can cause damage to the tissues in the mouth and can lead to unconsciousness. Breathing acetone can cause irritation of the eyes, nose, and throat; headaches; dizziness; nausea; unconsciousness; and possible coma and death. Women may experience menstrual irregularity. There has been concern about the carcinogenic nature of acetone, but laboratory studies and studies of humans who have been exposed to acetone in the course of their occupational activities show no evidence that acetone causes cancer. [Marie H. Bundy]
Acid and base According to the definition used by environmental chemists, an acid is a substance that increases the hydrogen ion (H+)
concentration in a solution and a base is a substance that removes hydrogen ions (H+) from a solution. In water, removal of hydrogen ions results in an increase in the hydroxide ion (OH-) concentration. Water with a pH of 7.0 is neutral, while lower pH values are acidic and higher pH values are basic.
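The pH scale referred to here is logarithmic. As a brief reminder of the underlying definition (standard aquatic chemistry, not text from the original entry):

$$\mathrm{pH} = -\log_{10}\left[\mathrm{H^+}\right]$$

Pure water, with a hydrogen ion concentration of 10^-7 mol/L, therefore has a pH of 7.0, and each one-unit drop in pH represents a tenfold increase in hydrogen ion concentration.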
Acid deposition Acid precipitation from the atmosphere, whether in the form of dryfall (finely divided acidic salts), rain, or snow. Naturally occurring carbonic acid normally makes rain and snow mildly acidic (pH approximately 5.6). Human activities often introduce much stronger and more damaging acids. Sulfuric acid formed from sulfur oxides released in coal or oil combustion or the smelting of sulfide ores predominates as the major atmospheric acid in industrialized areas. Nitric acid created from nitrogen oxides, formed by oxidizing atmospheric nitrogen when any fuel is burned in an oxygen-rich environment, constitutes the major source of acid precipitation in cities such as Los Angeles with little industry,
but large numbers of trucks and automobiles. The damage caused to building materials, human health, crops, and natural ecosystems by atmospheric acids amounts to billions of dollars per year in the United States.
The basic mechanisms of acid deposition. (Illustration by Wadsworth Inc. Reproduced by permission.)
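The conversions of sulfur and nitrogen oxides described in this entry can be summarized as net reactions. These are standard atmospheric-chemistry summaries, offered here for reference rather than taken from the original article; in the real atmosphere both conversions proceed through intermediate gas-phase and aqueous-phase steps:

$$2\,\mathrm{SO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2SO_4}$$

$$4\,\mathrm{NO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 4\,\mathrm{HNO_3}$$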
Acid mine drainage The process of mining the earth for coal and metal ores has a long history of rich economic rewards—and a high level of environmental impact on the surrounding aquatic and terrestrial ecosystems. Acid mine drainage is the highly acidic, sediment-laden discharge from exposed mines that is released into the ambient aquatic environment. In large areas of Pennsylvania, West Virginia, and Kentucky, the bright orange seeps of acid mine drainage have almost completely eliminated aquatic life in streams and ponds that receive the discharge. In the Appalachian coal mining region, almost 7,500 mi (12,000 km) of streams and almost 30,000 acres (12,000 ha) of land are estimated to be seriously affected by the discharge of uncontrolled acid mine drainage. In the United States, coal-bearing geological strata occur near the surface in large portions of the Appalachian mountain region. The relative ease with which coal could be extracted from these strata led to a type of mining known as strip mining that was practiced heavily in the nineteenth and early twentieth centuries. In this process, large amounts of earth, called the overburden, were physically removed from the surface to expose the coal-bearing layer beneath. The coal was then extracted from the rock as quickly and cheaply as possible. Once the bulk of the coal had been mined, and no more could be extracted without a huge additional cost, the sites were usually abandoned. The remnants of the exhausted coal-bearing rock and soil are called the mine spoil waste. Acid mine drainage is not generated by strip mining itself but by the nature of the rock where it takes place. Three conditions are necessary to form acid mine drainage: pyrite-bearing rock, oxygen, and iron-oxidizing bacteria. In the Appalachians, the coal-bearing rocks usually contain significant quantities of pyrite (iron disulfide, FeS2). This compound is normally not exposed to the atmosphere because it is buried underground within the rock; it is also insoluble in water. The iron and the sulfide are said to be in a reduced state, i.e., the iron atom has not released all the electrons that it is capable of releasing. When the rock is mined, the pyrite is exposed to air. It then reacts with oxygen to form ferrous iron and sulfate ions, both of which are highly soluble in water. This leads to the formation of sulfuric acid and is responsible for the acidic nature of the drainage. But the oxidation can only occur if the bacteria Thiobacillus ferrooxidans are present. These activate the iron- and sulfur-oxidizing
reactions, and use the energy released during the reactions for their own growth. They must have oxygen to carry these reactions through. Once the maximum oxidation is reached, these bacteria can derive no more energy from the compounds and all reactions stop. The acidified water may be formed in several ways. It may be generated by rain falling on exposed mine spoil wastes or when rain and surface water (carrying dissolved oxygen) flow down and seep into rock fractures and mine shafts, coming into contact with pyrite-bearing rock. Once the acidified water has been formed, it leaves the mine area as seeps or small streams. Characteristically bright orange to rusty red in color due to the iron, the liquid may be at a pH of 2–4. These are extremely low pH values and signify a very high degree of acidity. Vinegar, for example, has a pH of about 2.5, and the pH associated with acid rain is in the range of 4–6. Thus, acid mine drainage with a pH of 2 is more acidic than almost any other naturally occurring liquid release in the environment (with the exception of some volcanic lakes that are pure acid). Usually, the drainage is also very high in dissolved iron, manganese, aluminum, and suspended solids. The acidic drainage released from the mine spoil wastes usually follows the natural topography of its area and flows into the nearest streams or wetlands, where its effect on the water quality and biotic community is unmistakable. The iron coats the stream bed and its vegetation with a thick orange coating that prevents sunlight from penetrating leaves and plant surfaces. Photosynthesis stops and the vegetation (both vascular plants and algae) dies. The acid drainage eventually also makes the receiving water acid. As the pH drops, the fish, the invertebrates, and the algae die when their metabolism can no longer adapt. Eventually, there is no life left in the stream, with the possible exception of some bacteria that may be able to tolerate these conditions. Depending on the number and volume of seeps entering a stream and the volume of the stream itself, the area of impact may be limited and improved conditions may exist downstream, as the acid drainage is diluted. Abandoned mine spoil areas also tend to remain barren, even after decades. The colonization of the acidic mineral soil by plant species is a slow and difficult process, with a few lichens and aspens being the most hardy species to establish.
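The reaction sequence sketched above can be written out explicitly. The following equations are the standard pyrite-oxidation reactions cited in the geochemical literature, summarized here for reference rather than quoted from this entry:

$$2\,\mathrm{FeS_2} + 7\,\mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Fe^{2+}} + 4\,\mathrm{SO_4^{2-}} + 4\,\mathrm{H^+}$$

$$4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 4\,\mathrm{H^+} \rightarrow 4\,\mathrm{Fe^{3+}} + 2\,\mathrm{H_2O}$$

$$\mathrm{Fe^{3+}} + 3\,\mathrm{H_2O} \rightarrow \mathrm{Fe(OH)_3} + 3\,\mathrm{H^+}$$

The second reaction is the step catalyzed by Thiobacillus ferrooxidans, and the third precipitates the rusty orange ferric hydroxide coating described above.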
While many methods have been tried to control or mitigate the effects of acid mine drainage, very few have been successful. Federal mining regulations (the Surface Mining Control and Reclamation Act of 1978) now require that when mining activity ceases, the mine spoil waste be buried and covered with the overburden and vegetated topsoil. The intent is to restore the area to premining condition and to prevent the generation of acid mine drainage by limiting the exposure of pyrite to oxygen and water. Although some minor seeps may still occur, this is the single most effective way to minimize the potential scale of the problem. Mining companies are also required to monitor the effectiveness of their restoration programs and must post bonds to guarantee the execution of abatement efforts, should any become necessary in the future. There are, however, numerous abandoned sites exposing pyrite-bearing spoils. Cleanup efforts for these sites have focused on controlling one or more of the three conditions necessary for the creation of the acidity: pyrite, bacteria, and oxygen. Attempts to remove bulk quantities of the pyrite-bearing mineral and store it somewhere else are extremely expensive and difficult to execute. Inhibiting the bacteria by using detergents, solvents, and other bactericidal agents is temporarily effective, but usually requires repeated application. Attempts to seal out air or water are difficult to implement on a large scale or in a comprehensive manner. Since it is difficult to reduce the formation of acid mine drainage at abandoned sites, one of the most promising new methods of mitigation treats the acid mine drainage after it exits the mine spoil wastes. The technique channels the acid seeps through artificially created wetlands, planted with cattails or other wetland plants in a bed of gravel, limestone, or compost. The limestone neutralizes the acid and raises the pH of the drainage, while the mixture of oxygen-rich and oxygen-poor areas within the wetland promotes the removal of iron and other metals from the drainage. Currently, many agencies, universities, and private firms are working to improve the design and performance of these artificial wetlands. A number of additional treatment techniques may be strung together in an interconnected system of anoxic limestone trenches, settling ponds, and planted wetlands. This provides a variety of physical and chemical microenvironments so that each undesirable characteristic of the acid drainage can be individually addressed and treated, e.g., acidity is neutralized in the trenches, suspended solids are settled in the ponds, and metals are precipitated in the wetlands. In the United States, the research and treatment of acid mine drainage continues to be an active field of study in the Appalachians and in the metal-mining areas of the Rocky Mountains. [Usha Vedagiri]
RESOURCES PERIODICALS Clay, S. “A Solution to Mine Drainage?” American Forests 98 (July-August 1992): 42-43. Hammer, D. A. Constructed Wetlands for Wastewater Treatment: Municipal, Industrial, Agricultural. Chelsea, MI: Lewis, 1990. Schwartz, S. E. “Acid Deposition: Unraveling a Regional Phenomenon.” Science 243 (February 1989): 753–763.
Welter, T. R. “An ’All Natural’ Treatment: Companies Construct Wetlands to Reduce Metals in Acid Mine Drainage.” Industry Week 240 (August 5, 1991): 42–43.
Acid rain Acid rain is the term used in the popular press that is equivalent to acidic deposition as used in the scientific literature. Acid deposition results from the deposition of airborne acidic pollutants on land and in bodies of water. These pollutants can cause damage to forests as well as to lakes and streams. The major pollutants that cause acidic deposition are sulfur dioxide (SO2) and nitrogen oxides (NOx) produced during the combustion of fossil fuels. In the atmosphere these gases oxidize to sulfuric acid (H2SO4) and nitric acid (HNO3), which can be transported long distances before being returned to the earth dissolved in rain drops (wet deposition), deposited on the surfaces of plants as cloud droplets, or deposited directly on plant surfaces (dry deposition). Electrical utilities contribute 70% of the 20 million tons (18 million metric tons) of SO2 that are annually added to the atmosphere. Most of this is from the combustion of coal. Electric utilities also contribute 30% of the 19 million tons of NOx added to the atmosphere, and internal combustion engines used in automobiles, trucks, and buses contribute more than 40%. Natural sources such as forest fires, swamp gases, and volcanoes contribute only 1–5% of atmospheric SO2. Forest fires, lightning, and microbial processes in soils contribute about 11% of atmospheric NOx. In response to air quality regulations, electrical utilities have switched to coal with lower sulfur content and installed scrubbing systems to remove SO2. This has resulted in a steady decrease in SO2 emissions in the United States since 1970, with an 18–20% decrease between 1975 and 1988. Emissions of NOx have also decreased from their peak in 1975, with a 9–15% decrease from 1975 to 1988. A commonly used indicator of the intensity of acid rain is the pH of the rainfall. The pH of non-polluted rainfall in forested regions is in the range 5.0–5.6. The upper limit is 5.6, not neutral (7.0), because of carbonic acid that results from the dissolution of atmospheric carbon dioxide. The contribution of naturally occurring nitric and sulfuric acid, as well as organic acids, reduces the pH somewhat below 5.6. In arid and semi-arid regions, rainfall pH values can be greater than 5.6 due to the effect of alkaline soil dust in the air. Nitric and sulfuric acids in acidic rainfall (wet deposition) can result in pH values for individual rainfall events of less than 4.0. In North America, the most acidic rainfall is in the northeastern United States and southeastern Canada. The lowest mean pH in this region is 4.15.
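The 5.6 upper limit cited above follows from the equilibrium of rainwater with atmospheric carbon dioxide; the summary below is standard carbonate chemistry rather than text from this entry:

$$\mathrm{CO_2(g)} + \mathrm{H_2O} \rightleftharpoons \mathrm{H_2CO_3} \rightleftharpoons \mathrm{H^+} + \mathrm{HCO_3^-}$$

At late-twentieth-century atmospheric CO2 levels, this equilibrium yields a hydrogen ion concentration of roughly 2.5 x 10^-6 mol/L, that is, a pH of about 5.6.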
Even lower pH values are observed in central and northern Europe. Generally, the greater the population density and the density of industrialization, the lower the rainfall pH. Long distance transport, however, can result in low pH rainfall even in areas with low population and a low density of industries, as in parts of New England, eastern Canada, and Scandinavia. A very significant portion of acid deposition occurs in the dry form. In the United States, it is estimated that 30–60% of acidic deposition occurs as dry fall. This material is deposited as sulfur dioxide gas and very finely divided particles (aerosols) directly on the surfaces of plants (needles and leaves). The rate of deposition depends not only on the concentration of acid materials suspended in the air, but on the nature and density of plant surfaces exposed to the atmosphere and the atmospheric conditions (e.g., wind speed and humidity). Direct deposition of acid cloud droplets can be very important, especially in some high altitude forests. Acid cloud droplets can have acid concentrations of 5–20 times those in wet deposition. In some high elevation sites that are frequently shrouded in clouds, direct droplet deposition is three times that of wet deposition from rainfall. Acid deposition has the potential to adversely affect sensitive forests as well as lakes and streams. Agriculture is generally not included in the assessment of the effects of acidic deposition because experimental evidence indicates that even the most severe episodes of acid deposition do not adversely affect the growth of agricultural crops, and any long-term soil acidification can readily be managed by the addition of agricultural lime. In fact, the acidifying potential of the fertilizers normally added to cropland is much greater than that of acidic deposition. In forests, however, long-term acidic deposition on sensitive soils can result in the depletion of important nutrient elements (e.g., calcium, magnesium, and potassium) and in soil acidification. Also, acidic pollutants can interact with other pollutants (e.g., ozone) to cause more immediate problems for tree growth. Acid deposition can also result in the acidification of sensitive lakes, with a loss of biological productivity. Long-term exposure of acid-sensitive materials used in building construction and in monuments (e.g., zinc, marble, limestone, and some sandstone) can result in surface corrosion and deterioration. Monuments tend to be the most vulnerable because they are usually not as protected from rainfall as most building materials. Good data on the impact of acidic deposition on monuments and building materials are lacking. Nutrient depletion due to acid deposition on sensitive soils is a long-term (decades to centuries) consequence of acidic deposition. Acidic deposition greatly accelerates the very slow depletion of soil nutrients due to natural weathering processes. Soils that contain less plant-available calcium,
magnesium, and potassium are less buffered with respect to degradation due to acidic deposition. The most sensitive soils are shallow sandy soils over hard bedrock. The least vulnerable soils are the deep clay soils that are highly buffered against changes due to acidic deposition. The more immediate possible threat to forests is the forest decline phenomenon that has been observed in forests in northern Europe and North America. Acidic deposition in combination with other stress factors such as ozone, disease, and adverse weather conditions can lead to decline in forest productivity and, in certain cases, to dieback. Acid deposition alone cannot account for the observed forest decline, and acid deposition probably plays a minor role in the areas where forest decline has occurred. Ozone is a much more serious threat to forests, and it is a key factor in the decline of forests in the Sierra Nevada and San Bernardino mountains in California. The greatest concern for adverse effects of acidic deposition is the decline in biological productivity in lakes. When a lake has a pH less than 6.0, several species of minnows, as well as other species that are part of the food chain for many fish, cannot survive. At pH values less than about 5.3, lake trout, walleye, and smallmouth bass cannot survive. At pH less than about 4.5, most fish cannot survive (largemouth bass are an exception). Many small lakes are naturally acidic due to organic acids produced in acid soils and acid bogs. These lakes have chemistries dominated by organic acids, and many have brown-colored waters due to the organic acid content. These lakes can be distinguished from lakes acidified by acidic deposition, because lakes strongly affected by acidic deposition are dominated by sulfate. Lakes that are adversely affected by acidic deposition tend to be in steep terrain with thin soils. In these settings the path of rainwater movement into a lake is not influenced greatly by soil materials. This contrasts with most lakes, where much of the water that collects in a lake flows first into the groundwater before entering the lake via subsurface flow. Due to the contact with soil materials, acidity is neutralized and the capacity to neutralize acidity is added to the water in the form of bicarbonate ions (bicarbonate alkalinity). If more than 5% of the water that reaches a lake is in the form of groundwater, a lake is not sensitive to acid deposition. An estimated 24% of the lakes in the Adirondack region of New York are devoid of fish. In one-third to one-half of these lakes this is due to acidic deposition. Approximately 16% of the lakes in this region may have lost one or more species of fish due to acidification. In Ontario, Canada, 115 lakes are estimated to have lost populations of lake trout. Acidification of lakes by acidic deposition extends as far west as Upper Michigan and northeastern Wisconsin, where many sensitive lakes occur and there is some evidence for
acidification. However, the extent of acidification is quite limited.
Acid rain in Chicago, Illinois, erodes the structures of historical buildings. (Photograph by Richard P. Jacobs. JLM Visuals. Reproduced by permission.)
[Paul R. Bloom]
RESOURCES BOOKS Bresser, A. H., ed. Acid Precipitation. New York: Springer-Verlag, 1990. Mellanby, K., ed. Air Pollution, Acid Rain and the Environment. New York: Elsevier, 1989. Turck, M. Acid Rain. New York: Macmillan, 1990. Wellburn, A. Air Pollution and Acid Rain: The Biological Impact. New York: Wiley, 1988. Young, P. M. Acidic Deposition: State of Science and Technology. Summary Report of the U.S. National Acid Precipitation Program. Washington, DC: U.S. Government Printing Office, 1991.
Acidification The process of becoming more acidic due to inputs of an acidic substance. The common measure of acidification is a decrease in pH. Acidification of soils and natural waters by acid rain or acid wastes can result in reduced biological productivity if the pH is sufficiently reduced. 8
The activated sludge process is an aerobic (oxygen-rich), continuous-flow biological method for the treatment of domestic and biodegradable industrial wastewater, in which organic matter is utilized by microorganisms for life-sustaining processes, that is, for energy for reproduction, digestion, and movement, and as a food source to produce cell growth and more microorganisms. As the organic materials are utilized and degraded, carbon dioxide and water are formed as degradation products. The activated sludge process is characterized by the suspension of microorganisms in the wastewater, a mixture referred to as the mixed liquor. Activated sludge is used as part of an overall treatment system, which includes primary treatment of the wastewater for the removal of particulate solids before the use of activated sludge as a secondary treatment process to remove suspended and dissolved organic solids. The conventional activated sludge process consists of an aeration basin, with air as the oxygen source, where treatment is accomplished. Soluble (dissolved) organic materials are absorbed through the cell walls of the microorganisms and into the cells, where they are broken down and converted to more microorganisms, carbon dioxide, water, and energy. Insoluble (solid) particles are adsorbed on the cell walls, transformed to a soluble form by enzymes (biological catalysts) secreted by the microorganisms, and absorbed through the cell wall, where they are also digested and used by the microorganisms in their life-sustaining processes. The microorganisms that are responsible for the degradation of the organic materials are maintained in suspension by mixing induced by the aeration system. As the microorganisms are mixed, they collide with other microorganisms and stick together to form larger particles called floc. The large flocs that are formed settle more readily than individual cells. These flocs also collide with suspended and colloidal materials (insoluble organic materials), which stick to the flocs and cause the flocs to grow even larger. The microorganisms
digest these adsorbed materials, thereby re-opening sites for more materials to stick. The aeration basin is followed by a secondary clarifier (settling tank), where the flocs of microorganisms with their adsorbed organic materials settle out. A portion of the settled microorganisms, referred to as sludge, is recycled to the aeration basin to maintain an active population of microorganisms and an adequate supply of biological solids for the adsorption of organic materials. Excess sludge is wasted by being piped to separate sludge-handling processes. The liquids from the clarifier are transported to facilities for disinfection and final discharge to receiving waters, or to tertiary treatment units for further treatment. Activated sludge processes are designed based on the mixed liquor suspended solids (MLSS) and the organic loading of the wastewater, as represented by the biochemical oxygen demand (BOD) or chemical oxygen demand (COD); a worked loading example follows the list of process modifications below. The MLSS represents the quantity of microorganisms involved in the treatment of the organic materials in the aeration basin, while the organic loading determines the requirements for the design of the aeration system. Modifications to the conventional activated sludge process include:
• Extended aeration. The mixed liquor is retained in the aeration basin until the production rate of new cells is the same as the decay rate of existing cells, with no excess sludge production. In practice, excess sludge is produced, but the quantity is less than that of other activated sludge processes. This process is often used for the treatment of industrial wastewater that contains complex organic materials requiring long detention times for degradation.
• Contact stabilization. A process based on the premise that as wastewater enters the aeration basin (referred to as the contact basin), colloidal and insoluble organic biodegradable materials are removed rapidly by biological sorption, synthesis, and flocculation during a relatively short contact time. This method uses a reaeration (stabilization) basin before the settled sludge from the clarifier is returned to the contact basin. The concentrated flocculated and adsorbed organic materials are oxidized in the reaeration basin, which does not receive any addition of raw wastewater.
• Plug flow. Wastewater is routed through a series of channels constructed in the aeration basin; wastewater flows through and is treated as a plug as it winds its way through the basin. As the “plug” passes through the tank, the concentrations of organic materials are gradually reduced, with a corresponding decrease in oxygen requirements and microorganism numbers.
• Step aeration. Influent wastewater enters the aeration basin along the length of the basin, while the return sludge enters at the head of the basin. This process results in a more
uniform oxygen demand in the basin and a more stable environment for the microorganisms; it also results in a lower solids loading on the clarifier for a given mass of microorganisms.
• Oxidation ditch. A circular (racetrack-shaped) aeration basin is used, with rotary brush aerators that extend across the width of the ditch. Brush aerators aerate the wastewater, keep the microorganisms in suspension, and drive the wastewater around the circular channel.
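As noted in the design discussion above, activated sludge plants balance the organic loading against the mass of microorganisms held in the basin. One common way to express that balance is the food-to-microorganism (F/M) ratio; the formula and the numbers below are a generic textbook illustration, not figures from this entry:

$$F/M = \frac{Q \cdot S_0}{V \cdot X}$$

where Q is the influent flow (m^3/day), S_0 the influent BOD concentration (mg/L), V the aeration basin volume (m^3), and X the MLSS concentration (mg/L). For example, a flow of 4,000 m^3/day at 200 mg/L BOD treated in a 1,000 m^3 basin operated at 2,500 mg/L MLSS gives F/M = (4,000 x 200)/(1,000 x 2,500), or about 0.32 per day; extended aeration plants run at much lower ratios, consistent with the long detention times described above.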
[Judith Sims]
RESOURCES BOOKS Corbitt, R. A. “Wastewater Disposal.” In Standard Handbook of Environmental Engineering, edited by R. A. Corbitt. New York: McGraw-Hill, 1990. Junkins, R., K. Deeny, and T. Eckhoff. The Activated Sludge Process: Fundamentals of Operation. Boston: Butterworth Publishers, 1983.
Acute effects Effects that persist in a biologic system for only a short time, generally less than a week. The effects might range from behavioral or color changes to death. Tests for acute effects are performed with humans, animals, plants, insects, and microorganisms. Intoxication and a hangover resulting from the consumption of too much alcohol, the common cold, and parathion poisoning are examples of acute effects. Generally, little tissue damage occurs as a result of acute effects. The term acute effects should not be confused with acute toxicity studies or acute dosages, which respectively refer to short-term studies (generally less than a week) and short-term dosages (often a single dose). Both chronic and acute exposures can initiate acute effects.
Ansel Easton Adams
(1902–1984)
American photographer and conservationist Ansel Adams is best known for his stark black-and-white photographs of nature and the American landscape. He was born and raised in San Francisco. Schooled at home by his parents, he received little formal training except as a pianist. A trip to Yosemite Valley as a teenager had a profound influence on him, and Yosemite National Park and the Sierra “range of light” attracted him back many times and inspired two great careers: photographer and conservationist. As he observed, “Everybody needs something to believe in [and] my point of focus is conservation.” He used his photographs to make that point more vivid and turned it into an enduring legacy.
Adams was a painstaking artist, and some critics have chided him for an overemphasis on technique and for creating in his work “a mood that is relentlessly optimistic.” Adams was a careful technician, making all of his own prints (reportedly hand-producing over 13,000 in his lifetime), sometimes spending a whole day on one print. He explained: “I have made thousands of photographs of the natural scene, but only those images that were most intensely felt at the moment of exposure have survived the inevitable winnowing of time.” He did winnow, ruthlessly, and the result was a collection of work that introduced millions of people to the majesty and diversity of the American landscape. Not all of Adams’s pictures were “uplifting” or “optimistic” images of scenic wonders; he also documented scenes of overgrazing in the arid Southwest and of incarcerated Japanese-Americans in the Manzanar internment camp. From the beginning, Adams used his photographs in the cause of conservation. His pictures played a major role in the late 1930s in establishing Kings Canyon National Park. Throughout his life, he remained an active, involved conservationist; for many years he was on the Board of the Sierra Club and strongly influenced the Club’s activities and philosophy. Ansel Adams’s greatest bequest to the world will remain his photographs and advocacy of wilderness and the national park ideals. Through his work he not only generated interest in environmental conservation, he also captured the beauty and majesty of nature for all generations to enjoy. [Gerald L. Young]
RESOURCES BOOKS Adams, Ansel. Ansel Adams: An Autobiography. New York: New York Graphic Society, 1984.
PERIODICALS Cahn, R. “Ansel Adams, Environmentalist.” Sierra 64 (May–June 1979): 31–49. Grundberg, A. “Ansel Adams: The Politics of Natural Space.” The New Criterion 3 (1984): 48–52.
Adaptation All members of a population share many characteristics in common. For example, all finches in a particular forest are alike in many ways. But if many hard-to-shell seeds are found in the forest, those finches with stronger, more conical bills will have better rates of reproduction and survival than finches with thin bills. Therefore, a conical, stout bill can be considered an adaptation to that forest environment. Any specialized characteristic that permits an individual to
survive and reproduce is called an adaptation. Adaptations may result either from an individual’s genetic heritage or from its ability to learn. Since successful genetic adaptations are more likely to be passed from generation to generation through the survival of better adapted organisms, adaptation can be viewed as the force that drives biological evolution.
Adaptive management Adaptive management means taking an idea, implementing it, and then documenting and learning from the mistakes and benefits of the experiment. The basic idea behind adaptive management is that natural systems are too complex, too non-linear, and too multi-scale to be predictable. Management policies and procedures must therefore become more adaptive and capable of change to cope with unpredictable systems. Advocates suggest treating management policies as experiments, which are then designed to maximize learning rather than focusing on immediate resource yields. If the environmental and resource systems on which human beings depend are constantly changing, then the societies that utilize them cannot rely on those systems to sustain continued use. Adaptive management mandates a continual experimental process, an on-going process of reevaluation and reassessment of planning methods and human actions, and constant long-term monitoring of environmental impacts and change. This keeps management abreast of the constant change in the environmental systems to which the policies or ideas are applied. The Grand Canyon Protection Act of 1992 is one example of adaptive management at work. It entails the study and monitoring of the Glen Canyon Dam and its operational effects on the surrounding environment, both ecological and biological. Haney and Power suggest that “uncertainty and complexity frustrate both science and management, and it is only by combining the best of both that we use all available tools to manage ecosystems sustainably.” However, Fikret Berkes and colleagues claim that adaptive management can be attained by approaching it as a rediscovery of traditional ecological knowledge among indigenous peoples: “These traditional systems had certain similarities to adaptive management with its emphasis on feedback learning, and its treatment of uncertainty and unpredictability intrinsic to all ecosystems.” An editorial in the journal Environment offered the rather inane statement that adaptive management “has not realized its promise.” The promise is in the idea, but implementation begins with people. Adaptive management, like Smart Growth and other seemingly innovative approaches
to land use and environmental management, is plagued by the problem of how to get people to actually put into practice what is proposed. Even for practical ideas the problem remains the same: not science, not technology, but human willfulness and human behavior. For policies or plans to be truly adaptive, the people themselves must be willing to adapt. Haney and Power provide the conclusion: “When properly integrated, the [adaptive management] process is continuous and cyclic; components of the adaptive management model evolve as information is gained and social and ecological systems change. Unless management is flexible and innovative, outcomes become less sustainable and less accepted by stakeholders. Management will be successful in the face of complexity and uncertainty only with holistic approaches, good science, and critical evaluation of each step. Adaptive management is where it all comes together.” [Gerald L. Young]
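The cycle described above, in which policies are treated as experiments and revised in light of monitoring, can be summarized as a feedback loop. The sketch below is only a schematic, not an implementation drawn from the literature cited here; every name in it is a hypothetical placeholder for whatever acting, monitoring, evaluating, and revising mean in a particular resource system.

    # Schematic of the adaptive-management cycle. Every callable here is a
    # placeholder to be supplied by the manager of a particular system.
    def adaptive_management(policy, act, monitor, evaluate, revise, cycles=10):
        for _ in range(cycles):
            outcome = act(policy)            # implement the policy as an experiment
            observations = monitor(outcome)  # long-term monitoring of impacts
            lessons = evaluate(observations) # reassess methods and assumptions
            policy = revise(policy, lessons) # adapt before the next cycle
        return policy

    # A deliberately toy run, just to show the shape of the loop:
    final = adaptive_management(
        policy={"harvest_quota": 100},
        act=lambda p: 1000 - 2 * p["harvest_quota"],   # stock left after harvest
        monitor=lambda stock: stock,
        evaluate=lambda stock: stock < 900,            # lesson: stock is declining
        revise=lambda p, declining: {"harvest_quota": p["harvest_quota"] - 10}
               if declining else p,
        cycles=5,
    )
    print(final)  # the quota has been ratcheted down in response to monitoring

The point of the structure is simply that the policy is an output of each cycle as well as an input, which is what distinguishes adaptive management from fixed prescriptions.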
RESOURCES BOOKS Holling, C. S., ed. Adaptive Environmental Assessment and Management. New York: John Wiley & Sons, 1978.
PERIODICALS Haney, Alan, and Rebecca L. Power. “Adaptive Management for Sound Ecosystem Management.” Environmental Management 20, no. 6 (November/December 1996): 879–886. McLain, Rebecca J., and Robert G. Lee. “Adaptive Management: Promises and Pitfalls.” Environmental Management 20, no. 4 (July/August 1996): 437–448. Shindler, Bruce, Brent Steel, and Peter List. “Public Judgments of Adaptive Management: A Response from Forest Communities.” Journal of Forestry 96, no. 6 (June 1996): 4–12. Walters, Carl. Adaptive Management of Renewable Resources. New York: Macmillan, 1986. Walters, Carl J. “Ecological Optimization and Adaptive Management.” Annual Review of Ecology and Systematics 9 (1978): 157–188.
Adirondack Mountains
A range of mountains in northeastern New York, containing Mt. Marcy (5,344 ft; 1,629 m), the state’s highest point. Bounded by the Mohawk Valley on the south, the St. Lawrence Valley on the northwest, and the Hudson River and Lake Champlain on the east, the Adirondack Mountains form the core of Adirondack Park. This park is one of the earliest and most comprehensive examples of regional planning in the United States. The regional plan attempts to balance the conflicting interests of many users at the same time as it controls environmentally destructive development. Although the plan remains controversial, it has succeeded
in largely preserving one of the last and greatest wilderness areas in the East. The Adirondacks serve a number of important purposes for surrounding populations. Vacationers, hikers, canoeists, and anglers use the area’s 2,300 wilderness lakes and extensive river systems. The state’s greatest remaining forests stand in the Adirondacks, providing animal habitat and serving recreational visitors. Timber and mining companies, employing much of the area’s resident population, also rely on the forests, some of which contain the East’s most ancient old-growth groves. Containing the headwaters of numerous rivers, including the Hudson, Adirondack Park is an essential source of clean water for farms and cities at lower elevations. Adirondack Park was established in 1892, and the New York State Constitution mandates that the region shall remain “forever wild.” Encompassing six million acres (2.4 million ha), this park is the largest wilderness area in the eastern United States—nearly three times the size of Yellowstone National Park. Only a third of the land within park boundaries, however, is owned by the state of New York. Private mining and timber concerns, public agencies, several towns, thousands of private cabins, and 107 units of local government occupy the remaining property. Because the development interests of various user groups and visitors conflict with the state constitution, a comprehensive regional land use plan was developed in 1972–1973. The novelty of the plan lay in the large area it covered and in its jurisdiction over land uses on private as well as public land. According to the regional plan, all major development within park boundaries must meet an extensive set of environmental safeguards drawn up by the state’s Adirondack Park Agency. Stringent rules and extensive regulation frustrate local residents and commercial interests, who complain about the plan’s complexity and resent “outsiders” ruling on what Adirondackers are allowed to do. Nevertheless, this plan has been a milestone for other regions trying to balance the interests of multiple users. By controlling extensive development, the park agency has preserved a wilderness resource that has become extremely rare in the eastern United States. The survival of this century-old park, surrounded by extensive development, demonstrates the value of preserving wilderness in spite of ongoing controversy. In recent years forestry and recreation interests in the Adirondacks have encountered a new environmental problem in acid precipitation. Evidence of deleterious effects of acid rain and snow on aquatic and terrestrial vegetation began to accumulate in the early 1970s. Studies revealed that about half of the Adirondack lakes situated above 3,300 ft (1,000 m) have pH levels so low that all fish have disappeared. Prevailing winds put these mountains directly downwind of urban and industrial regions of western New York
and southern Ontario. Because they form an elevated obstacle to weather patterns, these mountains capture a great deal of precipitation carrying acidic sulfur and nitrogen oxides from upwind industrial cities. [Mary Ann Cunningham]
RESOURCES BOOKS Ciroff, R. A., and G. Davis. Protecting Open Space: Land Use Control in the Adirondack Park. Cambridge, MA: Ballinger, 1981. Davis, G., and T. Duffus. Developing a Land Conservation Strategy. Elizabethtown, NY: Adirondack Land Trust, 1987. Graham, F. J. The Adirondack Park: A Political History. New York: Knopf, 1978. Popper, F. J. The Politics of Land Use Reform. Madison, WI: University of Wisconsin Press, 1981.

Adsorption
The removal of ions or molecules from solution by binding to solid surfaces. Phosphorus is removed from water flowing through soils by adsorption on soil particles. Some pesticides adsorb strongly on soil particles. Adsorption by suspended solids is also an important process in natural waters.

AEC see Atomic Energy Commission

AEM see Agricultural environmental management

Aeration
In discussions of plant growth, aeration refers to an exchange that takes place in soil or another medium, allowing oxygen to enter and carbon dioxide to escape into the atmosphere. Crop growth is often reduced when aeration is poor. In geology, particularly with reference to groundwater, the zone of aeration is the portion of the earth’s crust where the pores are only partially filled with water. In relation to water treatment, aeration is the process of exposing water to air in order to remove such undesirable substances in drinking water as iron and manganese.

Aerobic
Refers to an environment that contains molecular oxygen gas (O2); an organism or tissue that requires oxygen for its metabolism; or a chemical or biological process that requires oxygen. Aerobic organisms use molecular oxygen in respiration, releasing carbon dioxide (CO2) in return. These organisms include mammals, fish, birds, and green plants, as well as many of the lower life forms such as fungi, algae, and sundry bacteria and actinomycetes. Many, but not all, organic decomposition processes are aerobic; a lack of oxygen greatly slows these processes.

Aerobic/anaerobic systems
Most living organisms require oxygen to function normally, but a few forms of life exist exclusively in the absence of oxygen, and some can function both in the presence of oxygen (aerobically) and in its absence (anaerobically). Examples of anaerobic organisms are found among bacteria of the genus Clostridium, in parasitic protozoans from the gastrointestinal tract of humans and other vertebrates, and in ciliates associated with sulfide-containing sediments. Organisms capable of switching between aerobic and anaerobic existence include the fungi known as yeasts. The ability of an organism to function both aerobically and anaerobically increases the variety of sites in which it is able to exist and conveys some advantages over organisms with less adaptive potential. Microbial decay activity in nature can occur either aerobically or anaerobically. Aerobic decomposers of compost and other organic substrates are generally preferable because they act more quickly and release fewer noxious odors. Large sewage treatment plants use a two-stage digestion system in which the first stage is anaerobic digestion of sludge, which produces flammable methane gas that may be used as fuel to help operate the plant. Sludge digestion continues in the aerobic second stage, a process that is easier to control but more costly because of the power needed to provide aeration. Although most fungi are generally aerobic organisms, yeasts used in bread making and in the production of fermented beverages such as wine and beer can metabolize anaerobically. In the process, they release ethyl alcohol and the carbon dioxide that causes bread to rise. Tissues of higher organisms may have limited capability for anaerobic metabolism, but they need elaborate compensating mechanisms to survive even brief periods without oxygen. For example, human muscle tissue is able to metabolize anaerobically when blood cannot supply the large amounts of oxygen needed for vigorous activity. Muscle contraction requires an energy-rich compound called adeno-
sine triphosphate (ATP). Muscle tissue normally contains enough ATP for 20–30 seconds of intense activity. ATP must then be metabolically regenerated from glycogen, the muscle’s primary energy source. Muscle tissue has both aerobic and anaerobic metabolic systems for regenerating ATP from glycogen. Although the aerobic system is much more efficient, the anaerobic system is the major energy source for the first minute or two of exercise. The carbon dioxide released in this process causes the heart rate to increase. As the heart beats faster and more oxygen is delivered to the muscle tissue, the more efficient aerobic system for generating ATP takes over. A person’s physical condition is important in determining how well the aerobic system is able to meet the needs of continued activity. In fit individuals who exercise regularly, heart function is optimized, and the heart is able to pump blood rapidly enough to maintain aerobic metabolism. If the oxygen level in muscle tissue drops, anaerobic metabolism will resume. Toxic products of anaerobic metabolism, including lactic acid, accumulate in the tissue, and muscle fatigue results. Other interesting examples of limited anaerobic capability are found in the animal kingdom. Some diving ducks have an adaptation that allows them to draw oxygen from stored oxyhemoglobin and oxymyoglobin in blood and muscles. This adaptation permits them to remain submerged in water for extended periods. To prevent desiccation, mussels and clams close their shells when out of the water at low tide, and their metabolism shifts from aerobic to anaerobic. When once again in the water, the animals rapidly return to aerobic metabolism and purge themselves of the acid products of anaerobiosis accumulated while they were dry. [Douglas C. Pratt]
RESOURCES BOOKS Lea, A. G. H., and J. R. Piggott. Fermented Beverage Production. New York: Blackie, 1995. McArdle, W. D. Exercise Physiology: Energy, Nutrition, and Human Performance. 4th ed. Baltimore: Williams & Wilkins, 1996. Stanbury, P. F., A. Whitaker, and S. J. Hall. Principles of Fermentation Technology. 2nd ed. Tarrytown, NY: Pergamon, 1995.
PERIODICALS Klass, D. L. “Methane from Anaerobic Fermentation.” Science 223 (1984): 1021.
Aerobic sludge digestion
Wastewater treatment plants produce organic sludge as wastewater is treated; this sludge must be further treated before ultimate disposal. Sludges are generated from primary settling tanks, which are used to remove settleable particulate
solids, and from secondary clarifiers (settling basins), which are used to remove the excess biomass generated in secondary biological treatment units. Disposal of sludges from wastewater treatment processes is a costly and difficult problem. The processes used in sludge disposal include: (1) reduction in sludge volume, primarily by removal of water, which constitutes 97–98% of the sludge; (2) reduction of the volatile (organic) content of the sludge, which eliminates nuisance conditions by reducing putrescibility and reduces threats to human health by reducing levels of microorganisms; and (3) ultimate disposal of the residues. Aerobic sludge digestion is one process that may be used to reduce both the organic content and the volume of the sludge. Under aerobic conditions, a large portion of the organic matter in sludge may be oxidized biologically by microorganisms to carbon dioxide and water. The process results in approximately 50% reduction in solids content. Aerobic sludge digestion facilities may be designed for batch or continuous flow operations. In batch operations, sludge is added to a reaction tank while the contents are continuously aerated. Once the tank is filled, the sludges are aerated for two to three weeks, depending on the types of sludge. After aeration is discontinued, the solids and liquids are separated. Solids at concentrations of 2–4% are removed, and the clarified liquid supernatant is decanted and recycled to the wastewater treatment plant. In a continuous flow system, an aeration tank is utilized, followed by a settling tank. Aerobic sludge digestion is usually used only for biological sludges from secondary treatment units, in the absence of sludges from primary treatment units. The most common application is for the treatment of sludges wasted from extended aeration systems (a modification of the activated sludge system). Since there is no addition of an external food source, the microorganisms must utilize their own cell contents for metabolic purposes in a process called endogenous respiration. The remaining sludge is a mineralized sludge, with the remaining organic materials composed of cell walls and other cell fragments that are not readily biodegradable. The advantages of using aerobic digestion, as compared to the use of anaerobic digestion, include: (1) simplicity of operation and maintenance; (2) lower capital costs; (3) lower levels of biochemical oxygen demand (BOD) and phosphorus in the supernatant; (4) fewer effects from upsets such as the presence of toxic interferences or changes in loading and pH; (5) less odor; (6) nonexplosive operation; (7) greater reduction in grease and hexane solubles; (8) greater sludge fertilizer value; (9) shorter retention periods; and (10) an effective alternative for small wastewater treatment plants.
Disadvantages include: (1) higher operating costs, especially energy costs; (2) high sensitivity to ambient temperature (operation at temperatures below 59°F [15°C] may require excessive retention times to achieve stabilization, and if heating is required, aerobic digestion may not be cost-effective); (3) no useful byproduct such as the methane gas produced in anaerobic digestion; (4) variability in the ability to dewater to reduce sludge volume; (5) less reduction in volatile solids; and (6) unfavorable economics for larger wastewater treatment plants. [Judith Sims]
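Because water makes up 97–98% of the sludge, modest changes in solids content produce large changes in volume. The short sketch below is an added illustration, not part of the original entry; it assumes sludge density stays roughly constant, so that volume varies inversely with solids concentration (a standard mass-balance approximation).

    # Mass balance on dry solids: V1 * P1 = V2 * P2, assuming density ~constant,
    # where P is percent solids by weight.
    def volume_after_dewatering(v1_m3, solids_before_pct, solids_after_pct):
        return v1_m3 * solids_before_pct / solids_after_pct

    # Concentrating sludge from 2% to 4% solids halves its volume:
    print(volume_after_dewatering(100.0, 2.0, 4.0))  # -> 50.0 cubic meters

This is why removal of water is listed first among the sludge disposal steps: doubling the solids content cuts the volume to be handled in half.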
RESOURCES BOOKS Corbitt, R. A. “Wastewater Disposal.” In Standard Handbook of Environmental Engineering, edited by R. A. Corbitt. New York: McGraw-Hill, 1990. Gaudy Jr., A. F., and E. T. Gaudy. Microbiology for Environmental Scientists and Engineers. New York: McGraw-Hill, 1980. Peavy, H. S., D. R. Rowe, and G. Tchobanoglous. Environmental Engineering. New York: McGraw-Hill, 1985.
Aerosol
A suspension of particles, liquid or solid, in a gas. The term implies a degree of permanence in the suspension, which puts a rough upper limit on particle size of a few tens of micrometers at most (1 micrometer = 0.00004 in). Thus in proper use the term connotes the ensemble of the particles and the suspending gas. The atmospheric aerosol has two major components, generally referred to as coarse and fine particles, with different sources and different composition. Coarse particles result from mechanical processes, such as grinding. The more finely particles are ground, the more surface they have per unit of mass. Creating new surface requires energy, so the smallest average size that can be created by such processes is limited by the available energy. It is rare for such mechanically generated particles to be less than 1 µm (0.00004 in) in diameter. Fine particles, on the other hand, are formed by condensation from the vapor phase. For most substances, condensation is difficult from a uniform gaseous state; it requires the presence of pre-existing particles on which the vapors can deposit. Alternatively, very high concentrations of the vapor are required, compared with the concentration in equilibrium with the condensed material. Hence, fine particles form readily in combustion processes, when substances are vaporized and the gas is then quickly cooled. These can then serve as nuclei for the formation of larger particles, still in the fine particle size range, in the presence of condensable vapors. However, in the atmosphere such particles become rapidly more scarce with increasing size, and are relatively rare in sizes much larger than a few micrometers. At about 2 µm (0.00008 in), coarse and fine particles are about equally abundant. Using the term strictly, one rarely samples the atmospheric aerosol, but rather the particles out of the aerosol. The presence of aerosols is generally detected by their effect on light. Aerosols of a uniform particle size in the vicinity of the wavelengths of visible light can produce rather spectacular optical effects. In the laboratory, such aerosols can be produced by condensation of the heated vapors of certain oils on nuclei made by evaporating salts from heated filaments. If the suspending gas is cooled quickly, particle size is governed by the supply of vapor compared with the supply of nuclei, and by the time available for condensation to occur. Since these can all be made nearly constant throughout the gas, the resulting particles are quite uniform. It is also possible to produce uniform particles by spraying a dilute solution of a soluble material, then evaporating the solvent. If the spray head is vibrated in an appropriate frequency range, the drops will be uniform in size, with the size controlled by the frequency of vibration and the rate of flow of the spray. Obviously, the final particle size is also a function of the concentration of the sprayed solution. [James P. Lodge Jr.]
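The dependence of particle size on vibration frequency, flow rate, and solution concentration can be made concrete with a little arithmetic. The sketch below is an added illustration rather than part of the entry; it assumes the usual idealization that exactly one droplet detaches per vibration cycle, and the numerical inputs are hypothetical.

    import math

    def droplet_diameter_um(flow_ul_per_s, freq_hz):
        # One droplet per cycle, so droplet volume = flow rate / frequency.
        vol_um3 = (flow_ul_per_s / freq_hz) * 1e9  # 1 microliter = 1e9 cubic micrometers
        return (6.0 * vol_um3 / math.pi) ** (1.0 / 3.0)

    def dried_particle_diameter_um(droplet_um, solute_volume_fraction):
        # Evaporating the solvent shrinks the droplet by the cube root of the
        # solution's solute volume fraction.
        return droplet_um * solute_volume_fraction ** (1.0 / 3.0)

    d = droplet_diameter_um(flow_ul_per_s=0.06, freq_hz=60_000)
    print(round(d, 1))                                    # ~12.4 µm droplets
    print(round(dried_particle_diameter_um(d, 1e-3), 1))  # ~1.2 µm dried particles

The cube-root relationships explain why a thousandfold dilution of the sprayed solution changes the final particle diameter by only a factor of ten.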
RESOURCES BOOKS Jennings, S. G., ed. Aerosol Effects on Climate. Tucson, AZ: University of Arizona Press, 1993. Reist, P. Aerosol Science and Technology. New York: McGraw-Hill, 1992.
PERIODICALS Monastersky, R. “Aerosols: Critical Questions for Climate.” Science News 138 (25 August 1990): 118. Sun, M. “Acid Aerosols Called Health Hazard.” Science 240 (24 June 1988): 1727.
Aflatoxin Toxic compounds produced by some fungi and among the most potent naturally occurring carcinogens for humans and animals. Aflatoxin intake is positively related to high incidence of liver cancer in humans in many developing countries. In many farm animals aflatoxin can cause acute or chronic diseases. Aflatoxin is a metabolic by-product produced by the fungi Aspergillus flavus and the closely related species Aspergillus parasiticus growing on grains and decaying organic compounds. There are four naturally occurring aflatoxins: B1, B2, G1, and G2. All of these compounds will fluoresce under a UV (black) light around 425– 450 nm providing a qualitative test for the presence of afla-
toxins. In general, starch grains, such as corn, are infected in storage when the moisture content of the grain reaches 17–18% and the temperature is 79–99°F (26–37°C). However, the fungus may also infect grain in the field under hot, dry conditions.
African Wildlife Foundation
The African Wildlife Foundation (AWF), headquartered in Washington, DC, was established in 1961 to promote the protection of the animals native to Africa. The group maintains offices in both Washington, DC, and Nairobi, Kenya. The African headquarters promotes the idea that Africans themselves are best able to protect the wildlife of their continent. AWF also established two colleges of wildlife management in Africa (in Tanzania and Cameroon), so that rangers and park and reserve wardens can be professionally trained. Conservation education, especially as it relates to African wildlife, has always been a major AWF goal—in fact, it has been the association’s primary focus since its inception. AWF carries out its mandate to protect Africa’s wildlife through a wide range of projects and activities. Since 1961, AWF has provided a radio communication network in Africa, as well as several airplanes and jeeps for antipoaching patrols. These were instrumental in facilitating the work of Dr. Richard Leakey in Tsavo National Park, Kenya. In 1999, the African Heartlands project was set up to try to connect large areas of wild land that are home to wild animals. AWF also attempts to involve people who live adjacent to protected wildlife areas by asking them to take joint responsibility for natural resources. The program demonstrates that land conservation and the needs of neighboring people and their livestock can be balanced, and the benefits shared. Currently there are four heartland areas: Maasai Steppe, Kilimanjaro, Virunga, and Samburu. Another highly successful AWF program is the Elephant Awareness Campaign. Its slogan, “Only Elephants Should Wear Ivory,” has become extremely popular, both in Africa and in the United States, and is largely responsible for bringing the plight of the African elephant (Loxodonta africana) to public awareness. Although AWF is concerned with all the wildlife of Africa, in recent years the group has focused on saving African elephants, black rhinoceroses (Diceros bicornis), and mountain gorillas (Gorilla gorilla beringei). These species are seriously endangered, and are benefiting from AWF’s Critical Habitats and Species Program, which works to aid these and other animals in critical danger. From its inception, AWF has supported education centers, wildlife clubs, national parks, and reserves. There
is even a course at the College of African Wildlife Management in Tanzania that allows students to learn community conservation activities and helps park officials learn to work with residents living adjacent to protected areas. AWF also involves teachers in its endeavors with a series of publications, Let’s Conserve Our Wildlife. Written in Swahili, the series includes teacher’s guides and has been used in both elementary schools and adult literacy classes in African villages. AWF also publishes the quarterly magazine Wildlife News. [Cathy M. Falk]
RESOURCES ORGANIZATIONS African Wildlife Foundation, 1400 16th Street, NW, Washington, DC USA 20036 (202) 939-3333, Fax: (202) 939-3332, Email:
[email protected],
Africanized bees
The Africanized bee (Apis mellifera scutellata), or “killer bee,” is an extremely aggressive honeybee. This bee developed when African honeybees were brought to Brazil to mate with other bees to increase honey production. The imported bees were accidentally released, and they have since spread northward, traveling at a rate of 300 mi (483 km) per year. The bees first appeared in the United States at the Texas-Mexico border in late 1990. The bees get their “killer” title because of their vigorous defense of colonies or hives when disturbed. Aside from temperament, they are much like their counterparts now in the United States, which are European in lineage. Africanized bees are slightly smaller than their more passive cousins. Honeybees are social insects and live and work together in colonies. When bees fly from plant to plant, they help pollinate flowers and crops. Africanized bees, however, seem to be more interested in reproducing than in honey production or pollination. For this reason they are constantly swarming and moving around, while domestic bees tend to stay in local, managed colonies. Because Africanized bees are also much more aggressive than domestic honey bees when their colonies are disturbed, they can be harmful to people who are allergic to bee stings. More problematic than the threat to humans, however, is the impact the bees will have on fruit and vegetable industries in the southern parts of the United States. Many fruit and vegetable growers depend on honey bees for pollination, and in places where the Africanized bees have appeared, honey production has fallen by as much as 80%. Beekeepers in this country are experimenting with “re-queening” their
colonies regularly to ensure that the colonies reproduce gentle offspring. Another danger is the propensity of the Africanized bee to mate with honey bees of European lineage, a kind of “infiltration” of the gene pool of more domestic bees. Researchers from the U.S. Department of Agriculture (USDA) are watching for the results of this interbreeding, particularly for those bees that display European-style physiques and African behaviors, or vice versa. When Africanized bees first appeared in southern Texas, researchers from the USDA’s Honeybee Research Laboratory in Weslaco, Texas, destroyed the colony, estimated at 5,000 bees. Some of the members of the 3-lb (1.4 kg) colony were preserved in alcohol and others in freezers for future analysis. Researchers are also developing management techniques, including the annual introduction of young mated European queens into domestic hives, in an attempt to maintain gentle production stock and ensure honey production and pollination. As of 2002, there were 140 counties in Texas, nine in New Mexico, nine in California, three in Nevada, and all 15 counties in Arizona in which Africanized bee colonies had been located. There have also been reported colonies in Puerto Rico and the Virgin Islands. Southern Nevada bees were almost 90% Africanized in June of 2001. Most of Texas has been labeled a quarantine zone, and beekeepers are not able to move hives out of these boundaries. The largest colony found to date was in southern Phoenix, Arizona. The hive was almost 6 ft (1.8 m) long and held about 50,000 Africanized bees. [Linda Rehkopf]

An Africanized bee collecting grass pollen in Brazil. (Photograph by Scott Camazine. Photo Researchers Inc. Reproduced by permission.)

RESOURCES PERIODICALS “African Bees Make U.S. Debut.” Science News 138 (October 27, 1990): 261. Barinaga, M. “How African Are ‘Killer’ Bees?” Science 250 (November 2, 1990): 628–629. Hubbell, S. “Maybe the ‘Killer’ Bee Should Be Called the ‘Bravo’ Instead.” Smithsonian 22 (September 1991): 116–124. White, W. “The Bees From Rio Claro.” The New Yorker 67 (September 16, 1991): 36–53. Winston, M. Killer Bees: The Africanized Honey Bee in the Americas. Cambridge: Harvard University Press, 1992.

OTHER “Africanized Bees in the Americas.” Sting Shield.com Page. April 25, 2002 [cited May 2002].
Agency for Toxic Substances and Disease Registry The Agency for Toxic Substances and Disease Registry (ATSDR) studies the health effects of hazardous substances in general and at specific locations. As indicated by its title, the Agency maintains a registry of people exposed to toxic chemicals. Along with the Environmental Protection Agency (EPA), ATSDR prepares and updates profiles of toxic substances. In addition, ATSDR assesses the potential dangers posed to human health by exposure to hazardous substances at Superfund sites. The Agency will also perform health assessments when petitioned by a community. Though ATSDR’s early health assessments have been criticized, the Agency’s later assessments and other products are considered more useful. ATSDR was created in 1980 by the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), also known as the Superfund, as part of the U.S. Department of Health and Human Services. As
originally conceived, ATSDR’s role was limited to performing health studies and examining the relationship between toxic substances and disease. The Superfund Amendments and Reauthorization Act (SARA) of 1986 codified ATSDR’s responsibility for assessing health threats at Superfund sites. ATSDR, along with the national Centers for Disease Control and state health departments, conducts health surveys in communities near locations that have been
placed on the Superfund’s National Priorities List for cleanup. ATSDR performed 951 health assessments in the two years after the law was passed. Approximately one quarter of these assessments were memos or reports that had been completed prior to 1986 and were simply re-labeled as health assessments. These first assessments have been harshly criticized. The General Accounting Office (GAO), a congressional agency that reviews the actions of the federal administration, charged that most of these assessments were inadequate. Some argued that the agency was underfunded and poorly organized. Recently, ATSDR received less than 5% of the $1.6 billion appropriated for the Superfund project. Subsequent health assessments, more than 200 of them, have generally been more complete, but they still may not be adequate in informing the community and the EPA of the dangers at specific sites. In general, ATSDR identifies a local agency to help prepare the health surveys. Unlike many of the first assessments, more recent surveys now include site visits and face-to-face interviews. However, other data on environmental effects are limited. ATSDR considers only environmental information provided by the companies that created the hazard or data collected by the EPA. In addition, ATSDR assesses health risks only from illegal emissions, not from “permitted” emissions. Some scientists contend that not enough is known about the health effects of exposure to hazardous substances to make conclusive health assessments. Reaction to the performance of ATSDR’s other functions has been generally more positive. As mandated by SARA, ATSDR and the EPA have prepared hundreds of toxicological profiles of hazardous substances. These profiles have been judged generally helpful, and the GAO praised ATSDR’s registry of people who have been exposed to toxic substances. [Alair MacLean]
RESOURCES BOOKS Environmental Epidemiology: Public Health and Hazardous Wastes. National Research Council. Committee on Environmental Epidemiology. Washington, DC: National Academy Press, 1991. Lewis, S., B. Keating, and D. Russell. Inconclusive by Design: Waste, Fraud and Abuse in Federal Environmental Health Research. Boston: National Toxics Campaign Fund; and Harvey, LA: Environmental Health Network, 1992.
OTHER Superfund: Public Health Assessments Incomplete and of Questionable Value. Washington, DC: General Accounting Office, 1991.
ORGANIZATIONS The ATSDR Information Center, (404) 498-0110, Fax: (404) 498-0057, Toll Free: (888) 422-8737, Email:
[email protected],
Agent Orange
Agent Orange is a herbicide recognized for its use during the Vietnam War. It is composed of equal parts of two chemicals: 2,4-D and 2,4,5-T. A less potent form of the herbicide has also been used for clearing heavy growth on a commercial basis for a number of years; however, it does not contain 2,4-D. On a commercial level, the herbicide was used in forestry control as early as the 1930s. From the 1950s through the 1960s, Agent Orange was also exported. For example, New Brunswick, Canada, was the scene of major Agent Orange spraying to control forests for industrial development. In Malaysia in the 1950s, the British used compounds with the chemical mixture 2,4,5-T to clear communication routes. In the United States, herbicides were considered for military use towards the end of World War II, during the action in the Pacific. However, the first American military field tests were actually conducted in Puerto Rico, Texas, and Fort Drum, New York, in 1959. That same year—1959—the Crops Division at Fort Detrick, Maryland, initiated the first large-scale military defoliation effort. The project involved the aerial application of Agent Orange to about 4 mi² (10.4 km²) of vegetation. The experiment proved highly successful; the military had found an effective tool. By 1960, the South Vietnamese government, aware of these early experiments, had requested that the United States conduct trials of these herbicides for use against guerrilla forces. Spraying of Agent Orange in Southeast Asia began in 1961. South Vietnam President Diem stated that he wanted this “powder” in order to destroy the rice and the food crops that would be used by the Viet Cong. Thus began the use of herbicides as a weapon of war. The United States military became involved, recognizing the limitations of fighting in foreign territory with troops that were not accustomed to jungle conditions. The military wanted to clear communication lines and open up areas of visibility in order to enhance their opportunities for success. Eventually, the United States military took complete control of the spray missions. Initially, there were to be restrictions: the spraying was to be limited to clearing power lines, roadsides, railroads, and other lines of communication, and areas adjacent to depots. Eventually, the spraying was used to defoliate the thick jungle brush, thereby obliterating enemy hiding places. Once under the authority of the military, and with no checks or restraints, the spraying continued to increase in
Deforestation of the Viet Cong jungle in South Vietnam. (AP/Wide World Photos. Reproduced by permission.)
intensity and abandon, escalating in scope because of military pressure. It was eventually used to destroy crops, mainly rice, in an effort to deprive the enemy of food. Unfortunately, the civilian population—Vietnamese men, women, and children—was also affected. The United States military sprayed 3.6 million acres (1.5 million ha) with 19 million gal (72 million l) of Agent Orange over nine years. The spraying also became useful in clearing military base perimeters, cache sites, and waterways. Base perimeters were often sprayed more than once. In the case of dense jungle growth, one application of spray was made for the upper and another for the lower layers of vegetation. Inland forests, mangrove forests, and cultivated lands were all targets. Through Project Ranch Hand—the Air Force team assigned to the spray missions—Agent Orange became the most widely produced and dispensed defoliant in Vietnam. Military requirements for herbicide use were developed by the Army’s Chemical Operations Division, J-3, Military Assistance Command, Vietnam (MACV). With Project Ranch Hand underway, the spray missions increased monthly after 1962. This increase was made possible by the continued military promises to stay away from the civilians
or to re-settle those civilians and re-supply the food in any areas where herbicides destroyed the food of the innocent. These promises were never kept. The use of herbicides for crop destruction peaked in 1965, when 45% of the total spraying was designed to destroy crops. Initially, the aerial spraying took place near Saigon. Eventually the geographical base was widened. During the 1967 expansion period of herbicide procurement, when requirements had become greater than the industries’ ability to produce, the Air Force and Joint Chiefs of Staff became actively involved in the herbicide program. All production for commercial use was diverted to the military, and the Department of Defense (DOD) was appointed to deal with problems of procurement and production. Commercial producers were encouraged to expand their facilities and build new plants, and the DOD made attractive offers to companies that might be induced to manufacture herbicides. A number of companies were awarded contracts. Working closely with the military, certain chemical companies sent technical advisors to Vietnam to instruct personnel on the methods and techniques necessary for effective use of the herbicides.
During the peak of the spraying, approximately 129 sorties were flown per aircraft. Twenty-four UC-123B aircraft were used, averaging 39 sorties per day. In addition, there were trucks and helicopters that went on spraying missions, backed up by such countries as Australia. C-123 cargo planes and helicopters were also used. Helicopters flew without cargo doors so that frequent ground fire could be returned, but the rotary blades would kick up gusts of spray, delivering a powerful dose onto the faces and bodies of the men inside the aircraft. The dense Vietnamese jungle growth required two applications to defoliate both upper and lower layers of vegetation. On the ground, both enemy troops and Vietnamese civilians came in contact with the defoliant. American troops were also exposed. They could inhale the fine misty spray or be splashed in the sudden and unexpected deluge of an emergency dumping. Readily absorbing the chemicals through their skin and lungs, hundreds of thousands of United States military troops were exposed as they lived on the sprayed bases, slept near empty drums, and drank and washed in water in areas where defoliation had occurred. They ate food that had been brushed with spray. Empty herbicide drums were indiscriminately used and improperly stored. Volatile fumes from these drums caused damage to shade trees and to anyone near the fumes. Those handling the herbicides in support of a particular project were directly and consistently exposed. Nearly three million veterans served in Southeast Asia. There is growing speculation that nearly everyone who was in Vietnam was eventually exposed to some degree—far less a possibility for those stationed in urban centers or on the waters. According to official sources, in addition to the Ranch Hand group, at least three groups were exposed:
• a group considered secondary support personnel, including Army pilots who may have been involved in helicopter spraying, along with Navy and Marine pilots;
• those who transported the herbicide to Saigon, and from there to Bien Hoa and Da Nang, in the omnipresent 55-gallon (208 l) containers;
• specialized mechanics, electricians, and technical personnel assigned to work on various aircraft; many of this group were not specifically assigned to Ranch Hand but had to work in aircraft that were repeatedly contaminated.
Agent Orange was used in Vietnam in undiluted form at the rate of 3–4 gal (11.4–15.2 l) per acre. Some 13.8 lb (6.27 kg) of the chemical 2,4,5-T were added to 12 lb (5.5 kg) of 2,4-D per acre, a nearly 50-50 ratio. This intensity is 13.3 lb (6.06 kg) per acre more than was recommended by the military’s own manual. Computer tapes (HERBS TAPES) now available show that some areas were sprayed
as much as 25 times in just a few short months, thereby dramatically increasing the exposure of anyone within those sprayed areas. Between 1962 and 1971 an estimated 11.2 million gal (42.4 million l) of Agent Orange were dumped over South Vietnam. Evaluations show that the chemical killed and defoliated 90–95% of the treated vegetation. Thirty-six percent of all mangrove forest areas in South Vietnam were destroyed. Viet Cong tunnel openings, caves, and above-ground shelters were revealed to the aircraft once the vegetation died. The herbicides were shipped in drums identified by an orange stripe and a contract identification number that enabled the government to identify the specific manufacturer. The drums were sent to a number of central transportation points for shipment to Vietnam. Agent Orange is contaminated by the chemical dioxin, specifically TCDD. In Vietnam, the dioxin concentration in Agent Orange varied from parts per billion (ppb) to parts per million (ppm), depending on each manufacturer’s production methods. The highest reported concentration in Agent Orange was 45 ppm. The Environmental Protection Agency (EPA) evacuated Times Beach, Missouri, when tests revealed soil samples there with two parts per billion of dioxin. The EPA has stated that one ppb is dangerous to humans. Ten years after the spraying ended, the agricultural areas remained barren. Damaging amounts of dioxin stayed in the soil, infecting the food chain and exposing the Vietnamese people. As a result there is some concern that the high levels of TCDD are responsible for the infant mortality, birth defects, and spontaneous abortions that occur in higher numbers in the once-sprayed areas of Vietnam. Another report indicates that thirty years after Agent Orange contaminated the area, there is 100 times as much dioxin in the bloodstream of people living in the area as in those living in non-contaminated areas of Vietnam. This is a result of the dioxin found in the soil of the once heavily sprayed land. The chemical is then passed on to humans through the food they eat. Consequently, dioxin is also spread to infants through the mother’s breast milk, which will undoubtedly affect the child’s development. In 1991 Congress passed the Agent Orange Act (Public Law 102-4), which funded extensive scientific study of the long-term health effects of Agent Orange and other herbicides used in Vietnam. As of early 2002, Agent Orange had been linked to the development of peripheral neuropathy, type II diabetes, prostate cancer, multiple myeloma, lymphomas, soft tissue sarcomas, and respiratory cancers. Researchers have also found a possible correlation between dioxin and the development of spina bifida, a birth defect, and childhood leukemia in offspring of exposed vets. It is important to acknowledge that the statistics do not
necessarily show a strong link between exposure to Agent Orange or TCDD and some of the conditions listed above. However, Vietnam veterans who were honorably discharged and have any of these “presumptive” conditions (i.e., conditions presumed caused by wartime exposure) are entitled to Veterans Administration (VA) health care benefits and disability compensation under federal law. Unfortunately, many Vietnamese civilians will not receive any benefits despite the evidence that they continue to suffer from the effects of Agent Orange. [Liane Clorfene Casten and Paula Anne Ford-Martin]
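As a cross-check on the application figures quoted above, the following sketch simply re-derives the average spraying rate from the entry's own numbers (19 million gallons over 3.6 million acres); it is arithmetic on those figures, not an independent estimate.

    # Figures quoted in this entry: 19 million gallons sprayed over
    # 3.6 million acres, against a nominal rate of 3-4 gallons per acre.
    total_gallons = 19_000_000
    total_acres = 3_600_000

    average_rate = total_gallons / total_acres
    print(f"{average_rate:.1f} gal/acre")  # -> 5.3 gal/acre
    # The average exceeds the nominal 3-4 gal/acre because many areas were
    # sprayed more than once, in some cases as many as 25 times.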
RESOURCES BOOKS Committee to Review the Health Effects in Vietnam Veterans of Exposure to Herbicides, Division of Health Promotion and Disease Prevention, Institute of Medicine. Veterans and Agent Orange: Update 2000. Washington, DC: National Academy Press, 2001.
PERIODICALS “Agent Orange Exposure Linked to Type 2 Diabetes.” Nation’s Health 30, no. 11 (December 2000/January 2001): 11. “Agent Orange Victims.” Earth Island Journal 17, no. 1 (Spring 2002): 15. Dreyfus, Robert. “Apocalypse Still.” Mother Jones (January/February 2000). Korn, Peter. “The Persisting Poison: Agent Orange in Vietnam.” The Nation 252, no. 13 (April 8, 1991): 440. Young, Emma. “Foul Fare.” New Scientist 170, no. 2292 (May 26, 2001): 13.
OTHER
U.S. Veterans Affairs (VA). Agent Orange. [June 2002].
Agglomeration Any process by which a group of individual particles is clumped together into a single mass. The term has a number of specialized uses. Some types of rocks are formed by the agglomeration of particles of sand, clay, or some other material. In geology, an agglomerate is a rock composed of volcanic fragments. One technique for dealing with air pollution is ultrasonic agglomeration. A source of very high frequency sound is attached to a smokestack, and the ultrasound produced by this source causes tiny particulate matter in waste gases to agglomerate into particles large enough to be collected.
Agricultural chemicals
The term agricultural chemical refers to any substance involved in the growth or utilization of any plant or animal of economic importance to humans. An agricultural chemical may be a natural product, such as urea, or a synthetic chemical, such as DDT. The agricultural chemicals now in use
include fertilizers, pesticides, growth regulators, animal feed supplements, and raw materials for use in chemical processes. In the broadest sense, agricultural chemicals can be divided into two large categories: those that promote the growth of a plant or animal and those that protect plants or animals. To the first group belong plant fertilizers and animal food supplements; to the latter belong pesticides, herbicides, animal vaccines, and antibiotics. In order to stay healthy and grow normally, crops require a number of nutrients, some in relatively large quantities, called macronutrients, and others in relatively small quantities, called micronutrients. Nitrogen, phosphorus, and potassium are considered macronutrients, while boron, chlorine, copper, iron, manganese, and zinc, among others, are micronutrients. Farmers have long understood the importance of replenishing the soil, and they have traditionally done so by natural means, using such materials as manure, dead fish, or compost. Synthetic fertilizers were first available in the early twentieth century, but they became widely used only after World War II. By 1990 farmers in the United States were using about 20 million tons (20.4 million metric tons) of these fertilizers a year. Synthetic fertilizers are designed to provide either a single nutrient or some combination of nutrients. Examples of single-component or “straight” fertilizers are urea (NH2CONH2), which supplies nitrogen, and potassium chloride (KCl), which supplies potassium. The composition of “mixed” fertilizers, those containing more than one nutrient, is indicated by the analysis printed on their container. An 8-10-12 fertilizer, for example, contains 8% nitrogen by weight, 10% phosphorus, and 12% potassium. Synthetic fertilizers can be designed to release nutrients almost immediately (“quick-acting”) or over longer periods of time (“time-release”). They may also contain specific amounts of one or more trace nutrients needed for particular types of crops or soil. Controlling micronutrients is one of the most important problems in fertilizer compounding and use; the presence of low concentrations of some elements can be critical to a plant’s health, while higher levels can be toxic to the same plants or to animals that ingest the micronutrient. Plant growth patterns can also be influenced by direct application of certain chemicals. For example, the gibberellins are a class of compounds that can dramatically affect the rate at which plants grow and fruits and vegetables ripen. They have been used for a variety of purposes, ranging from the hastening of root development to the delay of fruit ripening. Delaying ripening is most important for marketing agricultural products because it extends the time a crop can be transported and stored on grocery shelves. Other kinds of chemicals used in the processing, transporting, and storage
Environmental Encyclopedia 3 of fruits and vegetables include those that slow down or speed up ripening (maleic hydrazide, ethylene oxide, potassium permanganate, ethylene, and acetylene are examples), that reduce weight loss (chlorophenoxyacetic acid, for example), retain green color (cycloheximide), and control firmness (ethylene oxide). The term agricultural chemical is most likely to bring to mind the range of chemicals used to protect plants against competing organisms: pesticides and herbicides. These chemicals disable or kill bacteria, fungi, rodents, worms, snails and slugs, insects, mites, algae, termites, or any other species of plant or animal that feeds upon, competes with, or otherwise interferes with the growth of crops. Such chemicals are named according to the organism against which they are designed to act. Some examples are fungicides (designed to kill fungi), insecticides (used against insects), nematicides (to kill round worms), avicides (to control birds), and herbicides (to combat plants). In 1990, 393 million tons of herbicides, 64 million tons of insecticides, and 8 million tons of other pesticides were used on American farmlands. The introduction of synthetic pesticides in the years following World War II produced spectacular benefits for farmers. More than 50 major new products appeared between 1947 and 1967, resulting in yield increases in the United States ranging from 400% for corn to 150% for sorghum and 100% for wheat and soybeans. Similar increases in less developed countries, resulting from the use of both synthetic fertilizers and pesticides, eventually became known as the Green Revolution. By the 1970s, however, the environmental consequences of using synthetic pesticides became obvious. Chemicals were becoming less effective as pests developed resistances to them, and their toxic effects on other organisms had grown more apparent. Farmers were also discovering drawbacks to chemical fertilizers as they found that they had to use larger and larger quantities each year in order to maintain crop yields. One solution to the environmental hazards posed by synthetic pesticides is the use of natural chemicals such as juvenile hormones, sex attractants, and anti-feedant compounds. The development of such natural pest-control materials has, however, been relatively modest; the vast majority of agricultural companies and individual farmers continue to use synthetic chemicals that have served them so well for over a half century. Chemicals are also used to maintain and protect livestock. At one time, farm animals were fed almost exclusively on readily available natural foods. They grazed on rangelands or were fed hay or other grasses. Today, carefully blended chemical supplements are commonly added to the diet of most farm animals. These supplements have been determined on the basis of extensive studies of the nutrients that contribute to the growth or milk production of cows,
sheep, goats, and other types of livestock. A typical animal supplement diet consists of various vitamins, minerals, amino acids, and nonprotein (simple) nitrogen compounds. The precise formulation depends primarily on the species; a vitamin supplement for cattle, for example, tends to include A, D, and E, while swine and poultry diets would also contain vitamin K, riboflavin, niacin, pantothenic acid, and choline. A number of chemicals added to animal feed serve no nutritional purpose but provide other benefits. For example, the addition of certain hormones to the feed of dairy cows can significantly increase their output of milk. Genetic engineering is also becoming increasingly important in the modification of crops and livestock. Cows injected with bovine somatotropin, a hormone produced through genetic engineering, produce a significantly larger quantity of milk. It is estimated that infectious diseases cause the death of 15–20% of all farm animals each year. Just as plants are protected from pests by pesticides, so livestock are protected from disease organisms by immunization, antibiotics, and other techniques. Animals are vaccinated against species-specific diseases, and farmers administer antibiotics, sulfonamides, nitrofurans, arsenicals, and other chemicals that protect against disease-causing organisms. The use of chemicals with livestock can have deleterious effects, just as crop chemicals have. In the 1960s, for example, the hormone diethylstilbestrol (DES) was widely used to stimulate the growth of cattle, but scientists found that detectable residues of the hormone remained in meat sold from the slaughtered animals. DES is now considered a carcinogen, and the U.S. Food and Drug Administration has banned its use in cattle feed since 1979. [David E. Newton]
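The fertilizer analysis described earlier in this entry can be turned into a short worked example. The sketch below follows the entry's simplified reading of an 8-10-12 grade as percent nitrogen, phosphorus, and potassium by weight (commercial labels actually state P2O5 and K2O equivalents); the function is hypothetical, written only for illustration.

    # Nutrient mass in a bag of mixed fertilizer, computed from its grade.
    def nutrient_pounds(bag_weight_lb, grade_pct):
        n, p, k = grade_pct
        return {"nitrogen": bag_weight_lb * n / 100,
                "phosphorus": bag_weight_lb * p / 100,
                "potassium": bag_weight_lb * k / 100}

    # A 50-lb bag of 8-10-12 fertilizer:
    print(nutrient_pounds(50, (8, 10, 12)))
    # -> {'nitrogen': 4.0, 'phosphorus': 5.0, 'potassium': 6.0}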
RESOURCES BOOKS Benning, L. E. Beneath the Bottom Line: Agricultural Approaches to Reduce Agrichemical Contamination of Groundwater. Washington, DC: Office of Technology Assessment, 1990. ———, and J. H. Montgomery. Agrochemicals Desk Reference: Environmental Data. Boca Raton, FL: Lewis, 1993. ———, and T. E. Waddell. Managing Agricultural Chemicals in the Environment: The Case for a Multimedia Approach. Washington, DC: Conservation Foundation, 1988. Chemistry and the Food System, A Study by the Committee on Chemistry and Public Affairs of the American Chemical Society. Washington, DC: American Chemical Society, 1980.
Agricultural environmental management
The complex interaction of agriculture and environment has been an issue since the beginning of human civilization. Humans grow
food to eat and also hunt animals that depend on natural resources for healthy, ongoing habitats. Therefore, the world’s human population must balance farming activities with maintaining natural resources. The term agriculture originally meant the act of cultivating fields or growing crops, but it has expanded to include raising livestock as well. When early settlers began farming and ranching in the United States, they faced pristine wilderness and open prairies. There was little cause for concern about protecting the environment, and for two centuries the country’s land and water were aggressively used to create a healthy supply of ample food for Americans. In fact, many American families settled in rural areas and made a living as farmers and ranchers, passing the family business down through generations. By the 1930s, the federal government began requiring farmers to idle certain acres of land to prevent oversupply of food and to protect exhausted soil. Since that time, agriculture has become a complex science, as farmers must carefully manage soil and water to lessen the risk of degrading the soil and its surrounding environment or depleting water tables beneath the land’s surface. In fact, farming and ranching present several environmental challenges that require careful management by farmers and by the local and federal regulatory agencies that guide their activities. The science of applying principles of ecology to agriculture is called agroecology. Those involved in agroecology develop farming methods that use fewer synthetic (man-made) pesticides and fertilizers and encourage organic farming. They also work to conserve energy and water. Soil erosion, conversion of land to agricultural use, introduction of fertilizer and pesticides, animal wastes, and irrigation are all parts of farming that can lead to changes in the quality or availability of water. An expanding human population has led to increased farming and accelerated soil erosion. When soil has a low capacity to retain water, farmers must pump groundwater up and spray it over crops. After years of doing so, the local water table will eventually fall. This can affect native vegetation in the area. The industry calls this balancing of environmental protection against agricultural effects sustainability or sustainable development. In some parts of the world, as in the High Plains of the United States or parts of Saudi Arabia, populations and agriculture are depleting water aquifers faster than the natural environment can replenish them. Sustainable development involves dedicated, scientifically based plans to ensure that agricultural activity is managed in such a way that aquifers are not prematurely depleted. Agroforestry is a method of cultivating both crops and trees on the same land. Between rows of trees, farmers plant agricultural crops that generate income during the time it takes the trees to grow mature enough to produce earnings from nuts or lumber.
Environmental Encyclopedia 3 Increased modernization of agriculture also impacts the environment. Traditional farming practice, which continues in underdeveloped countries today, consists of subsistence agriculture. In subsistence farming, just enough crops and livestock are raised to meet the needs of a particular family. However, today large farms produce food for huge populations. More than half of the world’s working population is employed by some agricultural or agriculturally associated industry. Almost 40% of the world’s land area is devoted to agriculture (including permanent pasture). The growing use of machines, pesticides and man-made fertilizers have all seriously impacted the environment. For example, the use of pesticides like DDT in the 1960s were identified as leading to the deaths of certain species of birds. Most western countries banned use of the pesticides and the bird populations soon recovered. Today, use of pesticides is strictly regulated in the United States. Many more subtle effects of farming occur on the environment. When grasslands and wetlands or forests are converted to crops, and when crops are not rotated, eventually, the land changes to the point that entire species of plants and animals can become threatened. Urbanization also imposes onto farmland and cuts the amount of land available for farming. Throughout the world, countries and organizations develop strategies to protect the environment, natural habitats and resources while still supplying the food our populations require. In 1992, The United Nations Conference on Environment and Development in Rio de Janeiro focused on how to sustain the world’s natural resources but balance good policies on environment and community vitality. In the United States, the Department of Agriculture has published its own policy on sustainable development, which works toward balancing economics, environment and social needs concerning agriculture. In 1993, an Executive Order formed the President’s Council on Sustainable Development (PCSD) to develop new approaches to achieve economic and environmental goals for public policy in agriculture. Guiding principles include sections on agriculture, forestry and rural community development. According to the United States Environmental Protection Agency (EPA), Agricultural Environmental Management (AEM) is one of the most innovative programs in New York State. The program was begun in June 2000 when Governor George Pataki introduced legislation to the state’s Senate and Assembly proposing a partnership to promote farming’s good stewardship of land and to provide the funding and support of farmers’ efforts. The bill was passed and signed into law by the governor on August 24, 2000. The purpose of the law is to help farmers develop agricultural environmental management plans that control agricultural pollution and comply with federal, state and local regula-
on the use of land, water quality, and other environmental concerns. New York's AEM program brings together agencies from state, local, and federal governments, conservation representatives, private-sector businesses, and farmers. The program is voluntary and offers education, technical assistance, and financial incentives to farmers who participate. An example of a successful AEM project occurred at a dairy farm in central New York: the farm composted the animals' solid wastes, which reduced the amount of waste spread on the fields and, in turn, reduced pollution in the local watershed.

The New York State Department of Agriculture and Markets oversees the program, which begins when a farmer expresses interest in AEM. The farmer then completes a series of five tiers. In Tier I, the farmer fills out a short questionnaire that surveys current farming activities and future plans to identify potential environmental concerns. Tier II involves worksheets that document current activities promoting stewardship of the environment and help prioritize any environmental concerns. In Tier III, a conservation plan is developed that is tailored specifically to the individual farm; the farmer works together with an AEM coordinator and several members of the cooperating agency staff. Under Tier IV, agricultural agencies and consultants provide the farmer with educational, technical, and financial assistance to implement best management practices for preventing pollution of water bodies in the farm's area. The plans use Natural Resources Conservation Service standards and guidance from cooperating professional engineers. Finally, in Tier V, farmers receive ongoing evaluations to ensure that the plan they have devised helps protect the environment and also keeps the farm business viable.

Funding for the AEM program comes from a variety of sources, including New York's Clean Water/Clean Air Bond Act and the State Environmental Protection Fund. Local Soil and Water Conservation Districts (SWCDs) also partner in the effort, and farmers can access funds through these districts. The EPA says involvement of the SWCDs has likely been a positive factor in farmers' acceptance of the program. Though New York is perceived as mostly urban, agriculture is a huge business in the state. The AEM program serves important environmental functions and helps keep New York State's farms economically viable. More than 7,000 farms participate in the program.

[Teresa G. Norris]
RESOURCES
BOOKS
Calow, Peter. The Encyclopedia of Ecology and Environmental Management. Malden, MA: Blackwell Science, Inc., 1998.
PERIODICALS
Ervin, D. E., et al. "Agriculture and Environment: A New Strategic Vision." Environment 40, no. 6 (July-August 1998): 8.
ORGANIZATIONS
New York State Department of Agriculture and Markets, 1 Winners Circle, Albany, NY USA 12235, (518) 457-3738, Fax: (518) 457-3412, Email: [email protected], http://www.agmkt.state.ny.us
Sustainable Development, United States Department of Agriculture, 14th and Independence SW, Washington, DC USA 20250, (202) 720-5447, Email: [email protected], http://www.usda.gov
Agricultural pollution

The development of modern agricultural practices is one of the great success stories of the applied sciences. Improved plowing techniques, new pesticides and fertilizers, and better strains of crops are among the factors that have resulted in significant increases in agricultural productivity. Yet these improvements have not come without cost to the environment, and sometimes to human health: modern agricultural practices have contributed to the pollution of air, water, and land.

Air pollution may be the most memorable, if not the most significant, of these consequences. During the 1920s and 1930s, huge amounts of fertile topsoil were blown away across vast stretches of the Great Plains, an area that eventually became known as the Dust Bowl. The problem occurred because farmers either did not know about or chose not to use techniques for protecting and conserving their soil. The soil then blew away during droughts, resulting not only in the loss of valuable farmland but also in the pollution of the surrounding atmosphere. Soil conservation techniques developed rapidly in the 1930s, including contour plowing, strip cropping, crop rotation, windbreaks, and minimum- or no-tillage farming, and thereby greatly reduced the possibility of erosion on such a scale. Such events have continued to occur, though less dramatically, and in recent decades they have presented new problems. When topsoils are blown away by winds today, they can carry with them the pesticides, herbicides, and other crop chemicals now so widely used. In the worst cases, these chemicals have added to the air pollutants that endanger the health of plants and animals, including humans. Ammonia, released from the decay of fertilizers, is one example of a compound that may cause minor irritation to the human respiratory system and more serious damage to the health of other animals and plants.

A more serious type of agricultural pollution is the solid waste resulting from farming and livestock practices. Authorities estimate that slightly over half of all the solid wastes produced in the United States each year—a total of about 2 billion tons (1.8 billion metric tons)—come
from a variety of agricultural activities. Some of these wastes pose little or no threat to the environment. Crop residue left on cultivated fields and animal manure produced on rangelands, for example, eventually decay, returning valuable nutrients to the soil. Some modern methods of livestock management, however, tend to increase the risks posed by animal wastes. Farmers are raising a larger variety of animals, as well as larger numbers of them, in smaller and smaller areas such as feedlots or huge barns, and large volumes of wastes are generated in these confined areas. Many livestock managers attempt to sell these waste products or dispose of them in a way that poses no threat to the environment. Yet in many cases the wastes are allowed to accumulate in massive dumps where soluble materials are leached out by rain. Some of these materials then find their way into groundwater or surface water, such as lakes and rivers. Some are harmless to the health of animals, though they may contribute to the eutrophication of lakes and ponds. Other materials, however, may have toxic, carcinogenic, or genetic effects on humans and other animals.

The leaching of hazardous materials from animal waste dumps contributes to perhaps the most serious form of agricultural pollution: the contamination of water supplies. Many of the chemicals used in agriculture today can be harmful to plants and animals. Pesticides and herbicides are the most obvious of these; used by farmers to disable or kill plant and animal pests, they may also cause problems for beneficial plants and animals, as well as for humans. Runoff from agricultural land is another serious environmental problem posed by modern agricultural practices. Runoff constitutes a nonpoint source of pollution: rainfall leaches out and washes away pesticides, fertilizers, and other agricultural chemicals from a widespread area, not from a single source such as a sewer pipe. Maintaining control over nonpoint sources of pollution is an especially difficult challenge. In addition, agricultural land is more easily leached out than is non-agricultural land. When lands are plowed, the earth is broken up into smaller pieces, and the finer the soil particles, the more easily they are carried away by rain. Studies have shown that the nitrogen and phosphorus in chemical fertilizers are leached out of croplands at a rate about five times higher than from forest woodlands or idle lands. The accumulation of nitrogen and phosphorus from chemical fertilizers in waterways has contributed to the acceleration of eutrophication of lakes and ponds; scientists believe that the addition of human-made chemicals such as those in chemical fertilizers can increase the rate of eutrophication by a factor of at least 10. A more deadly effect is the poisoning of plants and animals by toxic chemicals leached off of farmlands. The biological effects of such chemicals are commonly magnified many times as they move up a food chain/web. The best known example of this phenomenon involved a host of biological problems—from reduced rates of reproduction to malformed animals to increased rates of death—attributed to the use of DDT in the 1950s and 1960s.
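This magnification compounds multiplicatively: if each step in the chain concentrates a persistent chemical by some factor, the concentration at the top is the product of all the factors. The short Python sketch below illustrates the arithmetic only; the transfer factors and starting concentration are invented for illustration, not measured values.

# Minimal sketch of biomagnification along a food chain. Each step
# multiplies the concentration by that level's concentration factor.
# All numbers here are illustrative placeholders, not field data.
water_ppm = 0.000003  # hypothetical pesticide concentration in water

concentration_factors = {
    "plankton": 250,          # water -> plankton
    "small fish": 10,         # plankton -> small fish
    "large fish": 5,          # small fish -> large fish
    "fish-eating bird": 20,   # large fish -> bird tissue
}

level_ppm = water_ppm
for organism, factor in concentration_factors.items():
    level_ppm *= factor
    print(f"{organism}: {level_ppm:.4f} ppm")

# After four steps the concentration is 250 * 10 * 5 * 20 = 250,000
# times the level in the water, the kind of magnification described above.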
Sedimentation also results from the high rate of erosion on cultivated land, and increased sedimentation of waterways poses its own set of environmental problems. Some of these are little more than cosmetic annoyances. For example, lakes and rivers may become murky and less attractive, losing potential as recreation sites. However, sedimentation can block navigation channels, and other problems may have fatal results for organisms. Aquatic plants may become covered with sediments and die; marine animals may take in sediments and be killed; and cloudiness from sediments may reduce the amount of sunlight received by aquatic plants so extensively that they can no longer survive.

Environmental scientists are especially concerned about the effects of agricultural pollution on groundwater. Groundwater is polluted by much the same mechanisms as surface water, and evidence for that pollution has accumulated rapidly in the past decade. Groundwater pollution tends to persist for long periods of time: water flows through an aquifer much more slowly than it does through a river, and agricultural chemicals are not flushed out quickly.

Many solutions are available for the problems posed by agricultural pollution, but not all of them are easily implemented. Chemicals that are found to have serious toxic effects on plants and animals can be banned from use, as DDT was in the 1970s, but this kind of decision is seldom easy. Regulators must always weigh the relative benefits of using a chemical, such as increased crop yields, against its environmental risks. Such a risk-benefit analysis means that some chemicals known to have certain deleterious environmental effects remain in use because of the harm that would be done to agriculture if they were banned. Another way of reducing agricultural pollution is to implement better farming techniques. In the practices of minimum- or no-tillage farming, for example, plowing is reduced or eliminated entirely. The ground is left essentially intact, reducing the rate at which soil and the chemicals it contains are eroded away.
[David E. Newton]
RESOURCES
BOOKS
Benning, L. E. Agriculture and Water Quality: International Perspectives. Boulder, CO: L. Rienner, 1990.
———, and L. W. Canter. Environmental Impacts of Agricultural Production Activities. Chelsea, MI: Lewis, 1986.
———, and M. W. Fox. Agricide: The Hidden Crisis That Affects Us All. New York: Schocken Books, 1986.
Crosson, P. R. Implementation Policies and Strategies for Agricultural Non-Point Pollution. Washington, DC: Resources for the Future, 1985.
Agricultural Research Service

A branch of the U.S. Department of Agriculture charged with responsibility for agricultural research on a regional or national basis. The mission of the Agricultural Research Service (ARS) is to develop the new knowledge and technology needed to solve agricultural problems of broad scope and high national priority, in order to ensure an adequate supply of high-quality food and agricultural products for the United States. The national research center of the ARS, consisting of laboratories, land, and other facilities, is located at Beltsville, Maryland. In addition, there are many other research centers located throughout the United States, such as the U.S. Dairy/Forage Research Center at Madison, Wisconsin. Scientists of the ARS are also located at land grant universities throughout the country, where they conduct cooperative research with state scientists.

RESOURCES
ORGANIZATIONS
Beltsville Agricultural Research Center, Rm. 223, Bldg. 003, BARC-West, 10300 Baltimore Avenue, Beltsville, MD USA 20705
Agricultural revolution

The development of agriculture has been a fundamental part of the march of civilization. It is an ongoing challenge, for as long as population growth continues, mankind will need to improve agricultural production. The agricultural revolution is actually a series of four major advances, closely linked with other key historical periods. The first, the Neolithic or New Stone Age, marks the beginning of sedentary (settled) farming. Much of this history is lost in antiquity, dating back perhaps 10,000 years or more. Still, humans owe an enormous debt to those early pioneers who so painstakingly nurtured the best of each year's crop. Archaeologists have found corn cobs a mere 2 in (5.1 cm) long, so different from today's giant ears.

The second major advance came as a result of Christopher Columbus's voyages to the New World. Isolation had fostered the development of two completely independent agricultural systems in the New and Old Worlds. A short list of interchanged crops and animals clearly illustrates the global magnitude of this event; furthermore, the current population explosion began its upswing during this period. From the New World came maize, beans, the "Irish" potato,
squash, peanuts, tomatoes, and tobacco. From the Old World came wheat, rice, coffee, cattle, horses, sheep, and goats. Maize is now a staple food in Africa, and several Indian tribes in America adopted new lifestyles, notably the Navajo as sheepherders and the Cheyenne as nomads using the horse to hunt buffalo.

The Industrial Revolution both contributed to and was nourished by agriculture. The greatest agricultural advances came in transportation, where first canals, then railroads and steamships, made possible the shipment of food from areas of surplus. This in turn allowed more specialization and productivity, but most importantly, it reduced the threat of starvation. The steamship ultimately brought refrigerated meat to Europe from distant Argentina and Australia. Without these massive increases in food shipments, the exploding populations and the greatly increased demand for labor by newly emerging industries could not have been sustained. In turn, the Industrial Revolution introduced major advances in farm technology, such as the cotton gin, the mechanical reaper, improved plows, and, in the twentieth century, tractors and trucks. These advances enabled fewer and fewer farmers to feed larger and larger populations, freeing workers to fill demands for factory labor and the growing service industries.

Finally, agriculture has fully participated in the scientific advances of the twentieth century. Key developments include hybrid corn, the high-yielding varieties introduced in tropical lands and described as the "Green Revolution," and current genetic research. Agriculture has benefited enormously from scientific advances in biology, and the future here is bright for applied research, especially involving genetics. Great potential exists for the development of crop strains with greatly improved dietary characteristics, such as higher protein or reduced fat. Growing populations, made possible by these food surpluses, have forced agricultural expansion onto less and less desirable lands. Because agriculture radically simplifies ecosystems and greatly amplifies soil erosion, many areas such as the Mediterranean Basin and tropical forest lands have suffered severe degradation.

Major developments in civilization are directly linked to the agricultural revolution. A sedentary lifestyle, essential to technological development, was both mandated and made possible by farming. Urbanization flourished, which encouraged specialization and division of labor. Large populations provided the energy for massive projects, such as the Egyptian pyramids and the colossal engineering efforts of the Romans. The plow represented the first lever, both lifting and overturning the soil. The draft animal provided the first in a long line of nonhuman energy sources. Plant and animal selection is likely the first application of science and technology toward specific goals. A number of important crops
bear little resemblance to the ancestors from which they were derived, and animals such as the fat-tailed sheep represent thoughtful cultural control of their lineage.

Climate is second only to irrigation in its influence over agriculture. Farmers are especially vulnerable to variations such as late or early frosts, heavy rains, or drought. Rice, wheat, and maize have become the dominant crops globally because of their high caloric yield, their versatility within their climate range, and their cultural status as the "staff of life." Many would not consider a meal complete without rice, bread, or tortillas. This cultural influence is so strong that even starving peoples have rejected unfamiliar food. China provides a good example of such cultural differences, with a rice culture in the south and a wheat culture (noodles) in the north. These crops all need a wet season for germination and growth, followed by a dry season to allow spoilage-free storage. Rice was domesticated in the monsoonal lands of Southeast Asia, while wheat originated in the Fertile Crescent of the Middle East. Historically, wheat was planted in the fall and harvested in late spring, coinciding with the cycle of wet and dry seasons in the Mediterranean region. Maize needs the heavy summer rains provided by the Mexican highland climate. Other crops predominate in areas with less suitable climates: barley in semiarid lands; oats and potatoes in cool, moist lands; rye in colder climates with short growing seasons; and dry rice on hillsides and drier lands where paddy rice is impractical.

Although food production is the main emphasis in agriculture, more and more industrial applications have evolved. Cloth fibers have been a mainstay, but paper products and many chemicals now come from cultivated plants.

The agricultural revolution is also associated with some of mankind's darker moments. In the tropical and subtropical climates of the New World, slave labor was extensive. Close, unsanitary living conditions have fostered plagues of biblical proportions. And the desperate dependence on agriculture is all too vividly evident in the records of historic and contemporary famine. As a world, people are never more than one harvest away from global starvation, a fact amplified by the growing understanding of cosmic catastrophes. Some argue that the agricultural revolution masks the growing hazards of an overpopulated, increasingly contaminated earth. Yet the agricultural revolution has been so productive that it has more than compensated for the population explosion of the last two centuries. Some, appropriately labeled "cornucopians," believe there is yet much potential for increased food production, especially through scientific agriculture and genetic engineering. There is much room for optimism, and also for a sobering assessment of the
environmental costs of agricultural progress. We must continually strive for answers to the challenges associated with the agricultural revolution. [Nathan H. Meleen]
RESOURCES
BOOKS
Anderson, E. "Man as a Maker of New Plants and New Plant Communities." In Man's Role in Changing the Face of the Earth, edited by W. L. Thomas Jr. Chicago: The University of Chicago Press, 1956.
Doyle, J. Altered Harvest: Agriculture, Genetics, and the Fate of the World's Food Supply. New York: Penguin, 1985.
Gliessman, S. R., ed. Agroecology: Researching the Ecological Basis for Sustainable Agriculture. New York: Springer-Verlag, 1990.
Jackson, R. H., and L. E. Hudman. Cultural Geography: People, Places, and Environment. St. Paul, MN: West, 1990.
Narr, K. J. "Early Food-Producing Populations." In Man's Role in Changing the Face of the Earth, edited by W. L. Thomas Jr. Chicago: The University of Chicago Press, 1956.
Simpson, L. B. "The Tyrant: Maize." In The Cultural Landscape, edited by C. Salter. Belmont, CA: Wadsworth, 1971.
PERIODICALS
Crosson, P. R., and N. J. Rosenberg. "Strategies for Agriculture." Scientific American 261 (September 1989): 128–32+.
Agricultural Stabilization and Conservation Service

For the past half century, agriculture in the United States has faced the somewhat unusual and enviable problem of overproduction. Farmers have produced more food than United States citizens can consume, and, as a result, per capita farm income has decreased as the volume of crops has increased. To help solve this problem, the Secretary of Agriculture established the Agricultural Stabilization and Conservation Service on June 5, 1961. The purpose of the service is to administer commodity and land-use programs designed to control production and to stabilize market prices and farm income. The service operates through state committees of three to five members each and through county committees consisting of three farmers each in approximately 3,080 agricultural counties in the nation.

RESOURCES
ORGANIZATIONS
Agricultural Stabilization and Conservation Service, 10500 Buena Vista Court, Urbandale, IA USA 50322-3782, (515) 254-1540, Fax: (515) 254-1573.
Agriculture and energy conservation see Environmental engineering
Agriculture, drainage see Runoff
Agriculture, sustainable see Sustainable agriculture
Agroecology

Agroecology is an interdisciplinary field of study that applies ecological principles to the design and management of agricultural systems. Agroecology concentrates on the relationship of agriculture to the biological, economic, political, and social systems of the world. The combination of agriculture with ecological principles such as biogeochemical cycles, energy conservation, and biodiversity has led to practical applications that benefit the whole ecosystem rather than just an individual crop. For instance, research into integrated pest management has developed ways to reduce reliance on pesticides. Such methods include biological or biotechnological controls such as genetic engineering, cultural controls such as changes in planting patterns, physical controls such as quarantines to prevent entry of new pests, and mechanical controls such as physically removing weeds or pests.

Sustainable agriculture is another goal of agroecological research. Sustainable agriculture views farming as a total system and stresses the long-term conservation of resources. It balances the human need for food with concerns for the environment and maintains that agriculture can be carried on without reliance on pesticides and fertilizers. Agroecology advocates the use of biological controls rather than pesticides to minimize agricultural damage from insects and weeds. Biological controls use natural enemies to control weeds and pests, such as ladybugs that kill aphids. They also include the disruption of the reproductive cycles of pests and the introduction of more biologically diverse organisms to inhibit the overpopulation of agricultural pests.

Agroecological principles shift the focus of agriculture from food production alone to wider concerns, such as environmental quality, food safety, the quality of rural life, humane treatment of livestock, and conservation of air, soil, and water. Agroecology also studies how agricultural processes and technologies will be affected by wider environmental problems such as global warming, desertification, or salinization.

The entire world population depends on agriculture, and as the number of people continues to grow, agroecology is becoming more important, particularly in developing countries. Agriculture is the largest economic activity in the world, and in areas such as sub-Saharan Africa about 75% of the population is involved in some form of it. As population pressures on the world food supply increase, the application of agroecological principles is expected to stem the ecological consequences of traditional agricultural practices, such as pesticide poisoning and erosion.

[Linda Rehkopf]
RESOURCES
BOOKS
Altieri, M. A. Agroecology: The Scientific Basis of Alternative Agriculture. Boulder, CO: Westview Press, 1987.
Carroll, D. R. Agroecology. New York: McGraw-Hill, 1990.
Gliessman, S. R., ed. Agroecology. New York: Springer-Verlag, 1991.
PERIODICALS
Norse, D. "A New Strategy for Feeding a Crowded Planet." Environment 34 (June 1992): 6–19.
Agroforestry

Agroforestry is a land use system in which woody perennials (trees, shrubs, vines, palms, bamboo, etc.) are intentionally combined on the same land management unit with crops, and sometimes animals, either in a spatial arrangement or a temporal sequence. It is based on the premise that woody perennials in the landscape can enhance the productivity and sustainability of agricultural practice. The approach is especially pertinent in tropical and subtropical areas, where improper land management and intensive, continuous cropping of land have led to widespread devastation. Agroforestry recognizes the need for an alternative agricultural system that will preserve and sustain productivity. The need for both food and forest products has led to an interest in techniques that combine production of both in a manner that can halt, and may even reverse, the ruin caused by existing practices.

Although the term agroforestry has come into widespread use only in the last 20–25 years, environmentally sound farming methods similar to those now proposed have been known and practiced in some tropical and subtropical areas for many years. As an example, one type of intercropping found on small rubber plantations (less than 25 acres/10 ha) in Malaysia, Thailand, Nigeria, India, and Sri Lanka involves rubber plants intermixed with fruit trees, pepper, coconuts, and arable crops such as soybeans, corn, banana, and groundnut. Poultry may also be included. Unfortunately, in other areas the pressures caused by expanding human and animal populations have led to increased use of destructive farming practices. In the process, inhabitants have further reduced their ability to provide for basic food, fiber, fuel, and
timber needs and contributed to even more environmental degradation and loss of soil fertility.
The successful introduction of agroforestry practices in problem areas requires the cooperative efforts of experts from a variety of disciplines. Along with specialists in forestry, agriculture, meteorology, ecology, and related fields, it is often necessary to enlist the help of those familiar with local culture and heritage to explain new methods and their advantages. Usually, techniques must be adapted to local circumstances, and research and testing are required to develop viable systems for a particular setting. Intercropping combinations that work well in one location may not be appropriate for sites only a short distance away because of important meteorological or ecological differences.

Despite apparent difficulties, agroforestry has great appeal as a means of arresting the problems of deforestation and declining agricultural yields in warmer climates, and the practice is expected to grow significantly in the next several decades. Some areas of special interest include intercropping with coconuts as the woody component and mixing tree legumes with annual crops. Agroforestry does not lend itself to mechanization as easily as the large-scale grain, soybean, and vegetable cropping systems used in industrialized nations, because practices for each site are individualized and usually labor-intensive. For these reasons it has had less appeal in areas like the United States and Europe. Nevertheless, temperate-zone applications have been developed or are under development. Examples include small-scale organic gardening and farming, reclamation of mining wastelands, and biomass energy crop production on marginal land.

[Douglas C. Pratt]
RESOURCES
BOOKS
Huxley, P. A., ed. Plant Research and Agroforestry. Edinburgh, Scotland: Pillans & Wilson, 1983.
Reifsnyder, W. S., and T. O. Darnhofer, eds. Meteorology and Agroforestry. Nairobi, Kenya: International Council for Research in Agroforestry, 1989.
Zulberti, E., ed. Professional Education in Agroforestry. Nairobi, Kenya: International Council for Research in Agroforestry, 1987.
AIDS

AIDS (acquired immune deficiency syndrome) is an infectious and fatal disease of apparently recent origin. AIDS is pandemic, which means that it is worldwide in distribution. A sufficient understanding of AIDS can be gained only by examining its causation (etiology), symptoms, treatments, and the risk factors for transmitting and contracting the disease.
AIDS occurs as a result of infection with HIV (human immunodeficiency virus). HIV is a ribonucleic acid (RNA) virus that targets and kills special blood cells, known as helper T-lymphocytes, which are important in immune protection. Depletion of helper T-lymphocytes leaves the AIDS victim with a disabled immune system and at risk for infection by organisms that ordinarily pose no special hazard to the individual. Infection by these organisms is thus opportunistic and is frequently fatal.

The initial infection with HIV may entail no symptoms at all, or relatively benign symptoms of short duration that may mimic infectious mononucleosis. This initial period is followed by a longer period (from a few to as many as 10 years) when the infected person is in apparent good health. The HIV-infected person, despite the outward image of good health, is in fact contagious, and appropriate care must be exercised to prevent spread of the virus at this time. Eventually the effects of the depletion of helper T cells become manifest. Symptoms include weight loss, persistent cough, persistent colds, diarrhea, periodic fever, weakness, fatigue, enlarged lymph nodes, and malaise. Following this, the AIDS patient becomes vulnerable to chronic infections by opportunistic pathogens. These include, but are not limited to, oral yeast infection (thrush), pneumonia caused by the fungus Pneumocystis carinii, and infection by several kinds of herpes viruses. The AIDS patient is also vulnerable to Kaposi's sarcoma, a cancer seldom seen except in individuals with depressed immune systems. Death of the AIDS patient may be accompanied by confusion, dementia, and coma.

There is no cure for AIDS. Opportunistic infections are treated with antibiotics, and drugs such as AZT (azidothymidine), which slow the progress of the HIV infection, are available. But viral diseases in general, including AIDS, do not respond well to antibiotics. Vaccines, however, can provide protection against viral diseases. Research to find a vaccine for AIDS has not yet yielded satisfactory results, but scientists have been encouraged by the development of a vaccine for feline leukemia—a viral disease that has similarities to AIDS. Unfortunately, this does not provide hope of a cure for those already infected with HIV.

Prevention is crucial for a lethal disease with no cure, so modes of transmission must be identified and avoided. Everyone is at risk: males constitute 52% and females 48% of the infected population. As of 2002, about 40 million people are infected with HIV or have AIDS, and it is thought that this number will grow to 62 million by 2005. In the United States alone, 40,000 new cases are diagnosed each year. AIDS cases in heterosexual males and women are on the increase, and no sexually active person can be considered "safe" from AIDS any longer. Therefore, everyone who is sexually active should be aware of the principal modes of
transmission of the virus—infected blood, semen from the male, and genital tract secretions of the female—and use appropriate means to prevent exposure. While the virus has been identified in tears, saliva, and breast milk, contagion by exposure to these substances seems to be significantly less likely.

[Robert G. McKinnell]
RESOURCES
BOOKS
Alcamo, I. E. AIDS, the Biological Basis. Dubuque, IA: William C. Brown, 1993.
Fan, H., R. F. Connor, and L. P. Villarreal. The Biology of AIDS. 2nd ed. Boston: Jones and Bartlett, 1991.
Stine, Gerald J. AIDS Update 2002. Prentice Hall, 2001.
Ailuropoda melanoleuca see Giant panda
Air and Waste Management Association

Founded in 1907 as the International Association for the Prevention of Smoke, this group changed its name several times as the interests of its members changed, becoming the Air and Waste Management Association (A&WMA) in the late 1980s. Although an international organization for environmental professionals in more than 50 countries, the association is most active in North America and most concerned with North American environmental issues. Among its main concerns are air pollution control, environmental management, and waste processing and control.

A nonprofit organization that promotes the basic need for a clean environment, A&WMA seeks to educate the public and private sectors of the world by conducting seminars, holding workshops and conferences, and offering continuing education programs for environmental professionals in the areas of pollution control and waste management. One of its main goals is to provide "a neutral forum where all viewpoints of an environmental management issue (technical, scientific, economic, social, political and public health) receive equal consideration." Approximately 10–12 specialty conferences are held annually, as well as five or six workshops; the topics change continually as new issues arise. Education is so important to A&WMA that it funds a scholarship for graduate students pursuing careers in fields related to waste management and pollution control. Although A&WMA members are all professionals, they seek to educate even the very young by sponsoring essay contests, science fairs, and community activities, and by volunteering to speak to elementary, middle school, and high school audiences on environmental management topics.

The association's 12,000 members, all of whom are volunteers, are involved in virtually every aspect of every A&WMA project. There are 21 association sections across the United States, facilitating meetings at regional and even local levels to discuss important issues. Training seminars are an important part of A&WMA membership, and members are taught the skills necessary to run public outreach programs designed for students of all ages and the general public. A&WMA's publications deal primarily with air pollution and waste management and include the Journal of the Air & Waste Management Association, a scientific monthly; a bimonthly newsletter; a wide variety of technical books; and numerous training manuals and educational videotapes.

[Cathy M. Falk]
RESOURCES
ORGANIZATIONS
Air & Waste Management Association, 420 Fort Duquesne Blvd, One Gateway Center, Pittsburgh, PA USA 15222, (412) 232-3444, Fax: (412) 232-3450, Email: [email protected]
Air pollution

Air pollution is a general term that covers a broad range of contaminants in the atmosphere. Pollution can occur from natural causes or from human activities. Discussions about the effects of air pollution have focused mainly on human health, but attention is being directed to environmental quality and amenity as well. Air pollutants are found as gases or particles, and on a restricted scale they can be trapped inside buildings as indoor air pollutants. Urban air pollution has long been an important concern for civic administrators, but increasingly, air pollution has become an international problem.

The most characteristic sources of air pollution have always been combustion processes. Here the most obvious pollutant is smoke. However, the widespread use of fossil fuels has made sulfur and nitrogen oxides pollutants of great concern. With increasing use of petroleum-based fuels, a range of organic compounds have become widespread in the atmosphere.

In urban areas, air pollution has been a matter of concern since historical times. Indeed, there were complaints about smoke in ancient Rome. The use of coal throughout the centuries has caused cities to be very smoky places. Along with smoke, large concentrations of sulfur dioxide were produced. It was this mixture of smoke and sulfur dioxide that typified the foggy streets of Victorian London, paced by such figures as Sherlock Holmes and Jack the Ripper,
whose images remain linked with smoke and fog. Such situations are far less common in the cities of North America and Europe today. Until recently, however, they have been evident in other cities, such as Ankara, Turkey, and Shanghai, China, that rely heavily on coal. Coal is still burnt in large quantities to produce electricity or to refine metals, but these processes are frequently undertaken outside cities. Within urban areas, fuel use has shifted towards liquid and gaseous hydrocarbons (petrol and natural gas). These fuels typically have a lower concentration of sulfur, so the presence of sulfur dioxide has declined in many urban areas. However, the widespread use of liquid fuels in automobiles has meant increased production of carbon monoxide, nitrogen oxides, and volatile organic compounds (VOCs).

Primary pollutants such as sulfur dioxide or smoke are the direct emission products of the combustion process. Today, many of the key pollutants in urban atmospheres are secondary pollutants, produced by processes initiated through photochemical reactions. The Los Angeles-type photochemical smog is now characteristic of urban atmospheres dominated by secondary pollutants.

Although the automobile is the main source of air pollution in contemporary cities, there are other equally significant sources. Stationary sources are still important: the oil-burning furnaces that have replaced the older coal-burning ones are still responsible for a range of gaseous emissions and fly ash. Incineration is also an important source of complex combustion products, especially where a wide range of refuse is burned. These emissions can include chlorinated hydrocarbons such as dioxin. When plastics, which often contain chlorine, are incinerated, hydrochloric acid results in the waste gas stream. Metals, especially those that are volatile at high temperatures, can migrate to smaller, respirable particles. The accumulation of toxic metals, such as cadmium, on fly ash gives rise to concern over harmful effects from incinerator emissions. In specialized incinerators designed to destroy toxic compounds such as PCBs, many questions have been raised about the completeness of the destruction process. Even under optimum conditions, where the furnace operation has been properly maintained, great care needs to be taken to control leaks and losses during transfer operations (fugitive emissions).

The enormous range of compounds used in modern manufacturing processes has also meant an ever-widening range of emissions, both from the industrial processes themselves and from the combustion of their wastes. Although the amounts of these exotic compounds are often rather small, they add to the complex range of compounds found in the urban atmosphere. Again, it is not only the deliberate loss of effluents through discharge from pipes
and chimneys that needs attention. Fugitive emissions of volatile substances that leak from valves and seals often warrant careful control.

Air pollution control procedures are increasingly an important part of civic administration, although their goals are far from easy to achieve. It is also noticeable that although many urban concentrations of primary pollutants, for example smoke and sulfur dioxide, are on the decline in developed countries, this is not always true in developing countries. There, the desire for rapid industrial growth has often lowered urban air quality. Secondary air pollutants are generally proving a more difficult problem to eliminate than primary pollutants like smoke.

Urban air pollutants have a wide range of effects, with health problems being the most enduring concern. In the classical polluted atmospheres filled with smoke and sulfur dioxide, a range of bronchial diseases were enhanced. While respiratory diseases are still the principal problem, the issues are somewhat more subtle in atmospheres where the air pollutants are not so obvious. In photochemical smog, eye irritation from the secondary pollutant peroxyacetyl nitrate (PAN) is one of the most characteristic direct effects of the smog. High concentrations of carbon monoxide in cities where automobiles operate at high density mean that the human heart has to work harder to make up for the oxygen displaced from the blood's hemoglobin by carbon monoxide. This extra stress appears to reveal itself in an increased incidence of complaints among people with heart problems. There is a widespread belief that contemporary air pollutants are involved in the increases in asthma, but the links between asthma and air pollution are probably rather complex and related to a whole range of factors. Lead, from automotive exhausts, is thought by many to be a factor in lowering the IQs of urban children.

Air pollution also affects materials in the urban environment. Soiling has long been regarded as a problem, originally the result of the smoke from wood or coal fires, but now increasingly the result of fine black soot from diesel exhausts. The acid gases, particularly sulfur dioxide, increase the rate of destruction of building materials. This is most noticeable with calcareous stones, which are the predominant building material of many important historic structures. Metals also suffer from atmospheric acidity. In modern photochemical smog, natural rubbers crack and deteriorate rapidly.

Health problems relating to indoor air pollution are extremely ancient. Anthracosis, or black lung disease, has been found in mummified lung tissue. Recent decades have witnessed a shift from the predominance of concern about outdoor air pollution to a widening interest in indoor air quality.
The production of energy from combustion and the release of solvents are so large in the contemporary world that they cause air pollution problems of a regional and global nature. Acid rain is now widely observed throughout the world. The sheer quantity of carbon dioxide emitted in combustion processes is increasing the concentration of carbon dioxide in the atmosphere and enhancing the greenhouse effect. Solvents such as carbon tetrachloride, and aerosol propellants such as chlorofluorocarbons, are now detectable all over the globe and are responsible for such problems as ozone layer depletion.

At the other end of the scale, it needs to be remembered that gases leak indoors from the polluted outdoor environment, but more often the serious pollutants arise from processes that take place indoors. Here there has been particular concern with the generation of nitrogen oxides by sources such as gas stoves. Similarly, formaldehyde from insulating foams causes illnesses and adds to concerns about exposure to a substance that may induce cancer in the long run. In the last decade it has become clear that radon leaking from the ground can expose some members of the public to high levels of this radioactive gas within their own homes. Cancers may also result from the emanation of solvents from consumer products, glues, and paints, and from mineral fibers (asbestos). More generally, these compounds, along with a range of biological materials—animal hair and skin, pollen, spores, and dusts—can cause allergic reactions in some people. At one end of the spectrum these simply cause annoyance, but in extreme cases, such as those involving the bacterium Legionella, a large number of deaths can occur.

There are also important issues surrounding the effects of indoor air pollutants on materials. Many industries, especially the electronics industry, must take great care over the purity of indoor air, where a speck of dust can destroy a microchip or low concentrations of air pollutants can change the composition of surface films on components. Museums must care for objects over long periods of time, so precautions must be taken to protect delicate dyes from the effects of photochemical smog, paper and books from sulfur dioxide, and metals from sulfide gases.

[Peter Brimblecombe]
RESOURCES
BOOKS
Bridgman, H. Global Air Pollution: Problems for the 1990s. New York: Columbia University Press, 1991.
Elsom, D. M. Atmospheric Pollution. Oxford: Blackwell, 1992.
Kennedy, D., and R. R. Bates, eds. Air Pollution, the Automobile, and Public Health. Washington, DC: National Academy Press, 1988.
MacKenzie, J. J. Breathing Easier: Taking Action on Climate Change, Air Pollution, and Energy Efficiency. Washington, DC: World Resources Institute, 1989.
Smith, W. H. Air Pollution and Forests. 2nd ed. New York: Springer-Verlag, 1989.
Air pollution control

The need to control air pollution was recognized in the earliest cities. In the Mediterranean at the time of Christ, laws were developed to place objectionable sources of odor and smoke downwind or outside city walls. The adoption of fossil fuels in thirteenth-century England focused particular concern on the effect of coal smoke on health, with a number of attempts at regulation with regard to fuel type, chimney heights, and time of use. Given the complexity of the air pollution problem, it is not surprising that these early attempts at control met with only limited success.

The nineteenth century was typified by a growing interest in urban public health. This developed against a background of continuing industrialization, which saw smoke abatement clauses incorporated into the growing body of sanitary legislation in both Europe and North America. However, a lack of both technology and political will doomed these early efforts to failure, except in the most blatantly destructive situations (for example, industrial settings such as those around alkali works in England).

The rise of environmental awareness has reminded people that air pollution ought not to be seen as a necessary product of industrialization. This has redirected responsibility for air pollution towards those who create it. The notion of "making the polluter pay" is seen as a central feature of air pollution control. History has also seen the development of a range of broad air pollution control strategies, among them:
(1) Air quality management strategies, which set ambient air quality standards so that emissions from various sources can be monitored and controlled.
(2) Emission standards strategies, which set limits on the amount of pollutant that can be emitted from a given source. These limits may be set to meet air quality standards, but the strategy is optimally seen as one of adopting the best available techniques not entailing excessive costs (BATNEEC).
(3) Economic strategies, which involve charging the party responsible for the pollution. If the level of the charge is set correctly, some polluters will find it more economical to install air pollution control equipment than to continue to pollute. Other methods use a system of tradable pollution rights.
(4) Cost-benefit analysis, which attempts to balance economic benefits with environmental costs. This is an appealing strategy but difficult to implement because of its controversial and imprecise nature.

In general, air pollution strategies have been either air quality-based or emission-based.
An industrial complex releases smoke from multiple chimneys. (Photograph by Josef Polleross. The Stock Market. Reproduced by permission.)
In the United Kingdom, an emission-based strategy is frequently used; for example, the Alkali Act of 1863 specified permissible emissions of hydrochloric acid. By contrast, the United States has aimed to achieve air quality standards, as evidenced by the Clean Air Act. One criticism of the air quality approach is that while it improves air in polluted areas, it can lead to degradation in areas with high air quality. The emission standards approach, although relatively simple, is criticized for failing to make explicit judgments about air quality and for assuming that good practice will lead to an acceptable atmosphere.

Until the mid-twentieth century, legislation was primarily directed towards industrial sources, but the passage of the United Kingdom Clean Air Act (1956), which followed the disastrous smog of December 1952, directed attention towards domestic sources of smoke. While this particular act may have reinforced improvements already under way rather than initiated them, it has served as a catalyst for much subsequent legislative thinking. Its mode of operation was to initiate a change in fuel, perhaps one of the oldest methods of control. The other well-tried
aspects were the creation of smokeless zones and an emphasis on tall chimneys to disperse the pollutants. As simplistic as such passive control measures seem, they remain at the heart of much contemporary thinking. Changes from coal and oil to less polluting gas or electricity have contributed to the reduction in smoke and sulfur dioxide concentrations in cities all around the world. Industrial zoning has often kept power plants and large manufacturing plants away from centers of human population, and "superstacks," chimneys of enormous height, are now quite common. Successive changes in automotive fuels—lead-free gasoline, low-volatility gas, methanol, or even the interest in the electric automobile—are further indications of the continued use of these methods of control.

There are more active forms of air pollution control that seek to clean up the exhaust gases. The earliest of these were the smoke and grit arrestors that came into increasing use in large electrical stations during the twentieth century. Notable here were the cyclone collectors, which removed large particles by driving the exhaust through a tight spiral that
threw the grit outward, where it could be collected. Finer particles could be removed by electrostatic precipitation. These methods were an important part of the development of the modern pulverized-fuel power station. However, they failed to address the problem of gaseous emissions. Here it has been necessary to look at burning fuel in ways that reduce the production of nitrogen oxides. Control of sulfur dioxide emissions from large industrial plants can be achieved by desulfurization of the flue gases. This can be done quite successfully by passing the gas through towers of solid absorbers or by spraying solutions through the exhaust gas stream. However, these are not necessarily cheap options. Catalytic converters are also an important element of active attempts to control air pollutants. Although these can considerably reduce emissions, their gains have to be offset against the increasing use of the automobile. There is also much talk of the development of zero-pollution vehicles—vehicles that emit no pollutants at all.

Legislation and control methods are often associated with monitoring networks that assess the effectiveness of the strategies and inform the general public about air quality where they live. A balanced approach to the control of air pollution in the future may have to look far more broadly than simply at technological controls. It will become necessary to examine the way people structure their lives in order to find more effective solutions to air pollution.

[Peter Brimblecombe]
RESOURCES
BOOKS
Elsom, D. M. Atmospheric Pollution. Oxford: Blackwell, 1992.
Luoma, J. R. The Air Around Us: An Air Pollution Primer. Raleigh, NC: The Acid Rain Foundation, 1989.
Wark, K., and C. F. Warner. Air Pollution: Its Origin and Control. 3rd ed. New York: Harper & Row, 1986.
Air pollution index

The air pollution index is a value derived from an air quality scale that uses the measured or predicted concentrations of several criteria pollutants and other air quality indicators, such as coefficient of haze (COH) or visibility. The best known index of air pollution is the pollutant standard index (PSI). The PSI has a scale that spans from 0 to 500. The index represents the highest value of several subindices; there is a subindex for each pollutant or, in some cases, for a product of pollutant concentrations or a product of pollutant concentrations and COH. If a pollutant is not monitored, its subindex is not used in deriving the PSI. In general, the subindex for each pollutant can be interpreted as follows:

Index Value    Interpretation
0              No concentration
100            National Ambient Air Quality Standard
200            Alert
300            Warning
400            Emergency
500            Significant harm

The subindex of each pollutant or pollutant product is derived from a PSI nomogram, which matches concentrations with subindex values. The highest subindex value becomes the PSI. The PSI has five health-related categories:

PSI Range      Category
0 to 50        Good
50 to 100      Moderate
100 to 200     Unhealthful
200 to 300     Very unhealthful
300 to 500     Hazardous
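In practice, the nomogram lookup amounts to piecewise-linear interpolation between published breakpoints, and the PSI is simply the largest of the resulting subindices. The following Python sketch illustrates that calculation; the breakpoint values, pollutant labels, and function names here are illustrative placeholders, not the official EPA tables.

# Illustrative PSI-style calculation. Each pollutant's subindex is found
# by linear interpolation between (concentration, subindex) breakpoints
# -- the nomogram -- and the overall index is the highest subindex.
# Breakpoint numbers below are placeholders for illustration only.
BREAKPOINTS = {
    "so2_24h_ppm": [(0.00, 0), (0.14, 100), (0.30, 200), (0.60, 300), (1.00, 500)],
    "co_8h_ppm": [(0.0, 0), (9.0, 100), (15.0, 200), (30.0, 300), (50.0, 500)],
}

def subindex(pollutant, concentration):
    """Map a measured concentration onto the 0-500 subindex scale."""
    points = BREAKPOINTS[pollutant]
    if concentration <= points[0][0]:
        return points[0][1]
    for (c_lo, i_lo), (c_hi, i_hi) in zip(points, points[1:]):
        if concentration <= c_hi:
            # Linear interpolation within this segment of the nomogram.
            return i_lo + (i_hi - i_lo) * (concentration - c_lo) / (c_hi - c_lo)
    return points[-1][1]  # at or above the top breakpoint

def psi(observations):
    """The PSI is the highest subindex among the monitored pollutants;
    unmonitored pollutants simply contribute no subindex."""
    return max(subindex(p, c) for p, c in observations.items())

print(psi({"so2_24h_ppm": 0.20, "co_8h_ppm": 4.5}))  # 137.5, "Unhealthful" range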
Air quality

Air quality is determined with respect to the total air pollution in a given area as it interacts with meteorological conditions such as humidity, temperature, and wind to produce an overall atmospheric condition. Poor air quality can manifest itself aesthetically (as a displeasing odor, for example) and can also result in harm to plants, animals, and people, and even damage to objects.

As early as 1881, cities such as Chicago, Illinois, and Cincinnati, Ohio, had passed laws to control some types of pollution, but it wasn't until several air pollution catastrophes occurred in the twentieth century that governments began to give more attention to air quality problems. In 1930, smog trapped in the Meuse River Valley in Belgium caused 60 deaths. Similarly, in 1948, smog was blamed for 20 deaths in Donora, Pennsylvania. Most dramatically, in 1952 a sulfur-laden fog enshrouded London for five days and caused as many as 4,000 deaths over two weeks.
Disasters such as these prompted governments in a number of industrial countries to initiate programs to protect air quality. In 1955, three years after the London tragedy, the United States passed the Air Pollution Control Act, granting funds to assist the states in controlling airborne pollutants. In 1963, the Clean Air Act, which began to place authority for air quality into the hands of the federal government, was established. Today the Clean Air Act, with its 1970 and 1990 amendments, remains the principal air quality law in the United States. The Act established National Ambient Air Quality Standards, under which federal, state, and local monitoring stations at thousands of locations, together with temporary stations set up by the Environmental Protection Agency (EPA) and other federal agencies, directly measure pollutant concentrations in the air and compare those concentrations with national standards for six major pollutants: ozone, carbon monoxide, nitrogen oxides, lead, particulates, and sulfur dioxide. When the air we breathe contains amounts of these pollutants in excess of EPA standards, it is deemed unhealthy, and regulatory action is taken to reduce the pollution levels.

In addition, urban and industrial areas maintain an air pollution index. This scale, a composite of several pollutant levels recorded from a particular monitoring site or sites, yields an overall air quality value. If the index exceeds certain values, public warnings are given; in severe instances residents might be asked to stay indoors and factories might even be closed down.

While such air quality emergencies seem increasingly rare in the United States, developing countries, as well as Eastern European nations, continue to suffer poor air quality, especially in urban areas such as Bangkok, Thailand, and Mexico City, Mexico. In Mexico City, for example, seven out of 10 newborns have higher lead levels in their blood than the World Health Organization considers acceptable. At present, many Third World countries place national economic development ahead of pollution control—and in many countries with rapid industrialization, high population growth, or increasing per capita income, the best efforts of governments to maintain air quality are outstripped by the rapid proliferation of automobiles, escalating factory emissions, and runaway urbanization.

For all the progress the United States has made in reducing ambient air pollution, indoor air pollution may pose even greater risks than all of the pollutants we breathe outdoors. The Radon Gas and Indoor Air Quality Act of 1986 directed the EPA to research and implement a public information and technical assistance program on indoor air quality. From this program has come monitoring equipment to measure an individual's "total exposure" to pollutants in both indoor and outdoor air. Studies done using this equipment
have shown indoor exposures to toxic air pollutants far exceed outdoor exposures for the simple reason that most people spend 90% of their time in office buildings, homes, and other enclosed spaces. Moreover, nationwide energy conservation efforts following the oil crisis of the 1970s led to building designs that trap pollutants indoors, thereby exacerbating the problem. [David Clarke and Jeffrey Muhr]
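The composite index mentioned above is typically computed by converting each pollutant’s measured concentration onto a common scale and reporting the worst result. A minimal sketch in Python, assuming a piecewise-linear conversion in the spirit of the EPA’s Air Quality Index; the breakpoint numbers below are illustrative placeholders, not the official tables:

# Sketch of a composite air pollution index. Breakpoints are illustrative,
# not the official EPA tables; each tuple is (conc_lo, conc_hi, idx_lo, idx_hi).
BREAKPOINTS = {
    "pm2.5": [(0.0, 12.0, 0, 50), (12.1, 35.4, 51, 100), (35.5, 55.4, 101, 150)],
    "ozone": [(0.000, 0.054, 0, 50), (0.055, 0.070, 51, 100), (0.071, 0.085, 101, 150)],
}

def sub_index(pollutant, conc):
    """Linearly interpolate one pollutant's concentration onto the index scale."""
    for c_lo, c_hi, i_lo, i_hi in BREAKPOINTS[pollutant]:
        if c_lo <= conc <= c_hi:
            return (i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo
    raise ValueError("concentration outside the table")

def overall_index(readings):
    """The reported value is the worst (highest) sub-index across pollutants."""
    return max(sub_index(p, c) for p, c in readings.items())

print(round(overall_index({"pm2.5": 40.0, "ozone": 0.061})))  # 112 here

On these illustrative numbers the sketch reports an index of about 112, driven by the particulate reading; it is the worst single pollutant, not the average, that triggers a public warning.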
RESOURCES BOOKS Brown, Lester, ed. The World Watch Reader on Global Environmental Issues. Washington, DC: Worldwatch Institute, 1991. Council on Environmental Quality. Environmental Trends. Washington, DC: U. S. Government Printing Office, 1989. Environmental Progress and Challenges: EPA’s Update. Washington, DC: U. S. Environmental Protection Agency, 1988.
Air quality control region The Clean Air Act defines an air quality control region (AQCR) as a contiguous area where air quality, and thus air pollution, is relatively uniform. In those cases where topography is a factor in air movement, AQCRs often correspond with airsheds. AQCRs may consist of two or more cities, counties, or other governmental entities, and each region is required to adopt consistent pollution control measures across the political jurisdictions involved. AQCRs may even cross state lines, and in these instances the states must cooperate in developing pollution control strategies. Each AQCR is treated as a unit for the purposes of pollution reduction and achieving National Ambient Air Quality Standards. As of 1993, most AQCRs had achieved national air quality standards; however, the AQCRs where standards had not been achieved remained a significant group, home to a large percentage of the United States population. AQCRs containing major metropolitan areas such as Los Angeles, New York, Houston, Denver, and Philadelphia were not achieving air quality standards because of smog, motor vehicle emissions, and other pollutants.
Air quality criteria The relationship between the level of exposure to air pollutant concentrations and the adverse effects on health or public welfare associated with such exposure. Air quality criteria are critical in the development of ambient air quality standards which define levels of acceptably safe exposure to an air pollutant.
Air-pollutant transport
Air-pollutant transport is the advection, or horizontal convection, of air pollutants by local or regional winds from an area where emission occurs to a downwind receptor area. It is sometimes referred to as atmospheric transport of air pollutants. This movement of air pollution is often simulated with computer models, both for point sources and for large diffuse sources such as urban regions. In some cases, strong regional winds or low-level nocturnal jets can carry pollutants hundreds of miles from source areas of high emissions, and the possibility of transport over such distances is increased by topographic channeling of winds through valleys. Air-pollutant transport over such distances is often referred to as long-range transport. Air-pollutant transport is an important consideration in air quality planning: where transported pollution degrades air quality downwind, the success of an air quality program may depend on the ability of air pollution control agencies to control upwind sources.
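The point-source computer models mentioned above are often built around the classical Gaussian plume equation, in which a plume spreads crosswind and vertically as it travels downwind. A minimal sketch in Python, assuming a steady wind, a reflecting ground surface, and simple illustrative power laws in place of the standard stability-dependent dispersion curves:

import math

def dispersion(x):
    """Illustrative crosswind and vertical plume widths (m) at downwind distance x (m)."""
    return 0.08 * x ** 0.9, 0.06 * x ** 0.85

def concentration(q, u, x, y, z, h):
    """Gaussian plume concentration (g/m^3) at receptor (x, y, z) for a source
    of strength q (g/s) at effective height h (m) in a steady wind u (m/s)."""
    sy, sz = dispersion(x)
    crosswind = math.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sz ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sz ** 2)))  # ground reflection
    return q / (2 * math.pi * u * sy * sz) * crosswind * vertical

# Ground-level concentration 2 km directly downwind of a 50-m stack:
print(concentration(q=100.0, u=5.0, x=2000.0, y=0.0, z=0.0, h=50.0))

Regulatory models replace the toy dispersion function with empirically fitted curves keyed to atmospheric stability, which is what makes the same source far more dangerous on a calm, stable night than on a windy afternoon.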
Airshed A geographical region, usually a topographical basin, that tends to have uniform air quality. The air quality within an airshed is influenced predominantly by emission activities native to that airshed, since the elevated topography around the basin constrains horizontal air movement. Pollutants move from one part of an airshed to other parts fairly quickly, but are not readily transferred to adjacent airsheds. An airshed tends to have a relatively uniform climate and relatively uniform meteorological features at any given point in time.
Alar Alar is the trade name for the chemical compound daminozide, manufactured by the Uniroyal Chemical Company. The compound was used from 1968 to keep apples from falling off trees before they are ripe and to keep them red and firm during storage. As late as the early 1980s, up to 40% of all red apples produced in the United States were treated with Alar. In 1985, the Environmental Protection Agency (EPA) found that UDMH (N,N-dimethylhydrazine), a compound produced during the breakdown of daminozide, was a carcinogen. UDMH was routinely produced during the processing of apples, as in the production of apple juice and apple sauce, and the EPA proposed a ban on the use of Alar by apple growers. An outside review of the EPA studies, however, suggested that they were flawed, and the ban was not instituted. Instead, the agency recommended that Uniroyal conduct further studies on possible health risks from daminozide and UDMH. Even without a ban, Uniroyal felt the impact of the EPA’s research well before its own studies were concluded. Apple growers, fruit processors, legislators, and the general public were all frightened by the possibility that such a widely used chemical might be carcinogenic. Many growers, processors, and store owners pledged not to use the compound or to buy or sell apples on which it had been used. By 1987, sales of Alar had dropped by 75%. In 1989, two new studies again brought the subject of Alar to the public’s attention. The consumer research organization Consumers Union found, using a very sensitive test for the chemical, that 11 of 20 red apples it tested contained Alar. In addition, 23 of 44 samples of apple juice tested contained detectable amounts of the compound. The Natural Resources Defense Council (NRDC) announced its findings on the compound at about the same time. The NRDC concluded that Alar and certain other agricultural chemicals pose a threat to children about 240 times higher than the one-in-a-million risk traditionally used by the EPA to determine the acceptability of a product used in human foods. The studies by the NRDC and Consumers Union created a panic among consumers, apple growers, and apple processors. Many stores removed all apple products from their shelves, and some growers destroyed their whole crop of apples. The industry suffered millions of dollars in damage. Representatives of the apple industry continued to question how much of a threat Alar truly posed to consumers, claiming that the carcinogenic risks identified by the EPA, NRDC, and Consumers Union were greatly exaggerated. But in May of that same year, the EPA announced interim data from its most recent study, which showed that UDMH caused blood-vessel tumors in mice. The agency once more declared its intention to ban Alar, and within a month Uniroyal announced it would end sales of the compound in the United States.
[David E. Newton]
RESOURCES PERIODICALS “Alar: Not Gone, Not Forgotten.” Consumer Reports 52 (May 1989): 288–292. Roberts, L. “Alar: The Numbers Game.” Science 243 (17 March 1989): 1430. ———. “Pesticides and Kids.” Science 243 (10 March 1989): 1280–1281.
Alaska Highway The Alaska Highway, sometimes referred to as the Alcan (Alaska-Canada) Highway, is the final link of a binational transportation corridor that provides an overland route between the contiguous United States and Alaska. The first all-weather, 1,522-mi (2,451-km) Alcan Military Highway was hurriedly constructed during 1942–1943 to provide land access between Dawson Creek, a Canadian village in northeastern British Columbia, and Fairbanks, a town on the Chena River in central Alaska. Construction of the road was motivated by the perception of a strategic, but ultimately unrealized, Japanese threat to maritime supply routes to Alaska during World War II. The route of the Alaska Highway extended through what was then a wilderness. The United States Army Corps of Engineers and the civilian U.S. Public Roads Administration supplied an aggressive technical vision, and approximately 11,000 American soldiers and 16,000 American and Canadian civilians supplied the labor. In spite of the extraordinary difficulties of working in unfamiliar and inhospitable terrain, the route was opened for military passage in less than two years. Among the formidable challenges faced by the workers were the construction of 133 bridges and thousands of smaller culverts across energetic watercourses, the infilling of alignments through boggy muskeg capable of literally swallowing bulldozers, and winter temperatures so cold that vehicles were left running for fear they would not restart (steel dozer blades became so brittle that they cracked on impact with rock or frozen ground). In hindsight, the planning and construction of the Alaska Highway could be considered an unmitigated environmental debacle. The enthusiastic engineers were almost totally inexperienced in the specialized techniques of arctic construction, especially methods for dealing with permafrost, or permanently frozen ground. If the integrity of permafrost is not maintained during construction, this underground, ice-rich matrix will thaw and become unstable, and its water content will run off. The resulting erosion, mudflow, slumping, and thermokarst (collapse of the land into subsurface voids left by the loss of water) could produce an unstable morass. Repairs were very difficult, and reconstruction was often unsuccessful, requiring abandonment of some original alignments. Physical and biological disturbances caused terrestrial landscape scars that persist to this day and will continue to be visible (especially from the air) for centuries. Extensive reaches of aquatic habitat were secondarily degraded by erosion and sedimentation. The much more careful, intensively scrutinized, and ecologically sensitive approaches used in the Arctic today, for example during the planning and construction of the trans-Alaska pipeline, are in marked contrast with the unfettered, free-wheeling engineering associated with the initial construction of the Alaska Highway.
Map of the Alaska (Alcan) Highway. (Line drawing by Laura Gritt Lawson. Reproduced by permission.)
The Alaska Highway has been more or less continuously upgraded since its initial completion and was opened to unrestricted traffic in 1947. Non-military benefits of the Alaska Highway include access to a vast region of the interior of northwestern North America. This access fostered economic development through mining, forestry, trucking, and tourism, and helped diminish the sense of isolation felt by many northern residents living along the route. Compared with the real dangers of vehicular passage along the Alaska Highway during its earlier years, today the route safely provides one of North America’s most spectacular ecotourism opportunities. Landscapes range from alpine tundra to expansive boreal forest, replete with cold, vigorous streams and rivers. There are abundant opportunities to view large mammals such as moose (Alces alces), caribou (Rangifer tarandus), and bighorn sheep (Ovis canadensis), as well as charismatic smaller mammals and birds and a wealth of interesting arctic, boreal, and alpine species of plants. [Bill Freedman Ph.D.]
RESOURCES BOOKS Christy, J. Rough Road to the North. Markham, ON: Paperjacks, 1981.
PERIODICALS Alexandra, V., and K. Van Cleve. “The Alaska Pipeline: A Success Story.” Annual Review of Ecology and Systematics 14 (1983): 443–63.
Alaska National Interest Lands Conservation Act (1980) Commonly known as the Alaska Lands Act, the Alaska National Interest Lands Conservation Act (ANILCA) protected 104 million acres (42 million ha), or 28%, of the state’s 375 million acres (152 million ha) of land. The law added 44 million acres (18 million ha) to the national park system, 55 million acres (22.3 million ha) to the fish and wildlife refuge system, and 3 million acres (1.2 million ha) to the national forest system, and made 26 additions to the national wild and scenic rivers system. The law also designated 56.7 million acres (23 million ha) of land as wilderness, with the stipulation that 70 million acres (28.4 million ha) of additional land be reviewed for possible wilderness designation. The genesis of this act can be traced to 1959, when Alaska became the forty-ninth state. As part of the statehood act, Alaska could choose 104 million acres (42.1 million ha) of federal land to be transferred to the state. This selection process was halted in 1966 to clarify land claims made by Alaskan indigenous peoples. In 1971, the Alaska Native Claims Settlement Act (ANCSA) was passed to satisfy the native land claims and allow the state selection process to continue. This act stipulated that the Secretary of the Interior could withdraw 80 million acres (32.4 million ha) of land for protection as national parks and monuments, fish and wildlife refuges, and national forests, and that these lands would not be available for state or native selection. Congress would have to approve these designations by 1978. If Congress failed to act, the state and the natives could select any lands not already protected. These lands were referred to as national interest, or d-2, lands. Secretary of the Interior Rogers Morton recommended 83 million acres (33.6 million ha) for protection in 1973, but this did not satisfy environmentalists. The ensuing conflict over how much and which lands should be protected, and how these lands should be protected, was intense. The environmental community formed the Alaska Coalition, which by 1980 included over 1,500 national, regional, and local organizations with a total membership of 10 million people. Meanwhile, the state of Alaska and development-oriented interests launched a fierce and well-financed campaign to reduce the area of protected land. In 1978, the House passed a bill protecting 124 million acres (50.2 million ha). The Senate passed a bill protecting far less land, and House-Senate negotiations over a compromise broke down in October. Thus, Congress would not act before the December 1978 deadline. In response, the executive branch acted. Secretary of the Interior Cecil Andrus withdrew 110 million acres (44.6 million ha) from state selection and mineral entry. President Jimmy
Carter then designated 56 million acres (22.7 million ha) of these lands as national monuments under the authority of the Antiquities Act. Forty million additional acres (16.2 million ha) were withdrawn as fish and wildlife refuges, and 11 million acres (4.5 million ha) of existing national forests were withdrawn from state selection and mineral entry. Carter indicated that he would rescind these actions once Congress had acted. In 1979, the House passed a bill protecting 127 million acres (51.4 million ha). The Senate passed a bill designating 104 million acres (42.1 million ha) as national interest lands in 1980. Environmentalists and the House were unwilling to reduce the amount of land to be protected. In November, however, Ronald Reagan was elected President, and the environmentalists and the House decided to accept the Senate bill rather than face the potential for much less land under a President who would side with development interests. President Carter signed ANILCA into law on December 2, 1980. ANILCA also mandated that the U.S. Geological Survey (USGS) conduct biological and petroleum assessments of the coastal plain section of the 19.8-million-acre (8-million-ha) Arctic National Wildlife Refuge, known as area 1002. While the USGS did identify a significant quantity of oil reserves in the area, it also reported that petroleum development would adversely impact many native species, including caribou (Rangifer tarandus), snow geese (Chen caerulescens), and muskoxen (Ovibos moschatus). In 2001, the Bush administration unveiled a new energy policy that would open up this area to oil and natural gas exploration. In June 2002, a House version of the energy bill (H.R. 4) that favors opening ANWR to drilling and a Senate version (S. 517) that does not were headed into conference to reconcile the differences between the two bills. [Christopher McGrory Klyza and Paula Anne Ford-Martin]
RESOURCES BOOKS Lentfer, Hank and C. Servid, eds. Arctic Refuge: A Circle of Testimony. Minneapolis, MN: Milkweed Editions, 2001.
OTHER Alaska National Interest Lands Conservation Act. 16 USC 3101-3223; Public Law 96-487. [June 2002]. Douglas, D. C., et al., eds. Arctic Refuge Coastal Plain Terrestrial Wildlife Research Summaries. Biological Science Report USGS/BRD/BSR-2002-0001. [June 2002].
ORGANIZATIONS The Alaska Coalition, 419 6th St., #328, Juneau, AK USA 99801, (907) 586-6667, Fax: (907) 463-3312, Email: [email protected]
Alaska National Wildlife Refuge see Arctic National Wildlife Refuge
Alaska pipeline see Trans-Alaska pipeline
Albedo The reflecting power of a surface, expressed as the ratio of reflected radiation to incident (incoming) radiation. Albedo is also called the “reflection coefficient” and derives from the Latin root word albus, meaning whiteness. Although sometimes expressed as a percentage, albedo is more commonly measured as a fraction on a scale from zero to one, with a value of one denoting a completely reflective, white surface and a value of zero denoting an absolutely black surface that reflects no light rays. Albedo varies with surface characteristics such as color and composition, as well as with the angle of the sun. The albedo of natural earth surface features such as oceans, forests, deserts, and crop canopies varies widely. Some measured values of albedo for various surfaces are shown below:
Types of Surface (Albedo)
Fresh, dry snow cover: 0.80–0.95
Aged or decaying snow cover: 0.40–0.70
Oceans: 0.07–0.23
Dense clouds: 0.70–0.80
Thin clouds: 0.25–0.50
Tundra: 0.15–0.20
Desert: 0.25–0.29
Coniferous forest: 0.10–0.15
Deciduous forest: 0.15–0.20
Field crops: 0.20–0.30
Bare dark soils: 0.05–0.15
The albedo of clouds in the atmosphere is important to life on Earth because extreme levels of radiation absorbed by the earth would make the planet uninhabitable; at any moment, about 50% of the planet’s surface is covered by clouds. The mean albedo for the earth, called the planetary albedo, is about 30–35%. [Mark W. Seeley]
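Because albedo is a simple ratio, the average albedo of a mixed landscape can be estimated as an area-weighted average of its surface types. A brief sketch in Python, using albedo values from the table above; the area fractions are invented for illustration:

def albedo(reflected, incident):
    """Reflection coefficient: the fraction of incoming radiation reflected."""
    return reflected / incident

# A hypothetical landscape; albedos from the table above, fractions invented.
surfaces = {
    "coniferous forest": (0.50, 0.12),  # (area fraction, typical albedo)
    "field crops": (0.30, 0.25),
    "bare dark soils": (0.20, 0.10),
}

mean_albedo = sum(frac * a for frac, a in surfaces.values())
print(albedo(210.0, 1000.0))  # 0.21: 210 W/m^2 reflected of 1,000 incident
print(mean_albedo)            # about 0.155 for this landscape mix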
Algal bloom Algae are simple aquatic plants that range from single cells to filaments and colonies; they are commonly found floating in ponds, lakes, and oceans. Populations of algae fluctuate with the availability of nutrients, and a sudden increase in nutrients often results in a profusion of algae known as an algal bloom. The growth of a particular algal species can be both sudden and massive. Algal cells can increase to very high densities in the water, often thousands of cells per milliliter, and the water itself can be colored brown, red, or green. Algal blooms occur in freshwater systems and in marine environments, and they usually disappear in a few days to a few weeks. These blooms consume oxygen, increase turbidity, and clog lakes and streams. Some algal species release water-soluble compounds that may be toxic to fish and shellfish, resulting in fish kills and poisoning episodes. Algal groups are generally classified on the basis of the pigments that color their cells. The most common algal groups are blue-green algae, green algae, red algae, and brown algae. Algal blooms in freshwater lakes and ponds tend to be caused by blue-green and green algae. The excessive amounts of nutrients that cause these blooms are often the result of human activities. For example, nitrates and phosphates introduced into a lake from fertilizer runoff during a storm can cause rapid algal growth. Some common blue-green algae known to cause blooms as well as release nerve toxins are Microcystis, Nostoc, and Anabaena. Red tides in coastal areas are a type of algal bloom. They are common in many parts of the world, including the New York Bight, the Gulf of California, and the Red Sea. The causes of algal blooms are not as well understood in marine environments as they are in freshwater systems. Although human activities may well have an effect on these events, weather conditions probably play a more important role: turbulent storms that follow long, hot, dry spells have often been associated with algal blooms at sea. Toxic red tides most often consist of genera from the dinoflagellate algal group such as Gonyaulax and Gymnodinium. The potency of the toxins has been estimated to be 10 to 50 times higher than that of cyanide or curare, and people who eat exposed shellfish may suffer from paralytic shellfish poisoning within 30 minutes of consumption. A fish kill of 500 million fish was reported from a red tide in Florida in 1947. A number of blue-green algal genera such as Oscillatoria and Trichodesmium have also been associated with red blooms, but they
are not necessarily toxic in their effects. Some believe that the blooms caused by these genera gave the Red Sea its name. The economic and health consequences of algal blooms can be sudden and severe, but the effects are generally not long lasting. There is little evidence that algal blooms have long-term effects on water quality or ecosystem structure. [Usha Vedagiri and Douglas Smith]
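The suddenness of a bloom follows from simple exponential arithmetic: a sparse population that doubles regularly reaches the densities described above within about two weeks. A toy calculation in Python; the starting density and doubling time are invented for illustration:

# Unchecked exponential growth toward bloom densities. The starting
# density and doubling time below are illustrative assumptions.
start_density = 10.0        # cells per milliliter
doubling_time_days = 1.0    # assumed constant while nutrients are in surplus

for day in range(0, 15, 2):
    density = start_density * 2 ** (day / doubling_time_days)
    print("day", day, "->", round(density), "cells/mL")
# By day 10 the population exceeds 10,000 cells/mL, i.e., the
# "thousands of cells per milliliter" typical of an observed bloom.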
RESOURCES BOOKS Lerman, M. Marine Biology: Environment, Diversity and Ecology. Menlo Park, CA: Benjamin/Cummings, 1986.
PERIODICALS Culotta, E. “Red Menace in the World’s Oceans.” Science 257 (11 September 1992): 1476–77. Mlot, C. “White Water Bounty: Enormous Ocean Blooms of White-Plated Phytoplankton Are Attracting the Interest of Scientists.” Bioscience 39 (April 1989): 222–24.
Algicide The presence of nuisance algae can cause unsightly appearance, odors, slime, and coating problems in aquatic media. Algicides are chemical agents used to control or eradicate the growth of algae in aquatic media such as industrial tanks, swimming pools, and lakes. These agents range from simple inorganic compounds, such as copper sulfate, that are broad-spectrum in effect and control a variety of algal groups, to complex organic compounds that are designed to be species-specific in their effects. Algicides usually require repeated application or continuous application at low doses in order to maintain effective control.
Alpine tundra see Tundra
Allelopathy Derived from the Greek words allelo (other) and pathy (causing injury to), allelopathy is a form of competition among plants. One plant produces and releases a chemical into the surrounding soil that inhibits the germination or growth of other species in the immediate area. These chemical substances, which include both acids and bases, are called secondary compounds. For example, black walnut (Juglans nigra) trees release a chemical called juglone that prevents other plants, such as tomatoes, from growing in the immediate area around each tree. In this way, plants such as black walnut reduce competition for space, nutrients, water, and sunlight.
Allergen Any substance that can bring about an allergic response in an organism. Hay fever and asthma are two common allergic responses. The allergens that evoke these responses include pollen, fungi, and dust. Allergens can be described as host-specific agents in that a particular allergen may affect some individuals, but not others. A number of air pollutants are known to be allergens. Formaldehyde, thiocyanates, and epoxy resins are examples. People who are allergic to natural allergens, such as pollen, are more inclined to be sensitive also to synthetic allergens, such as formaldehyde.
Alligator, American The American alligator (Alligator mississippiensis) is a member of the reptilian order Crocodylia, which consists of about 21 species found in tropical and subtropical regions throughout the world. It is a species that has been reclaimed from the brink of extinction. Historically, the American alligator ranged in the Gulf and Atlantic coast states from Texas to the Carolinas, with rather large populations concentrated in the swamps and river bottomlands of Florida and Louisiana. From the late nineteenth century into the middle of the twentieth century, the population of this species decreased dramatically. With no restrictions on their activities, hunters killed alligators as pests or to harvest their skin, which was highly valued in the leather trade. The American alligator was killed in such great numbers that biologists predicted its probable extinction. It has been estimated that about 3.5 million of these reptiles were slaughtered in Louisiana between 1880 and 1930. The population was also impacted by the fad of selling young alligators as pets, principally in the 1950s. States began to take action in the early 1960s to save the alligator from extinction. In 1963 Louisiana banned all legalized trapping, closed the alligator hunting season, and stepped up enforcement of game laws against poachers. By the time the Endangered Species Act was passed in 1973, the species was already experiencing a rapid recovery. Because of the successful re-establishment of alligator populations, its endangered classification was downgraded in several southeastern states, and there are now strictly regulated seasons that allow alligator trapping. Due to the persistent demand for its hide for leather goods and an increasing market for the reptile’s meat, alligator farms are now both legal and profitable.
An American alligator (Alligator mississippiensis). (Photograph by B. Arroyo. U. S. Fish & Wildlife Service. Reproduced by permission.)
Human fascination with large, dangerous animals, along with the American alligator’s near extinction, has made it one of North America’s best studied reptile species. Population pressures, primarily the result of decades of ruthless hunting, have led to a decrease in the maximum size attained by this species. The growth of a reptile is indeterminate, and alligators continue to grow as long as they are alive, but old adults from a century ago attained larger sizes than their counterparts do today. The largest recorded American alligator was an old male killed in January 1890, in Vermilion Parish, Louisiana, which measured 19.2 ft (6 m) long. The largest female ever taken was only about half that size. Alligators do not reach sexual maturity until they are about 6 ft (1.8 m) long and nearly 10 years old. Females construct a nest mound in which they lay about 35–50 eggs. The nest is usually 5–7 ft (1.5–2.1 m) in diameter and 2–3 ft (0.6–0.9 m) high, and decaying vegetation produces heat which keeps the eggs at a fairly constant temperature during incubation. The young stay with their mother through their
first winter, striking out on their own when they are about 1.5 ft (0.5 m) in length. [Eugene C. Beckham]
RESOURCES BOOKS Crocodiles. Proceedings of the 9th Working Meeting of the IUCN/SSC Crocodile Specialist Group, Lae, Papua New Guinea. Vol. 2. Gland, Switzerland: IUCN-The World Conservation Union, 1990. Dundee, H. A., and D. A. Rossman. The Amphibians and Reptiles of Louisiana. Baton Rouge: LSU Press, 1989. Webb, G. J. W., S. C. Manolis, and P. J. Whitehead, eds. Wildlife Management: Crocodiles and Alligators. Chipping Norton, Australia: Surrey Beatty and Sons, 1987.
OTHER “Alligator mississippiensis in the Crocodilians, Natural History and Conservation.” Florida Museum of Natural History. [cited May 2002] . “The American Alligator.” University of Florida, Gainesville. [cited May 2002]. .
Alligator mississippiensis see Alligator, American
All-terrain vehicle see Off-road vehicles
Alpha particle A particle emitted by certain kinds of radioactive materials. An alpha particle is identical to the nucleus of a helium atom, consisting of two protons and two neutrons. Some common alpha-particle emitters are uranium-235, uranium-238, radium-226, and radon-222. Alpha particles have relatively low penetrating power. They can be stopped by a thin sheet of paper or by human skin. They constitute a health problem, therefore, only when they are taken into the body. The inhalation of alpha-emitting radon gas escaping from bedrock into houses in some areas is thought to constitute a health hazard.
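The last point can be written in nuclear notation. In the alpha decay of radon-222, the emitted alpha particle is a helium-4 nucleus, so the mass number drops by four and the atomic number by two, leaving polonium-218:

$$^{222}_{86}\mathrm{Rn} \;\rightarrow\; {}^{218}_{84}\mathrm{Po} \;+\; {}^{4}_{2}\mathrm{He}$$

Both mass number (222 = 218 + 4) and charge (86 = 84 + 2) balance across the reaction.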
Alternative energy sources Coal, oil, and natural gas provide over 85% of the total primary energy used around the world. Although figures differ in various countries, nuclear reactors and hydroelectric power together produce less than 10% of the total world energy. Wind power, active and passive solar systems, and geothermal energy are examples of alternative energy sources. Collectively, these make up the final small fraction of total energy production. The exact contribution alternative energy sources make to the total primary energy used around the world is not known. Conservative estimates place their share at 3–4%, but some energy experts dispute these figures. Amory Lovins has argued that the statistics collected are based primarily on large electric utilities and the regions they serve. They fail to account for areas remote from major power grids, which are more likely to use solar energy, wind energy, or other sources. When these areas are taken into consideration, Lovins claims, alternative energy sources contribute as much as 11% to the total primary energy used in the United States. Animal manure, furthermore, is widely used as an energy source in India, parts of China, and many African nations, and when this is taken into account the percentage of the worldwide contribution alternative sources make to energy production could rise as high as 10–15%. Now an alternative energy source, wind power is one of the earliest forms of energy used by humankind. Wind is caused by the uneven heating of the earth’s surface, and its energy is equal to about
2% of the solar energy that reaches the earth. In quantitative terms, the amount of kinetic energy within the earth’s atmosphere is equal to about 10,000 trillion kilowatt hours. The power that can be extracted from wind rises steeply with wind speed (in proportion to the cube of the wind velocity), and the ideal location for a windmill generator is an area with constant and relatively fast winds and no obstacles such as buildings or trees. An efficient windmill can produce 175 watts per square meter of propeller blade area at a height of 82 ft (25 m). The estimated cost of generating one kilowatt hour by wind power is about eight cents, as compared to five cents for hydropower and 15 cents for nuclear power. The two largest utilities in California purchase wind-generated electricity, and though this state leads the country in the utilization of wind power, Denmark leads the world. The Scandinavian nation has refused to use nuclear power, and it expects to obtain 10% of its energy needs from windmills. Solar energy can be utilized either directly as heat or indirectly by converting it to electrical power using photovoltaic cells. Greenhouses and solariums are the most common examples of the direct use of solar energy, with glass windows admitting visible light from the sun while restricting the escape of heat. Flat-plate collectors are another direct method, and mounted on rooftops they can provide one third of the energy required for space heating. Windows and collectors alone are considered passive systems; an active solar system uses a fan, pump, or other machinery to transport the heat generated from the sun. Photovoltaic cells are made of semiconductor materials such as silicon. These cells are capable of absorbing part of the solar flux to produce a direct electric current with about 14% efficiency. The current cost of photovoltaic generating capacity is about four dollars per watt. However, a thin-film technology is being perfected for the production of these cells, and the cost per watt will eventually be reduced because less material will be required. Photovoltaics are now being used economically in lighthouses, boats, rural villages, and other remote areas. Large solar systems have been most effective using trackers that follow the sun or mirror reflectors that concentrate its rays. Geothermal energy is the natural heat generated in the interior of the earth, and like solar energy it can be used directly as heat or indirectly to generate electricity. Steam is classified as either dry (no water droplets) or wet (mixed with water). When it is generated in certain areas containing corrosive sulfur compounds, it is known as “sour steam,” and when generated in areas that are free of sulfur it is known as “sweet steam.” Geothermal energy can be used to generate electricity by the flashed-steam method, in which high-temperature geothermal brine serves as a heat exchanger to convert injected water into steam, and the steam produced is used to turn a turbine. When geothermal
wells are not hot enough to create steam, a fluid that evaporates at a much lower temperature than water, such as isobutane or ammonia, can be placed in a closed system where the geothermal heat provides the energy to evaporate the fluid and run the turbine. Twenty countries worldwide utilize this energy source, including the United States, Mexico, Italy, Iceland, Japan, and the former Soviet Union. Unlike solar energy and wind power, geothermal energy is not free of environmental impact: it contributes to air pollution, and it can release dissolved salts and, in some cases, toxic heavy metals such as mercury and arsenic. Though there are several ways of utilizing energy from the ocean, the most promising are the harnessing of tidal power and ocean thermal energy conversion. The power of ocean tides is based on the difference between high and low water. In order for tidal power to be effective, the difference in height needs to be very great, more than 15 ft (4.6 m), and there are only a few places in the world where such differences exist. These include the Bay of Fundy and a few sites in China. Ocean thermal energy conversion utilizes temperature differences rather than tides. Ocean temperature is stratified, especially near the tropics, and the process takes advantage of this fact by using a fluid with a low boiling point, such as ammonia. The vapor from the fluid drives a turbine, and cold water from lower depths is pumped up to condense the vapor back into liquid. The electrical power generated by this method can be shipped to shore or used to operate a floating plant such as a cannery. Other sources of alternative energy are currently being explored, some of which are still experimental. These include harnessing the energy in biomass through the production of wood from trees or the production of ethanol from crops such as sugar cane or corn. Methane gas can be generated from the anaerobic breakdown of organic waste in sanitary landfills and from wastewater treatment plants. With the cost of garbage disposal rapidly increasing, the burning of garbage is becoming a viable option as an energy source. Adequate air pollution controls are necessary, but trash can be burned to heat buildings, and municipal garbage is currently being used to generate electricity in Hamburg, Germany. In an experimental method known as magnetohydrodynamics, hot gas, seeded with a readily ionized substance such as a potassium salt, is passed through a strong magnetic field, where it produces an electrical current. This process has no moving parts and an efficiency of 20–30%. Ethanol and methanol can be produced from biomass and used in transportation; in fact, methanol currently powers Indianapolis race cars. Hydrogen could be valuable if problems of supply and storage can be solved. It is very clean-burning, forming water, and may be combined with
oxygen in fuel cells to generate electricity. Also, it is not nearly as explosive as gasoline. Of all the alternatives, energy conservation is perhaps the most important, and improving energy efficiency is the best way to meet energy demands without adding to air and water pollution. One reason the United States survived the energy crises of the 1970s was that it was able to curtail some of its immense waste. Relatively easy lifestyle alterations, vehicle improvements, building insulation, and more efficient machinery and appliances have significantly reduced the country’s potential energy demand. Experts have estimated that it is possible to double the efficiency of electric motors, triple the intensity of light bulbs, quadruple the efficiency of refrigerators and air conditioners, and quintuple the gasoline mileage of automobiles. Several automobile manufacturers in Europe and Japan have already produced prototype vehicles with very high gasoline mileage. Volvo has developed the LCP 2000, a passenger sedan that holds four to five people, meets all United States safety standards, accelerates from 0–50 mph (0–80.5 km/h) in 11 seconds, and has a high fuel efficiency rating. Alternative fuels will be required to meet future energy needs. Enormous investments in new technology and equipment will be needed, and potential supplies are uncertain, but there is clearly hope for an energy-abundant future. [Muthena Naseri and Douglas Smith]
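The cube-law dependence of wind power on wind speed, noted earlier in this entry, is easy to make concrete. A short sketch in Python; the 35% conversion efficiency is an illustrative assumption (the theoretical Betz limit is about 59%), not a figure from this entry:

# Power density of wind: P/A = 0.5 * air density * v^3, of which a real
# machine captures only a fraction. The efficiency here is illustrative.
RHO_AIR = 1.225  # kg/m^3, air density near sea level

def wind_watts_per_m2(v, efficiency=0.35):
    """Extractable power (W) per square meter of swept area at wind speed v (m/s)."""
    return efficiency * 0.5 * RHO_AIR * v ** 3

for v in (4.0, 6.0, 8.0):
    print(v, "m/s ->", round(wind_watts_per_m2(v)), "W/m^2")
# Doubling the wind speed multiplies the available power by eight, which
# is why siting in steady, fast winds matters so much.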
RESOURCES BOOKS Alternative Energy Handbook. Englewood Cliffs, NJ: Prentice Hall, 1993. Brower, M. Cool Energy: Renewable Solutions to Environmental Problems. Cambridge: MIT Press, 1992. Brown, Lester R., ed. The World Watch Reader on Global Environmental Issues. New York: W. W. Norton, 1991. Goldemberg, J. Energy for a Sustainable World. New York: Wiley, 1988. Schaeffer, J. Alternative Energy Sourcebook: A Comprehensive Guide to Energy Sensible Technologies. Ukiah, CA: Real Goods Trading Corp., 1992. Shea, C. P. Renewable Energy: Today’s Contribution, Tomorrow’s Promise. Washington, DC: Worldwatch Institute, 1988.
PERIODICALS Stein, J. “Hydrogen: Clean, Safe, and Inexhaustible.” Amicus Journal 12 (Spring 1990): 33-36.
Alternative fuels see Renewable energy
Aluminum Aluminum, a light metal, comprises about 8% of the earth’s crust, ranking as the third-most abundant element after oxygen (47%) and silicon (28%). Virtually all environmental
aluminum is present in mineral forms that are almost insoluble in water, and therefore not available for uptake by organisms. Most common among these forms of aluminum are various aluminosilicate minerals, aluminum clays and sesquioxides, and aluminum phosphates. However, aluminum can also occur as chemical species that are available for biological uptake, sometimes causing toxicity. In general, bio-available aluminum is present in various water-soluble, ionic or organically complexed chemical species. Water-soluble concentrations of aluminum are largest in acidic environments, where toxicity to nonadapted plants and animals can be caused by exposure to Al³⁺ and Al(OH)₂⁺ ions, and in alkaline environments, where Al(OH)₄⁻ is most prominent. Organically bound, water-soluble forms of aluminum, such as complexes with fulvic or humic acids, are much less toxic than ionic species. Aluminum is often considered to be the most toxic chemical factor in acidic soils and aquatic habitats.
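The pH dependence described above reflects the stepwise hydrolysis of dissolved aluminum: as pH rises, hydroxide ions successively replace water around the metal ion. Schematically, in LaTeX notation:

$$\mathrm{Al^{3+}} \xrightarrow{+\mathrm{OH^-}} \mathrm{AlOH^{2+}} \xrightarrow{+\mathrm{OH^-}} \mathrm{Al(OH)_2^{+}} \xrightarrow{+\mathrm{OH^-}} \mathrm{Al(OH)_3} \xrightarrow{+\mathrm{OH^-}} \mathrm{Al(OH)_4^{-}}$$

The nearly insoluble hydroxide Al(OH)₃ dominates near neutral pH, which is why dissolved, bio-available aluminum is most abundant at the acidic and alkaline extremes.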
Amazon basin The Amazon basin, the region of South America drained by the Amazon River, represents the largest area of tropical rain forest in the world. Extending across nine different countries and covering an area of 2.3 million square mi (6 million sq. km), the Amazon basin contains the greatest abundance and diversity of life anywhere on the earth. Tremendous numbers of plant and animal species that occur there have yet to be discovered or properly named by scientists, as this area has only begun to be explored by competent researchers. It is estimated that the Amazon basin contains over 20% of all higher plant species on Earth, as well as about 20% of all birdlife and 10% of all mammals. More than 2,000 known species of freshwater fishes live in the Amazon River and represent about 8% of all fishes on the planet, both freshwater and marine. This number of species is about three times the entire ichthyofauna of North America and almost ten times that of Europe. The most astonishing numbers, however, come from the river basin’s insects. Every expedition to the Amazon basin yields countless new species of insects, with some individual trees in the tropical forest providing scientists with hundreds of undescribed forms. Insects represent about three-fourths of all animal life on Earth, yet biologists believe the 750,000 species that have already been scientifically named account for less than 10% of all insect life that exists. However incredible these examples of biodiversity are, they may soon be destroyed as the rampant deforestation in the Amazon basin continues. Much of this destruction is directly attributable to human population growth.
The number of people who have settled in the Amazonian uplands of Colombia and Ecuador has increased by 600% over the past 40 years, and this has led to the clearing of over 65% of the region’s forests for agriculture. In Brazil, up to 70% of the deforestation is tied to cattle ranching. In the past, large governmental subsidies and tax incentives encouraged this practice, which had little or no financial success and caused widespread environmental damage. Tropical soils rapidly lose their fertility, and this allows only limited annual meat production: often only 300 lb (136 kg) per acre, compared to over 3,000 lb (1,360 kg) per acre in North America. Further damage to the tropical forests of the Amazon basin is linked to commercial logging. Although only five of the approximately 1,500 tree species of the region are extensively logged, tremendous damage is done to the surrounding forest as these are selectively removed. When loggers build roads and move in heavy equipment, they may damage or destroy half of the trees in a given area. The deforestation taking place in the Amazon basin has a wide range of environmental effects. The clearing and burning of vegetation produces smoke and other air pollution, which at times has been so abundant that it is clearly visible from space. Clearing also leads to increased soil erosion after heavy rains, and can result in water pollution through siltation as well as increased water temperatures from increased exposure to sunlight. Yet the most alarming, and definitely the most irreversible, environmental problem facing the Amazon basin is the loss of biodiversity. Through the irrevocable process of extinction, this may cost humanity more than the loss of species. It may cost us the loss of potential discoveries of medicines and other beneficial products derived from these species. [Eugene C. Beckham]
RESOURCES BOOKS Caufield, C. In the Rainforest: Report From a Strange, Beautiful, Imperiled World. Chicago: University of Chicago Press, 1986. Cockburn, A., and S. Hecht. The Fate of the Forest: Developers, Destroyers, and Defenders of the Amazon. New York: Harper/Perennial, 1990. Collins, M. The Last Rain Forests: A World Conservation Atlas. London: Oxford University Press, 1990. Cowell, A. Decade of Destruction: The Crusade to Save the Amazon Rain Forest. New York: Doubleday, 1991. Margolis, M. The Last New World: The Conquest of the Amazon Frontier. New York: Norton, 1992. Wilson, E. O. The Diversity of Life. Cambridge, MA: Belknap Press, 1992.
PERIODICALS Holloway, M. “Sustaining the Amazon.” Scientific American 269 (July 1993): 90–96+.
Ambient air The air, external to buildings and other enclosures, found in the lower atmosphere over a given area, usually near the surface. Air pollution standards normally refer to ambient air.
Amenity value The idea that something has worth because of the pleasant feelings it generates in those who use or view it. Amenity value is often used in cost-benefit analysis, particularly in shadow pricing, to estimate the worth of natural resources that will not be harvested for economic gain. A virgin forest has amenity value, but that value will decrease if the forest is harvested; thus the amenity value is weighed against the value of the harvested timber.
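In a cost-benefit comparison of this kind, the recurring amenity benefits of the standing forest are usually discounted to a present value and set against the one-time harvest revenue. A toy illustration in Python; every figure is invented for the example:

# Toy cost-benefit comparison: standing forest (annual amenity value,
# discounted to present value) versus one-time timber revenue.
# All figures below are invented for illustration.
amenity_per_year = 120_000.0   # shadow-priced recreation and scenery value
discount_rate = 0.05
horizon_years = 30
timber_revenue = 1_500_000.0   # one-time net revenue if harvested

pv_amenity = sum(amenity_per_year / (1 + discount_rate) ** t
                 for t in range(1, horizon_years + 1))

print(round(pv_amenity))  # about 1,844,000, exceeding the timber figure
print("leave standing" if pv_amenity > timber_revenue else "harvest")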
American alligator see Alligator, American
American Box Turtle Box turtles are in the Order Chelonia, Family Emydidae, and genus Terrapene. There are two major species in the United States: Terrapene carolina (Eastern box turtle) and Terrapene ornata (Western or ornate box turtle). Box turtles are easily recognized by their dome-shaped upper shell (carapace) and by their lower shell (plastron), which is hinged near the front. This hinging allows them to close up tightly into the “box” when in danger (hence their name). Box turtles are fairly small, having an adult maximum length of 4–7 in (10–18 cm). Their range is restricted to North America, with the Eastern species located over most of the eastern United States and the Western species located in the Central and Southwestern United States and into Mexico, but not as far west as California. Both species are highly variable in coloration and pattern, ranging from a uniform tan to dark brown or black, with yellow spots or streaks. They prefer a dry habitat such as woodlands, open brush lands, or prairie. They typically inhabit sandy soil, but are sometimes found in springs or ponds during hot weather. During the winter, they hibernate in the soil below the frost line, often as deep as 2 ft (60 cm). Their home range is usually fairly small, and they often live within areas of less than 300 yd² (250 m²).
Eastern box turtle. (Photograph by Robert Huffman. Fieldmark Publications. Reproduced by permission.)
Box turtles are omnivorous, feeding on living and dead insects, earthworms, slugs, fruits, berries (particularly blackberries and strawberries), leaves, and mushrooms. They have been known to ingest some mushrooms that are poisonous to humans, and there have been reports of people eating box turtles and getting sick. Other than this, box turtles are harmless to humans and are commonly collected and sold as pets (although this should be discouraged because they are now a threatened species). They can be fed raw hamburger, canned pet food, or leafy vegetables. Box turtles normally live 30–40 years. Some have been reported with a longevity of more than one hundred years, which makes them the longest-lived land turtle. They are active from March until November and are diurnal, usually being more active in the early morning. During the afternoons they typically seek shaded areas. They breed during the spring and autumn, and the females build nests from May until July, typically in sandy soil where they dig a hole with their hind feet. The females can store sperm for several years. They typically lay three to eight elliptical eggs, each about 1.5 in (4 cm) long. Male box turtles have a slight concavity in their plastron that aids in mounting females during copulation. All four toes on the male’s hind feet are curved, which aids in holding down the posterior portion of the female’s plastron during copulation. Females have flat plastrons, shorter tails, and yellow or brown eyes. Most males have bright red or pink eyes. The upper jaw of both sexes ends in a down-turned beak. Predators of box turtles include skunks, raccoons, foxes, snakes, and other animals. Native American Indians used to eat box turtles and incorporated their shells into their ceremonies as rattles. [John Korstad]
RESOURCES BOOKS
Conant, R. A Field Guide to Reptiles and Amphibians of Eastern and Central North America. Boston: Houghton Mifflin, 1998. Tyning, T. F. A Guide to Amphibians and Reptiles. Boston: Little, Brown and Co., 1990.
OTHER
“Conservation and Preservation of American Box Turtles in the Wild.” The American Box Turtle Page. Fall 2000 [cited May 2002].
American Cetacean Society
The American Cetacean Society (ACS), located in San Pedro, California, is dedicated to the protection of whales and other cetaceans, including dolphins and porpoises. Principally an organization of scientists and teachers (though its membership does include students and laypeople), the ACS was founded in 1967 and claims to be the oldest whale conservation group in the world. The ACS believes the best protection for whales, dolphins, and porpoises is better public awareness about “these remarkable animals and the problems they face in their increasingly threatened habitat.” The organization is committed to political action through education, and much of its work has been in improving communication between marine scientists and the general public. The ACS has developed several educational resource materials on cetaceans, making such products as the “Gray Whale Teaching Kit,” “Whale Fact Pack,” and “Dolphin Fact Pack,” which are widely available for use in classrooms. There is a cetacean research library at the national headquarters in San Pedro, California, and the organization responds to thousands of inquiries every year. The ACS supports marine mammal research and sponsors a biennial conference on whales. It also assists in conducting whale-watching tours. The organization also engages in more traditional and direct forms of political action. A representative in Washington, DC, monitors legislation that might affect cetaceans, attends hearings at government agencies, and participates as a member of the International Whaling Commission. The ACS also networks with other conservation groups. In addition, the ACS directs letter-writing campaigns, sending out “Action Alerts” to citizens and politicians. The organization is currently emphasizing the threats to marine life posed by oil spills, toxic wastes from industry and agriculture, and particular fishing practices (including commercial whaling). The ACS publishes a quarterly newsletter on whale research, conservation, and education, called WhaleNews, and a quarterly journal of scientific articles on the same subjects, called Whalewatcher. [Douglas Smith]
RESOURCES ORGANIZATIONS
American Cetacean Society, P.O. Box 1391, San Pedro, CA USA 90733-1391, (310) 548-6279, Fax: (310) 548-6950, Email: [email protected]
American Committee for International Conservation
The American Committee for International Conservation (ACIC), located in Washington, DC, is an association of nongovernmental organizations (NGOs) that is concerned about international conservation issues. The ACIC, founded in 1930, includes 21 member organizations. It represents conservation groups and individuals in 40 countries. While ACIC does not fund conservation research, it does promote national and international conservation research activities. Specifically, ACIC promotes conservation and preservation of wildlife and other natural resources, and encourages international research on the ecology of endangered species. Formerly called the American Committee for International Wildlife Protection, ACIC assists IUCN—The World Conservation Union, an independent organization of nations, states, and NGOs, in promoting natural resource conservation. ACIC also coordinates its members’ overseas research activities. Member organizations of the ACIC include the African Wildlife Leadership Foundation, National Wildlife Federation, World Wildlife Fund (US)/RARE, Caribbean Conservation Corporation, National Audubon Society, Natural Resources Defense Council, Nature Conservancy, International Association of Fish and Wildlife Agencies, and National Parks and Conservation Association. Members also include The Conservation Foundation; International Institute for Environment and Development; Massachusetts Audubon Society; Chicago Zoological Society; Wildlife Preservation Trust; Wildfowl Trust; School of Natural Resources, University of Michigan; World Resources Institute; Global Tomorrow Coalition; and The Wildlife Society, Inc. ACIC holds no formal meetings or conventions, nor does it publish magazines, books, or newsletters. Contact: American Committee for International Conservation, c/o
Center for Marine Conservation, 1725 DeSales Street, NW, Suite 500, Washington, DC 20036. [Linda Rehkopf]
American Farmland Trust Headquartered in Washington, DC, the American Farmland Trust (AFT) is an advocacy group for farmers and farmland. It was founded in 1980 to help reverse, or at least slow, the rapid decline in the number of productive acres nationwide, and it is particularly concerned with protecting land held by private farmers. The principles that motivate the AFT are perhaps best summarized in a line from William Jennings Bryan that the organization has often quoted: “Destroy our farms, and the grass will grow in the streets of every city in the country.” Over one million acres (404,700 ha) of farmland in the United States are lost each year to development, according to the AFT, and in Illinois one and a half bushels of topsoil are lost for every bushel of corn produced. The AFT argues that such a decline poses a serious threat to the future of the American economy. As farmers are forced to cultivate increasingly marginal land, food will become more expensive, and the United States could become a net importer of agricultural products, damaging its international economic position. The organization believes that a declining farm industry would also affect American culture, depriving the country of traditional products such as cherries, cranberries, and oranges and imperiling a sense of national identity that is still in many ways agricultural. The AFT works closely with farmers, business people, legislators, and environmentalists “to encourage sound farming practices and wise use of land.” The group directs lobbying efforts in Washington, working with legislators and policymakers and frequently testifying at congressional and public hearings on issues related to farming. In addition to mediating between farmers and state and federal government, the trust is also involved in political organizing at the grassroots level, conducting public opinion polls, contesting proposals for incinerators and toxic waste sites, and drafting model conservation easements. They conduct workshops and seminars across the country to discuss farming methods and soil conservation programs, and they worked with the State of Illinois to establish the Illinois Sustainable Agriculture Society. The group is currently developing kits called “Seed for the Future” for distribution to schoolchildren in both rural and urban areas, which teach the benefits of agriculture and help each child grow a plant. The AFT has a reputation for innovative and determined efforts to realize its goals, and former Secretary of Agriculture John R. Block has said that “this organization
has probably done more than any other to preserve the American farm.” Since its founding the trust has been instrumental in protecting nearly 30,000 acres (12,140 ha) of farmland in 19 states. In 1989, the group protected a 507-acre (205-ha) cherry farm known as the Murray Farm in Michigan, and it has helped preserve 300 acres (121 ha) of farm and wetlands in Virginia’s Tidewater region. The AFT continues to battle urban sprawl in areas such as California’s Central Valley and Berks County, Pennsylvania, as well as working to support farms in states such as Vermont, which are threatened not so much by development as by a poor agricultural economy. The AFT promotes a wetland policy that is fair to farmers while meeting environmental standards, and it recently won a national award from the Soil and Water Conservation Society for its publication Does Farmland Protection Pay? The AFT has 20,000 members and an annual budget of $3,850,000. The trust publishes a quarterly magazine called American Farmland, a newsletter called Farmland Update, and a variety of brochures and pamphlets which offer practical information on soil erosion, the cost of community services, and estate planning. They also distribute videos, including The Future of America’s Farmland, which explains the sale and purchase of development rights. [Douglas Smith]
RESOURCES ORGANIZATIONS The American Farmland Trust (AFT), 1200 18th Street, NW, Suite 800, Washington, D.C. USA 20036, (202) 331-7300, Fax: (202) 659-8339, Email: [email protected]
American Forests Located in Washington, DC, American Forests was founded in 1875, during the early days of the American conservation movement, to encourage forest management. Originally called the American Forestry Association, the organization was renamed in the later part of the twentieth century. The group is dedicated to promoting the wise and careful use of all natural resources, including soil, water, and wildlife, and it emphasizes the social and cultural importance of these resources as well as their economic value. Although benefiting from increasing national and international concern about the environment, American Forests takes a balanced view on preservation, and it has worked to set a standard for the responsible harvesting and marketing of forest products. American Forests sponsors the Trees for People program, which is designed to help meet the national demand for wood and paper products by increasing the productivity of private woodlands. It provides educational
and technical information to individual forest owners, as well as making recommendations to legislators and policymakers in Washington. To draw attention to the greenhouse effect, American Forests inaugurated its Global ReLeaf program in October 1988. Global ReLeaf is what American Forests calls “a tree-planting crusade.” The message is, “Plant a tree, cool the globe,” and Global ReLeaf has organized a national campaign challenging Americans to plant millions of trees. American Forests has gained the support of government agencies and local conservation groups for this program, as well as many businesses, including such Fortune 500 companies as Texaco, McDonald’s, and Ralston-Purina. The goal of the project is to plant 20 million trees by 2002; by August 2001, 19 million trees had been planted. Global ReLeaf also launched a cooperative effort with the American Farmland Trust called Farm ReLeaf, and it has participated in the campaign to preserve Walden Woods in Massachusetts. In 1991 American Forests brought Global ReLeaf to Eastern Europe, running a workshop in Budapest, Hungary, for environmental activists from many former communist countries. American Forests has been extensively involved in the controversy over the preservation of old-growth forests in the American Northwest. It has been working with environmentalists and representatives of the timber industry, and, consistent with the history of the organization, American Forests is committed to a compromise that both sides can accept: “If we have to choose between preservation and destruction of old-growth forests as our only options, neither choice will work.” American Forests supports an approach to forestry known as New Forestry, in which the priority is no longer the quantity of wood, or the number of board feet, that can be removed from a site, but the vitality of the ecosystem the timber industry leaves behind. The organization advocates the establishment of an Old Growth Reserve in the Pacific Northwest, which would be managed according to the principles of New Forestry under the supervision of a Scientific Advisory Committee. American Forests publishes the National Register of Big Trees, which celebrated its sixtieth anniversary in 2000. The register is designed to encourage the appreciation of trees, and it includes such trees as the recently fallen Dyerville Giant, a redwood in California; the General Sherman, a giant sequoia in California; and the Wye Oak in Maryland. The group also publishes American Forests, a bimonthly magazine, and Resource Hotline, a biweekly newsletter, as well as Urban Forests: The Magazine of Community Trees. It presents the Annual Distinguished Service Award, the John Aston Warder Medal, and the William B. Greeley Award, among others. American Forests has over 35,000 members, a staff of 21, and a budget of $2,725,000. [Douglas Smith]
RESOURCES ORGANIZATIONS American Forests, P.O. Box 2000, Washington, DC USA 20013 (202) 955-4500, Fax: (202) 955-4588, Email:
[email protected],
American Indian Environmental Office The American Indian Environmental Office (AIEO) was created to increase the quality of public health and environmental protection on Native American land and to expand tribal involvement in running environmental programs. Native Americans are the second-largest landholders in the United States, after the federal government. Their land is often threatened by environmental degradation such as strip mining, clearcutting, and toxic waste storage. The AIEO, with the help of the President’s Federal Indian Policy (January 24, 1983), works closely with the U.S. Environmental Protection Agency (EPA) to prevent further degradation of the land. The AIEO has received grants from the EPA for environmental cleanup and obtained a written policy that requires the EPA to honor the trust responsibility, a clause expressed in certain treaties that requires the EPA to notify a tribe when performing any activities that may affect reservation lands or resources. This involves consulting with tribal governments, providing technical support, and negotiating EPA regulations to ensure that tribal facilities eventually comply. The pollution of Diné (Navajo) reservation land is an example of an environmental injustice that the AIEO wants to prevent in the future. The reservation has over 1,000 abandoned uranium mines that leak radioactive contaminants and is also home to the largest coal strip mine in the world. The cancer rate for the Diné people is 17 times the national average. To help tribes with pollution problems similar to those of the Diné, several offices now exist that handle specific environmental projects. They include the Office of Water, Air, Environmental Justice, Pesticides and Toxic Substances; Performance Partnership Grants; Solid Waste and Emergency Response; and the Tribal Watershed Project. Each of these offices reports to the National Indian Headquarters in Washington, DC. At the Rio Earth Summit in 1992, the Biodiversity Convention was drawn up to protect the diversity of life on the planet. Many Native American groups believe that the convention also covers the protection of indigenous communities, including Native American land. In addition, these groups demand an end to prospecting by large companies for rare forms of life and materials on their land. Tribal Environmental Concerns Tribal governments face both economic and social problems dealing with the demand for jobs, education, health care, and housing for tribal members. Often the reservations’
largest employer is the tribal government, which owns the stores, gaming operations, timber mills, and manufacturing facilities. The government must therefore deal with the conflicting interests of protecting both economic and environmental concerns. Many tribes are becoming self-governing and manage their own natural resources, along with claiming the reserved right to use natural resources on portions of public land that border their reservations. As a product of these reserved treaty rights, Native Americans can use water, fish, and hunt at any time on nearby federal land. Robert Belcourt, Chippewa-Cree tribal member and director of the Natural Resources Department in Montana, stated: “We have to protect nature for our future generations. More of our Indian people need to get involved in natural resource management on each of our reservations. In the long run, natural resources will be our bread and butter by our developing them through tourism and recreation and just by the opportunity they provide for us to enjoy the outdoor world.” Belcourt has fought to dispel the negative stereotypes of conservation organizations that exist among some Native Americans, who believe, for example, that conservationists are extreme tree-huggers insensitive to Native American culture. These stereotypes are a result of cultural differences in philosophy, perspective, and communication. To work together effectively, tribes and conservation groups need to learn about one another’s cultures, and this means they must listen both at meetings and in one-on-one exchanges. The AIEO also addresses the organizational differences that exist between tribal governments and conservation organizations. They differ greatly in terms of style, motivation, and the pressures they face. Pressures on the Wilderness Society, for example, include fending off attempts in Washington, D.C., to weaken key environmental laws and securing members and raising funds. Pressures on tribal governments more often are economic and social in nature and have to do with the need to provide jobs, health care, education, and housing for tribal members. Because tribal governments are often the reservations’ largest employers and may own businesses like gaming operations, timber mills, manufacturing facilities, and stores, they function as both governors and leaders in economic development. Native Americans currently occupy and control over 52 million acres (21.3 million ha) in the continental United States and 45 million more acres (18.5 million ha) in Alaska, yet this is only a small fraction of their original territories. In the nineteenth century, many tribes were confined to reservations that were perceived to have little economic value, although valuable natural resources have subsequently been found on some of these lands. Pointing to their treaties and other agreements with the federal government, many
tribes assert that they have reserved rights to use natural resources on portions of public land. In previous decades the natural resources on tribal lands were managed by the Bureau of Indian Affairs (BIA). Now many tribes are becoming self-governing and are taking control of management responsibilities within their own reservation boundaries. In addition, some tribes are pushing to take back management of some federally managed lands that were part of their original territories. For example, the Confederated Salish and Kootenai tribes of the Flathead Reservation are taking steps to assume management of the National Bison Range, which lies within the reservation’s boundaries and is currently managed by the U.S. Fish and Wildlife Service. Another issue concerns Native American rights to water. There are legal precedents that support reserved rights to water that lies within or borders a reservation. In areas where tribes fish for food, mining pollution has been a continuing threat to clean water. Mining pollution is monitored, but the amount of fish that Native Americans consume is higher than the government acknowledges when setting health guidelines for fish consumption. This is why the AIEO is asking that stricter regulations be imposed on mining companies. As tribes increasingly exercise their rights to use and consume water and fish, their roles in natural resource debates will grow. Many tribes are establishing their own natural resource management and environmental quality protection programs with the help of the AIEO. Tribes have established fisheries, wildlife, forestry, water quality, waste management, and planning departments. Some tribes have prepared comprehensive resource management plans for their reservations, while others have become active in the protection of particular species. The AIEO is helping to unite tribes in their strategies and levels of involvement in improving environmental protection on Native American land. [Nicole Beatty]
RESOURCES ORGANIZATIONS American Indian Environmental Office, 1200 Pennsylvania Avenue, NW, Washington, D.C. USA 20460 (202) 564-0303, Fax: (202) 564-0298,
American Oceans Campaign Located in Los Angeles, California, the American Oceans Campaign (AOC) was founded in 1987 as a political interest group dedicated primarily to the restoration, protection, and preservation of the health and vitality of coastal waters, estuaries, bays, wetlands, and oceans.
More national and conservationist (rather than international and preservationist) in its focus than other groups with similar concerns, the AOC tends to view the oceans as a valuable resource whose use should be managed carefully. As current president Ted Danson puts it, the oceans must be regarded as far more than a natural preserve by environmentalists; rather, healthy oceans “sustain biological diversity, provide us with leisure and recreation, and contribute significantly to our nation’s GNP.” The AOC’s main political efforts reflect this focus. Central to the AOC’s lobbying strategy is a desire to build cooperative relations and consensus among the general public, public interest groups, private sector corporations and trade groups, and public/governmental authorities around responsible management of ocean resources. The AOC is also active in grassroots public awareness campaigns through mass media and community outreach programs. This high-profile media campaign has included the production of a series of informational bulletins (Public Service Announcements) for use by local groups, as well as active involvement in the production of several documentary television series that have been broadcast on both network and public television. The AOC also has developed extensive connections with both the news and entertainment industries, frequently scheduling appearances by various celebrity supporters such as Jamie Lee Curtis, Whoopi Goldberg, Leonard Nimoy, Patrick Swayze, and Beau Bridges. As a lobbying organization, the AOC has developed contacts with government leaders at all levels from local to national, attempting to shape and promote a variety of legislation related to clean water and oceans. It has been particularly active in lobbying to strengthen various aspects of the Clean Water Act, the Safe Drinking Water Act, the Oil Pollution Act, and the Ocean Dumping Ban Act. The AOC regularly provides consultation services, assistance in drafting legislation, and occasional expert testimony on matters concerning ocean ecology. Recently this has included AOC Political Director Barbara Polo’s testimony before the U.S. House of Representatives Subcommittee on Fisheries, Conservation, Wildlife, and Oceans on the substance and effect of legislation concerning the protection of coral reef ecosystems. Also very active at the grassroots level, the AOC has organized numerous cleanup operations, which both draw attention to the problems caused by ocean dumping and make a practical contribution to reversing the situation. Concentrating its efforts in California and the Pacific Northwest, the AOC launched its “Dive for Trash” program in 1991. As many as 1,000 divers may team up at AOC-sponsored events to recover garbage from coastal waters. In cooperation with the U.S. Department of Commerce’s National
Maritime Sanctuary Program, the AOC is planning to add a marine environmental assessment component to this diving program and to expand the program into Gulf and Atlantic coastal waters. Organizationally, the AOC divides its political and lobbying activity into three separate substantive policy areas: “Critical Oceans and Coastal Habitats,” which includes issues concerning estuaries, watersheds, and wetlands; “Coastal Water Pollution,” which focuses on beach water quality and the effects of storm water runoff, among other issues; and “Living Resources of the Sea,” which includes coral reefs, fisheries, and marine mammals (especially dolphins). Activities in all these areas have run the gamut from public and legislative information campaigns to litigation. The AOC has been particularly active along the California coastline and has played a central role in various programs aimed at protecting coastal wetland ecosystems from development and pollution. It has also been active in the Santa Monica Bay Restoration Project, which seeks to restore environmental balance to Santa Monica Bay. Typical of the AOC’s multi-level approach, this project combines a program of public education and citizen (and celebrity) involvement with the monitoring and reduction of private-sector pollution and with scientific studies on the impact of various activities in the surrounding area. These activities are also combined with an attempt to raise alternative revenues to replace funds recently lost due to the reduction of both federal (National Estuary Program) and state government support for the conservation of coastal and marine ecosystems. In addition, the AOC has been involved in litigation against the County of Los Angeles over a plan to build flood control barriers along a section of the Los Angeles River. The AOC’s major concern is that these barriers will increase the amount of polluted storm water runoff being channeled into coastal waters. The AOC contends that, with prudent management, this storm water could instead help recharge Southern California’s scant water resources via storage or redirection into underground aquifers before the runoff becomes polluted. In February of 2002, the AOC teamed up with a new nonprofit ocean advocacy organization called Oceana. The focus of this partnership is the Oceans at Risk program, which concentrates on the impact that wasteful fisheries have on the marine environment. [Lawrence J. Biskowski]
RESOURCES ORGANIZATIONS American Oceans Campaign, 6030 Wilshire Blvd Suite 400, Los Angeles, CA USA 90036 (323) 936-8242, Fax: (323) 936-2320, Email:
[email protected],
American Wildlands American Wildlands (AWL) is a nonprofit wildland resource conservation and education organization founded in 1977. AWL is dedicated to protecting and promoting proper management of America’s publicly owned wild areas and to securing wilderness designation for public land areas. The organization has played a key role in gaining legal protection for many wilderness and river areas in the U.S. interior west and in Alaska. Founded as the American Wilderness Alliance, AWL is involved in a wide range of wilderness resource issues and programs including timber management policy reform, habitat corridors, rangeland management policy reform, riparian and wetlands restoration, and public land management policy reform. AWL promotes ecologically sustainable uses of public wildlands resources including forests, wilderness, wildlife, fisheries, and rivers. It pursues this mission through grassroots activism, technical support, public education, litigation, and political advocacy. AWL maintains three offices: the central Rockies office in Lakewood, Colorado; the northern Rockies office in Bozeman, Montana; and the Sierra-Nevada office in Reno, Nevada. The organization’s annual budget of $350,000 has been stable for many years, but with programs now being considered for addition to its agenda, that figure is expected to increase over the next few years. The central Rockies office considers timber management reform its main concern. It has launched the Timber Management Reform Policy Program, which monitors the U.S. Forest Service and works toward better management of public forests. Since the program’s initiation in 1986, its staff has included resource specialists, among them a wildlife biologist, a forester, a water specialist, and an aquatic biologist, all of whom report to an advisory council. A major victory of this program was stopping the sale of 4.2 million board feet (about 9,900 m³) of timber near the Electric Peak Wilderness Area. Other programs coordinated by the central Rockies office include: 1) the Corridors of Life Program, which identifies and maps wildlife corridors, land areas essential to the genetic interchange of wildlife that connect roadless lands or other wildlife habitat areas; areas targeted are in the interior West, in states such as Montana, North and South Dakota, Wyoming, and Idaho; 2) the Rangeland Management Policy Reform Program, which monitors grazing allotments and files appeals as warranted; an education component teaches citizens to monitor grazing allotments and to use the appeals process within the U.S. Forest Service and Bureau of Land Management; and 3) the Recreation-Conservation Connection, which, through newsletters and travel-adventure programs, teaches the public how to enjoy
the outdoors without destroying nature. Six hundred travelers have participated in ecotourism trips through AWL. AWL is also active internationally. The AWL/Leakey Fund has aided Dr. Richard Leakey’s efforts in Kenya to conserve wildlife habitat and eliminate elephant poaching. A partnership with the Island Foundation has helped fund wildlands and river protection efforts in Patagonia, Argentina. AWL also is an active member of Canada’s Tatshenshini International Coalition, which works to protect that river and its 2.3 million acres (930,780 ha) of wilderness. [Linda Rehkopf]
RESOURCES ORGANIZATIONS American Wildlands, 40 East Main #2, Bozeman, MT USA 59715 (406) 586-8175, Fax: (406) 586-8242, Email:
[email protected],
Ames test A laboratory test developed by biochemist Bruce N. Ames to determine the possible carcinogenic nature of a substance. The Ames test uses a particular strain of the bacterium Salmonella typhimurium that lacks the ability to synthesize histidine and is therefore very sensitive to mutation. The bacteria are inoculated into a medium deficient in histidine but containing the test compound. If the compound causes DNA damage with subsequent mutations, some of the bacteria will regain the ability to synthesize histidine and will proliferate to form colonies. The culture is evaluated on the basis of the number of mutated bacterial colonies it produces; a marked increase in such colonies relative to an untreated control identifies the substance as a mutagen. The Ames test is a test for mutagenicity, not carcinogenicity. However, approximately nine out of ten mutagens are indeed carcinogenic. Therefore, a substance shown to be mutagenic by the Ames test can be reliably classified as a suspected carcinogen and thus recommended for further study. [Brian R. Barthel]
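The scoring logic described above can be illustrated with a brief calculation. The sketch below is not part of the original entry: the two-fold threshold is a common rule of thumb for calling a plate positive, and all colony counts are hypothetical.

```python
# A minimal sketch of scoring Ames test plate counts, assuming a
# common two-fold rule of thumb; counts and threshold are hypothetical.

def fold_increase(treated_colonies: int, control_colonies: int) -> float:
    """Ratio of revertant colonies on a treated plate to the control."""
    return treated_colonies / control_colonies

def looks_mutagenic(treated_colonies: int, control_colonies: int,
                    threshold: float = 2.0) -> bool:
    """Flag the dose if revertants rise at least threshold-fold."""
    return fold_increase(treated_colonies, control_colonies) >= threshold

control = 25   # spontaneous revertant colonies on the control plate
treated = 130  # colonies on the plate containing the test compound
print(fold_increase(treated, control))    # 5.2
print(looks_mutagenic(treated, control))  # True -> a suspected carcinogen
```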
RESOURCES BOOKS Taber, C. W. Taber’s Cyclopedic Medical Dictionary. Philadelphia: F. A. Davis, 1990. Turk, J., and A. Turk. Environmental Science. Philadelphia: W. B. Saunders, 1988.
Amoco Cadiz This shipwreck in March 1978 off the Brittany coast was the first major supertanker accident since the Torrey Canyon 11 years earlier. Ironically, this spill, more than twice the size of the Torrey Canyon’s, blackened some of the same shores and was one of four substantial oil spills there since 1967. It received great scientific attention because it occurred near several renowned marine laboratories. The cause of the wreck was a steering failure as the ship entered the English Channel off the northwest Brittany coast, compounded by a failure to act swiftly enough to correct it. During the next 12 hours, the Amoco Cadiz could not be extricated from the site; three separate lines from a powerful tug broke in attempts to pull the tanker free before it drifted onto rocky shoals. Eight days later the Amoco Cadiz split in two. Seabirds seemed to suffer the most from the spill, although the oil devastated invertebrates within the extensive intertidal zone, where the tidal range is 20–30 ft (6–9 m). Thousands of birds died in a bird hospital described by one oil spill expert as a bird morgue. Thirty percent of France’s seafood production was threatened, as well as an extensive kelp crop harvested for fertilizer, mulch, and livestock feed. However, except on oyster farms located in inlets, most of the impact was restricted to the few months following the spill. In an extensive journal article, Erich Gundlach and others reported studies on where the oil went and summarized the findings of biologists. Of the 223,000 metric tons released, 13.5% was incorporated within the water column, 8% went into subtidal sediments, 28% washed into the intertidal zone, 20–40% evaporated, and 4% was altered while at sea. Much research was done on chemical changes in the hydrocarbon fractions over time, including oil taken up within organisms. Researchers found that during the early phases, biodegradation was occurring as rapidly as evaporation. The cleanup efforts of thousands of workers were helped by storm and wave action that removed much of the stranded oil. High-energy waves maintained an adequate supply of nutrients and oxygenated water, which provided optimal conditions for biodegradation. This mattered because most of the biodegradation was done by aerobic organisms. Except in protected inlets, much of the impact was gone three years later, but some effects were expected to last a decade. [Nathan H. Meleen]
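The mass balance reported by Gundlach and colleagues can be turned into rough tonnages with simple arithmetic. The sketch below uses the percentages quoted above; the midpoint treatment of the 20–40% evaporation range is an assumption for illustration only.

```python
# Back-of-the-envelope conversion of the reported fate percentages of
# the Amoco Cadiz oil into tonnages; the 30% evaporation figure is the
# midpoint of the 20-40% range quoted in the entry.

TOTAL_TONS = 223_000  # metric tons of oil released

fates = {
    "water column": 13.5,
    "subtidal sediments": 8.0,
    "intertidal zone": 28.0,
    "evaporated (midpoint of 20-40%)": 30.0,
    "altered at sea": 4.0,
}

for fate, pct in fates.items():
    print(f"{fate}: {TOTAL_TONS * pct / 100:,.0f} t")

# At the midpoint the listed fractions sum to 83.5%, leaving roughly
# 16.5% (about 37,000 t) unaccounted for in this simple tally.
remainder = 100 - sum(fates.values())
print(f"unaccounted: {TOTAL_TONS * remainder / 100:,.0f} t")
```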
RESOURCES PERIODICALS Grove, N. “Black Day for Brittany: Amoco Cadiz Wreck.” National Geographic 154 (July 1978): 124–135.
The Amoco Cadiz oil spill in the midst of being contained. (Photograph by Leonard Freed. Magnum Photos, Inc. Reproduced by permission.)
Gundlach, E. R., et al. “The Fate of Amoco Cadiz Oil.” Science 221 (8 July 1983): 122–129. Schneider, E. D. “Aftermath of the Amoco Cadiz: Shoreline Impact of the Oil Spill.” Oceans 11 (July 1978): 56–9. Spooner, M. F., ed. Amoco Cadiz Oil Spill. New York: Pergamon, 1979. (Reprint of Marine Pollution Bulletin, v. 9, no. 11, 1978)
Cleveland Amory (1917–1998) American activist and writer Amory is known both for his series of classic social history books and for his work with the Fund for Animals. Born in Nahant, Massachusetts, to an old Boston family, Amory attended Harvard University, where he became editor of The Harvard Crimson. This prompted his well-known remark, “If you have been editor of The Harvard Crimson in your senior year at Harvard, there is very little, in after life, for you.” Amory was hired by The Saturday Evening Post after graduation, becoming the youngest editor ever to join that publication. He worked as an intelligence officer in the United States Army during World War II, and in the years after the war wrote a trilogy of social commentary books, now considered classics. The Proper Bostonians was
published to critical acclaim in 1947, followed by The Last Resorts (1952) and Who Killed Society? (1960), all of which became best sellers. Beginning in 1952, Amory served for 11 years as social commentator on NBC’s “The Today Show.” The network fired him after he spoke out against cruelty to animals used in biomedical research. From 1963 to 1976, Amory served as a senior editor and columnist for Saturday Review magazine while doing a daily radio commentary entitled “Curmudgeon-at-Large.” He was also chief television critic for TV Guide, where his biting attacks on sport hunting angered hunters and generated bitter but unsuccessful campaigns to have him fired. In 1967, Amory founded The Fund for Animals “to speak for those who can’t” and served as its unpaid president. Animal protection became his passion and his life’s work, and he was considered one of the most outspoken and provocative advocates of animal welfare. Under his leadership, the Fund became a highly activist and controversial group, engaging in such activities as confronting hunters of whales and seals and rescuing wild horses, burros, and goats. The Fund, and Amory in particular, were well known for their campaigns against sport hunting and trapping, the fur industry, abusive research on animals, and other activities and industries that engage in or encourage what they consider cruel treatment of animals. In 1975, Amory published ManKind? Our Incredible War on Wildlife, using humor, sarcasm, and graphic rhetoric to attack hunters, trappers, and other exploiters of wild animals. The book was praised by The New York Times in a rare editorial. His next book, AniMail (1976), discussed animal issues in a question-and-answer format. In 1987, he wrote The Cat Who Came for Christmas, a book about a stray cat he rescued from the streets of New York, which became a national best seller. This was followed in 1990 by its sequel, also a best seller, The Cat and the Curmudgeon. Amory was a senior contributing editor of Parade magazine from 1980, where he often profiled famous personalities. Amory died of an aneurysm at the age of 81 on October 14, 1998. He remained active right up until the end, spending his final day in his office at the Fund for Animals and passing away in his sleep that evening. Staffers at the Fund for Animals have vowed that Amory’s work will continue, “just the way Cleveland would have wanted it.” [Lewis G. Regenstein]
RESOURCES BOOKS Amory, C. The Cat and the Curmudgeon. New York: G. K. Hall, 1991. ———. The Cat Who Came for Christmas. New York: Little Brown, 1987.
Cleveland Amory. (The Fund for Animals. Reproduced by permission.)
PERIODICALS Pantridge, M. “The Improper Bostonian.” Boston Magazine 83 (June 1991): 68–72.
Anaerobic This term refers to an environment lacking in molecular oxygen (O2), or to an organism, tissue, chemical reaction, or biological process that does not require oxygen. Anaerobic organisms can use a molecule other than O2 as the terminal electron acceptor in respiration. These organisms can be either obligate, meaning that they cannot use O2, or facultative, meaning that they do not require oxygen but can use it if it is available. Organic matter decomposition in poorly aerated environments, including water-logged soils, septic tanks, and anaerobically operated waste treatment facilities, produces large amounts of methane gas. The methane can become an atmospheric pollutant, or it may be captured and used for fuel, as in “biogas”-powered electrical generators. Anaerobic decomposition produces the notorious “swamp gases” that have been reported as unidentified flying objects (UFOs).
Anaerobic digestion Refers to the biological degradation of either sludges or solid waste under anaerobic conditions, meaning that no oxygen is present. In the digestive process, solids are converted to noncellular end products. In the anaerobic digestion of sludges, the goals are to reduce sludge volume, ensure the remaining solids are chemically stable, reduce disease-causing pathogens, and enhance the effectiveness of subsequent dewatering methods, sometimes recovering methane as a source of energy. Anaerobic digestion is commonly used to treat sludges that contain primary sludges, such as those from the first settling basins in a wastewater treatment plant, because the process is capable of stabilizing the sludge with little biomass production, a significant benefit over aerobic sludge digestion, which would yield more biomass in digesting the relatively large amount of biodegradable matter in primary sludge. The microorganisms responsible for digesting the sludges anaerobically are often classified in two groups, the acid formers and the methane formers. The acid formers are microbes that create, among other products, acetic and propionic acids from the sludge. These chemicals generally make up about a third of the by-products initially formed, based on a chemical oxygen demand (COD) mass balance, and some of the propionic and other acids are converted to acetic acid. The methane formers convert the acids and by-products resulting from prior metabolic steps (e.g., alcohols, hydrogen, carbon dioxide) to methane. Often, approximately 70% of the methane formed is derived from acetic acid and about 10–15% from propionic acid. Anaerobic digesters are designed as either standard- or high-rate units. The standard-rate digester has a solids retention time of 30–90 days, as opposed to 10–20 days for high-rate systems. The volatile solids loadings of the standard- and high-rate systems are in the area of 0.5–1.6 and 1.6–6.4 kg/m³/d, respectively. The amount of sludge introduced into a standard-rate unit is therefore generally much less than that introduced into a high-rate system. Standard-rate digestion is accomplished in single-stage units, meaning that sludge is fed into a single tank and allowed to digest and settle. High-rate units are often designed as two-stage systems in which sludge enters a completely mixed first stage that is mixed and heated to approximately 95°F (35°C) to speed digestion. The second-stage digester, which separates digested sludge from the overlying liquid and scum, is not heated or mixed. With the anaerobic digestion of solid waste, the primary goal is generally to produce methane, a valuable source
of fuel that can be burned to provide heat or used to power motors. There are basically three steps in the process. The first involves preparing the waste for digestion by sorting it and reducing its size. The second consists of constantly mixing the sludge, adding moisture, nutrients, and pH neutralizers while heating it to about 140°F (60°C), and digesting the waste for a week or longer. In the third step, the generated gas is collected and sometimes purified, and the digested solids are disposed of. For each pound of undigested solids, about 8–12 ft³ of gas is formed, of which about 60% is methane. [Gregory D. Boardman]
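The rule-of-thumb figures above translate directly into a gas-yield estimate. The short sketch below is illustrative only: it takes the midpoint of the entry’s 8–12 ft³ range, and the one-ton batch size is a hypothetical input.

```python
# Rough biogas and methane yield from digesting solid waste, using the
# entry's rule-of-thumb figures; the batch size is hypothetical.

GAS_PER_LB = 10.0        # ft3 of gas per lb of solids (midpoint of 8-12)
METHANE_FRACTION = 0.60  # about 60% of the gas is methane

def biogas_yield_ft3(solids_lb: float) -> float:
    """Total gas volume (ft3) from a mass of undigested solids."""
    return solids_lb * GAS_PER_LB

solids = 2_000.0  # one ton of undigested solids
gas = biogas_yield_ft3(solids)
print(f"biogas:  {gas:,.0f} ft3")                     # 20,000 ft3
print(f"methane: {gas * METHANE_FRACTION:,.0f} ft3")  # 12,000 ft3
```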
RESOURCES BOOKS Corbitt, R. A. Standard Handbook of Environmental Engineering. New York: McGraw-Hill, 1990. Davis, M. L., and D. A. Cornwell. Introduction to Environmental Engineering. New York: McGraw-Hill, 1991. Viessman, W., Jr., and M. J. Hammer. Water Supply and Pollution Control. 5th ed. New York: Harper Collins, 1993.
Anemia Anemia is a medical condition in which the red cells of the blood are reduced in number or volume or are deficient in hemoglobin, their oxygen-carrying pigment. Almost 100 different varieties of anemia are known. Iron deficiency is the most common cause of anemia worldwide. Other causes of anemia include ionizing radiation, lead poisoning, vitamin B12 deficiency, folic acid deficiency, certain infections, and pesticide exposure. Some 350 million people worldwide, mostly women of child-bearing age, suffer from anemia. The most noticeable symptom is pallor of the skin, mucous membranes, and nail beds. Symptoms of tissue oxygen deficiency include pulsating noises in the ear, dizziness, fainting, and shortness of breath. The treatment varies greatly depending on the cause and diagnosis, but may include supplying missing nutrients, removing toxic factors from the environment, treating the underlying disorder, or restoring blood volume with transfusion. Aplastic anemia is a disease in which the bone marrow fails to produce an adequate number of blood cells. It is usually acquired by exposure to certain drugs, to toxins such as benzene, or to ionizing radiation. Aplastic anemia from radiation exposure is well documented from the Chernobyl experience. Bone marrow changes typical of aplastic anemia can occur several years after the exposure to the offending agent has ceased.
Aplastic anemia can manifest itself abruptly and progress rapidly; more commonly it is insidious and chronic for several years. Symptoms include weakness and fatigue in the early stages, followed by headaches, shortness of breath, fever, and a pounding heart. Usually a waxy pallor and hemorrhages occur in the mucous membranes and skin. Resistance to infection is lowered and becomes the major cause of death. While spontaneous recovery occurs occasionally, the treatment of choice for severe cases is bone marrow transplantation. Marie Curie, who discovered the element radium and did early research into radioactivity, died in 1934 of aplastic anemia, most likely caused by her exposure to ionizing radiation. While lead poisoning, which leads to anemia, is usually associated with occupational exposure, toxic amounts of lead can also leach from imported ceramic dishes. Other environmental sources of lead exposure include old paint or paint dust and drinking water pumped through lead pipes or lead-soldered pipes. Cigarette smoke is known to raise hemoglobin levels, which leads to an underestimation of anemia in smokers. Studies suggest that carbon monoxide (a by-product of smoking) chemically binds to hemoglobin, causing a significant elevation of hemoglobin values. Compensation values developed for smokers can now detect possible anemia. [Linda Rehkopf]
RESOURCES BOOKS Harte, J., et al. Toxics A to Z. Berkeley: University of California Press, 1991. Nordenberg, D., et al. “The Effect of Cigarette Smoking on Hemoglobin Levels and Anemia Screening.” Journal of the American Medical Association (26 September 1990): 1556. Stuart-Macadam, P., ed. Diet, Demography and Disease: Changing Perspectives on Anemia. Hawthorne, NY: Aldine de Gruyter, 1992.
Animal cancer tests Cancer causes more loss of life-years than any other disease in the United States. At first reading, this statement seems to be in error. Does not cardiovascular disease cause more deaths? The answer to that rhetorical question is “yes.” However, many deaths from heart attack and stroke occur in the elderly. The loss of life-years of an 85-year-old person (whose life expectancy at the time of his or her birth was between 55 and 60) is, of course, zero. However, the loss of life-years of a child of 10 who dies of a pediatric leukemia is between 65 and 70 years; a simple version of this calculation is sketched at the end of this entry. This comparison of youth with the elderly is not meant in any way to demean the value that reasonable
people place on the lives of the elderly. Rather, the comparison is made to emphasize the great loss of life due to malignant tumors. The chemical causation of cancer is not a simple process. Many, perhaps most, chemical carcinogens do not in their usual condition have the potency to cause cancer. The non-cancer-causing form of the chemical is called a “procarcinogen.” Procarcinogens are frequently complex organic compounds that the human body attempts to dispose of when ingested. Hepatic enzymes chemically change the procarcinogen in several steps to yield a chemical that is more easily excreted. The chemical changes result in modification of the procarcinogen (with no cancer-forming ability) to the ultimate carcinogen (with cancer-causing competence). Ultimate carcinogens have been shown to have a great affinity for DNA, RNA, and cellular proteins, and it is the interaction of the ultimate carcinogen with the cell macromolecules that causes cancer. It is unfortunate indeed that one cannot look at the chemical structure of a potential carcinogen and predict whether or not it will cause cancer. There is no computer program that will predict what hepatic enzymes will do to procarcinogens and how the metabolized end product(s) will interact with cells. Great strides have been made in the development of chemotherapeutic agents designed to cure cancer. The drugs have significant efficacy with certain cancers (these include but are not limited to pediatric acute lymphocytic leukemia, choriocarcinoma, Hodgkin’s disease, and testicular cancer), and some treated patients attain a normal life span. While this development is heartening, the cancers listed are, for the most part, relatively infrequent. More common cancers such as colorectal carcinoma, lung cancer, breast cancer, and ovarian cancer remain intractable with regard to treatment. For these reasons, animal testing is used in cancer research. The majority of Americans support the effort of the biomedical community to use animals to identify potential carcinogens with the hope that such knowledge will lead to a reduction of cancer prevalence. Similarly, they support efforts to develop more effective chemotherapy. Animals are used under the terms of the Animal Welfare Act of 1966 and its several amendments. The act designates the U.S. Department of Agriculture as responsible for the humane care and handling of warm-blooded and other animals used for biomedical research. The act also calls for inspection of research facilities to ensure that adequate food, housing, and care are provided. It is the belief of many that the constraints of the current law have enhanced the quality of biomedical research: poorly maintained animals do not provide quality research. The law also has enhanced the care of animals used in cancer research. [Robert G. McKinnell]
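The life-years comparison in the opening paragraph reduces to one line of arithmetic. The sketch below is not from the entry; the 78-year life expectancy assumed for the child is a hypothetical figure consistent with the entry’s 65–70 year range.

```python
# A minimal sketch of the life-years-lost comparison in this entry;
# the child's assumed life expectancy (78) is hypothetical.

def life_years_lost(age_at_death: int, life_expectancy_at_birth: int) -> int:
    """Expected years of life lost; zero if the person outlived expectancy."""
    return max(0, life_expectancy_at_birth - age_at_death)

print(life_years_lost(85, 60))  # 0  -> the entry's 85-year-old case
print(life_years_lost(10, 78))  # 68 -> within the entry's 65-70 year range
```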
RESOURCES PERIODICALS
Abelson, P. H. “Testing for Carcinogens With Rodents.” Science 249 (21 September 1990): 1357.
Donnelly, S., and K. Nolan. “Animals, Science, and Ethics.” Hastings Center Report 20 (May–June 1990): suppl. 1–32.
Marx, J. “Animal Carcinogen Testing Challenged: Bruce Ames Has Stirred Up the Cancer Research Community.” Science 250 (9 November 1990): 743–5.
Animal Legal Defense Fund Originally established in 1979 as Attorneys for Animal Rights, this organization changed its name to the Animal Legal Defense Fund (ALDF) in 1984 and is known as “the law firm of the animal rights movement.” Its motto is “we may be the only lawyers on earth whose clients are all innocent.” ALDF contends that animals have a fundamental right to legal protection against abuse and exploitation. Over 350 attorneys work for ALDF, and the organization has more than 50,000 supporting members who help the cause of animal rights by writing letters and signing petitions for legislative action. The members are also strongly encouraged to work for animal rights at the local level. ALDF’s work is carried out in many places, including research laboratories, large cities, small towns, and the wild. ALDF attorneys try to stop the use of animals in research experiments and continue to fight for expanded enforcement of the Animal Welfare Act. ALDF also offers legal assistance to humane societies and city prosecutors to help in the enforcement of anti-cruelty laws and the exposure of veterinary malpractice. The organization attempts to protect wild animals from exploitation by working to place controls on trappers and sport hunters. In California, ALDF successfully stopped the hunting of mountain lions and black bears. ALDF is also active internationally, bringing legal action against elephant poachers as well as against animal dealers who traffic in endangered species. ALDF’s clear goals and swift action have resulted in many court victories. In 1992 alone, the organization won cases involving cruelty to dolphins, dogs, horses, birds, and cats. It has also blocked the importation of over 70,000 monkeys from Bangladesh for research purposes, and has filed suit against the National Marine Fisheries Service to stop the illegal gray market in dolphins and other marine mammals. ALDF also publishes a quarterly magazine, The Animals’ Advocate. [Cathy M. Falk]
RESOURCES ORGANIZATIONS
Animal Legal Defense Fund, 127 Fourth Street, Petaluma, CA USA 94952, Fax: (707) 769-7771, Toll Free: (707) 769-0785, Email:
[email protected],
Animal rights Recent concern about the way humans treat animals has spawned a powerful social and political movement driven by the conviction that humans and certain animals are similar in morally significant ways, and that these similarities oblige humans to extend to those animals serious moral consideration, including rights. Though animal welfare movements, concerned primarily with humane treatment of pets, date back to the 1800s, modern animal rights activism has developed primarily out of concern about the use and treatment of domesticated animals in agriculture and in medical, scientific, and industrial research. The rapid growth in membership of animal rights organizations testifies to the increasing momentum of this movement. The leading animal rights group today, People for the Ethical Treatment of Animals (PETA), was founded in 1980 with 100 individuals; today, it has over 300,000 members. The animal rights activist movement has closely followed and used the work of modern philosophers who seek to establish a firm logical foundation for the extension of moral considerability beyond the human community into the animal community. The nature of animals and appropriate relations between humans and animals have occupied Western thinkers for millennia. Traditional Western views, both religious and philosophical, have tended to deny that humans have any moral obligations to nonhumans. The rise of Christianity and its doctrine of personal immortality, which implies a qualitative gulf between humans and animals, contributed significantly to the dominant Western paradigm. When seventeenth-century philosopher René Descartes declared animals mere biological machines, the perceived gap between humans and nonhuman animals reached its widest point. Jeremy Bentham, the father of ethical utilitarianism, challenged this view, fostering a widespread anticruelty movement that exerted a powerful force in shaping our legal and moral codes. Its modern legacy, the animal welfare movement, is reformist in that it continues to accept the legitimacy of sacrificing animal interests for human benefit, provided animals are spared any suffering that can conveniently and economically be avoided. In contrast to the conservatively reformist platform of animal welfare crusaders, a new radical movement began in the late 1970s. This movement, variously referred to as animal liberation or animal rights, seeks to put an end to the routine sacrifice of animal interests for human benefit. In
Animal rights activists dressed as monkeys in prison suits block the entrance to the Department of Health and Human Services in Washington, DC, in protest of the use of animals in laboratory research. (Corbis-Bettmann. Reproduced by permission.)
seeking to redefine the issue as one of rights, some animal protectionists organized around the well-articulated and widely disseminated utilitarian perspective of Australian philosopher Peter Singer. In his 1975 classic Animal Liberation, Singer argued that because some animals can experience pleasure and pain, they deserve our moral consideration. While not actually a rights position, Singer’s work nevertheless uses the language of rights and was among the first to abandon welfarism and to propose a new ethic of moral considerability for all sentient creatures. To assume that humans are inevitably superior to other species simply by virtue of their species membership is an injustice that Singer terms speciesism, an injustice parallel to racism and sexism. Singer does not claim all animal lives to be of equal worth, nor that all sentient beings should be treated identically. In some cases, human interests may outweigh those of nonhumans, and Singer’s utilitarian calculus would allow us to engage in practices which require the use of animals
in spite of their pain, where those practices can be shown to produce an overall balance of pleasure over suffering. Some animal advocates thus reject utilitarianism on the grounds that it allows the continuation of morally abhorrent practices. Lawyer Christopher Stone and philosophers Joel Feinberg and Tom Regan have focused on developing cogent arguments in support of rights for certain animals. Regan’s 1983 book The Case For Animal Rights developed an absolutist position which criticized and broke from utilitarianism. It is Regan’s arguments, not reformism or the pragmatic principle of utility, which have come to dominate the rhetoric of the animal rights crusade. The question of which animals possess rights then arises. Regan asserts it is those who, like us, are subjects experiencing their own lives. By “experiencing” Regan means conscious creatures aware of their environment and with goals, desires, emotions, and a sense of their own identity. These characteristics give an individual inherent value, and this value entitles the bearer to certain inalienable rights,
especially the right to be treated as an end in itself, and never merely as a means to human ends. The environmental community has not embraced animal rights; in fact, the two groups have often been at odds. A rights approach focused exclusively on animals does not cover all the entities, such as ecosystems, that many environmentalists feel ought to be considered morally. Yet a rights approach that would satisfy environmentalists by encompassing both living and nonliving entities may render the concept of rights philosophically and practically meaningless. Regan accuses environmentalists of environmental fascism, insofar as they advocate the protection of species and ecosystems at the expense of individual animals. Most animal rightists advocate the protection of ecosystems only as necessary to protect individual animals, and assign no more value to the individual members of a highly endangered species than to those of a common or domesticated species. Thus, because of its focus on the individual, animal rights can offer no realistic plan for managing natural systems or for protecting ecosystem health, and may at times hinder the efforts of resource managers to effectively address these issues. For most animal activists, the practical implications of the rights view are clear and uncompromising. The rights view holds that all animal research, factory farming, and commercial or sport hunting and trapping should be abolished. This change of moral status necessitates a fundamental change in contemporary Western moral attitudes towards animals, for it requires humans to treat animals as inherently valuable beings with lives and interests independent of human needs and wants. While this change is not likely to occur in the near future, the efforts of animal rights advocates may ensure that the wholesale slaughter of these creatures for unnecessary reasons is no longer routine, and that when such sacrifice is found to be necessary, it is accompanied by moral deliberation. [Ann S. Causey]
RESOURCES BOOKS Hargrove, E. C. The Animal Rights/Environmental Ethics Debate. New York: SUNY Press, 1992. Regan, T. The Case For Animal Rights. Los Angeles: University of California Press, 1983. ———, and P. Singer. Animal Rights and Human Obligations. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1989. Singer, P. Animal Liberation. New York: Avon Books, 1975. Zimmerman, M. E., et al., eds. Environmental Philosophy: From Animal Rights To Radical Ecology. Englewood Cliffs, NJ: Prentice-Hall, 1993.
Animal waste Animal wastes are commonly considered the excreted materials from live animals. Under certain production conditions, however, the waste may also include straw, hay, wood shavings, or other sources of organic debris. It has been estimated that as much as 2 billion tons of animal wastes may be produced in the United States annually. Application of excreta to soil brings benefits such as improved soil tilth, increased water-holding capacity, and some plant nutrients. Concentrated forms of excreta, or high application rates to soils without proper management, may lead to high salt concentrations in the soil and cause serious on-site or off-site pollution.
Animal Welfare Institute Founded in 1951, the Animal Welfare Institute (AWI) is a non-profit organization that works to educate the public and to secure needed action to protect animals. AWI is a highly respected, influential, and effective group that works with Congress, the public, the news media, government officials, and the conservation community on animal protection programs and projects. Its major goals include improving the treatment of laboratory animals and reducing their use; eliminating cruel methods of trapping wildlife; saving species from extinction; preventing painful experiments on animals in schools and encouraging humane science teaching; improving shipping conditions for animals in transit; banning the importation of parrots and other exotic wild birds for the pet industry; and improving the conditions under which farm animals are kept, confined, transported, and slaughtered. In 1971 AWI launched its Save the Whales Campaign. The organization provides speakers and experts for conferences and meetings around the world, including congressional hearings and international treaty and commission meetings. Each year, the institute awards its prestigious Albert Schweitzer Medal to an individual for outstanding achievement in the advancement of animal welfare. Its publications include The AWI Quarterly; books such as Animals and Their Legal Rights, Facts about Furs, and The Endangered Species Handbook; and booklets, brochures, and other educational materials, which are distributed to schools, teachers, scientists, government officials, humane societies, libraries, and veterinarians. AWI works closely with its associate organization, the Society for Animal Protective Legislation (SAPL), a lobbying group based in Washington, D.C. Founded in 1955, SAPL devotes its efforts to supporting legislation to protect animals, often mobilizing its 14,000 “correspondents” in letter-writing campaigns to members of Congress.
SAPL has been responsible for the passage of more animal protection laws than any other organization in the country, and perhaps the world, and it has been instrumental in securing the enactment of 14 federal laws. Major federal legislation that SAPL has promoted includes the first federal Humane Slaughter Act in 1958 and its strengthening in 1978; the 1959 Wild Horse Act; the 1966 Laboratory Animal Welfare Act and its strengthening in 1970, 1976, 1985, and 1990; the 1969 Endangered Species Act and its strengthening in 1973; a 1970 measure banning the crippling or “soring” of Tennessee Walking Horses; measures passed in 1971 prohibiting hunting from aircraft and protecting wild horses, along with resolutions calling for a moratorium on commercial whaling; the 1972 Marine Mammal Protection Act; negotiation of the 1973 Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES); the 1979 Packwood-Magnuson Amendment protecting whales and other ocean creatures; the 1981 strengthening of the Lacey Act to restrict the importation of illegal wildlife; the 1990 Pet Theft Act; and, in 1992, the Wild Bird Conservation Act, protecting parrots and other exotic wild birds; the International Dolphin Conservation Act, restricting the killing of dolphins by tuna fishermen; and the Driftnet Fishery Conservation Act, protecting whales, sea birds, and other ocean life from being caught and killed in huge, 30-mi-long (48-km-long) nets. Major goals of SAPL include enacting legislation to end the use of cruel steel-jaw leg-hold traps and securing proper enforcement, funding, administration, and reauthorization of existing animal protection laws. Both AWI and SAPL have long been headed by their chief volunteer, Christine Stevens, a prominent Washington, D.C., humanitarian and community leader. [Lewis G. Regenstein]
RESOURCES ORGANIZATIONS Animal Welfare Institute, P.O. Box 3650, Washington, D.C USA 20007 (202) 337-2332, Email:
[email protected], Society for Animal Protective Legislation, P.O. Box 3719, Washington, D.C. USA 20007 (202) 337-2334, Fax: (202) 338-9478, Email:
[email protected],
Anion see Ion
Antarctic Treaty (1961) The Antarctic Treaty, which took effect in 1961, established an international administrative system for the continent. The impetus
for the treaty was the International Geophysical Year, 1957–1958, which had brought scientists from many nations together to study Antarctica. The political situation in Antarctica was complex at the time, with seven nations having made sometimes overlapping territorial claims to the continent: Argentina, Australia, Chile, France, New Zealand, Norway, and the United Kingdom. Several other nations, most notably the former USSR and the United States, had been active in Antarctic exploration and research and were concerned with how the continent would be administered. Negotiations on the treaty began in June 1958, with Belgium, Japan, and South Africa joining the original nine countries. The treaty was signed in December 1959 and took effect in June 1961. It begins by “recognizing that it is in the interest of all mankind that Antarctica shall continue forever to be used exclusively for peaceful purposes.” The key to the treaty was the nations’ agreement to disagree on territorial claims. Signatories of the treaty are not required to renounce existing claims, nations without claims have an equal voice with those holding claims, and no new claims or claim enlargements can take place while the treaty is in force. This agreement defused the most controversial and complex issue regarding Antarctica, and in an unorthodox way. Among the other major provisions of the treaty are: the continent will be demilitarized; nuclear explosions and the storage of nuclear wastes are prohibited; the right of unilateral inspection of all facilities on the continent, to ensure that the provisions of the treaty are being honored, is guaranteed; and scientific research can continue throughout the continent. The treaty runs indefinitely and can be amended, but only by the unanimous consent of the signatory nations. Provisions were also included for other nations to become parties to the treaty. These additional nations can be either “acceding parties,” which do not conduct significant research activities but agree to abide by the terms of the treaty, or “consultative parties,” which have acceded to the treaty and undertake substantial scientific research on the continent. Twelve nations have joined the original 12 in becoming consultative parties: Brazil, China, Finland, Germany, India, Italy, Peru, Poland, South Korea, Spain, Sweden, and Uruguay. Under the auspices of the treaty, the Convention on the Conservation of Antarctic Marine Living Resources was adopted in 1982. This regulatory regime is an effort to protect the Antarctic marine ecosystem from severe damage due to overfishing. Following this convention, negotiations began on an agreement for the management of Antarctic mineral resources. The Convention on the Regulation of Antarctic Mineral Resource Activities was concluded in June 1988, but in 1989 Australia and France rejected the convention, urging that Antarctica be declared an international
wilderness closed to mineral development. In 1991 the Protocol on Environmental Protection, which included a 50-year ban on mining, was drafted. At first the United States refused to endorse this protocol, but it eventually joined the other treaty parties in signing the protocol in October 1991.
[Christopher McGrory Klyza]
RESOURCES BOOKS Shapley, D. The Seventh Continent: Antarctica in a Resource Age. Baltimore: Johns Hopkins University Press for Resources for the Future, 1985.
Antarctica The earth’s fifth largest continent, centered asymmetrically around the South Pole. Ninety-eight percent of this land mass, which covers approximately 5.4 million mi2 (13.8 million km2), is covered by snow and ice sheets to an average depth of 1.25 mi (2 km). The continent receives very little precipitation, less than 5 in (12 cm) annually, and the world’s coldest temperature was recorded here, at -128°F (-89°C). Exposed shorelines and inland mountain tops support life only in the form of lichens, two species of flowering plants, and several insect species. In sharp contrast, the ocean surrounding the Antarctic continent is one of the world’s richest marine habitats. Cold water rich in oxygen and nutrients supports teeming populations of phytoplankton and shrimp-like Antarctic krill, the food source for the region’s legendary numbers of whales, seals, penguins, and fish. During the nineteenth and early twentieth centuries, whalers and sealers severely depleted Antarctica’s marine mammal populations. In recent decades the whale and seal populations have begun to recover, but interest has grown in new resources, especially oil, minerals, fish, and tourism.
The Antarctic’s functional limit is a band of turbulent ocean currents and high winds that circles the continent at about 60 degrees south latitude. This ring is known as the Antarctic convergence zone. Ocean turbulence in this zone creates a barrier marked by sharp differences in salinity and water temperature. Antarctic marine habitats, including the limit of krill populations, are bounded by the convergence.
Since 1961 the Antarctic Treaty has formed a framework for international cooperation and compromise in the use of Antarctica and its resources. The treaty reserves the Antarctic continent for peaceful scientific research and bans all military activities. Nuclear explosions and radioactive waste are also banned, and the treaty neither recognizes nor establishes territorial claims in Antarctica. However, neither does the treaty deny pre-1961 claims, of which seven exist. Furthermore, some signatories to the treaty, including
the United States, reserve the right to make claims at a later date. At present the United States has no territorial claims, but it does have several permanent stations, including one at the South Pole. Questions of territorial control could become significant if oil and mineral resources were to become economically recoverable. The primary resources currently exploited are fin fish and krill fisheries. Interest in oil and mineral resources has risen in recent decades, most notably during the 1973 “oil crisis.” The expense and difficulty of extraction and transportation have so far made exploitation uneconomical, however.
Human activity has brought an array of environmental dangers to Antarctica. Oil and mineral extraction could seriously threaten marine habitat and onshore penguin and seal breeding grounds. A growing and largely uncontrolled fishing industry may be depleting both fish and krill populations in Antarctic waters. The parable of the Tragedy of the Commons seems ominously appropriate to Antarctic fisheries, which have already nearly eliminated many whale, seal, and penguin species. Solid waste and oil spills associated with research stations and with tourism pose an additional threat. Although Antarctica remains free of “permanent settlement,” 40 year-round scientific research stations are maintained on the continent, and the population of these bases numbers nearly 4,000. In 1989 the Antarctic had its first oil spill when an Argentine supply ship, carrying 81 tourists and 170,000 gal (643,500 l) of diesel fuel, ran aground. Spilled fuel destroyed a nearby breeding colony of Adélie penguins (Pygoscelis adeliae). With more than 3,000 cruise-ship tourists visiting annually, more spills seem inevitable. Tourists themselves present a further threat to penguins and seals. Visitors have been accused of disturbing breeding colonies, thus endangering the survival of young penguins and seals. [Mary Ann Cunningham]
RESOURCES BOOKS Child, J. Antarctica and South American Geopolitics. New York: Praeger, 1988. Parsons, A. Antarctica: The Next Decade. Cambridge: Cambridge University Press, 1987. Shapley, D. The Seventh Continent: Antarctica in a Resource Age. Baltimore: Johns Hopkins University Press for Resources for the Future, 1985. Suter, K. D. World Law and the Last Wilderness. Sydney: Friends of the Earth, 1980.
Antarctica Project The Antarctica Project, founded in 1982, is an organization designed to protect Antarctica and educate the public, government, and international groups about its current and
future status. The group monitors activities that affect the Antarctic region, conducts policy research and analysis in both national and international arenas, and maintains an impressive library of books, articles, and documents about Antarctica. It is also a member of the Antarctic and Southern Ocean Coalition (ASOC), which has 230 member organizations in 49 countries. In 1988, ASOC received limited observer status under the Convention on the Conservation of Antarctic Marine Living Resources (CCAMLR). So far, the observer status continues to be renewed, providing ASOC with a way to monitor CCAMLR and to present proposals. In 1989, the Antarctica Project served as an expert adviser to the U.S. Office of Technology Assessment on its study and report of the Minerals Convention. The group prepared a study paper outlining the need for a comprehensive environmental protection convention. Later, a conservation strategy on Antarctica was developed with IUCN—The World Conservation Union.
Besides continuing the work it has already begun, the Antarctica Project has several goals for the future. One calls for the designation of Antarctica as a world park. Another focuses on developing a bilateral plan to pump out the oil and salvage the Bahia Paraiso, a ship which sank in early 1989 near the U.S. Palmer Station. Early estimated salvage costs ran at $50 million. One of the more recent projects is the Southern Ocean Fisheries Campaign, which targets the illegal fishing taking place in the Southern Ocean that is depleting the Chilean sea bass population. The catch phrase of this movement is “Take a Pass on Chilean Sea Bass.” Three to four times a year, the Antarctica Project publishes ECO, an international publication covering current political topics concerning the Antarctic Treaty System (provided free to members). Other publications include briefing materials, critiques, books, slide shows, videos, and posters for educational and advocacy purposes. [Cathy M. Falk]
RESOURCES ORGANIZATIONS The Antarctica Project, 1630 Connecticut Ave., NW, 3rd Floor, Washington, D.C. USA 20009 (202) 234-2480, Email:
[email protected],
Anthracite coal see Coal
Anthrax Anthrax is a bacterial infection caused by Bacillus anthracis. It usually affects cloven-hoofed animals, such as cattle, sheep, and goats, but it can occasionally spread to humans. Anthrax is almost always fatal in animals, but it can be successfully treated in humans if antibiotics are given soon after exposure. In humans, anthrax is usually contracted when spores are inhaled or come in contact with the skin. It is also possible for people to become infected by eating the meat of contaminated animals. Anthrax, a deadly disease in nature, gained worldwide attention in 2001 after it was used as a bioterrorism agent in the United States. Until the 2001 attack, only 18 cases of inhalation anthrax had been reported in the United States in the previous 100 years.
Anthrax occurs naturally. The first reports of the disease date from around 1500 B.C., when it is believed to have been the cause of the fifth Egyptian plague described in the Bible. Robert Koch first identified the anthrax bacterium in 1876, and Louis Pasteur developed an anthrax vaccine for sheep and cattle in 1881. Anthrax bacteria are found in nature in South and Central America, southern and eastern Europe, Asia, Africa, the Caribbean, and the Middle East. Anthrax cases in the United States are rare, probably due to widespread vaccination of animals and the standard procedure of disinfecting animal products such as cowhide and wool. Reported cases occur most often in Texas, Louisiana, Mississippi, Oklahoma, and South Dakota. Anthrax spores can remain dormant (inactive) for years in soil and on animal hides, wool, hair, and bones.
There are three forms of the disease, each named for its means of transmission: cutaneous (through the skin), inhalation (through the lungs), and intestinal (caused by eating anthrax-contaminated meat). Symptoms appear within several weeks of exposure and vary depending on how the disease was contracted. Cutaneous anthrax is the mildest form of the disease. Initial symptoms include itchy bumps, similar to insect bites. Within two days, the bumps become inflamed and a blister forms. The centers of the blisters are black due to dying tissue. Other symptoms include shaking, fever, and chills. In most cases, cutaneous anthrax can be treated with antibiotics such as penicillin. Intestinal anthrax symptoms include stomach and intestinal inflammation and pain, nausea, vomiting, loss of appetite, and fever, all becoming progressively more severe. Once the symptoms worsen, antibiotics are less effective, and the disease is usually fatal. Inhalation anthrax is the form of the disease that occurred during the bioterrorism attacks of October and November 2001 in the eastern United States. Five people died after being exposed to anthrax through contaminated mail. At least 17 other people contracted the disease but survived.
One or more terrorists sent media organizations in Florida and New York envelopes containing anthrax. Anthrax-contaminated letters also were sent to the Washington, D.C. offices of two senators. Federal agents were still investigating the incidents as of May 2002 but admitted they had no leads in the case. Initial symptoms of inhalation anthrax are flulike, but breathing becomes progressively more difficult. Inhalation anthrax can be treated successfully if antibiotics are given before symptoms develop. Once symptoms develop, the disease is usually fatal.
The only natural outbreak of anthrax among people in the United States occurred in Manchester, New Hampshire, in 1957. Nine workers in a textile mill that processed wool and goat hair contracted the disease, five with inhalation anthrax and four with cutaneous anthrax. Four of the five people with inhalation anthrax died. By coincidence, workers at the mill were participating in a study of an experimental anthrax vaccine. No workers who had been vaccinated contracted the disease. Following this outbreak, the study was stopped, all workers at the mill were vaccinated, and vaccination became a condition of employment. After that, no mill workers contracted anthrax. The mill closed in 1968. However, in 1966 a man who worked across the street from the mill died from inhalation anthrax. He is believed to have contracted it from anthrax spores carried from the mill by the wind.
The United States Food and Drug Administration approved the anthrax vaccine in 1970. It is used primarily for military personnel and some health care workers. During the 2001 outbreak, thousands of postal workers were offered the vaccine after anthrax spores from contaminated letters were found at several post office buildings. The largest outbreak worldwide of anthrax in humans occurred in the former Soviet Union in 1979, when anthrax spores released from a military laboratory infected 77 people, 69 of whom died.
Anthrax is an attractive weapon to bioterrorists. It is easy to transport and is highly lethal. The World Health Organization (WHO) estimates that 110 lb (50 kg) of anthrax spores released upwind of a large city would kill tens of thousands of people, with thousands of others ill and requiring medical treatment. The Geneva Protocol of 1925 outlawed the use of anthrax and other biological agents as weapons of war. However, Japan developed anthrax weapons in the 1930s and used them against civilian populations during World War II. During the 1980s, Iraq mass-produced anthrax as a weapon. [Ken R. Wells]
Anthrax lesion on the shoulder of a patient. (NMSB/Custom Medical Stock Photo. Reproduced by permission.)
RESOURCES BOOKS The Parents’ Committee for Public Awareness. Anthrax: A Practical Guide for Citizens. Cambridge, MA: Harvard Perspectives Press, 2001.
PERIODICALS Consumers’ Research Staff. “What You Need to Know About Anthrax.” Consumers’ Research Magazine (Nov. 2001): 10–14. Belluck, Pam. “Anthrax Outbreak of ’57 Felled a Mill but Yielded Answers.” The New York Times (Oct. 27, 2001). Bia, Frank, et al. “Anthrax: What You—And Your Patients—Need To Know Now.” Consultant (Dec. 2001): 1797–1804. Masibay, Kim Y. “Anthrax: Facts, Not Fear.” Science World (Nov. 26, 2001): 4–6. Spencer, Debbi Ann, et al. “Inhalation Anthrax.” MedSurg Nursing (Dec. 2001): 308–313.
ORGANIZATIONS Centers for Disease Control and Prevention, 1600 Clifton Road, Atlanta, GA USA 30333 (404) 639-3534, Toll Free: (888) 246-2675, Email:
[email protected], http://www.cdc.gov
International Fabricare Institute, 12251 Tech Road, Silver Spring, MD USA 20904 (301) 622-1900, Fax: (301) 236-9320, Toll Free: (800) 638-2627, Email:
[email protected], Neighborhood Cleaners Association, 252 West 29th Street, New York, NY USA 10001 (212) 967-3002, Fax: (212) 967-2240, Email:
[email protected],
Dry deposition A process that removes airborne materials from the atmosphere and deposits them on a surface. Dry deposition includes the settling or falling-out of particles due to the
influence of gravity. It also includes the deposition of gas-phase compounds and particles too small to be affected by gravity. These materials may be deposited on surfaces because of their solubility in the surface material or because of other physical and chemical attractions. Airborne contaminants are removed both by wet deposition, such as rainfall scavenging, and by dry deposition. The sum of wet and dry deposition is called total deposition. Deposition processes are the most important way contaminants such as acidic sulfur compounds are removed from the atmosphere; they are also important because deposition processes transfer contaminants to aquatic and terrestrial ecosystems. Cross-media transfers, such as transfers from air to water, can have adverse environmental impacts; an example is how dry deposition of sulfur and nitrogen compounds can acidify poorly buffered lakes. See also Acid rain; Nitrogen cycle; Sulfur cycle
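In symbols (an illustrative formulation added here for clarity, not part of the original entry), total deposition is the sum of the two pathways, and the dry flux of a contaminant to a surface is commonly parameterized as the product of a deposition velocity and the near-surface concentration:

\[
D_{\mathrm{total}} = D_{\mathrm{wet}} + D_{\mathrm{dry}}, \qquad F_{\mathrm{dry}} = v_d \, C
\]

Here \(C\) is the airborne concentration of the contaminant and \(v_d\) is an empirical deposition velocity that lumps together gravitational settling and the physical and chemical surface attractions described above.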
Dryland farming Dryland farming is the practice of cultivating crops without irrigation (rainfed agriculture). In the United States, the term usually refers to crop production in low-rainfall areas without irrigation, using moisture-conserving techniques such as mulches and fallowing. Non-irrigated farming is practiced in the Great Plains, inter-mountain, and Pacific regions of the country, or areas west of the 23.5 in (600 mm) annual precipitation line, where native vegetation was short prairie grass. In some parts of the world dryland farming means all rainfed agriculture. In the western United States, dryland farming has often resulted in severe or moderate wind erosion. Alternating seasons of fallow and planting has left the land susceptible to both wind and water erosion. High demand for a crop sometimes resulted in cultivating lands not suitable for long-term farming, degrading the soil measurably. Conservation tillage, leaving all or most of the previous crop residues on the surface, decreases erosion and conserves water. Methods used are stubble mulch, mulch, and ecofallow. In the wetter parts of the Great Plains, fallowing land has given way to annual cropping, or three-year rotations with one year of fallow. See also Arable land; Desertification; Erosion; Soil; Tilth [William E. Larson]
RESOURCES BOOKS Anderson, J. R. Risk Analysis in Dryland Farming Systems. Rome: Food and Agriculture Organization of the United Nations, 1992.
René Jules Dubos (1901–1982) French/American microbiologist, ecologist, and writer Dubos, a French-born microbiologist, spent most of his career as a researcher and teacher at Rockefeller University in New York state. His pioneering work in microbiology, such as isolating the anti-bacterial substance gramicidin from a soil organism and showing the feasibility of obtaining germ-fighting drugs from microbes, led to the development of antibiotics. Nevertheless, most people know Dubos as a writer. Dubos’s books centered on how humans relate to their surroundings, books informed by what he described as “the main intellectual attitude that has governed all aspects of my professional life...to study things, from microbes to man, not per se but in their complex relationships.” That pervasive intellectual stance, carried throughout his research and writing, reflected what Saturday Review called “one of the best-formed and best-integrated minds in contemporary civilization.” A related theme was Dubos’s conviction that “the total environment” played a role in human disease. By total environment, he meant “the sum of the facts which are not only physical and social conditions but emotional conditions as well.” Though not a medical doctor, he became an expert on disease, especially tuberculosis, and headed Rockefeller’s clinical department on that disease for several years.
“Despairing optimism,” his own title for a column he wrote for The American Scholar beginning in 1970, also pervaded Dubos’s human-environment writings. Time magazine even labeled him the “prophet of optimism”: “My life philosophy is based upon a faith in the immense resiliency of nature,” he once commented. Dubos held a lifelong belief that a constantly changing environment meant organisms, including humans, had to adapt constantly to keep up, survive, and prosper. But he worried that humans were too good at adapting, resulting in both his optimism and his despair: “Life in the technologized environment seems to prove that [humans] can become adapted to starless skies, treeless avenues, shapeless buildings, tasteless bread, joyless celebrations, spiritless pleasures—to a life without reverence for the past, love for the present, or poetical anticipations of the future.” He stated that “the belief that we can manage the earth may be the ultimate expression of human conceit,” but insisted that nature is not always right and even that humankind often improves on nature. As Thomas Berry suggested, “Dubos sought to reconcile the existing technological order and the planet’s survival through the resilience of nature and changes in human consciousness.” [Gerald L. Young Ph.D.]
René Dubos. (Corbis-Bettmann. Reproduced by permission.)
RESOURCES BOOKS Piel, G., and O. Segerberg, eds. The World of René Dubos: A Collection from His Writings. New York: Henry Holt, 1990. Ward, B., and R. Dubos. Only One Earth: The Care and Maintenance of a Small Planet. New York: Norton, 1972.
PERIODICALS Culhane, J. “En Garde, Pessimists! Enter René Dubos.” New York Times Magazine 121 (17 October 1971): 44–68. Kostelanetz, R. “The Five Careers of René Dubos.” Michigan Quarterly Review 19 (Spring 1980): 194–202.
Ducks Unlimited Ducks Unlimited (DU) is an international membership organization (United States, Canada, Mexico, New Zealand, and Australia) founded during the depression years in the United States by a group of sportsmen interested in waterfowl conservation. DU was incorporated in early 1937, and DU (Canada) was established later that spring. The organization was established to preserve and maintain waterfowl populations through habitat protection and development, primarily to provide game for sport hunting. During the Dust Bowl of the 1930s, the founding members of DU recognized that most of the continental waterfowl
populations were maintained by breeding habitat in the wetlands of Canada’s southern prairies in Saskatchewan, Manitoba, and Alberta. The organizers established DU Canada and used their resources to protect the Canadian prairie breeding grounds. Cross-border funding has since been a fundamental component of DU’s operation, although in recent years funds also have been directed to the northern American prairie states. In 1974 Ducks Unlimited de Mexico was established to restore and maintain wetlands south of the U.S.-Mexican border, where many waterfowl spend the winter months. Throughout most of its existence, DU has funded habitat restoration projects and worked with landowners to provide water management benefits on farmlands.
But from its inception DU has been subject to criticism. Early opponents characterized it as an American intrusion into Canada to secure hunting areas. More recently, critics have suggested that DU defines waterfowl habitat too narrowly, excluding upland areas where many ducks and geese nest. The group plans to broaden its focus to encompass preservation of these upland breeding and nesting areas. Since many of these areas are found on private land, DU also plans to expand its cooperative programs with farmers and ranchers. Most commonly, however, DU is criticized for placing the interests of waterfowl hunters above wildlife management concerns. The organization does allow duck hunting on its preserves. Following the fundamental principle of “users pay,” duck hunters still provide the majority of DU’s funding. For that reason DU has not addressed some issues that have a serious effect on continental waterfowl populations. The combination of illegal hunting and liberal bag limits is blamed by some for the continued decline in waterfowl numbers. DU has not addressed this issue, preferring to leave management issues to government agencies in the United States and Canada while focusing on habitat preservation and restoration. Critics of DU suggest that the organization will not act on population matters and risk offending the hunters who provide its financial support.
In North America DU has expanded its scope and activities to address ecological and land use problems through the work of the North American Waterfowl Management Plan (NAWMP) and the Prairie CARE (Conservation of Agriculture, Resources and Environment) program. The wetlands conservation and other habitat projects addressed in these and similar programs not only benefit game species but other endangered species of plants and animals as well. NAWMP (an agreement between the United States and Canada) alone protects over 5.5 million acres (2.2 million ha) of waterfowl habitat. In 2002, the North American Wetlands Conservation Act (NAWCA) granted DU one million dollars to be put toward a new wetland in Ohio.
On balance, DU has had a major, positive impact on North American waterfowl habitat and management. Millions of acres of wetlands have been protected, enhanced, and managed in Canada, the United States, and Mexico. However, the continued decline in waterfowl populations may require the organization to redirect some of its efforts to population management and preservation issues. [David A. Duffus]
RESOURCES ORGANIZATIONS Ducks Unlimited, Inc., One Waterfowl Way, Memphis, TN USA 38120 (901) 758-3825, Toll Free: (800) 45DUCKS,
Ducktown, Tennessee Tucked in a valley of the Cherokee National Forest, on the border of Tennessee, North Carolina, and Georgia, Ducktown once reflected the beauty of the surrounding Appalachian Mountains. Today, however, Ducktown and the valley known as the Copper Basin form the only desert east of the Mississippi. Mined for its rich copper lode since the 1850s, the basin became a vast stretch of lifeless, red-clay hills. It was an early and stark lesson in the devastation that acid rain and soil erosion can wreak on a landscape, and one of the few man-made landmarks visible to the astronauts who landed on the moon.
Prospectors came to the basin during a gold rush in 1843, but the closest thing to gold they discovered was copper, and most went home. By 1850, however, entrepreneurs realized the value of the ore, and a new rush began to mine the area. Within five years, 30 companies had dug beneath the topsoil and made the basin the country’s leading producer of copper. The only way to separate copper from the zinc, iron, and sulfur present in Copper Basin rock was to roast the ore at extremely high temperatures. Mining companies built giant open pits in the ground for this purpose, some as wide as 600 ft (183 m) and as deep as a 10-story building. Fuel for these fires came from the surrounding forests. The forests must have seemed a limitless resource, but it was not long before every tree, branch, and stump for 50 mi2 (130 km2) had been torn up and burned.
The fires in the pits emitted great billows of sulfur dioxide gas—so thick people could get lost in the clouds even at high noon—and this gas mixed with water and oxygen in the air to form sulfuric acid, the main component of acid rain. Saturated by acidic moisture and choked by the remaining sulfur dioxide gas and dust, the undergrowth died and the soil became poisonous to new plants. Wildlife fled the shelterless hillsides. Without root systems, virtually all the soil washed into the Ocoee River,
smothering aquatic life. Open-range grazing of cattle, allowed in Tennessee until 1946, denuded the land of what little greenery remained. Soon after the turn of the century, Georgia filed suit to stop the air pollution which was drifting out of this corner of Tennessee. In 1907, the Supreme Court, in a decision written by Justice Oliver Wendell Holmes, ruled in Georgia’s favor, and the sulfur clouds ceased in the Copper Basin. It was one of the first environmental-rights decisions in the United States. That same year, the Tennessee Copper Company designed a way to capture the sulfur fumes, and sulfuric acid, rather than copper, became the area’s main product. It remains so today. Ducktown was the first mining settlement in the area, and residents now take a curious pride not only in the town’s history, but in the eerie moonscape of red hills and painted cliffs that surrounds it. Since the 1930s, Tennessee Copper Company, the Tennessee Valley Authority, and the Soil Conservation Service have worked to restore the land, planting hundreds of loblolly pine and black locust trees. Their efforts have met with little success, but new reforestation techniques such as slow-release fertilizer have helped many new plantings survive. Scientists hope to use the techniques practiced here on other deforested areas of the world. Ironically, many of the townspeople want to preserve a piece of the scar, both for its unique beauty and the environmental lesson of what human enterprise can do to nature, as well as what it can undo. See also Acid waste; Ashio, Japan; Mine spoil waste; Smelter; Sudbury, Ontario; Surface mining; Trail Smelter arbitration [L. Carol Ritchie]
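The conversion of smelter fumes to acid described above can be summarized in two steps (a textbook simplification added here for clarity; it does not appear in the original entry): sulfur dioxide is oxidized in the air, and the resulting sulfur trioxide combines with water to form sulfuric acid.

\[
2\,\mathrm{SO}_2 + \mathrm{O}_2 \rightarrow 2\,\mathrm{SO}_3, \qquad \mathrm{SO}_3 + \mathrm{H}_2\mathrm{O} \rightarrow \mathrm{H}_2\mathrm{SO}_4
\]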
RESOURCES PERIODICALS Barnhardt, W. “The Death of Ducktown.” Discover 8 (October 1987): 34–36+.
Dunes and dune erosion Dunes are small hills, mounds, or ridges of wind-blown soil material, usually sand, that are formed in both coastal and inland areas. The formation of coastal or inland dunes requires a source of loose sandy material and dry periods during which the sand can be picked up and transported by the wind. Dunes exist independently of any fixed surface feature and can move or drift from one location to another over time. They are the result of natural erosion processes and are natural features of the landscape in many coastal areas and deserts, yet they also can be symptoms of land degradation. Inland dunes can be an expression of aridity or
indicators of desertification—the result of long-term land degradation in dryland areas.
Coastal dunes are the result of marine erosion in which sand is deposited on the shore by wave action. During low tide, the beach sand dries and is dislodged and transported by the wind, usually over relatively short distances. Depending on the local topography and direction of the prevailing winds, a variety of shapes and forms can develop—from sand ridges to parabolic mounds. The upper few centimeters of coastal dunes generally contain chlorides from salt spray and wind-blown salt. As a result, attempts to stabilize coastal dunes with vegetation are often limited to salt-tolerant plants. The occurrence of beaches and dunes together has important implications for coastal areas. A beach absorbs the energy of waves and acts as a buffer between the sea and the dunes behind it. Low-lying coastlines are best defended against high tides by consolidated sand dunes. In such cases, maintaining a wide, high beach that is backed by stable dunes is desirable. Engineering structures along coastal areas and the mouths of rivers can affect the formation and erosion of beaches and coastal dunes. In some instances it is desirable to build and widen beaches to protect coastal areas. This can require the construction of structures that trap littoral drift, rock mounds to check wave action, and sea walls that protect areas behind the beach from heavy wave action. Where serious erosion has occurred, artificial replacement of beach sands may be necessary. Such methods are expensive and require considerable engineering effort and the use of heavy equipment.
The weathering of rocks, mainly sandstone, is the origin of material for inland dunes. However, whether or not sand dunes form depends on the condition of the vegetative cover and the use of the land. In contrast to coastal dunes, which are often considered to be beneficial to coastal areas, inland dunes can be indicators of land degradation where the protective cover of vegetation has been removed as a result of inappropriate cultivation, overgrazing, construction activities, and so forth. When vegetative cover is absent, soil is highly susceptible to both water and wind erosion. The two work together in drylands to create sources of soil that can be picked up and transported either downwind or downstream. The flow of water moves and exposes sand grains and supplies fresh material that results in deposits of sand in flood plains and ephemeral drainage systems. Before dunes can develop in such areas, there must be long dry periods between periodic or episodic sediment-laden flows of water. Wind erosion occurs where such sand deposits from water erosion are exposed to the energy of wind, or in areas that are devoid of vegetative cover.
Where sand is the principal soil particle size and where high wind velocities are common, sand particles are moved by a process called saltation and creep. Sand dunes form under such conditions and are shaped by wind patterns over the landscape. Complex patterns can be formed—the result of interactions of wind, sand, the ground surface topography, and any vegetation or other physical barriers that exist. These patterns can be sword-like ridges, called longitudinal dunes; crescentic accumulations, or barchans; turret-shaped mounds; shallow sheets of sand; or large seas of transverse dunes. The typical pattern is one of a gradual long slope on the windward side of the dune, dropping off sharply on the leeward side. Exposed sand dunes can move up to 11 yd (10 m) annually in the direction of the prevailing wind. Such dunes encroach upon inhabited areas, covering farmlands, pasture lands, irrigation canals, urban areas, railroads, and highways. Blowing sand can mechanically injure and kill vegetation in its path and can eventually bury croplands or rangelands. If left unchecked, the drifting sand will expand and lead to serious economic and environmental losses.
Worldwide, dryland areas are those most susceptible to wind erosion. For example, 22% of Africa north of the Equator is severely affected by wind erosion, as is over 35% of the land area in the Near East. As a result, inland dunes represent a significant landscape component in many desert regions. For example, dunes represent 28%, 26%, and 38% of the landscape of the Saharan Desert, Arabian Desert, and Australia, respectively (Heathcote 1983). In 1980, Walls estimated that 1.3 billion hectares of land were covered by sand dunes globally. Although dunes can be symptoms of land use problems, in some areas they are part of a natural dryland landscape and are considered features of beauty and interest. Sand dunes have become popular recreational areas in parts of the United States, including the Great Sand Dunes National Monument in southern Colorado with its 229-yd (210-m) high dunes that cover a 158-mi2 (254.4 km2) area, and the Indiana Dunes State Park along the shore of Lake Michigan.
When dune formation and encroachment represent significant environmental and economic problems, sand dune stabilization and control should be undertaken. Dune stabilization may initially require one or more of the following: applications of water, oil, bitumen emulsions, or chemical stabilizers to improve the cohesiveness of surface sands; the reshaping of the landscape, such as construction of foredunes that are upwind of the dunes, and armoring of the surface using techniques such as hydroseeding, jute mats, mulching, and asphalt; and constructing fences to reduce wind velocity near the ground surface. Although sand dune stabilization is the necessary first step in controlling this process, the establishment of a vegetative cover is a necessary
condition to achieve long-term control of sand dune formation and erosion. Furthermore, stabilization and revegetation must be followed with appropriate land management that deals with the causes of dune formation in the first place. Where dune erosion has not progressed to a seriously degraded state, dunes can be reclaimed through natural regeneration simply by protecting the area against livestock grazing, all-terrain vehicles, and foot traffic. Vegetation stabilizes dunes by decreasing wind speed near the ground and by increasing the cohesiveness of sandy material through the addition of organic colloids and the binding action of roots. Plants trap the finer wind-blown soil particles, which helps improve soil texture, and they also improve the microclimate of the site, reducing soil surface temperatures. Upwind barriers or windbreak plantings of vegetation, often trees or other woody perennials, can be effective in improving the success of revegetating sand dunes. They reduce wind velocities, help prevent plant roots from being exposed by drifting sand, and protect plantings from the abrasive action of blowing sand. Areas that are susceptible to sand dune encroachment can likewise be protected by using fences or windbreak plantings that reduce wind velocities
near the ground surface. Because of the severity of sand dune environments, it can be difficult to find plant species that can be established and survive. In addition, any plantings must be protected against exploitation, for example, from grazing or fuelwood harvesting. The expansion of sand dunes resulting from desertification not only represents an environmental problem, but it also represents serious losses of productive land and a financial hardship for farmers and others who depend upon the land for their livelihood. Such problems are particularly acute in many of the poorer dryland countries of the world and deserve the attention of governments, international agencies, and nongovernmental organizations, who need to direct their efforts toward the causes of soil erosion and dune formation. [Kenneth N. Brooks]
The dunes of Nags Head, North Carolina. (Photograph by Jack Dermid, National Audubon Society Collection. Photo Researchers Inc. Reproduced by permission.)
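For a quantitative handle on the saltation process described in this entry, Bagnold’s classic threshold relation (standard in the wind-erosion literature, though not part of the original entry) estimates the friction velocity \(u_{*t}\) that the wind must exceed before sand grains of diameter \(d\) begin to move:

\[
u_{*t} = A \sqrt{\frac{\rho_s - \rho_a}{\rho_a}\, g\, d}
\]

where \(\rho_s\) and \(\rho_a\) are the densities of sand and air, \(g\) is gravitational acceleration, and \(A \approx 0.1\) for the fluid threshold of typical dune sand. Fences, windbreaks, and vegetation work precisely by keeping near-surface wind speeds below this threshold.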
RESOURCES BOOKS Brooks, K.N., P.F. Folliott, H. M. Gregersen and L. F. DeBano. Hydrology and the Management of Watersheds. 2nd ed. Ames, Iowa: Iowa State University Press, 1997.
Folliott, P. F., K. N. Brooks, H. M. Gregersen and A. L. Lundgren. Dryland Forestry—Planning and Management. New York: John Wiley & Sons, 1995. Food and Agriculture Organization (FAO) of the United Nations. Sand Dune Stabilization, Shelterbelts, and Afforestation in Dry Zones. Rome: FAO Conservation Guide 10, 1985.
The effects of the Oklahoma dust bowl. (Corbis-Bettmann. Reproduced by permission.)
Dust Bowl “Dust Bowl” is a term coined by a reporter for the Washington (D.C.) Evening Star to describe the effects of severe wind erosion in the Great Plains during the 1930s, caused by severe drought and lack of conservation practices. For a time after World War I, agriculture prospered in the Great Plains. Land was rather indiscriminately plowed and planted with cereals and row crops. In the 1930s, the total cultivated land in the United States increased, reaching 530 million acres (215 million ha), its highest level ever. Cereal crops, especially wheat, were most prevalent in the Great Plains. Summer fallow (cultivating the land, but only planting every other season) was practiced on much of the land. Moisture, stored in the soil during the fallow (uncropped) period, was used by the crop the following year.
In a process called dust mulch, the soil was frequently clean tilled to leave no crop residues on the surface, control weeds, and, it was thought at the time, preserve moisture from evaporation. Frequent cultivation and lack of crop canopy and residues optimized conditions for wind erosion during the droughts and high winds of the 1930s. During the process of wind erosion, the finer particles (silt and clay) are removed from the topsoil, leaving coarser-textured sandy soil. The fine particles carry with them higher concentrations of organic matter and plant nutrients, leaving the remaining soil impoverished and with a lower water storage capacity. Wind erosion of the Dust Bowl reduced the productivity of affected lands, often to the point that they could not be farmed economically. While damage was particularly severe in Texas, Oklahoma, Colorado, and Kansas, erosion occurred in all of the Great Plains states, from Texas to North Dakota and Montana, even into the Canadian Prairie Provinces. The eroding soil not only prevented the growth of plants, it uprooted established ones. Sediment filled fence rows, stream channels, road ditches, and farmsteads. Dirt coated
the insides of buildings. Airborne dust made travel difficult because of decreased visibility; it also impaired breathing and caused respiratory diseases. Dust from the Great Plains was carried high in the air and transported as far east as the Atlantic seaboard. In places, 3–4 in (7–10 cm) of topsoil was blown away, forming dunes 15–20 ft (4.6–6.1 m) high where the dust finally came to rest. In a 20-county area covering parts of southwestern Kansas, the Oklahoma strip, the Texas Panhandle, and southeastern Colorado, a soil-erosion survey by the Soil Conservation Service showed that 80% of the land was affected by wind erosion, 40% of it to a serious degree. The droughts and resultant wind erosion of the 1930s created widespread economic and social problems. Large numbers of people migrated out of the Dust Bowl area during the 1930s. The migration resulted in the disappearance of many small towns and community services such as churches, schools, and local units of government. Following the disaster of the Dust Bowl, the 1940s saw dramatically improved economic and social conditions with increased precipitation and improved crop prices. Gradually, changes in farming practices have also taken place. Much of the severely damaged and marginal land has been
returned to grass for livestock grazing. Non-detrimental tillage and management practices, such as conservation tillage (stubble mulch, mulch, and residue tillage); use of tree, shrub, and grass windbreaks; maintenance of crop residues on the soil surface; and better machinery have all contributed to improved soil conditions. Annual cropping or a three-year rotation of wheat-sorghum-fallow has replaced the alternate crop-fallow practice in many areas, particularly in the more humid areas of the West. While the extreme conditions of drought and land mismanagement of the Dust Bowl years have not been repeated since the 1930s, wind erosion is still a serious problem in much of the Great Plains. According to the Soil Conservation Service, the states with the most serious erosion per unit area in 1982 were Texas, Colorado, Nevada, and Montana. See also Arable land; Desertification; Overgrazing; Soil eluviation; Tilth; Water resources [William E. Larson]
RESOURCES BOOKS Hurt, R. D. The Dust Bowl: An Agricultural and Social History. Chicago: Nelson-Hall, 1981.
E
Earth Charter It is the objective of the Earth Charter to set forth an inspiring vision of the fundamental principles of a global partnership for sustainable development and environmental conservation. The Earth Charter initiative reflects the conviction that a radical change in humanity’s attitudes and values is essential to achieve social, economic, and ecological well-being in the twenty-first century. The Earth Charter project is part of an international movement to clarify humanity’s shared values and to develop a new global ethics, ensuring effective human cooperation in an interdependent world. There were repeated efforts to draft the Earth Charter beginning in 1987. Early in 1997 an Earth Charter Commission was formed by the Earth Council and Green Cross International. The Commission prepared an Earth Charter, which was circulated as a people’s treaty beginning in 1998. The Charter was then submitted to the United Nations General Assembly in the year 2000. On June 29, 2000, the official Earth Charter was established at the Peace Palace in The Hague, the Netherlands.
Historical background, 1945–1992 The role and significance of the Earth Charter are best understood in the context of the United Nations’ ongoing efforts to identify the fundamental principles essential to world security. When the U.N. was established in 1945, its agenda for world security emphasized peace, human rights, and equitable socioeconomic development. No mention was made of the environment as a common concern, and little attention was given to ecological well-being in the U.N.’s early years. However, since the Stockholm Conference on the Human Environment in 1972, ecological security has emerged as a fourth major concern of the United Nations. Starting with the Stockholm Declaration, the world’s nations have adopted a number of declarations, charters, and treaties that seek to create a global alliance that effectively integrates and balances development and conservation. In
addition, a variety of nongovernmental organizations have drafted and circulated their own declarations and people’s treaties. These documents reflect a growing awareness that humanity’s social, economic, and environmental problems are interconnected and require integrated solutions. The Earth Charter initiative builds on these efforts.
The World Charter for Nature, which was adopted by the U.N. General Assembly in 1982, was a progressive declaration of ecological and ethical principles for its time. It remains a stronger document than any that have followed from the point of view of environmental ethics. However, in its 1987 report, Our Common Future, the U.N. World Commission on Environment and Development (WCED) issued a call for “a new charter” that would “consolidate and extend relevant legal principles,” creating “new norms...needed to maintain livelihoods and life on our shared planet” and “to guide state behavior in the transition to sustainable development.” The WCED also recommended that the new charter “be subsequently expanded into a Convention, setting out the sovereign rights and reciprocal responsibilities of all states on environmental protection and sustainable development.” The WCED recommendations, together with deepening environmental and ethical concerns, spurred efforts in the late 1980s to create an Earth Charter. However, before any U.N. action was initiated on the Earth Charter, the Commission on Environmental Law of the World Conservation Union (IUCN) drafted the convention proposed in Our Common Future. The IUCN Draft International Covenant on Environment and Development presents an integrated legal framework for existing and future international and national environmental and sustainable development law and policy. Even though the IUCN Draft Covenant was presented at the United Nations in 1995, official negotiations have not yet begun on this treaty, which many environmentalists believe is urgently needed to clarify, synthesize, and further develop international sustainable development law. The United Nations Conference on Environment and Development (UNCED), or Earth Summit, held in Rio de
Janeiro, Brazil, in 1992 did take up the challenge of drafting the Earth Charter. A number of governments prepared recommendations. Many nongovernmental organizations, including groups representing the major faiths, became actively involved. While the resulting Rio Declaration on Environment and Development is a valuable document, it falls short of the aspirations that many groups have had for the Earth Charter.
The Earth Charter Project, 1994–2000 A new Earth Charter initiative began in 1994 under the leadership of Maurice Strong, the former secretary general of UNCED and chairman of the newly formed Earth Council, and Mikhail Gorbachev, acting in his capacity as chairman of Green Cross International. The Earth Council was created to complete the unfinished business of UNCED and to promote implementation of Agenda 21, the Earth Summit’s action plan. Jim MacNeill, former secretary general of the WCED, and Prime Minister Ruud Lubbers of The Netherlands were instrumental in facilitating the organization of the new Earth Charter project. Ambassador Mohamed Sahnoun of Algeria served as the executive director of the project during its initial phase, and its first international workshop was held at the Peace Palace in The Hague in May 1995. Representatives from 30 countries and more than 70 different organizations participated in the workshop. Following this event, the secretariat for the Earth Charter project was established at the Earth Council in San José, Costa Rica.
A worldwide Earth Charter consultation process was organized by the Earth Council in connection with the Rio+5 review in 1996 and 1997. The Rio+5 review, which culminated with a special session of the United Nations General Assembly in June 1997, sought to assess progress toward sustainable development since the Rio Earth Summit and to develop new partnerships and plans for implementation of Agenda 21. The Earth Charter consultation process engaged men and women from all sectors of society and all cultures in contributing to the Earth Charter’s development. A special program was created to contact and involve the world’s religions, interfaith organizations, and leading religious and ethical thinkers. A special indigenous people’s network was also organized by the Earth Council. Early in 1997, an Earth Charter Commission was formed to oversee the project. The 23 members represent the major regions of the world and different sectors of society. The Commission issued a Benchmark Draft Earth Charter in March 1997 at the conclusion of the Rio+5 Forum in Rio de Janeiro. The Forum was organized by the Earth Council as part of its independent Rio+5 review, and it brought together more than 500 representatives from civil society and national councils of sustainable development. The Benchmark Draft reflected the many and diverse
contributions received through the consultation process and from the Rio+5 Forum. The Commission extended the Earth Charter consultation until early 1998, and the Benchmark Draft was circulated widely as a document in progress. The Earth Charter concept A consensus developed that the Earth Charter should be: a statement of fundamental principles of enduring significance that are widely shared by people of all races, cultures, and religions; a relatively brief and concise document composed in a language that is inspiring, clear, and meaningful in all tongues; the articulation of a spiritual vision that reflects universal spiritual values, including but not limited to ethical values; a call to action that adds significant new dimensions of value to what has been expressed in earlier relevant documents; a people’s charter that serves as a universal code of conduct for ordinary citizens, educators, business executives, scientists, religious leaders, nongovernmental organizations, and national councils of sustainable development; and a declaration of principles that can serve as a “soft law” document when adopted by the U.N. General Assembly. The Earth Charter was designed to focus on fundamental principles with the understanding that the IUCN Covenant and other treaties will set forth the more specific practical implications of these principles. The Earth Charter draws upon a variety of resources, including ecology and other contemporary sciences, the world’s religious and philosophical traditions, the growing literature on global ethics and the ethics of environment and development, the practical experience of people living sustainably, as well as relevant intergovernmental and nongovernmental declarations and treaties. At the heart of the new global ethics and the Earth Charter is an expanded sense of community and moral responsibility that embraces all people, future generations, and the larger community of life on Earth. Among the values affirmed by the Benchmark Draft are: respect for Earth and all life; protection and restoration of the health of Earth’s ecosystems; respect for human rights, including the right to an environment adequate for human well-being; eradication of poverty; nonviolent problem solving and peace; the equitable sharing of resources; democratic participation in decision making; accountability and transparency in administration; universal education for sustainable living; and a sense of shared responsibility for the well-being of the Earth community. [Steven C. Rockefeller]
RESOURCES BOOKS Earth Ethics, Special Earth Charter Double Issue. Washington, D.C.: Center for Respect of Life and Environment, 7, nos. 3/4 (Spring/Summer, 1996): 1-7 and 8, nos. 2/3 (Winter/Spring, 1997): 3-8.
Rockefeller, S.C. Principles of Environmental Conservation and Sustainable Development: Summary and Survey. The Earth Council website, The Earth Charter Consultation page.
ORGANIZATIONS The Earth Charter Initiative, The Earth Council, P.O. Box 319-6100, San Jose, Costa Rica +506-205-1600, Fax: +506-249-3500, Email:
[email protected],
Earth Day The first Earth Day, April 22, 1970, attracted over 20 million participants in the United States. It launched the modern environmental movement and spurred the passage of several important environmental laws. It was the largest demonstration in history. People from all walks of life took part in marches, teach-ins, rallies, and speeches across the country. Congress adjourned so that politicians could attend hometown events, and cars were banned from New York’s Fifth Avenue. The event had a major impact on the nation. Following Earth Day, conservation organizations saw their memberships double and triple. Within months, the Environmental Protection Agency (EPA) was created; Congress also revised the Clean Air Act, the Clean Water Act, and other environmental laws. The concept for Earth Day began with Senator Gaylord Nelson, a Wisconsin Democrat, who in 1969 proposed a series of environmental teach-ins on college campuses across the nation. Hoping to satisfy a course requirement at Harvard by organizing a teach-in there, law student Denis Hayes flew to Washington, DC, to interview Nelson. The senator persuaded Hayes to drop out of Harvard and organize the nationwide series of events that were only a few months away. According to Hayes, Wednesday, April 22 was chosen because it was a weekday and would not compete with weekend activities. It also came before students would start “cramming” for finals, but after the winter thaw in the North. Twenty years later, Earth Day anniversary celebrations attracted even greater participation. An estimated 200 million people in over 140 nations were involved in events ranging from a concert and rally of over a million people in New York’s Central Park, to a festival in Los Angeles that attracted 30,000, to a rally of 350,000 at the National Mall in Washington, D.C. Earth Day 1990 activities included planting trees; cleaning up roads, highways, and beaches; building bird houses; ecology teach-ins; and recycling cans and bottles. A convoy of garbage trucks drove through the streets of Portland, Oregon, to dramatize the lack of landfill space. Elsewhere, children wore gas masks to protest air pollution, others marched in parades wearing costumes made from
recycled materials, and some even released ladybugs into the air to demonstrate alternatives to harmful pesticides. The gas-guzzling car that was buried in San Jose, California, during the first Earth Day was dug up and recycled. Abroad, Berliners planted 10,000 trees along the East-West border. In Myanmar, there were protests against the killing of elephants. Brazilians demonstrated against the destruction of their tropical rain forests. In Japan, there were demonstrations against disposable chopsticks, and 10,000 people attended a concert on an island built on reclaimed land in Tokyo Bay.
The 1990 version was also organized by Denis Hayes, with help from hundreds of volunteers. This time, the event was well organized and funded; it was widely supported by both environmentalists and the business community. The United Auto Workers Union sent Earth Day booklets to all of its members, the National Education Association sent information to almost every teacher in the country, and the Methodist Church mailed Earth Day sermons to over 30,000 ministers. The sophisticated advertising and public relations campaign, licensing of its logo, and sale of souvenirs provoked criticism that Earth Day had become too commercial. Even oil, chemical, and nuclear firms joined in and proclaimed their love for nature. But Hayes defended the professional approach as necessary to maximize interest and participation in the event, to broaden its appeal, and to launch a decade of environmental activism that would force world leaders to address the many threats to the planet. He also pointed out that while foundations, corporations, and individuals had donated $3.5 million, organizers turned down over $4 million from companies that were thought to be harming the environment.
The 30th anniversary of the event was also organized by Hayes. Unfortunately, it did not produce the large numbers of the prior anniversary celebration. The movement had reached over 5,000 environmental groups who helped organize local rallies, and hundreds of thousands of people met in Washington to hear political, environmental, and celebrity speakers. Hayes believes that the long-term success of Earth Day in securing a safe future for the planet depends on getting as many people as possible involved in environmentalism. The Earth Day celebrations he helped organize have been a major step in that direction. [Lewis G. Regenstein]
RESOURCES PERIODICALS Borrelli, P. “Can Earth Day Be Every Day?” Amicus Journal 12 (Spring 1990): 22-26.
399
Earth Day
Environmental Encyclopedia 3
Marchers carry a tree down Fifth Avenue in New York City on the first Earth Day, April 22, 1970. (Corbis-Bettmann. Reproduced by permission.)
400
Environmental Encyclopedia 3
Earth First!
Two Earth First! Members hold a sign in front of the statue of Abraham Lincoln at the Lincoln Memorial to protest the destruction of the earth’s rain forests.
Hayes, D. “Earth Day, 1990: Threshold of the Green Decade.” Natural History 99 (April 1990): 55–60.
Stenger, Richard. “Thousands Observe Earth Day 2000 in Washington.” CNN.com (22 April 2000) [cited June 2002].
Earth First!

Earth First! is a radical and often controversial environmental group founded in 1980 in response to what Dave Foreman and other founders believed to be the increasing co-optation of the environmental movement. For Earth First! members, too much of the environmental movement has become lethargic, compromising, and corporate in its orientation. To avoid a similar fate, Earth First! members have restricted their use of traditional fund-raising techniques and have sought a non-hierarchical organization with neither a professional staff nor formal leadership. A movement established by and for self-acknowledged environmental hardliners, Earth First! has a general stance reflected in its slogan, “No compromise in the defense of
Mother Earth.” Its policy positions are based upon principles of deep ecology and in particular on the group’s belief in the intrinsic value of all natural things. Its goals include preserving all remaining wilderness, ending environmental degradation of all kinds, eliminating major dams, establishing large-scale ecological preserves, slowing and eventually reversing human population growth, and reducing excessive and environmentally harmful consumption.

Combining biocentrism with a strong commitment to activism, Earth First! does not restrict itself to lobbying, lawsuits, and letter-writing, but also employs direct action, civil disobedience, “guerrilla theater,” and other confrontational tactics; in fact, it is probably best known for its various clashes with the logging industry, particularly in the Pacific Northwest. Earth First! members and sympathizers have been associated with controversial tactics including the chopping down of billboards and monkey-wrenching, which includes pouring sand in bulldozer gas tanks, spiking trees, sabotaging drilling equipment, and so forth. Officially, the organization purports neither to condone nor to condemn such tactics.
Earth First! encourages people to respect species and wilderness, to refrain from having children, to recycle, to live simpler, less destructive lives, and to engage in civil disobedience to thwart environmental destruction. During the summer of 1990, the group sponsored its most noted event, “Redwood Summer.” Activists from around the United States gathered in the redwood country of northern California to protest large-scale logging operations, to call attention to environmental concerns, to educate and establish dialogues with loggers and the local public, and to engage in civil disobedience.

Earth First! also sponsors a Biodiversity Project for protecting and restoring natural ecosystems. Its Ranching Task Force educates the public about the consequences of overgrazing in the American West. The Grizzly Bear Task Force focuses on the preservation of the grizzly bear in the Rockies and the reintroduction of the species to its historical range throughout North America. Earth First!’s wider Predator Project seeks the restoration of all native predators to their respective habitats and ecological roles. Carmageddon is an anti-car campaign Earth First! sponsors in the United Kingdom. Other Earth First! projects seek to defend redwoods and other native forests, encourage direct action against the fur industry, intervene in government-sponsored wolf-control programs in the U.S. and Canada, and protest government and business decisions which have environmentally destructive consequences for tropical rain forests. [Lawrence J. Biskowski]
Earth Island Institute

The Earth Island Institute (EII) was founded by David Brower in 1982 as a nonprofit organization dedicated to developing innovative projects for the conservation, preservation, and restoration of the global environment. In its earliest years, the Institute worked primarily with a volunteer staff and concentrated on projects like the first Conference on the Fate of the Earth, publication of Earth Island Journal, and the production of films about the plight of indigenous peoples. In 1985 and again in 1987, EII expanded its facilities and scope, opening office space and providing support for a number of allied groups and projects. Its membership now numbers approximately 35,000.

EII conducts research on, and develops critical analyses of, a number of contemporary issues. With sponsored projects ranging from saving sea turtles to encouraging land restoration in Central America, EII does not restrict its scope to traditionally “environmental” goals but rather pursues what it sees as ecologically related concerns such as human rights, economic development of the Third World, economic conversion from military to peaceful production, and inner-city
poverty, among others. But much of its mission is to be an environmental educator and facilitator. In that role EII sponsors or participates in numerous programs designed to provide information, exchange viewpoints and strategies, and coordinate the efforts of various groups. EII even produces music videos as part of its environmental education efforts.

EII is perhaps best known for its efforts to halt the use of drift nets by tuna boats, a practice that is often fatal to large numbers of dolphins. After an EII biologist signed on as a crew member aboard a Latin American tuna boat and managed to document the slaughter of dolphins in drift nets, EII brought a lawsuit to compel more rigorous enforcement of existing laws banning tuna caught on boats using such nets. EII also joined with other environmental groups in urging a consumer boycott of canned tuna. These efforts were successful in persuading the three largest tuna canners to pledge not to purchase tuna caught in drift nets. The monitoring of tuna fishing practices is an ongoing EII project.

EII also sponsors a wide variety of other projects. Its Energy Program promotes energy-efficient technology. Its Friends of the Ancient Forest program aims at the protection of old-growth forests on the Pacific Coast. Baikal Watch works for the permanent protection of biologically unique Lake Baikal in Russia. The Climate Protection Institute publishes the Greenhouse Gas-ette and develops public education material about changes in global climate. EII participates in the International Green Circle, an ecological restoration program which matches volunteers with ongoing projects worldwide. EII’s Sea Turtle Restoration Project investigates threats to the world’s endangered sea turtles, organizes and educates United States citizens to protect the turtles, and works with Central American sea turtle restoration projects. The Rain Forest Health Alliance develops educational materials and programs about the biological diversity of tropical rain forests. The Urban Habitat Program develops multicultural environmental leadership and organizes efforts to restore urban neighborhoods.

EII administers a number of funds designed to support creative approaches to environmental conservation, preservation, and restoration; to support activists exploring the use of citizen-suit provisions of various statutes to enforce environmental laws; and to help develop a Green political movement in the United States. EII also sponsors several international conferences, exchange programs, and publication projects in support of various environmental causes. [Lawrence J. Biskowski]
RESOURCES

ORGANIZATIONS
Earth Island Institute, 300 Broadway, Suite 28, San Francisco, CA USA 94133-3312, (415) 788-3666, Fax: (415) 788-7324
Earth Liberation Front

Earth Liberation Front (ELF) is a grassroots environmental group that the Federal Bureau of Investigation has labeled “a serious terrorism threat.” Since 1996, ELF and the Animal Liberation Front (ALF) have committed more than 600 acts of vandalism that resulted in more than $43 million in damage, an FBI terrorism expert reported to Congress in 2002. James Jarboe, the FBI section chief for domestic terrorism, told Congress that it was hard to track down ELF and ALF members because the two groups have little organized structure.

According to the ELF link on the ALF Web site, there is no designated leadership or formal membership. ELF and ALF claim responsibility for their activities by e-mail, fax, and other communications usually sent to the media. In 2002, media information was provided by the North American Earth Liberation Front Press Office. Spokesman Leslie James Pickering wrote that he received anonymous communiqués from ELF and further distributed them.

ELF describes itself as an international underground organization dedicated “to stop the continued destruction of the natural environment.” Members join anonymous cells that may consist of one person or more. Members of one cell do not know the identity of members in other cells, a structure that prevents activists in one cell from being compromised should members in another cell become disaffected. People act on their own and carry out actions following anonymous ELF postings. ELF actions include sabotage and property damage.

ELF postings include guidelines for taking action. One guideline is to inflict “economic damage on people who profit from the destruction and exploitation of the natural environment.” Another guideline is to reveal and educate the public about the “atrocities” committed against the environment. The third guideline is to take all needed precautions against harming any animal, human or non-human.

ELF is an outgrowth of Earth First!, a group formed during the 1980s to promote environmental causes. Earth First! held protests and civil disobedience events, according to the FBI. In 1984, Earth First! members began a campaign of “tree spiking” to prevent loggers from cutting down trees. Members inserted metal or ceramic spikes into trees; the spikes damaged saws when loggers tried to cut the trees down. When Earth First! members in Brighton, England, disagreed with proposals to make their group more mainstream, radical members of the group founded the Earth Liberation Front in 1992. The following year, ELF aligned itself with ALF.

The Animal Liberation Front was started in Great Britain during the mid-1970s. The loosely organized movement had the goal of ending the abuse and exploitation
of animals. An American ALF branch was started in the late 1970s. According to the FBI, people became members by participating in “direct action” activities against companies or people using animals for research or economic gain. ALF activists targeted animal research laboratories, fur companies, mink farms, and restaurants. ELF and ALF declared mutual solidarity in a 1993 announcement. The following year, the San Francisco branch of Earth First! recommended that the group move away from ELF and its unlawful activities.

ELF calls its activities “monkeywrenching,” a term that refers to actions such as tree spiking, arson, sabotage of logging equipment, and property destruction. In a 43-page document titled “Year End Report for 2001,” ELF and ALF claimed responsibility for 67 illegal acts that year. ELF claimed sole credit for setting a fire that year that caused $5.4 million in damage to a University of Washington horticulture building. The group also took sole credit for a 1998 fire set at a Vail, Colorado, ski resort. Damage totaled $12 million for the arson, which destroyed four ski lifts, a restaurant, a picnic facility, and a utility building, according to the FBI. ELF issued a statement after the arson saying that the fire was set to protect the lynx, which was being reintroduced to the Rocky Mountains. “Vail, Inc. is already the largest ski operation in North America and now wants to expand even further...This action is just a warning. We will be back if this greedy corporation continues to trespass into wild and unroaded areas,” the statement said.

In 2002, ELF claimed credit for a fire that caused $800,000 in damage at the University of Minnesota Microbial and Plant Genomics Center. ELF targeted the genetic crop laboratory because of its efforts to “control and exploit” nature.

“Eco-terrorism” is the term used by the FBI to define illegal activities related to ecology and the environment. These activities involve the “use of criminal violence against innocent victims or property by an environmentally-oriented group.” Although ELF members are difficult to track, several arrests have been made. In February 2001, two teenage boys pleaded guilty to setting fires at a home construction site on Long Island, New York. In December of that year, a man was charged with spiking 150 trees in Indiana state forests.

In his congressional testimony, Jarboe said that cooperation among law enforcement agencies was essential to respond efficiently to eco-terrorism. As of 2002, the FBI has joint terrorism task forces in 44 cities, and the bureau plans to have task forces in all 56 of its field offices by the end of 2003. [Liz Swain]
RESOURCES

BOOKS
Alexander, Yonah, and Edgar H. Brenner. U.S. Federal Responses to Terrorism. Ardsley, NY: Transnational Publishers, 2002.
Newkirk, Ingrid, and Chrissie Hynde. Free the Animals: The Story of the Animal Liberation Front. New York, NY: Lantern Books, 2000.
PERIODICALS
Society of American Foresters. “Legislator Focuses on Ecoterrorism: McInnis Asks Environmental Groups to Denounce Violence.” The Forestry Source (January 2002).
ORGANIZATIONS
Federal Bureau of Investigation, 935 Pennsylvania Ave., Washington, DC USA, (202) 324-3000
North American Earth Liberation Front Press Office, Leslie James Pickering, P.O. Box 14098, Portland, OR USA 97293, (503) 804-4965, Email:
[email protected],
Earth Pledge Foundation

Created in 1991 by attorney Theodore W. Kheel, the Earth Pledge Foundation (EPF) is concerned with the impact of technology on society. Recognizing the often delicate balance between economic growth and environmental protection, EPF encourages the implementation of sustainable practices, especially in community development, tourism, cuisine, and architecture.

At the United Nations Earth Summit in Rio de Janeiro, the UN pledged its commitment to the principles of sustainable development—to foster “development meeting the needs of the present without compromising the ability of future generations to meet their own needs.” Created for the Summit in support of these principles, the Earth Pledge was prominently displayed throughout the event. Heads of state, ambassadors, delegates, and prominent dignitaries from around the world stood in line to sign their names on a large Earth Pledge board. Since the Summit, millions have taken the Earth Pledge: “Recognizing that people’s actions towards nature and each other are the source of growing damage to the environment and to resources needed to meet human needs and ensure survival, I pledge to act to the best of my ability to help make the Earth a secure and hospitable home for present and future generations.”

In early 1996, Earth Pledge created the Business Coalition for Sustainable Cities (BCSC) to influence the development of cities as centers of commerce, employment, recreation, and settlement. Chaired by William L. Lurie, former president of The Business Roundtable, the BCSC provides business leaders a forum to address issues of major importance to our cities in ways that ensure economic viability
while at the same time promoting respect for the environment. One event sponsored by the BCSC was a seven-course dinner prepared by 12 of the nation’s most environmentally conscious chefs to show that restaurants, one of the largest industries and employers, can practice the principles of sustainable cuisine. The BCSC hosted the event with the theme that good food can be well prepared without adversely impacting health, culture, or the environment. This theme was elaborated on in 2000, when the Sustainable Cuisine Project was established to develop and teach cooking classes. Earth Pledge has also created a web site that highlights local farmers of the New York region and gives consumers a direct link to fresh food news.

Earth Pledge has formed a number of alliances to further its goals with groups such as the Foundation for Prevention and Resolution of Conflict (PERC, founded by Theodore Kheel), the United Nations Environment Programme (UNEP), EarthKind International, and the New England Aquarium. As a joint project with the New England Aquarium, EPF sponsors a marine awareness project that educates people on the importance of coastlines and aquatic resources to the sustainable development of the world’s cities. The project emphasizes that many countries have water shortages due to inefficient use of their water supply, degradation of their water by pollution, and unsustainable use of groundwater resources. In late 1995, Earth Pledge co-sponsored the first Caribbean Conference on Sustainable Tourism with the UN Department for Policy Coordination and Sustainable Development, EarthKind International, and UNEP. The Conference brought together officials from government and business to discuss strategies for developing a healthy tourist economy, sound infrastructure, environmental protection, and community participation.

The foundation has constructed an environmentally sensitive building, Foundation House, to display its solutions for improving air quality and energy efficiency. Sustainable features include heating, cooling, and lighting systems that minimize consumption of fossil fuels; increased ventilation and use of natural daylight; and an auditorium for conferences with Internet access and a computer lab for training. Foundation House also houses exhibits, including one on enhancing the efficiency of the workplace for the benefit of the staff. Earth Pledge continues to develop smaller organizations and promote companies that participate in sustainable practices. [Nicole Beatty]
RESOURCES

ORGANIZATIONS
Earth Pledge Foundation, 122 East 38th Street, New York, NY USA 10016, (212) 725-6611, Fax: (212) 725-6774
Earth Summit see United Nations Earth Summit (1992)
Earthquake

Earthquakes have been around for as long as the planet has existed and have plagued humans throughout history. With no warning, major earthquakes strike populated areas of the world every year, killing hundreds, injuring thousands, and causing hundreds of millions of dollars in damage. Yet despite millions of dollars and decades of research, seismologists (scientists who study earthquakes) are still unable to predict precisely when and where an earthquake will happen.

An earthquake is a geological event in which rock masses below the surface of the earth suddenly shift, releasing energy and sending strong vibrations to the surface. Most earthquakes are caused by movement along a fault line, which is a fracture in the earth’s crust. Thousands of earthquakes happen each day around the world, but most are too small to be felt.

Earth is covered by a crust of rock that is broken into numerous plates. The plates ride on a layer of hot, slowly flowing rock within the earth called the mantle. The movement of this rock is thought to cause the shifting of the plates. When plates move, they either slide past, bump into, overrun, or pull away from each other. The movement of plates is called plate tectonics, and the boundaries between plates are marked by faults. Earthquakes can occur with any of these four types of movement along a fault. Earthquakes along the San Andreas and Hayward faults in California occur because two plates are sliding past one another. Earthquakes also occur if one plate overruns another; when this happens one plate is pushed under the other, as on the western coast of South America, the northwest coast of North America, and in Japan. If plates collide but neither is pushed downwards, as they do crossing Europe and Asia from Spain to Vietnam, earthquakes result as the plates are pushed into each other and forced upwards, creating high mountain ranges. Many faults at the floor of the ocean lie between two plates moving apart, and many earthquakes centered on the ocean floor are caused by this kind of movement.

The relative size of earthquakes is measured by the Richter scale, which reflects the energy an earthquake releases. Each whole number increase in value on the Richter
scale indicates a ten-fold increase in ground motion and roughly a thirty-fold increase in the energy released. An earthquake measuring 8 on the Richter scale therefore produces ten times more ground motion, and releases about thirty times more energy, than an earthquake with a Richter magnitude of 7. Another scale, called the Mercalli scale, uses observations of damage (such as fallen chimneys) or people’s assessments of effects (such as mild or severe ground shaking) to describe the intensity of a quake. The Richter scale is open-ended, while the Mercalli scale ranges from 1 to 12.

Catastrophic earthquakes happened just as often in past human history as they do today. Earthquakes shattered stone-walled cities in the ancient world, sometimes hastening the ends of civilizations. Earthquakes destroyed Knossos, Hattusas, and Mycenae, ancient cities located in tectonically active mountain ranges. Scribes documented earthquakes in the chronicles of ancient countries. An earthquake is recorded in the Bible in the Book of Zechariah, and the Book of Acts recounts that the Apostle Paul escaped from jail when an earthquake shook the building apart around him.

Many faults are located in California because two large plates are sliding past each other there. Of the 15 largest recorded earthquakes ever to hit the continental United States, eight have occurred in California, according to the United States Geological Survey (USGS). The San Francisco earthquake of 1906 is perhaps the most famous. It struck on April 18, 1906, killing an estimated 3,000 people, injuring thousands, and causing $524 million in property loss. Many of the casualties and much of the damage resulted from the ensuing fires. This earthquake registered a 7.7 magnitude on the Richter scale and 11 on the Mercalli scale.

Four other devastating earthquakes struck California in the twentieth century: 1933 in Long Beach, 1971 in the San Fernando Valley, 1989 in the San Francisco Bay area, and 1994 in Los Angeles. The Long Beach earthquake struck on March 10, 1933, killing 120, injuring hundreds, and causing more than $50 million in property damage. It led to the passage of the state’s Field Act, which established strict building code standards designed to make structures better able to withstand strong earthquakes. Centered about 30 mi (48 km) north of downtown Los Angeles, the San Fernando earthquake killed 65, injured more than 2,000, and caused an estimated $505 million in property damage. The quake hit on February 9, 1971, and registered 6.5 on the Richter scale and 11 on the Mercalli scale. Most of the deaths occurred when the Veterans Administration Hospital in San Fernando collapsed.

The Loma Prieta earthquake occurred on October 17, 1989, in the Santa Cruz Mountains about 62 mi (100 km) south of San Francisco. It killed 63, injured 3,757, and caused an estimated $6 billion in property damage, mostly
in San Francisco, Oakland, and Santa Cruz. The earthquake was a 6.9 on the Richter scale and 9 on the Mercalli scale. The Northridge earthquake that struck Los Angeles on January 17, 1994, killed 72, injured 11,800, and caused an estimated $40 billion in damage. It registered 6.7 on the Richter scale and 9 on the Mercalli scale and was centered about 30 mi (48 km) northwest of downtown Los Angeles.

In the past 100 years, Alaska has had many more severe earthquakes than California. However, they have occurred in mostly sparsely populated areas, so deaths, injuries, and damage have been light. Of the 15 strongest earthquakes ever recorded in the 50 states, 10 have been in Alaska, with the strongest registering a 9.2 (the second strongest ever recorded in the world) on the Richter scale and 12 on the Mercalli scale. It struck the Anchorage area on March 27, 1964, killing 125 (most from a tsunami, or tidal wave, caused by the earthquake), injuring hundreds, and causing $311 million in property damage. The strongest earthquake ever recorded in the world registered 9.5 on the Richter scale and 12 on the Mercalli scale. It occurred on May 22, 1960, and was centered off the coast of Chile. It killed 2,000, injured 3,000, and caused $675 million in property damage. A resulting tsunami caused death, injuries, and significant property damage in Hawaii, Japan, and along the West Coast of the United States.

Every major earthquake raises the question of whether scientists will ever be able to predict exactly when and where one will strike. Today, scientists can only make broad predictions. For example, scientists believe there is at least a 50% chance that a devastating earthquake will strike somewhere along the San Andreas fault within the next 100 years. A more precise prediction is not yet possible. However, scientists in the United States and Japan are working on ways to make predictions more specific. Ultrasensitive instruments placed across faults at the surface can measure the slow, almost imperceptible movement of fault blocks, an indication of the great amount of potential energy stored at the fault boundary. In some areas, small earthquakes called foreshocks precede a larger earthquake and may help seismologists predict it. In other areas, where seismologists believe earthquakes should be occurring but are not, the discrepancy between what is expected and what is observed may be used to predict an inevitable large-scale earthquake.

Other instruments measure additional fault-zone phenomena that seem to be related to earthquakes. The rate at which radon gas issues from rocks near faults has been observed to change before an earthquake. The properties of the rocks themselves (such as their ability to conduct electricity) have been observed to change as the tectonic force exerted on them slowly alters the rocks of the fault zone between earthquakes. Unusual animal behavior has
been reported before many earthquakes, and research into this phenomenon is a legitimate area of scientific inquiry, even though no definite answers have been found. Techniques of studying earthquakes from space are also being explored. Scientists have found that ground displacements cause waves in the air that travel into the ionosphere and disturb electron densities. By using the network of satellites and ground stations that are part of the Global Positioning System (GPS), and data about the ionosphere that is already being collected, scientists may better understand the energy released from earthquakes. This may help scientists to predict them. [Ken R. Wells]
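The logarithmic arithmetic of the Richter scale described above is easy to check directly. The short sketch below is illustrative only and is not part of the original entry; it assumes the conventional relations that measured ground-motion amplitude grows as 10^M and radiated energy as roughly 10^(1.5M), and the function names are invented for the example.

```python
# Compare two earthquakes on the Richter scale (illustrative sketch).
# Amplitude (ground motion) scales as 10^M; energy as ~10^(1.5*M).

def amplitude_ratio(m1: float, m2: float) -> float:
    """How many times larger the ground motion of a magnitude-m1 quake is vs. m2."""
    return 10 ** (m1 - m2)

def energy_ratio(m1: float, m2: float) -> float:
    """Approximately how many times more energy a magnitude-m1 quake releases vs. m2."""
    return 10 ** (1.5 * (m1 - m2))

print(amplitude_ratio(8, 7))         # 10.0 -- ten-fold ground motion
print(round(energy_ratio(8, 7), 1))  # 31.6 -- roughly thirty-fold energy
```

Under these assumptions, one whole step on the scale multiplies ground motion by exactly 10 and energy by about 31.6, which is the "roughly thirty-fold" figure quoted above.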
RESOURCES

BOOKS
Henyey, Tom. Natural Disasters: Earthquakes: A Reference Handbook. Santa Barbara, CA: ABC-CLIO, 2002.
Hough, Susan Elizabeth. Earthshaking Science: What We Know, and Don’t Know, About Earthquakes. Princeton, NJ: Princeton University Press, 2002.
Nicolson, Cynthia Pratt. Earthquake. Tonawanda, NY: Kids Can Press, 2002.
PERIODICALS
Chan-Kai, Alex. “Skate Disaster.” Stone Soup (July 2001): 26.
Johnson, Rita. “Whole Lotta Shakin’ Goin’ On!” Boys’ Life (Dec. 2001): 7–8.
Matty, Jane M. “Recent Quakes.” Rocks & Minerals (March 2000): 90.
Middleton, Nick. “Managing Earthquake Hazards in Los Angeles.” Geography Review (May 2001): 22.
Nur, Amos. “And the Walls Came Tumbling Down.” New Scientist (July 6, 1991): 45–49.
Thompson, Dick. “Can We Save California? Predicting Earthquakes Is One Thing; Preventing Them Would Be Something Else.” Time (April 10, 2000): 104+.
ORGANIZATIONS
National Earthquake Information Center, P.O. Box 25046, DFC, MS 967, Denver, CO USA 80225, (303) 273-8500, Fax: (303) 273-8450, Email:
[email protected],
Earthwatch

Earthwatch is a non-profit institution that provides paying volunteers to help scientists around the world conduct field research on environmental and cultural projects. It is one of the world’s largest private sponsors of field research expeditions. Its mission is “to improve human understanding of the planet, the diversity of its inhabitants, and the processes which affect the quality of life on earth” by working “to sustain the world’s environment, monitor global change, conserve endangered habitats and species, explore the vast heritage of our peoples, and foster world health and international cooperation.”
The group carries out its work by recruiting volunteers to serve in an environmental EarthCorps and to work with research scientists on important environmental issues. The volunteers, who pay from $800 to over $2,500 to join two- or three-week expeditions to the far corners of the globe, gain valuable experience and knowledge of situations that affect the earth and human welfare.

By 2002, Earthwatch had sponsored over 1,180 projects in 50 countries around the world. By the end of the year it expects to have mobilized 4,300 volunteers, ranging in age from 16 to 85, on 780 research teams. They will address such topics as tropical rain forest ecology and conservation; marine studies (ocean ecology); geosciences (climatology, geology, oceanography, glaciology, volcanology, paleontology); life sciences (wildlife management, biology, botany, ichthyology, herpetology, mammalogy, ornithology, primatology, zoology); social sciences (agriculture, economic anthropology, development studies, nutrition, public health); and art and archaeology (architecture, archaeoastronomy, ethnomusicology, folklore).

Since it was founded in 1971, Earthwatch has organized over 60,000 EarthCorps volunteers, who have contributed over $22 million and more than four million hours to some 1,500 projects in 150 countries and 36 states. No special skills are needed to be part of an expedition, and anyone 16 years or older can apply. Scholarships for students and teachers are also available. Earthwatch’s affiliate, The Center for Field Research, receives several hundred grant applications and proposals every year from scientists and scholars who need volunteers to assist them on study expeditions.

Earthwatch publishes Earthwatch magazine six times a year, describing its research work in progress and the findings of previous expeditions. The group has offices in Los Angeles, Oxford, Melbourne, Moscow, and Tokyo and is represented in all 50 American states as well as in Germany, Holland, Italy, Spain, and Switzerland by volunteer field representatives. [Lewis G. Regenstein]
RESOURCES

ORGANIZATIONS
Earthwatch, 3 Clock Tower Place, Suite 100, Box 75, Maynard, MA USA 01754, (978) 461-0081, Fax: (978) 461-2332, Toll Free: (800) 776-0188, Email:
[email protected],
Eastern European pollution

Between 1987 and 1992, the disintegration of the Communist governments of Eastern Europe allowed the people and press of countries from the Baltic to the Black Sea to begin
recounting tales of life-threatening pollution and the disastrous environmental conditions in which they lived. Villages in Czechoslovakia were black and barren because of acid rain, smoke, and coal dust from nearby factories. Drinking water from Estonia to Bulgaria was tainted with toxic chemicals and untreated sewage. Polish garden vegetables were inedible because of high lead and cadmium levels in the soil. Chronic health problems were endemic to much of the region, and none of the region’s new governments had the spare cash necessary to alleviate their environmental liabilities.

The air, soil, and water pollution exposed by new environmental organizations and by a newly vocal press had its roots in Soviet-led efforts to modernize and industrialize Eastern Europe after 1945. (Often the term “Central Europe” is used to refer to Poland, the Czech Republic, Slovakia, Hungary, Yugoslavia, and Bulgaria, and “Eastern Europe” to refer to the Baltic states, Belarus, and Ukraine. For the sake of simplicity, this essay uses the latter term for all these states.) Following the Stalinist theory that modernization meant industry, especially heavy industries such as coal mining, steel production, and chemical manufacturing, Eastern European leaders invested heavily in industrial buildup. Factories were often built in resource-poor areas, as in traditionally agricultural Hungary and Romania, and they rarely had efficient or clean technology. Production quotas generally took precedence over health and environmental considerations, and billowing smokestacks were considered symbols of national progress. Emission controls on smokestacks and waste effluent pipes were, and are, rare. Soft, brown lignite coal, cheap and locally available, was the main fuel source. Lignite contains up to 5% sulfur and produces high levels of sulfur dioxide, nitrogen oxides, particulates, and other pollutants that contaminate air and soil in population centers, where many factories and power plants were built. The region’s water quality also suffers, with careless disposal of toxic industrial wastes, untreated urban waste, and runoff from chemical-intensive agriculture.

By the 1980s the effects of heavy industrialization had begun to show. Dependence on lignite coal led to sulfur dioxide levels in Czechoslovakia and Poland eight times greater than those of Western Europe. The industrial triangle of Bohemia and Silesia had Europe’s highest concentrations of ground-level ozone, which harms human health and crops. Acid rain, a result of industrial air pollution, had destroyed or damaged half of the forests in the former East Germany and the Czech Republic. Cities were threatened by outdated factory equipment and aging chemical storage containers and pipelines, which leaked chlorine, aldehydes, and other noxious gases. People in cities and villages experienced alarming numbers of birth defects and short life expectancies. Economic losses, from health care
expenses, lost labor, and production inefficiency, further handicapped hard-pressed Eastern European governments.

Popular protests against environmental conditions crystallized many of the movements that overturned Eastern and Central European governments. In Latvia, exposés on petrochemical poisoning and on the environmental consequences of a hydroelectric project on the Daugava River sparked the Latvian Popular Front’s successful fight for independence. Massive campaigns against a proposed dam on the Danube River helped ignite Hungary’s political opposition in 1989. In the same year, Bulgaria’s Ecoglasnost group held Sofia’s first non-government rally since 1945. The Polish Ecological Club, the first independent environmental organization in Eastern Europe, assisted the Solidarity movement in overturning the Polish government in the mid-1980s.

Citizens of these countries rallied around environmental issues because they had first-hand experience with the consequences of pollution. In Espenhain, in the former East Germany, 80% of children developed chronic bronchitis or heart ailments before they were eight years old. Studies showed that up to 30% of Latvian children born in 1988 may have suffered from birth defects, and both children and adults showed unusually high rates of cancer, leukemia, skin diseases, bronchitis, and asthma. Czech children in industrial regions had acute respiratory diseases, weakened immune systems, and retarded bone development, and concentrations of lead and cadmium were found in children’s hair. In the industrial regions of Bulgaria skin diseases were seven times more common than in cleaner areas, and cases of rickets and liver diseases were four times as common. Much of the air and soil contamination that produced these symptoms remains today and continues to generate health problems.

Water pollution is at least as threatening as air and soil pollution. Many cities and factories in the region have no facilities for treating wastewater and sewage, and existing treatment facilities are usually inadequate or ineffective. Toxic waste dumps containing old and rusting barrels of hazardous materials are often unmonitored or unidentified. Chemical leaching from poorly monitored waste sites threatens both surface water and groundwater, and water clean enough to drink has become a rare commodity. In Poland untreated sewage, mine drainage, and factory effluents make 95% of water unsafe for drinking. At least half of Polish rivers are too polluted, by government assessment, even for industrial use. According to government officials, 70% of all rivers in the industrial Czech region of Bohemia are heavily polluted, 40% of wastewater goes untreated, and nearly a third of the rivers have no fish. In Latvia’s port town of Ventspils, heavy oil lies up to 3 ft (1 m) thick on
the river bottom. Phenol levels in the nearby Venta River exceed official limits by 800%.

Few pollution problems are geographically restricted to the country in which they were generated. Shared rivers and aquifers and regional weather patterns carry both airborne and water-borne pollutants from one country to another. The Chernobyl nuclear reactor disaster, which spread radioactive gases and particulates from Ukraine and Belarus across northern Europe and the Baltic Sea to northern Norway and Sweden, is one infamous example of trans-border pollution, but other examples are common. The town of Ruse, Bulgaria, has long been contaminated by chlorine gas emissions from a Romanian plant just across the Danube. Protests against this poisoning have unsettled Bulgarian and Romanian relations since 1987. Toxic wastes flowing into the Baltic Sea from Poland’s Vistula River continue to endanger fisheries and shoreline habitats in Sweden, Germany, and Finland.

The Danube River is a particularly critical case. Accumulating and concentrating urban and industrial waste from Vienna to the Black Sea, this river supports the industrial complexes of Austria, Slovakia, Hungary, Croatia, Serbia, Bulgaria, and Romania. Before the Danube leaves Budapest, it is considered unsafe for swimming. Like other rivers, the Danube flows through a series of industrial cities and mining regions, uniting the pollution problems of several countries. Each city and farm along the way uses the contaminated water and contributes some pollutants of its own. Also like other rivers, the Danube carries its toxic load into the sea, endangering the marine environment.

Western countries from Sweden to the United States have their share of pollution and environmental disasters. The Rhine and the Elbe have had disastrous chemical spills like those on the Danube and the Vistula. Like the recent communist regimes, many western business leaders would prefer to disregard environmental and human health considerations in their pursuit of production goals. Yet several factors set apart environmental conditions in Eastern Europe. Aside from its aged and outdated equipment and infrastructure, Eastern Europe is handicapped by its compressed geography, intense urbanization near factories, a long-standing lack of information and accurate records on environmental and health conditions, and severe shortages of clean-up funds, especially hard currency.

Eastern Europe’s dense settlement crowds all the industrial regions of the Baltic states, Poland, the Czech and Slovak republics, and Hungary into an area considerably smaller than Texas but with a much higher population. This industrial zone lies adjacent to the crowded manufacturing regions of Western Europe. In this compact region, people farm the same fields and live on the same mountains that are stripped for mineral extraction. Cities and farms rely on aquifers and rivers that receive factory effluent and pesticide
runoff immediately upstream. Furthermore, post-1945 industrialization gathered large labor forces into factory towns more quickly than adequate infrastructure could be built. Expanding urban populations had little protection from the unfiltered pollutants of nearby furnaces.

At the same time that many Eastern Europeans were eyewitnesses to environmental transgressions, little public discussion of the problem was possible. Official media disliked publicizing health risks or the destruction of forests, rivers, and lakes. Those statistics that existed were often unreliable: air and water quality data were collected and reported by industrial and government officials, who could not afford bad test results.

Now that environmental conditions are being exposed, cleanup efforts remain hampered by a shortage of funding. Poland’s long-term environmental restoration may cost $260 billion, or nearly eight times the country’s annual GNP in the mid-1980s. Efforts to cut just sulfur dioxide emissions to Western standards would cost Poland about $2.4 billion a year. Hungary, with a mid-1980s GNP of $25 billion, could begin collecting and treating its sewage for about $5 billion. Cleanup in the port of Ventspils, Latvia, is expected to cost 3.6 billion rubles and $1.5 billion in hard currency. East German air, soil, and water remediation gets a boost from western neighbors, but the bill is expected to run between $40 and $150 billion. Ironically, East European leaders see little choice for raising this money aside from expanded industrial production. Meanwhile, business leaders urge production expansion for other capital needs.

Some Western investment in cleanup work has begun, especially on the part of such countries as Sweden and Germany, which share rivers and seas with polluting neighbors. Already in 1989 Sweden had begun work on water quality monitoring stations along Poland’s Vistula River, which carries pollutants into the Baltic Sea. The capital necessary to purchase mitigation equipment, improve factory conditions, rebuild rusty infrastructure, and train environmental experts will probably be severely limited for decades to come, however.

Meanwhile, western investors are flocking to Eastern and Central Europe in hopes of building or rebuilding business ventures for their own gain. The region is seen as one of quick growth and great potential. Manufacturers in heavy and light industries, automobiles, power plants, and home appliances are coming from Western Europe, North America, and Asia. From textile manufacturing to agribusiness, outside investors hope to reshape Eastern economies. Many Western companies are improving and updating equipment and adding pollution control devices. In a climate of uncertain regulation and rushed economic growth, however, no one knows if the region’s new governments will be able or willing to enforce environmental safeguards or if
the new investors will take advantage of weak regulations and poor enforcement as did their predecessors. [Mary Ann Cunningham Ph.D.]
RESOURCES

BOOKS
French, H. F. “Restoring the Eastern European and Soviet Environments.” In State of the World 1991. New York: Norton, 1991.
Feshbach, M., and A. Friendly, Jr. Ecocide in the USSR. New York: Basic Books, 1992.
PERIODICALS
Hartsock, J. “Latvia’s Toxic Legacy.” Audubon 94 (1992): 27–28.
Wallich, P. “Dark Days: Eastern Europe Brings to Mind the West’s Polluted History.” Scientific American 263 (1990): 16, 20.
Ebola

Ebola is a highly deadly viral hemorrhagic disease. As the disease progresses, the walls of blood vessels break down and blood gushes from every tissue and organ. The disease is caused by the Ebola virus, named after the river in Zaire (now the Democratic Republic of Congo) where the first known outbreak occurred. The disease is extremely contagious and exceptionally lethal. Whereas a 10% mortality rate is considered high for most infectious diseases, Ebola can kill up to 90% of its victims, usually within only a few days after exposure. It seems to take direct contact with contaminated blood or bodily fluids to catch the disease. Health personnel and caregivers are often the most likely to be infected; even after a patient has died, preparing the body for a funeral can be deadly for family members.

The Ebola virus is one of two members of a family of RNA viruses called the Filoviridae. The other filovirus causes Marburg fever, an equally contagious and lethal hemorrhagic disease, named after a German town where it was first contracted by laboratory workers who handled imported monkeys infected with the virus. Together with members of three other families (arenaviruses, bunyaviruses, and flaviviruses), these viruses cause a group of deadly, episodic diseases including Lassa fever, Rift Valley fever, Bolivian fever, and Hanta or Four-Corners fever (named after the region of the southwestern United States where it was first reported). The viruses associated with most of these emergent hemorrhagic fevers are zoonotic; that is, a reservoir of pathogens naturally resides in an animal host or arthropod vector. We don’t know the specific host or vector for Ebola, but monkeys and other primates can contract related diseases. People who initially become infected with Ebola often
have been involved in killing, butchering, and eating gorillas, chimps, or other primates. Why the viruses remain peacefully in their hosts for many years without causing much more trouble than a common cold, but then erupt sporadically and unpredictably into terrible human epidemics, is a new and growing question in environmental health.

The geographical origin of Ebola is unknown, but all recorded outbreaks have occurred in or around Central Africa, or in animals or people from this area. Ebola appears every few years in Africa. Confirmed cases have occurred in the Democratic Republic of the Congo, Gabon, Sudan, Uganda, and the Ivory Coast. No case of the disease in humans has ever been reported in the United States, but a variant called Ebola-Reston virus killed a number of monkeys in a research facility in Reston, Virginia; that outbreak was chronicled in the best-selling book The Hot Zone. There probably are isolated cases in remote areas that go unnoticed. In fact, the disease may have been occurring in secluded villages deep in the jungle for a long time without outside attention. The most recent Ebola outbreak was in 2002, when about 100 people died in a remote part of Gabon and an adjacent area of Congo.

The worst epidemic of Ebola in humans occurred in 1995, in Kikwit, Zaire (now the Democratic Republic of Congo). Although many more people died in Kikwit than in any other outbreak, in many ways the medical and social effects of the epidemic there were typical of what happens elsewhere. The first Kikwit victim was a 36-year-old laboratory technician named Kimfumu, who checked into a medical clinic complaining of a severe headache, stomach pains, fever, dizziness, weakness, and exhaustion. Surgeons did an exploratory operation to try to find the cause of his illness. To their horror, they found his entire gastrointestinal tract was necrotic and putrefying. He bled uncontrollably, and within hours was dead. By the next day, the five medical workers who had cared for Kimfumu, including an Italian nun who assisted in the operation, began to show similar symptoms, including high fevers, fatigue, bloody diarrhea, rashes, red and itchy eyes, vomiting, and bleeding from every body orifice. Less than 48 hours later, they, too, were dead, and the disease was spreading throughout the city of 600,000.

As panicked residents fled into the bush, government officials responded to calls for help by closing off all travel—including humanitarian aid—into or out of Kikwit, about 250 mi (400 km) from Kinshasa, the national capital. Fearful neighboring villages felled trees across the roads to seal off the pestilent city. No one dared enter houses where corpses rotted in the intense tropical heat. Boats plying the adjacent Kwilu River refused to stop to take on or discharge passengers or cargo. Food and clean water became scarce, and hospitals could hardly function as medicines and medical personnel ran short. Within a few weeks, about 400
An electron micrograph of the Ebola virus and the hantavirus. (Delmar Publishers Inc. Reproduced by permission.)
people in Kikwit had contracted the disease and at least 350 were dead. Eventually, the epidemic dissipated and disappeared. It isn’t known why the infection rate dropped or what residents might do to prevent a reappearance of the terrible disease.

Because health professionals are among the most likely to be exposed to Ebola when an outbreak occurs, it is important for them to have access to rapid antigen or antibody assays and isolation facilities to prevent further spread of the virus. Unfortunately, these advanced medical procedures generally are lacking in the African hospitals where the disease is most likely to occur. There is no standard treatment for Ebola other than supportive therapy. Patients are given replacement fluids and electrolytes, and oxygen levels and blood pressure are stabilized as much as possible. During the Kikwit outbreak, eight patients were given the blood of individuals who had been infected with the virus but who had recovered, in the hope that their blood might contain antibodies to fight the infection. Seven of the eight transfusion patients survived, but the number tested is too small to be sure this result was statistically significant. There is no vaccine or other antiviral drug available to prevent or halt an infection.

Several factors seem to be contributing to the appearance and spread of highly contagious diseases such as Ebola and Marburg fevers. With 6 billion people now inhabiting the planet, human densities are much higher, enabling germs to spread further and faster than ever before. Expanding populations push people into remote areas where they encounter new pathogens and parasites. Environmental change is occurring on a larger scale: cutting forests, creating unhealthy urban surroundings, and causing global climate change, among other things. Elimination of predators and
habitat changes favor disease-carrying organisms such as
mice, rats, cockroaches, and mosquitoes. Another important factor in the spread of many diseases is the speed and frequency of modern travel. Millions of people go every day from one place to another by airplane, boat, train, or automobile. Very few places on earth are more than 24 hours by jet plane from any other place. In 2001, a woman flying from the Congo arrived in Canada delirious with a high fever. She didn’t, in fact, have Ebola, but Canadian officials were concerned about the potential spread of the disease. Finding ways to cure Ebola and prevent its spread may be more than simply a humanitarian concern for its victims in Central Africa. It might be very much in our own self-interest to make sure that this terrible disease doesn’t cross our borders either accidentally or intentionally through actions of a terrorist organization. [William P. Cunningham Ph.D.]
RESOURCES

BOOKS
Close, William T. Ebola: Through the Eyes of the People. London: Meadowlark Springs Productions, 2001.
Drexler, Madeline. Secret Agents: The Menace of Emerging Infections. Joseph Henry Press, 2002.
Preston, Richard. The Hot Zone. Anchor Books, 1995.
OTHER
“Disease Information Fact Sheets: Ebola Hemorrhagic Fever.” Centers for Disease Control and Prevention, 2002 [cited July 9, 2002].
PERIODICALS
Daszak, P., et al. “Emerging Infectious Diseases of Wildlife—Threats to Biodiversity and Human Health.” Science 287 (2000): 443–449.
Hughes, J. M. “Emerging Infectious Diseases: A CDC Perspective.” Emerging Infectious Diseases 7 (2001): 494–496.
Osterholm, M. T. “Emerging Infections—Another Warning.” The New England Journal of Medicine 342 (17): 4–5.
Eco Mark

The Japanese environmental label known as “Eco Mark” is a relatively new addition to a worldwide effort to designate products that are environmentally friendly. The Eco Mark program was launched in February 1989. The symbol is two arms embracing the world, symbolizing the protection of the earth; the arms create the letter “e” with the earth in the center. Reflecting the status of English as an international language, the Japanese use “e” to stand for environment, earth, and ecology.

The Japanese program is entirely government funded, although a small fee is charged to applicant industries. The annual fee is based on the retail price of a product, not
annual product sales, as is the case for other national green labeling programs. Products ranging in price from $0–7 are charged an annual fee of $278; from $7–70, an annual fee of $417; from $70–700, an annual fee of $556; and products priced over $700 are charged an annual fee of $700. Not surprisingly, products that are low in price and sell in high volume are the most likely to apply for the Eco Mark label.

The Eco Mark program seeks to sanction products with the following four qualities: 1) minimal environmental impact from use; 2) significant potential for improvement of the environment by using the product; 3) minimal environmental impact from disposal after use; and 4) other significant contributions to improve the environment. In addition, labeled products must comply with the following guidelines: 1) appropriate environmental pollution control measures are provided at the stage of production; 2) ease of treatment for disposal of the product; 3) energy or resources are conserved with use of the product; 4) compliance with laws, standards, and regulations pertaining to quality and safety; and 5) a price that is not extraordinarily higher than that of comparable products.

The Japan Environment Association, supervised by the Japanese Environment Agency, is in charge of the Eco Mark program. All technical, research, and administrative support is provided by the government. The labeling program is guided by two committees. The Eco Mark Promotion Committee acts primarily in a supervisory capacity, approving the guidelines for the program’s operation and advising on operations, including evaluation of the program categories and criteria. The Promotion Committee consists of nine members representing industry, marketing groups, local governments, environmental agencies, and the National Institute for Environmental Studies. In addition to the Promotion Committee there is a committee for approval of products. This Approval Committee consists of five members with representation from the science community, the consumer protection community, and, as in the Promotion Committee, a representative each from the Environment Agency and the National Institute for Environmental Studies.

The Japanese program is completely voluntary for manufacturers. Once a product is approved by the Approval Committee, a two-year renewable licensing contract for the use of the Eco Mark is signed with the Japan Environment Association.

The Eco Mark program is very goal-oriented and places great emphasis on overall environmental impact. The attention to production impacts, as well as use and disposal impacts, makes the program unique within the family of green labeling programs worldwide. Its primary goals are to encourage innovation by industry and to elevate the environmental
awareness and consumer behavior of the Japanese people in order to enhance environmental quality. Japan’s Environment Agency claims that responses from consumer and environmental organizations have been positive, while industry has been less than enthusiastic. In fact, the Eco Mark covered only seven products in 1989 and now covers over 5,000. Some scientists have voiced concern over the superficiality of the analysis procedure used to determine Eco Mark products. Despite such criticisms, however, the Japanese Eco Mark program is a strong national effort to encourage environmentally sound decisions and protect the environment for future generations in that country. See also Environmental policy; Green packaging; Green products; Precycling; Recycling; Reuse; Waste reduction [Cynthia Fridgen]
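The tiered fee schedule described earlier in this entry amounts to a simple lookup by retail price. The sketch below is illustrative only and not part of the original entry; the function name is invented, the fees are those reported above in U.S. dollars, and the handling of prices that fall exactly on a tier boundary is an assumption, since the published ranges overlap at $7, $70, and $700.

```python
# Illustrative sketch of the Eco Mark annual licensing fee tiers described
# above. Boundary handling (<=) is assumed; the source ranges overlap.

def eco_mark_annual_fee(retail_price_usd: float) -> int:
    """Return the annual Eco Mark licensing fee (USD) for a retail price."""
    if retail_price_usd <= 7:
        return 278
    elif retail_price_usd <= 70:
        return 417
    elif retail_price_usd <= 700:
        return 556
    else:
        return 700

print(eco_mark_annual_fee(5))     # 278 -- low-priced, high-volume goods
print(eco_mark_annual_fee(1200))  # 700 -- top tier
```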
FURTHER READING
Salzman, J. Environmental Labeling in OECD Countries. Paris, France: OECD Technology and Environmental Program, 1991.
Ecoanarchism see Ecoterrorism
Ecocide

The destruction of an ecological system, as by a substance that enters the system, spreads throughout it, and kills enough of its members to disrupt its structure and function. For example, on July 14, 1991, a freight train carrying the pesticide metam sodium fell off a bridge near Dunsmuir, California, spilling its contents into the Sacramento River. When mixed with water this pesticide becomes highly poisonous, and all animal life for some distance downstream of the spill site was killed.
Ecofeminism Coined in 1974 by the French feminist Françoise d’Eaubonne, ecofeminism, or ecological feminism, is a recent movement that asserts that the environment is a feminist issue and that feminism is an environmental issue. The term ecofeminism has come to describe two related movements operating at somewhat different levels: (1) the grassroots, women-initiated activism aimed at eliminating the oppression of women and nature; and (2) a newly emerging branch of philosophy that takes as its subject matter the foundational questions of meaning and justification in feminism and environmental ethics. The latter, more properly termed ecofeminist philosophy, stands in relation to the former as
theory stands to practice. Though closely related, there nevertheless remain important methodological and conceptual distinctions between action- and theory-oriented ecofeminism. The ecofeminist movement developed from diverse beginnings, nurtured by the ideas and writings of a number of feminist thinkers, including Susan Griffin, Carolyn Merchant, Rosemary Radford Ruether, Ynestra King, Ariel Salleh, and Vandana Shiva. The many varieties of feminism (liberal, marxist, radical, socialist, etc.) have spawned as many varieties of ecofeminism, but they share a common ground. As described by Karen Warren, a leading ecofeminist philosopher, ecofeminists believe that there are important connections—historical, experiential, symbolic, and theoretical—between the domination of women and the domination of nature. In the broadest sense, then, ecofeminism is a distinct social movement that blends theory and practice to reveal and eliminate the causes of the dominations of women and of nature. While ecofeminism seeks to end all forms of oppression, including racism, classism, and the abuse of nature, its focus is on gender bias, which ecofeminists claim has dominated western culture and led to a patriarchal, masculine value-oriented hierarchy. This framework is a socially constructed mindset that shapes our beliefs, attitudes, values, and assumptions about ourselves and the natural world. Central to this patriarchal framework is a pattern of thinking that generates normative dualisms. These are created when paired complementary concepts such as male/female, mind/body, culture/nature, and reason/emotion are seen as mutually exclusive and oppositional. As a result of socially entrenched gender bias, the more “masculine” member of each dualistic pair is identified as the superior one. Thus, a value hierarchy is constructed which ranks the masculine characteristics above the feminine (e.g., culture above nature, man above woman, reason above emotion). When paired with what Warren calls a “logic of domination,” this value hierarchy enables people to justify the subordination of certain groups on the grounds that they lack the “superior” or more “valuable” characteristics of the dominant groups. Thus, men dominate women, humans dominate nature, and reason is superior to emotion. Within this patriarchal conceptual framework, subordination is legitimized as the necessary oppression of the inferior. Until we reconceptualize ourselves and our relation to nature in non-patriarchal ways, ecofeminists maintain, the continued dual denigration of women and nature is assured. Val Plumwood, an Australian ecofeminist philosopher, has traced the roots of the development of the oppression of women and the exploitation of nature to three points, the first two sharing historical origins, the third having its genesis in human psychology. In the first of these histori-
cal women-nature connections, dualism has identified higher and lower “halves.” The lower halves, seen as possessing less or no intrinsic value relative to their polar opposites, are instrumentalized and subjugated to serve the needs of the members of the “higher” groups. Thus, due to their historical association and supposedly shared traits, women and nature have been systematically devalued and exploited to serve the needs of men and culture. The second of these historical women-nature connections is said to have originated with the rise of mechanistic science before and during the Enlightenment period. According to some ecofeminists, dualism was not necessarily negative or hierarchical; however, the rise of modern science and technology, reflecting the transition from an organic to a mechanical view of nature, gave credence to a new logic of domination. Rationality and scientific method became the only socially sanctioned path to true knowledge, and individual needs gained primacy over community. On this fertile soil were sown the seeds for an ethic of exploitation. A third representation of the connections between women and nature has its roots in human psychology. According to this account, the features of masculine consciousness which allow men to objectify and dominate are the result of sexually differentiated personality development. As a result of women’s roles in both creating and maintaining/nurturing life, women develop “softer” ego boundaries than do men, and thus they generally maintain their connectedness to other humans and to nature, a connection which is reaffirmed and recreated generationally. Men, on the other hand, psychologically separate both from their human mothers and from Mother Earth, a process which results in their desire to subdue both women and nature in a quest for individual potency and transcendence. Thus, sex differences in the development of self/other identity in childhood are said to account for women’s connectedness with, and men’s alienation from, both humanity and nature. Ecofeminism has attracted criticism on a number of points. One is the implicit assumption in certain ecofeminist writings that there is some connection between women and nature that men either do not possess or cannot experience. And why female activities such as birth and childcare should be construed as more “natural” than some traditional male activities remains to be demonstrated. This assumption, though, has left some ecofeminists open to charges of having constructed a new value hierarchy to replace the old, rather than having abandoned hierarchical conceptual frameworks altogether. Hints of hierarchical thinking can be found in such ecofeminist practices as goddess worship and in the writings of some radical ecofeminists who advocate the abandonment of reason altogether in the search for an appropriate human-nature relationship. Rather than having destroyed gender bias, some ecofeminists are accused of merely at-
tempting to reverse its polarity, possibly creating new, subtle forms of women’s oppression. Additionally, some would argue that ecofeminism runs the risk of oversimplification in suggesting that all struggles between dominator and oppressed are one and the same and thus can be won through unity. A lively debate is currently underway concerning the compatibility of ecofeminism with other major theories or schools of thought in environmental philosophy. For instance, discussions of the similarities and differences between ecofeminism and deep ecology occupy a large portion of the recent theoretical literature on ecofeminism. While deep ecologists are primarily concerned with anthropocentrism as the primary cause of our destruction of nature, ecofeminists point instead to androcentrism as the key problem in this regard. Nevertheless, both groups aim for the expansion of the concept of “self” to include the natural world, for the establishment of a biocentric egalitarianism, and for the creation of connection, wholeness, and empathy with nature. Given the newness of ecofeminism as a theoretical discipline, it is no surprise that the nature of ecofeminist ethics is still emerging. A number of different feminist-inspired positions are gaining prominence, including feminist animal rights, feminist environmental ethics based on caregiving, feminist social ecology, and feminist bioregionalism. Despite the apparent lack of a unified and overarching environmental philosophy, all forms of ecofeminism do share a commitment to developing ethics which do not sanction or encourage either the domination of any group of humans or the abuse of nature. Already, ecofeminism has shown us that issues in environmental ethics and philosophy cannot be meaningfully or adequately discussed apart from considerations of social domination and control. If ecofeminists are correct, then a fundamental reconstruction of the value and structural relations of our society, as well as a reexamination of the underlying assumptions and attitudes, is necessary. [Ann S. Causey]
RESOURCES BOOKS Des Jardins, J. Environmental Ethics: An Introduction to Environmental Philosophy. Belmont, CA: Wadsworth, 1993. Griffin, S. Woman and Nature: The Roaring Inside Her. New York: Harper & Row, 1978.
PERIODICALS Adams, C., and K. Warren. “Feminism and the Environment: A Selected Bibliography.” APA Newsletter on Feminism and Philosophy (Fall 1991). Vance, Linda. “Remapping the Terrain: Books on Ecofeminism.” Choice 30 (June 1993): 1585-93.
Ecojustice The concept of ecojustice has at least two different usages among environmentalists. The first refers to a general set of attitudes about justice and the environment, at the center of which is dissatisfaction with traditional theories of justice. With few exceptions (notably a degree of concern about excessive cruelty to animals), anthropocentric and egocentric Western moral and ethical systems have been unconcerned with individual plants and animals, species, oceans, wilderness areas, and other parts of the biosphere, except as they may be used by humans. In general, that which is nonhuman is viewed mainly as raw material for human uses, largely or completely without moral standing. Relying upon holistic principles of biocentrism and deep ecology, the “ecojustice” alternative suggests that the value of non-human life-forms is independent of the usefulness of the non-human world for human purposes. Antecedents of this view can be found in sources as diverse as Eastern philosophy, Aldo Leopold’s “land ethic,” Albert Schweitzer’s “reverence for life,” and Martin Heidegger’s injunction to “let beings be.” The central idea of ecojustice is that the categories of ethical and moral reflection relevant to justice should be expanded to encompass nature itself and its constituent parts, and that human beings have an obligation to take the inherent value of other living things into consideration whenever these living things are affected by human actions. Some advocates of an ecojustice perspective base standards of just treatment on the evident capacity of many life-forms to experience pain. Others assert the equal inherent worth of all individual life-forms. More typically, environmental ethicists assert that all life-forms have at least some inherent worth, and thus deserve moral consideration, although perhaps not the same worth. The practical goals associated with ecojustice include the fostering of stability and diversity within and between self-sustaining ecosystems, harmony and balance in nature and within competitive biological systems, and sustainable development. Ecojustice can also refer simply to the linking of environmental concerns with various social justice issues. The advocate of ecojustice typically strives to understand how the logic of a given economic system results in certain groups or classes of people bearing the brunt of environmental degradation. This entails, for example, concern with the frequent location of polluting industries and hazardous waste dumps near the economically disadvantaged (i.e., those with the least mobility and fewest resources to resist). In much the same way, ecojustice also involves the fostering of sustainable development in less-developed areas of the globe, so that economic development does not mean the export of polluting industries and other environmental
problems to these less-developed areas. An additional point of concern is the allocation of costs and benefits in environmental reclamation and preservation—for example, the preservation of Amazonian rain forests affects the global environment and may benefit the whole world, but the costs of this preservation fall disproportionately upon Brazil and the other countries of the region. An advocate of ecojustice would be concerned that the various costs and benefits of development be apportioned fairly. See also Biodiversity; Ecological and environmental justice; Environmental ethics; Environmental racism; Environmentalism; Holistic approach [Lawrence J. Biskowski]
RESOURCES BOOKS Miller, A. S. Gaia Connections. Savage, MD: Rowman and Littlefield, 1991.
Ecological consumers Organisms that feed either directly or indirectly on producers, the plants that convert solar energy into complex organic molecules. Primary consumers are animals that eat plants directly; they are also called herbivores. Secondary consumers are animals that eat other animals; they are also called carnivores. Consumers that eat both plants and animals are omnivores. Parasites are a type of consumer that lives in or on the plant or animal on which it feeds. Detritivores (detritus feeders and decomposers) constitute a specialized class of consumers that feed on dead plants and animals. See also Biotic community
Ecological integrity Ecological (or biological) integrity is a measure of how intact or complete an ecosystem is. Ecological integrity is a relatively new and somewhat controversial notion, however, and it cannot yet be defined exactly. Human activities cause many changes in environmental conditions, and these can benefit some species, communities, and ecological processes while causing damage to others at the same time. The notion of ecological integrity is used to distinguish between ecological responses that represent improvements and those that are degradations. Challenges to ecological integrity Ecological integrity is affected by changes in the intensity of environmental stressors. Environmental stressors can be defined as physical, chemical, and biological constraints on the productivity of species and the processes of ecosystem development. Many environmental stressors are associated
with the activities of humans, but some are also natural factors. Environmental stressors can exert their influence on a local scale, or they may be regional or even global in their effects. Stressors represent environmental challenges to ecological integrity. Environmental stressors are extremely complex, but they can be categorized in the following ways: (1) Physical stressors are associated with brief but intense exposures to kinetic energy. Because of its acute, episodic nature, this represents a type of disturbance. Examples include volcanic eruptions, windstorms, and explosions; (2) Wildfire is another kind of disturbance, characterized by the combustion of much of the biomass of an ecosystem, and often the deaths of the dominant plants; (3) Pollution occurs when chemicals are present in concentrations high enough to affect organisms and thereby cause ecological changes. Toxic pollution may be caused by such gases as sulfur dioxide and ozone, metals such as mercury and lead, and pesticides. Nutrients such as phosphate and nitrate can affect ecological processes such as productivity, resulting in a type of pollution known as eutrophication; (4) Thermal stress occurs when releases of heat to the environment cause ecological changes, as occurs near natural hot-water vents in the ocean, or where there are industrial discharges of warmed water; (5) Radiation stress is associated with excessive exposures to ionizing energy. This is an important stressor on mountaintops because of intense exposures to ultraviolet radiation, and in places where there are uncontrolled exposures to radioactive wastes; (6) Climatic stressors are associated with excessive or insufficient regimes of temperature, moisture, solar radiation, and combinations of these. Tundra and deserts are climatically stressed ecosystems, while tropical rain forests occur in places where the climatic regime is relatively benign; (7) Biological stressors are associated with the complex interactions that occur among organisms of the same or different species. Biological stresses result from competition, herbivory, predation, parasitism, and disease. The harvesting and management of species and ecosystems by humans can be viewed as a type of biological stress. All species and ecosystems have a limited capability for tolerating changes in the intensity of environmental stressors. Ecologists refer to this attribute as resistance. When the limits of tolerance to environmental stress are exceeded, however, substantial ecological changes are caused. Large changes in the intensity of environmental stress result in various kinds of ecological responses. For example, when an ecosystem is disrupted by an intense disturbance, there will be substantial mortality of some species and other damages. This is followed by recovery of the ecosystem through the process of succession. In contrast, a longer-term intensification of environmental stress, possibly caused
by chronic pollution or climate change, will result in longer-lasting ecological adjustments. Relatively vulnerable species become reduced in abundance or are eliminated from sites that are stressed over the longer term, and their modified niches will be assumed by more tolerant species. Other common responses to an intensification of environmental stress include a simplification of species richness, and decreased rates of productivity, decomposition, and nutrient cycling. These changes represent a longer-term change in the character of the ecosystem. Components of ecological integrity Many studies have been made of the ecological responses to both disturbance and longer-term changes in the intensity of environmental stressors. Such studies have, for instance, examined the ecological effects of air or water pollution, of the harvesting of species or ecosystems, and of the conversion of natural ecosystems into managed agroecosystems. The commonly observed patterns of change in stressed ecosystems have been used to develop indicators of ecological integrity, which are useful in determining whether this condition is improving or being degraded over time. It has been suggested that greater ecological integrity is displayed by systems that, in a relative sense: (1) are resilient and resistant to changes in the intensity of environmental stress. Ecological resistance refers to the capacity of organisms, populations, or communities to tolerate increases in stress without exhibiting significant responses. Once thresholds of tolerance are exceeded, ecological changes occur rapidly. Resilience refers to the ability to recover from disturbance; (2) are biodiverse. Biodiversity is defined as the total richness of biological variation, including genetic variation within populations and species, the numbers of species in communities, and the patterns and dynamics of these over large areas; (3) are complex in structure and function. The complexity of the structural and functional attributes of ecosystems is limited by natural environmental stresses associated with climate, soil, chemistry, and other factors, and also by stressors associated with human activities. As the overall intensity of stress increases or decreases, structural and functional complexity responds accordingly. Under any particular environmental regime, older ecosystems will generally be more complex than younger ecosystems; (4) have large species present. The largest species in any ecosystem appropriate relatively large amounts of resources, occupy a great deal of space, and require large areas to sustain their populations. In addition, large species tend to be long-lived, and consequently they integrate the effects of stressors over an extended time. As a result, ecosystems that are affected by intense environmental stressors can support only a few or no large species. In contrast, mature ecosystems occurring in a relatively benign environmental
regime are dominated by large, long-lived species; (5) have higher-order predators present. Top predators are sustained by a broad base of ecological productivity, and consequently they can only occur in relatively extensive and/or productive ecosystems; (6) have controlled nutrient cycling. Ecosystems that have recently been disturbed lose some of their biological capability for controlling the cycling of nutrients, and they may lose large amounts of nutrients dissolved or suspended in stream water. Systems that are not “leaky” of their nutrient capital are considered to have greater ecological integrity; (7) are efficient in energy use and transfer. Large increases in environmental stress commonly result in community-level respiration exceeding productivity, resulting in a decrease in the standing crop of biomass in the system. Ecosystems that are not losing their capital of biomass are considered to have greater integrity than those in which biomass is decreasing over time; (8) have an intrinsic capability for maintaining natural ecological values. Ecosystems that can naturally maintain their species, communities, and other important characteristics, without being managed by humans, have greater ecological integrity. If, for example, a population of a rare species can only be maintained by management of its habitat by humans, or by a program of captive breeding and release, then its population, and the ecosystem of which it is a component, are lacking in ecological integrity; (9) are components of a “natural” community. Ecosystems that are dominated by non-native, introduced species are considered to have less ecological integrity than ecosystems composed of indigenous species. Indicators (8) and (9) are related to “naturalness” and the roles of humans in ecosystems, both of which are philosophically controversial topics. However, most ecologists would consider that self-organizing, unmanaged ecosystems composed of native species have greater ecological integrity than those that are strongly influenced by humans. Examples of strongly human-dominated systems include agroecosystems, forestry plantations, and urban and suburban areas. None of these ecosystems can maintain their character in the absence of management by humans, including large inputs of energy and nutrients. Indicators of ecological integrity Indicators of ecological integrity vary greatly in their intent and complexity. For instance, certain metabolic indicators have been used to monitor the responses by individuals and populations to toxic stressors, as when bioassays are made of enzyme systems that respond vigorously to exposures to dichlorodiphenyltrichloroethane (DDT), polychlorinated biphenyls (PCBs), and other chlorinated hydrocarbons. Other simple indicators include the populations of endangered species; these are relevant to the viability of those species as well as to the integrity of the ecosystem of
which they are a component. There are also indicators of ecological integrity at the level of the landscape, and even global indicators relevant to climate change, depletion of stratospheric ozone, and deforestation. Relatively simple indicators can sometimes be used to monitor the ecological integrity of extensive and complex ecosystems. For example, the viability of populations of spotted owls (Strix occidentalis) is considered to be an indicator of the integrity of the old-growth forests in which this endangered species breeds in the western United States. These forests are commercially valuable, and if plans to harvest and manage them are judged to threaten the viability of a population of spotted owls, this would represent an important challenge to the integrity of the old-growth forest ecosystem. Ecologists are also beginning to develop composite indicators of ecological integrity. These are designed as summations of various indicators, and are analogous to such economic indices as the Dow Jones stock-market index, the Consumer Price Index, and the gross domestic product of an entire economy. Composite economic indicators of this sort are relatively simple to design, because all of the input data are measured in a common way (for example, in dollars). In ecology, however, there is no common currency among the many indicators of ecological integrity. Consequently it is difficult to develop composite indicators that ecologists will agree upon. Still, some research groups have developed composite indicators of ecological integrity that have been used successfully in a number of places and environmental contexts. For instance, the ecologist James Karr and his co-workers have developed composite indicators of the ecological integrity of aquatic ecosystems, which are being used in modified form in many places in North America. In spite of all of the difficulties, ecologists are making substantial progress in the development of indicators of ecological integrity. This is an important activity, because our society needs objective information about complex changes that are occurring in environmental quality, including degradations of indigenous species and ecosystems. Without such information, actions may not be taken to prevent or repair unacceptable damages that may be occurring. Increasingly, it is being recognized that human economies can only be sustained over the longer term by ecosystems with integrity. Ecosystems with integrity are capable of supplying continuous flows of such renewable resources as timber, fish, agricultural products, and clean air and water. Ecosystems with integrity are also needed to sustain populations of native species and their natural ecosystems, which must be sustained even while humans are exploiting the resources of the biosphere. [Bill Freedman Ph.D.]
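As a rough illustration of how a composite indicator can be assembled from measurements that lack a common currency, the sketch below normalizes each component indicator against a reference (benchmark) value before taking a weighted average. This is a minimal sketch of the general weighted-summation idea described above, not a reproduction of Karr’s aquatic index; all indicator names, benchmarks, weights, and values are hypothetical.

```python
# Minimal sketch of a composite ecological-integrity index: each raw
# indicator is first normalized against a benchmark (reference) value,
# so that unlike units can be combined, then averaged with weights.
# All indicator names, benchmarks, and weights here are hypothetical.

def composite_integrity_index(observations, benchmarks, weights):
    """Return a 0-1 score: weighted mean of benchmark-normalized indicators."""
    total, weight_sum = 0.0, 0.0
    for name, observed in observations.items():
        normalized = min(observed / benchmarks[name], 1.0)  # cap at benchmark
        total += weights[name] * normalized
        weight_sum += weights[name]
    return total / weight_sum

# Hypothetical stream site: species richness, % intolerant taxa, top predators.
obs = {"fish_species": 14, "intolerant_taxa_pct": 20.0, "top_predators": 1}
ref = {"fish_species": 25, "intolerant_taxa_pct": 40.0, "top_predators": 3}
wts = {"fish_species": 1.0, "intolerant_taxa_pct": 1.0, "top_predators": 1.0}

print(round(composite_integrity_index(obs, ref, wts), 2))  # prints 0.46
```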
RESOURCES BOOKS Freedman, B. Environmental Ecology, 2nd ed. San Diego: Academic Press, 1995. Woodley, S., J. Kay, and G. Francis, eds. Ecological Integrity and the Management of Ecosystems. Boca Raton, FL: St. Lucie Press, 1993.
PERIODICALS Karr, J. “Defining and assessing ecological integrity: Beyond water quality.” Environmental Toxicology and Chemistry 12 (1993): 1521-1531.
Ecological productivity One of the most important properties of an ecosystem is its productivity, which is a measure of the rate of incorporation of energy by plants per unit area per unit time. In terrestrial ecosystems, ecologists usually estimate plant production as the total annual growth—the increase in plant biomass over a year. Since productivity reflects plant growth, it is often used loosely as a measure of the organic fertility of a given area. The flow of energy through an ecosystem starts with the fixation of sunlight by green plants during photosynthesis. Photosynthesis supplies both the energy (in the form of chemical bonds) and the organic molecules (glucose) that plants use to make other products in a process known as biosynthesis. During biosynthesis, glucose molecules are rearranged and joined together to become complex carbohydrates (such as cellulose and starch) and lipids (such as fats and plant oils). These products are also combined with nitrogen, phosphorus, sulfur, and magnesium to produce the proteins, nucleic acids, and pigments required by the plant. The many products of biosynthesis are transported to the leaves, flowers, and roots, where they are stored to be used later. Ecologists measure the results of photosynthesis as increases in plant biomass over a given time. To do this more accurately, ecologists distinguish two measures of assimilated light energy: gross primary production (GPP), which is the total light energy fixed during photosynthesis, and net primary production (NPP), which is the chemical energy that accumulates in the plant over time. Some of this chemical energy is lost during plant respiration (R) when it is used for maintenance, reproduction, and biosynthesis. The proportion of GPP that is left after respiration is counted as net production (NPP). In an ecosystem, it is the energy stored in plants from net production that is passed up the food chain/web when the plants are eaten. This energy is available to consumers either directly as plant tissue or indirectly through animal tissue. One measure of ecological productivity in an ecosystem is the production efficiency. This is the rate of accumula-
tion of biomass by plants, and it is calculated as the ratio of net primary production to gross primary production. Production efficiency varies among plant types and among ecosystems. Grassland ecosystems, which are dominated by nonwoody plants, are the most efficient at 60–85%, since grasses and annuals do not maintain a high supporting biomass. On the other end of the efficiency scale are forest ecosystems; they are dominated by trees, and large old trees spend most of their gross production on maintenance. For example, eastern deciduous forests have a production efficiency of about 42%. Ecological productivity in terrestrial ecosystems is influenced by physical factors such as temperature and rainfall. Productivity is also affected by air and water currents, nutrient availability, landforms, light intensity, altitude, and depth. The most productive ecosystems are tropical rain forests, coral reefs, salt marshes, and estuaries; the least productive are deserts, tundra, and the open sea. See also Ecological consumers; Ecology; Habitat; Restoration ecology [Neil Cumberlidge Ph.D.]
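The quantities defined in this entry are related by simple bookkeeping, which can be summarized in a short worked display. The 30% respiration share used in the example is an illustrative assumption, chosen only so that the result lands within the grassland range quoted above.

```latex
\begin{align*}
NPP &= GPP - R, && \text{net production is gross production minus respiration} \\
E &= \frac{NPP}{GPP}, && \text{production efficiency} \\
\intertext{so a hypothetical grassland in which respiration consumes 30\% of $GPP$ has}
E &= \frac{GPP - 0.3\,GPP}{GPP} = 0.70 = 70\%, && \text{within the quoted 60--85\% range.}
\end{align*}
```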
Ecological risk assessment Ecological risk assessment is a procedure for evaluating the likelihood that adverse ecological effects are occurring, or may occur, in ecosystems as a result of one or more human activities. These activities may include the alteration and destruction of wetlands and other habitats, the introduction of herbicides, pesticides, and other toxic materials into the environment, oil spills, or the cleanup of contaminated hazardous waste sites. Ecological risk assessments consider many aspects of an ecosystem, both the biotic plants and animals and the abiotic water, soils, and other elements. Ecosystems can be as small as a pond or a stretch of a river, or as large as thousands of square miles or lengthy coastlines in which communities exist. Although closely related to human health risk assessment, ecological risk assessment is not only a newer discipline but also uses different procedures, terminology, and concepts. Both human health and ecological risk assessment provide frameworks for collecting information to define a risk and to help make risk management or regulatory decisions. But human health risk assessment follows four basic steps that were defined in a 1983 report by the National Research Council: hazard assessment, dose-response assessment, exposure assessment, and risk characterization. In contrast, ecological risk assessment relies on a Framework for Ecological Risk Assessment published by the Environmental Protection Agency in 1992 as part of a long-term plan to develop ecological risk assessment guidelines. The Framework defines three steps: problem formulation, analysis, and risk characterization.
The problems that human health risk assessments seek to address are clearly defined: cancer, birth defects, mortality, and the like. But the problems that ecological risk assessments try to understand and deal with are less straightforward. For instance, a major challenge ecological risk assessors face is distinguishing natural changes in an ecosystem from changes caused by human activities, and defining which changes are unacceptable. As a result, the initial problem formulation step of an ecological risk assessment requires extensive discussions between risk assessors and risk managers to define “ecological significance,” a key concept in ecological risk assessment. Because it is not immediately clear whether an ecological change is positive or negative—unlike cancer or birth defects, which are known to be adverse—judgments must be made early in the assessment about whether a change is significant and whether it will alter a socially valued ecological condition. For example, Lake Erie was declared “dead” in the 1960s as a result of phosphorus loadings from cities and farms. But, in fact, there were more fish in the lake after it was “dead” than before; however, these fish were carp, suckers, and catfish, not the walleyed pike, yellow perch, and other fish that had made Lake Erie one of the most highly valued freshwater sport fishing lakes in the United States. More recently, with pollution inputs greatly reduced, the lake has recovered much of its former productivity. Choosing one ecological condition over the other is a social value judgment of the kind fundamental to ecological risk assessment problem formulation. Once judgments have been made about what values to protect, analysis can proceed to examine the “stressors” to which ecosystems are exposed, and a characterization can be made of the “ecological effects” likely from such stressors. Since 1989, EPA has held workshops on “ecological significance” and other technical issues pertaining to ecological risk assessment and has published the results of its workshops in a series of reports and case studies. In 1996, EPA proposed its first ecological risk assessment guidelines, with final guidelines published in May of 1998. Overall, the direction of ecological protection priorities has been away from earlier concerns with narrow goals (e.g., use of commercially valuable natural resources) toward broader interest in protecting natural areas such as National Parks and Scenic Rivers for both present and future generations to enjoy. [David Clarke]
RESOURCES BOOKS Ecological Risk Assessment Issues Papers. Washington, D.C.: United States Environmental Protection Agency, Risk Assessment Forum, 1994. Framework for Ecological Risk Assessment. Washington, D.C.: United States Environmental Protection Agency, Risk Assessment Forum, 1992.
Priorities for Ecological Protection: An Initial List and Discussion Document for EPA. Washington, D.C.: United States Environmental Protection Agency, 1997.
PERIODICALS Lackey, R. T. “The Future of Ecological Risk Assessment.” Human and Ecological Risk Assessment, An International Journal 1, no. 4 (October 1995): 339-343.
Ecological Society of America The Ecological Society of America (ESA), representing 7,500 ecological researchers in the United States, Canada, Mexico, and 62 other countries, was founded in 1915 as a non-profit scientific organization and today is the nation’s leading professional society of ecologists. Members include ecologists from academia, government agencies, industry, and non-profit organizations. In pursuing its goal of promoting “the responsible application of ecological principles to the solution of environmental problems,” the Society publishes reports, membership research, and three scientific journals, and it provides expert testimony to Congress. In addition, ESA holds a conference every summer, attended by more than 3,000 scientists and students, at which members present the latest ecological research. The Society’s three journals are Ecology (eight issues per year), Ecological Monographs (four issues per year), and Ecological Applications (four issues per year). ESA also publishes a bimonthly member newsletter. A milestone in the Society’s development was its 1991 proposal for a Sustainable Biosphere Initiative (SBI), which was published in a 1991 issue of Ecology as a “call-to-arms for ecologists.” Based on research priorities identified in the proposal, ESA chartered the SBI Project Office in the same year to focus on global change, biodiversity, and sustainable ecosystems. The SBI marked a commitment by the Society to more actively convey its members’ findings to the public and to policy makers, and, as such, included research, education, and environmental decision-making components. The three-pronged SBI proposal grew out of a “period of introspection” during which ESA led its members to examine “the whole realm of ecological activities” in the face of decreasing funds for research, an urgent need to set priorities, and “the need to ameliorate the rapidly deteriorating state of the environment and to enhance its capacity to sustain the needs of the world’s population.” Since the SBI Project Office was chartered, it has focused on linking the ecological scientific community to other scientists and decision makers through a multi-disciplinary 12-member Steering Committee and a five-member staff. For instance, in 1995 the SBI began a series of semiannual meetings with federal government officials to discuss “overlapping areas of interest and possible collaborative op-
portunities.” In addition, SBI has hosted discussions of key ecological topics, such as a symposium on “The Effects of Fishing Activities on Benthic Habitats,” organized in 2002 by the SBI together with the American Fisheries Society, the National Oceanic and Atmospheric Administration, and the US Geological Survey, and an SBI-hosted discussion on “Ecosystem Simplification: Why a Patchwork Quilt is More Valuable than a Burlap Sack” at ESA’s 2002 Annual Meeting. In 1993 ESA chartered a Special Committee on the Scientific Basis of Ecosystem Management to establish the scientific grounds for discussing the increasingly prominent ecosystem approach to land and natural resource management problems. The committee published its findings in the August 1996 issue of Ecological Applications. Articles discussed the emerging consensus on the essential elements of ecosystem management, including its “holistic” nature—incorporating the biological and physical elements of an ecosystem and their interrelationships—and the concept of “sustainability” as the “essential element and precondition” of ecosystem management. ESA’s headquarters in Washington, D.C., consistent with the SBI’s goal of broader public education, includes a Public Affairs Office. Its Publications Office is in Ithaca, New York. [David Clarke]
RESOURCES PERIODICALS “Forum: Perspectives on Ecosystem Management.” Ecological Applications 6, no. 3 (August 1996): 694–747. “The Sustainable Biosphere Initiative: An Ecological Research Agenda.” Ecology 72, no. 2 (1991): 371–412.
ORGANIZATIONS Ecological Society of America, 1707 H St, NW, Suite 400, Washington, D.C. USA 20006, (202) 833-8773, Fax: (202) 833-8775, Email: [email protected]
Ecological economics Although ecology and economics share the common root “eco-” (from the Greek oikos, or household), these disciplines have tended to be at odds with each other in recent years over issues such as the feasibility of continued economic growth and the value of natural resources and environmental services. Economics deals with resource allocation, or trade-offs between competing wants and needs. Economists ask, “what shall we produce, for whom, or for what purpose?” Furthermore, they ask, “when and in what manner should we produce these goods and services?” In mainstream, neoclassical
economics, these questions are usually limited to human concerns: what will it cost to obtain the things we desire, and what benefits will we derive from them? According to classical economists, the costs of goods and services are determined by the interaction of supply and demand in the marketplace. If the supply of a particular commodity or service is high but the demand is low, the price will be low. If the commodity is scarce but everyone wants it, the price will be high. But high prices also encourage the invention of new technology and substitutes that can satisfy the same demands. The cyclic relationship of scarce resources and the development of new technology or new materials, in this view, allows for unlimited growth. And continued economic growth is seen as the best, perhaps the only, solution to poverty and environmental degradation. Ecologists, however, view the world differently than economists do. From their studies of the interactions between organisms and their environment, ecologists see our world as a dynamic but finite system that can support only a limited number of humans with their demands for goods and services. Many ecological processes and the nonrenewable natural resources on which our economy is based have no readily available substitutes. Further, much of the natural world is being degraded or depleted at unsustainable rates. Ecologists criticize the narrow focus of conventional economics and its faith in unceasing growth, market valuation, and endless substitutability. Ecologists warn that unless we change our patterns of production and consumption to ways that protect natural resources and ecological systems, we will soon be in deep trouble. Ecological economics Ecological or environmental economics is a relatively new field that introduces ecological understanding into our economic discourse. It takes a transdisciplinary, holistic, contextual, value-sensitive approach to economic planning and resource allocation. This view recognizes our dependence on the natural world and the irreplaceable life-support services it renders. Rather than express values solely in market prices, ecological economics pays attention to intangible values, nonmarketed resources, and the needs and rights of future generations and other species. Issues of equitable distribution of access to resources and the goods and services they provide need to be solved, in this perspective, by means other than incessant growth. Where neoclassical economics sees our environment as simply a supply of materials, services, and waste sinks, ecological economics regards human activities as embedded in a global system that places limits on what we can and cannot do. Uncertainty and dynamic change are inherent characteristics of this complex natural system. Damage caused by human activities may trigger sudden and irreversible changes. The precautionary principle suggests that we
should leave a margin for error in our use of resources and plan for adaptive management policies. Natural capital Conventional economists see wealth generated by human capital (human knowledge, experience, and enterprise) working with manufactured capital (buildings, machines, and infrastructure) to transform raw materials into useful goods and services. In this view, economic growth and efficiency are best accomplished by increasing the throughput of raw materials extracted from nature. Until they are transformed by human activities, natural resources are regarded as having little value. In contrast, ecological economists see natural resources as a form of capital equally important with human-made capital. In addition to raw materials such as minerals, fuels, fresh water, food, and fibers, nature provides valuable services on which we depend. Natural systems assimilate our wastes and regulate the earth’s energy balance, global climate, material recycling, the chemical composition of the atmosphere and oceans, and the maintenance of biodiversity. Nature also provides aesthetic, spiritual, cultural, scientific, and educational opportunities that are rarely given a monetary value but are, nevertheless, of great significance to many of us. Ecological economists argue that the value of natural capital should be taken into account rather than treated as a set of unimportant externalities. Our goal, in this view, should be to increase our efficiency in natural resource use and to reduce its throughput. Harvest rates for renewable resources (those, like organisms, that regrow, or those, like fresh water, that are replenished by natural processes) should not exceed regeneration rates. Waste emissions should not exceed the ability of nature to assimilate or recycle those wastes. Nonrenewable resources (such as minerals) may be exploited by humans, but only at rates equal to the creation of renewable substitutes. Accounting for natural capital Where neoclassical economics seeks to maximize the present value of resources, ecological economics calls for recognition of the real value of those resources in calculating economic progress. A market economist, for example, once argued that the most rational management policy for whales was to harvest all the remaining ones immediately and to invest the proceeds in some profitable business. Whales reproduce too slowly, he claimed, and are too dispersed to make much money in the long run by allowing them to remain wild. Ecologists reject this limited view of whales as only economic units of production. They see many other values in these wild, beautiful, sentient creatures. Furthermore, whales may play important roles in marine ecology that we don’t yet fully understand. Ecologists are similarly critical of Gross National Product (GNP) as a measure of national progress or well-
being. GNP measures only the monetary value of goods and services produced in a national economy. It doesn’t attempt to distinguish between economic activities that are beneficial and those that are harmful. People who develop cancer from smoking, for instance, contribute to the GNP by running up large hospital bills. The pain and suffering they experience doesn’t appear on the balance sheets. When calculating GNP in conventional economics, a subtraction is made for capital depreciation in the form of wear and tear on machines, vehicles, and buildings used in production, but no account is made for natural resources used up or ecosystems damaged by that same economic activity. Robert Repetto of the World Resources Institute estimates that soil erosion in Indonesia reduces the value of crop production by about 40% per year. If natural capital were taken into account, total Indonesian GNP would be reduced by at least 20% annually. Similarly, Costa Rica experienced impressive increases in timber, beef, and banana production between 1970 and 1990. But the decreases in natural capital during this period, represented by soil erosion, forest destruction, biodiversity losses, and accelerated water runoff, add up to at least $4 billion, or about 25% of annual GNP. Ecological economists call for a new System of National Accounts that recognizes the contribution of natural capital to economic activity. Valuation of natural capital Ecological economics requires new tools and new approaches to represent nature in GNP. Some categories in which natural capital might fit include: use values (the price we pay to use or consume a resource); option value (preserving options for the future); existence value (those things we like to know still exist even though we may never use or even see them); aesthetic value (things we appreciate for their beauty); cultural value (things important for cultural identity); and scientific and educational value (information- or experience-rich aspects of nature). How can we measure the value of natural resources and ecological services not represented in market systems? Ecological economists often have to resort to “shadow pricing” or other indirect valuation methods for natural resources. For instance, what is the worth of a day of canoeing on a wild river? We might measure opportunity costs such as how much we pay to get to the river or to rent a canoe. The direct out-of-pocket costs might represent only a small portion, however, of what the experience is really worth to participants. Another approach is contingent valuation, in which potential resource users are asked, “how much would you be willing to pay for this experience?” or “what price would you be willing to accept to sell your access or forego this opportunity?” These approaches are controversial because people
may report what they think they ought to pay rather than what they would really pay for these activities. Carrying capacity and sustainable development Carrying capacity is the maximum number of organisms of a particular species that a given area can sustainably support. Where neoclassical economists believe that technology can overcome any obstacle and that human ingenuity frees us from any constraints on population or economic growth, ecological economists argue that nature places limits on us just as it does on any other species. One of the ultimate limits we face is energy. Because of the second law of thermodynamics, whenever work is done some energy is converted to a lower-quality, less useful form and ultimately is emitted as waste heat. This means that we require a constant input of external energy. Many fossil fuel supplies are nearing exhaustion, and continued use of these sources with current technology carries untenable environmental costs. Vast amounts of solar energy reach the earth, and this solar energy already drives the generation of all renewable resources and ecological services. By some calculations, humans now control or directly consume about 40% of all the solar energy reaching the earth. How much more can we monopolize for our own purposes without seriously jeopardizing the integrity of natural systems for which there is no substitute? And even if we had an infinite supply of clean, renewable energy, how much heat can we get rid of without harming our environment? Ecological economics urges us to restrain the growth of both human populations and the production of goods and services in order to conserve natural resources and to protect remaining natural areas and biodiversity. This does not necessarily mean that the billion people in the world who live in absolute poverty and cannot, on their own, meet the basic needs for food, shelter, clothing, education, and medical care are condemned to remain in that state. Ecological economics calls for more efficient use of resources and more equitable distribution of the benefits among those now living as well as between current generations and future ones. A mechanism for attaining this goal is sustainable development, that is, a real improvement in the overall welfare of all people on a long-term basis. In the words of the World Commission on Environment and Development, sustainable development means “meeting the needs of the present without compromising the ability of future generations to meet their own needs.” This requires increased reliance on renewable resources in harmony with ecological systems in ways that do not deplete or degrade natural capital. It doesn’t necessarily mean that all growth must cease. There are many human attributes, such as knowledge, kindness, compassion, cooperation, and creativity, that can expand infinitely without damaging our environment. While ecological economics offers a sensible framework for approaches to
resource use that can be in harmony with ecological systems over the long term, it remains to be seen whether we will be wise enough to adopt this framework before it is too late. [William P. Cunningham Ph.D.]
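The “green accounting” adjustment described under Accounting for natural capital is, at bottom, a subtraction of natural-capital depreciation from conventional GNP. The sketch below works through the Costa Rica figures quoted in this entry; the breakdown of the roughly $4 billion across loss categories is a hypothetical split for illustration only.

```python
# Illustrative "green GNP" adjustment: conventional GNP minus the
# depreciation of natural capital. The category breakdown below is a
# hypothetical split of the roughly $4 billion (about 25% of annual GNP)
# that this entry attributes to Costa Rica's natural-capital losses.

conventional_gnp = 16.0e9  # dollars; implied by $4e9 being ~25% of GNP

natural_capital_depreciation = {
    "soil erosion": 1.5e9,
    "forest destruction": 1.5e9,
    "biodiversity losses": 0.5e9,
    "accelerated water runoff": 0.5e9,
}  # hypothetical split summing to $4e9

losses = sum(natural_capital_depreciation.values())
green_gnp = conventional_gnp - losses
share = losses / conventional_gnp

print(f"Green GNP: ${green_gnp/1e9:.0f} billion "
      f"({share:.0%} of conventional GNP written off)")
# Green GNP: $12 billion (25% of conventional GNP written off)
```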
RESOURCES BOOKS Jansson, A.M., et al., eds. Investing in Natural Capital: the Ecological Economics Approach to Sustainability. Washington, D.C.: Island Press, 1994. Krishnan, R., J.M. Harris, and N.R. Goodwin, eds. A Survey of Ecological Economics. Washington, D.C.: Island Press, 1995. Prugh, T. Natural Capital and Human Economic Survival. Solomons, MD: International Society for Ecological Economics, 1995. Turner, R.K., D. Pearce, and I. Bateman. Environmental Economics: an Elementary Introduction. Baltimore: The Johns Hopkins University Press, 1993.
Ecological succession see Succession
Ecology The word ecology was coined in 1866 by the German zoologist Ernst Haeckel from the Greek words oikos (house) and logos (logic or knowledge) to describe the scientific study of the relationships among organisms and their environment. Biologists began referring to themselves as ecologists at the end of the nineteenth century, and shortly thereafter the first ecological societies and journals appeared. Since that time ecology has become a major branch of biological science. The contextual, historical understanding of organisms, as well as the systems basis of ecology, sets it apart from the reductionist, experimental approach prevalent in many other areas of science. This broad ecological view is gaining significance today as modern resource-intensive lifestyles consume much of nature’s supplies. Although intuitive ecology has always been a part of some cultures, current environmental crises make a systematic, scientific understanding of ecological principles especially important. For many ecologists the basic structural units of ecological organization are species and populations. A biological species consists of all the organisms potentially able to interbreed under natural conditions and to produce fertile offspring. A population consists of all the members of a single species occupying a common geographical area at the same time. An ecological community is composed of a number of populations that live and interact in a specific region.
This population-community view of ecology is grounded in natural history—the study of where and how organisms live—and the Darwinian theory of natural selection and evolution. Proponents of this approach generally view ecological systems primarily as networks of interacting organisms. Abiotic forces such as weather, soils, and topography are often regarded as external factors that influence, but are apart from, the central living core of the system. In the past three decades the emphasis on species, populations, and communities in ecology has been replaced by a more quantitative, thermodynamic analysis of the processes through which energy flows and the cycling of nutrients and toxins are carried out in ecosystems. This process-functional approach is concerned more with the ecosystem as a whole than with the particular species or populations that make it up. In this perspective, both the living organisms and the abiotic physical components of the environment are equal members of the system. The feeding relationships among different species in a community are a key to understanding ecosystem function. Who eats whom, where, how, and when determines how energy and materials move through the system. These relationships also influence natural selection, evolution, and species adaptation to a particular set of environmental conditions. Ecosystems are open systems, insofar as energy and materials flow through them. Nutrients, however, are often recycled extremely efficiently, so that the annual losses to sediments or through surface water runoff are relatively small in many mature ecosystems. In undisturbed tropical rain forests, for instance, nearly 100% of leaves and detritus are decomposed and recycled within a few days after they fall to the forest floor. Because of thermodynamic losses every time energy is exchanged between organisms or converted from one form to another, an external energy source is an indispensable component of every ecological system. Green plants capture solar energy through photosynthesis and convert it into energy-rich organic compounds that are the basis for all other life in the community. This energy capture is referred to as “primary productivity.” These green plants form the first trophic (or feeding) level of most communities. Herbivores (animals that eat plants) make up the next trophic level, while carnivores (animals that eat other animals) add to the complexity and diversity of the community. Detritivores (such as beetles and earthworms) and decomposers (generally bacteria and fungi) convert dead organisms or waste products to inorganic chemicals. The nutrient recycling they perform is essential to the continuation of life. Together, all these interacting organisms form a food chain/web through which energy flows and nutrients and toxins are recycled. Due to intrinsic inefficiencies in transferring material and energy between organisms, the energy
content in successive trophic levels is usually represented as a pyramid in which primary producers form the base and the top consumers occupy the apex. This introduces the problem of persistent contaminants in the food chain. Because they tend not to be broken down and metabolized in each step in the food chain in the way that other compounds are, persistent contaminants such as pesticides and heavy metals tend to accumulate in top carnivores, often reaching toxic levels many times higher than original environmental concentrations. This biomagnification is an important issue in pollution control policies. In many lakes and rivers, for instance, game fish have accumulated dangerously high levels of mercury and chlorinated hydrocarbons that present a health threat to humans and other fish-eating species. Diversity, in ecological terms, is a measure of the number of different species in a community, while abundance is the total number of individuals. Tropical rain forests, although they occupy only about 5% of the earth’s land area, are thought to contain somewhere around half of all terrestrial plant and animal species, while coral reefs and estuaries are generally the most productive and diverse aquatic communities. Community complexity refers to the number of species at each trophic level as well as the total number of trophic levels and ecological niches in a community. Structure describes the patterns of organization, both spatial and functional, in a community. In a tropical rain forest, for instance, distinctly different groups of organisms live on the surface, at mid-levels in the trees, and in the canopy, giving the forest vertical structure. A patchy mosaic of tree species, each of which may have a unique community of associated animals and smaller plants living in its branches, gives the forest horizontal structure as well. For every physical factor in the environment there are both maximum and minimum tolerable limits beyond which a given species cannot survive. The factor closest to the tolerance limit for a particular species at a particular time is the critical factor that will determine the abundance and distribution of that species in that ecosystem. Natural selection is the process by which environmental pressures—including biotic factors such as predation, competition, and disease, as well as physical factors such as temperature, moisture, soil type, and space—affect survival and reproduction of organisms. Over a very long time, given a large enough number of organisms, natural selection works on the randomly occurring variation in a population to allow evolution of species and adaptation of the population to a particular set of environmental conditions. Habitat describes the place or set of environmental conditions in which an organism lives; niche describes the role an organism plays. A yard and garden, for instance, may
Diversity, in ecological terms, is a measure of the number of different species in a community, while abundance is the total number of individuals. Tropical rain forests, although they occupy only about 5% of the earth’s land area, are thought to contain somewhere around half of all terrestrial plant and animal species, while coral reefs and estuaries are generally the most productive and diverse aquatic communities. Community complexity refers to the number of species at each trophic level as well as the total number of trophic levels and ecological niches in a community. Structure describes the patterns of organization, both spatial and functional, in a community. In a tropical rain forest, for instance, distinctly different groups of organisms live on the surface, at mid-levels in the trees, and in the canopy, giving the forest vertical structure. A patchy mosaic of tree species, each of which may have a unique community of associated animals and smaller plants living in its branches, gives the forest horizontal structure as well.
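As a minimal illustration of these definitions, diversity (the number of species) and abundance (the number of individuals) can be computed from a simple census; the community data below are invented for the example.

```python
# Minimal sketch: species diversity vs. abundance, from an
# invented census of a small community.

census = {"oak": 12, "maple": 7, "fern": 40, "beetle": 55, "warbler": 3}

diversity = len(census)            # number of different species
abundance = sum(census.values())   # total number of individuals

print(f"diversity = {diversity} species, abundance = {abundance} individuals")
```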
For every physical factor in the environment there are both maximum and minimum tolerable limits beyond which a given species cannot survive. The factor closest to the tolerance limit for a particular species at a particular time is the critical factor that will determine the abundance and distribution of that species in that ecosystem. Natural selection is the process by which environmental pressures—including biotic factors such as predation, competition, and disease, as well as physical factors such as temperature, moisture, soil type, and space—affect the survival and reproduction of organisms. Over a very long time, given a large enough number of organisms, natural selection works on the randomly occurring variation in a population to allow evolution of species and adaptation of the population to a particular set of environmental conditions. Habitat describes the place or set of environmental conditions in which an organism lives; niche describes the role an organism plays. A yard and garden, for instance, may provide habitat for a family of cottontail rabbits. Their niche is that of primary consumers (eating vegetables and herbs). Organisms interact within communities in many ways. Symbiosis is the intimate living together of two species; commensalism describes a relationship in which one species benefits while the other is neither helped nor harmed. Lichens, the thin crusty organisms often seen on exposed rocks, are an obligate symbiotic association of a fungus and an alga. Neither can survive without the other. Some orchids and bromeliads (air plants), on the other hand, live commensally on the branches of tropical trees. The orchid benefits by having a place to live, but the tree is neither helped nor hurt by the presence of the orchid. Predation—feeding on another organism—can involve pathogens, parasites, and herbivores as well as carnivorous predators. Competition is another kind of antagonistic relationship in which organisms vie for space, food, or other resources. Predation, competition, and natural selection often lead to niche specialization and resource partitioning that reduce competition between species. The principle of competitive exclusion states that no two species will remain in direct competition for very long in the same habitat, because natural selection and adaptation will cause organisms to specialize in when, where, or how they live to minimize conflict over resources. This can contribute to the evolution of a given species into new forms over time. It is also possible, on the other hand, for species to coevolve, meaning that each changes gradually in response to the other to form an intimate and often highly dependent relationship, either as predator and prey or for mutual aid. Because individuals of a particular species may be widely dispersed in tropical forests, many plants have become dependent on insects, birds, or mammals to carry pollen from one flower to another. Some amazing examples of coevolution and mutual dependence have resulted. Ecological succession, the process of ecosystem development, describes the changes through which whole communities progress as different species colonize an area and change its environment. A typical successional series starts with pioneer species such as grasses or fireweed that colonize bare ground after a disturbance. Organic material from these pioneers helps build soil and hold moisture, allowing shrubs and then tree seedlings to become established. Gradual changes in shade, temperature, nutrient availability, wind protection, and living space favor different animal communities as one type of plant replaces its predecessors. Primary succession starts with a previously unoccupied site. Secondary succession occurs on a site that has been disturbed by external forces such as fires, storms, or humans. In many cases, succession proceeds until a mature “climax” community is established. Introduction of new species by natural processes, such as opening of a land bridge, or by human
intervention can upset the natural relationships in a community and cause catastrophic changes for indigenous species. Biomes consist of broad regional groups of related communities. Their distribution is determined primarily by climate, topography, and soils. Often similar niches are occupied by different but similar species (called ecological equivalents) in geographically separated biomes. Some of the major biomes of the world are deserts, grasslands, wetlands, forests of various types, and tundra. The relationship between diversity and stability in ecosystems is a controversial topic in ecology. F. E. Clements, an early biogeographer, championed the concept of climax communities: stable, predictable associations toward which ecological systems tend to progress if allowed to follow natural tendencies. Deciduous, broad-leaved forests are climax communities in moist, temperate regions of the eastern United States, according to Clements, while grasslands are characteristic of the drier western plains. In this view, homeostasis (a dynamic steady-state equilibrium), complexity, and stability are endpoints in ecological succession. Ecological processes, if allowed to operate without external interference, tend to create a natural balance between organisms and their environment. H. A. Gleason, another pioneer biogeographer and contemporary of Clements, argued that ecological systems are much more dynamic and variable than the climax theory proposes. Gleason saw communities as temporary or even accidental combinations of continually changing biota rather than predictable associations. Ecosystems may or may not be stable, balanced, and efficient; change, in this view, is thought to be more characteristic than constancy. Diversity may or may not be associated with stability. Some communities such as salt marshes that have only a few plant species may be highly resilient and stable, while species-rich communities such as coral reefs may be highly sensitive to disturbance. Although many ecologists now tend to agree with the process-functional view of Gleason rather than the population-community view of Clements, some retain a belief in the balance of nature and the tendency of ecosystems to reach an ideal state if left undisturbed. The efficacy and ethics of human intervention in natural systems may be interpreted very differently in these divergent understandings of ecology. Those who see stability and constancy in nature often call for policies that attempt to maintain historic conditions and associations. Those who see greater variability and individuality in communities may favor more activist management and be willing to accept change as inevitable. In spite of some uncertainty about how to explain ecological processes and the communities they create, however, we have learned a great deal about the world around us
through scientific ecological studies in the past century. This important field of study remains a crucial component in our ability to manage resources sustainably and to avoid or repair environmental damage caused by human actions. [William P. Cunningham Ph.D.]
RESOURCES
BOOKS
Ricklefs, R. E. Ecology. 3rd ed. New York: W. H. Freeman, 1990.
Ecology, deep see Deep ecology
Ecology, human see Human ecology
Ecology, restoration see Restoration ecology
Ecology, social see Social ecology
EcoNet
EcoNet is a computer network that focuses on environmental topics and, through the Institute for Global Communications, has links to the international community. Several thousand organizations and individuals have accounts on the network. EcoNet’s electronic conferences contain press releases, reports, and electronic discussions on hundreds of topics, ranging from clean air to pesticides. Subscribers can also send e-mail to other users throughout the country and around the world. EcoNet is a branch of IGC Internet, as are PeaceNet, WomensNet, and AntiRacismNet.
RESOURCES
ORGANIZATIONS
Institute for Global Communications, P.O. Box 29904, San Francisco, CA USA 94129-0904 Email:
[email protected],
Economic growth and the environment
The issue of economic growth and the environment essentially concerns the kinds of pressures that economic growth,
at the national and international level, places on the environment over time. The relationship between ecology and the economy has become increasingly significant as humans gradually understand the impact that economic decisions have on the sustainability and quality of the planet. Economic growth is commonly defined as increases in total output from new resources or better use of existing resources; it is measured by increased real incomes per capita. All economic growth involves transforming the natural world, and it can affect environmental quality in one of three ways. First, environmental quality can increase with growth. Increased incomes, for example, provide the resources for public services such as sanitation and rural electricity. With these services widely available, individuals need to worry less about day-to-day survival and can devote more resources to conservation. Second, environmental quality can initially worsen but then improve as the growth rate rises. In the cases of air pollution, water pollution, and deforestation and encroachment, there is little incentive for any individual to invest in maintaining the quality of the environment. These problems can only improve when countries deliberately introduce long-range policies to ensure that additional resources are devoted to dealing with them. Third, environmental quality can decrease when the rate of growth increases. In the case of emissions generated by the disposal of municipal solid waste, for example, abatement is relatively expensive, and the costs associated with the emissions and wastes are not perceived as high because they are often borne by someone else. The World Bank estimated that, under present productivity trends and given projected population increases, the output of developing countries would be about five times higher by the year 2030 than it is today. The output of industrial countries would rise more slowly, but it would still triple over the same period. If environmental pollution were to rise at the same pace, severe environmental hardships would occur. Tens of millions of people would become sick or die from environmental causes, and the planet would be significantly and irreparably harmed. Yet economic growth and sound environmental management are not incompatible. In fact, many now believe that they require each other. Economic growth will be undermined without adequate environmental safeguards, and environmental protection will fail without economic growth.
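The growth rates implied by these projections are easy to recover. The sketch below assumes the roughly four-decade horizon of the 1992 World Bank report (about 38 years to 2030); the factors of five and three come from the text, and everything else is arithmetic.

```python
# Sketch: annual growth rates implied by the World Bank projections
# cited above. The 38-year horizon (1992-2030) is an assumption.

HORIZON_YEARS = 38

def implied_annual_rate(total_growth_factor: float, years: int) -> float:
    """Compound annual rate r such that (1 + r) ** years == factor."""
    return total_growth_factor ** (1 / years) - 1

for label, factor in [("developing countries (5x)", 5.0),
                      ("industrial countries (3x)", 3.0)]:
    rate = implied_annual_rate(factor, HORIZON_YEARS)
    print(f"{label}: about {rate:.1%} per year")
```

Even these modest-sounding annual rates (roughly 4.3% and 2.9%) compound into the five- and threefold expansions that drive the pollution concerns discussed above.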
The earth’s natural resources place limits on economic growth. These limits vary with the extent of resource substitution, technical progress, and structural change. For example, in the late 1960s many feared that the world’s supply of useful metals would run out. Yet today there is a glut of useful metals, and prices have fallen dramatically. The demand for other natural resources such as water, however, often exceeds supply. In arid regions such as the Middle East and in non-arid regions such as northern China, aquifers have been depleted and rivers so extensively drained that not only irrigation and agriculture but also the local ecosystems are threatened. Some resources such as water, forests, and clean air are under attack, while others such as metals, minerals, and energy are not threatened. This is because the scarcity of metals and similar resources is reflected in market prices. Here, the forces of resource substitution, technical progress, and structural change have a strong influence. But resources such as water are characterized by open access, and there are therefore no incentives to conserve. Many believe that effective policies designed to sustain the environment are therefore necessary: society must be made to take account of the value of natural resources, and governments must create incentives to protect the environment. Economic and political institutions have failed to provide these necessary incentives for four separate yet interrelated reasons: 1) short time horizons; 2) failures in property rights; 3) concentration of economic and political power; and 4) immeasurability and institutional uncertainty. Although economists and environmentalists disagree on the definition of sustainability, the essence of the idea is that current decisions should not impair the prospects for maintaining or improving future living standards. The economic systems of the world should be managed so that societies live off the dividends of the natural resources, always maintaining and improving the asset base. Promoting growth, alleviating poverty, and protecting the environment may be mutually supportive objectives in the long run, but they are not always compatible in the short run. Poverty is a major cause of environmental degradation, and economic growth is thus necessary to improve the environment. Yet ill-managed economic growth can also destroy the environment and further jeopardize the lives of the poor. In many poor but still forested countries, timber is a good short-run source of foreign exchange. When demand for Indonesia’s traditional commodity export—petroleum—fell and its foreign exchange income slowed, Indonesia began depleting its hardwood forests at non-sustainable rates in order to earn export income. In developed countries, it is competition that can shorten time horizons. Competitive forces in agricultural markets, for example, induce farmers to take short-term perspectives for financial survival. Farmers must maintain cash flow to satisfy bankers and make a sufficient return on their land investment. They therefore adopt high-yield crops, monoculture farming, increased fertilizer and pesticide use, salinizing irrigation methods, and more intensive tillage practices which cause erosion. “The Tragedy of the Commons” is the classic example of property rights failure. When access to a grazing area, or commons, is unlimited, each herdsman knows that grass not eaten today will not be there tomorrow. As a rational economic being, each herdsman seeks to maximize his gain and adds more animals to his herd. No herdsman has an incentive to limit his livestock’s grazing of the area. Degradation follows, and a common resource is lost. In a society without clearly defined property rights, those who pursue their own interests ruin the public good.
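The incentive structure of the commons can be captured in a few lines of simulation. The sketch below is a toy model, not drawn from the source: each of ten herders keeps the full benefit of an added animal while the forage cost of overstocking is shared by all, so individually rational additions degrade the shared pasture.

```python
# Toy model of the tragedy of the commons (illustrative assumptions:
# 10 herders, pasture supports 50 animals sustainably, linear damage).

HERDERS = 10
SUSTAINABLE_HERD = 50      # total animals the commons can support

def private_payoff(my_animals: int, total_animals: int) -> float:
    # Benefit of each animal accrues to its owner alone; forage
    # degradation from overstocking is shared among all herders.
    value_per_animal = 1.0
    overstock = max(0, total_animals - SUSTAINABLE_HERD)
    shared_cost = 0.05 * overstock * total_animals / HERDERS
    return my_animals * value_per_animal - shared_cost

# Each herder asks: does one more animal pay off *for me*?
for herd_each in (5, 6, 8, 10):
    total = herd_each * HERDERS
    gain = (private_payoff(herd_each + 1, total + 1)
            - private_payoff(herd_each, total))
    print(f"{herd_each} animals each (total {total}): "
          f"marginal private gain of one more = {gain:+.2f}")
```

The marginal private gain stays positive even after total payoffs turn negative, so each herder keeps adding animals while the commons collapses; this is precisely the incentive failure the entry describes.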
In Indonesia, political upheaval can void property rights overnight, and so any individual with a concession to harvest trees is motivated to harvest as many as possible, as quickly as possible. The government-granted timber-cutting concession may belong to someone else tomorrow. The same is true in some developed countries. For example, in Louisiana mineral rights revert to the state when wetlands become open water and there has been no mineral development on the property. Thus, the cheapest method of avoiding the loss of mineral revenues has been either to hurry the development of oil and gas in areas which might revert to open water, thereby hastening erosion and saltwater intrusion, or to put up levees around the property to maintain it as private property, thus interfering with normal estuarine processes. Global or transnational problems such as ozone layer depletion or acid rain produce a similar problem. Countries have little incentive to reduce damage to the global environment unilaterally when doing so will not reduce the damaging behavior of others or when reduced fossil fuel use would leave that country at a competitive disadvantage. International agreements are thus needed to impose an order on the world’s nations that would be analogous to property rights. Concentration of wealth within the industrialized countries allows for the exploitation and destruction of ecosystems in less developed countries (LDCs) through, for example, timber harvests and mineral extraction. The concentration of wealth inside a less developed country skews public policy toward benefiting the wealthy and politically powerful, often at the expense of the ecosystem on which the poor depend. Local sustainability is dependent upon the goals of those who have power—goals which may or may not be in line with a healthy, sustainable ecosystem. Furthermore, when an exploiting party has substitute ecosystems available, it can exploit one and then move to the next. Japanese lumber firms harvest one country and then move on to another. Here the benefits of sustainability are low, and exploiters have shorter time horizons than local interests. This is also an example of how the high discount rates of developed countries are imposed on the management of developing countries’ assets. Environmental policy-making is always more complicated than merely measuring the effects that a proposed policy will have on the environment. But because of scientific uncertainty about biophysical and geological relations and a general inability to measure a policy’s effect on the environment, economic rather than ecological effects are more often relied upon to make policy. Policy-makers and institutions are often unable to grasp the direct and indirect effects of policies on ecological sustainability, nor do they know how their actions will affect other areas not under their control. Many contemporary economists and environmentalists argue that the value of the environment should nonetheless be factored into the economic policy decision-making process. The goal is not necessarily to put monetary values on environmental resources; it is rather to determine how much environmental quality is being given up in the name of economic growth, and how much growth is being given up in the name of the environment. A danger always exists that too much income growth may be given up in the future because of a failure to clarify and minimize tradeoffs and to take advantage of policies that are good for both economic growth and the environment. See also Energy policy; Environmental economics; Environmental policy; Environmentally responsible investing; Exponential growth; Sustainable agriculture; Sustainable biosphere; Sustainable development
[Kevin Wolf]
RESOURCES
BOOKS
Farber, S. “Local and Global Incentives for Sustainability: Failures in Economic Systems.” In Ecological Economics: The Science and Management of Sustainability, edited by R. Costanza. New York: Columbia University Press, 1991.
World Bank. World Development Report 1992: Development and the Environment. New York: Oxford University Press, 1992.
Ecopsychology see Roszak, Theodore
Ecosophy
A philosophical approach to the environment which emphasizes the importance of action and individual beliefs. Often referred to as “ecological wisdom,” it is associated with other environmental ethics, including deep ecology and bioregionalism. Ecosophy originated with the Norwegian philosopher Arne Naess. Naess described a structured form of inquiry he called ecophilosophy, which examines nature and our relationship to it. He defined it as a discipline, like philosophy itself, which is based on analytical thinking, reasoned argument, and carefully examined assumptions. Naess distinguished ecosophy from ecophilosophy; it is not a discipline in the same sense but what he called a “personal philosophy,” which guides our conduct toward the environment. He defined ecosophy as a set of beliefs about nature and other people which varies from one individual to another. Everyone, in other words, has their own ecosophy, and though our personal philosophies may share important elements, they are based on norms and assumptions that are particular to each of us. Naess proposed his own ecophilosophy as a model for individual ecosophies, emphasizing the intrinsic value of nature and the importance of cultural and natural diversity. Other discussions of ecosophy concentrate on similar issues. Many environmental philosophers argue that all life has a value that is independent of human perspectives and human uses, and that it is not to be tampered with except for the sake of survival. Human population growth threatens the integrity of other life systems; these philosophers argue that our numbers must be reduced substantially and that radical changes in human values and activities are required to integrate humans more harmoniously into the total system. See also Zero population growth
[Gerald L. Young and Douglas Smith]
RESOURCES
BOOKS
Naess, A. Ecology, Community and Lifestyle: Outline of an Ecosophy. Translated and revised by D. Rothenberg. Cambridge: Cambridge University Press, 1989.
PERIODICALS
Hedgpeth, J. W. “Man and Nature: Controversy and Philosophy.” The Quarterly Review of Biology 61 (March 1986): 45-67.
Ecosystem
The term ecosystem was coined in 1935 by the Oxford ecologist Arthur Tansley to encompass the interactions among biotic and abiotic components of the environment at a given site. It was defined in its presently accepted form by Eugene Odum as follows: “Any unit that includes all of the organisms (i.e., the community) in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and non-living parts) within the system.” Tansley’s concept had been expressed earlier, in 1913, by the Oxford geographer A. J. Herbertson, who suggested the term “macroorganism” for such a combined biotic and abiotic entity. He was, however, too far in advance of his time, and the idea was not taken up by ecologists. On the other hand, Tansley’s concept, elaborated in terms of the transfer of energy and matter across ecosystem boundaries, was utilized within the next
few years by Evelyn Hutchinson, Raymond Lindeman, and the Odum brothers, Eugene and Howard. The boundaries of an ecosystem can be somewhat arbitrary, reflecting the interest of a particular ecologist in studying a certain portion of the landscape. However, such a choice may often represent a recognizable landscape unit such as a woodlot, a wetland, a stream or lake, or—in the most logical case—a watershed within a sealed geological basin, whose exchanges with the atmosphere and outputs via stream flow can be measured quite precisely. Inputs and outputs imply an open system, which is true of all but the planetary or global ecosystem, open to energy flow but effectively closed in terms of materials, except in the case of large-scale asteroid impact. Ecosystems exhibit a great deal of structure, as may be seen in the vertical partitioning of a forest into tree, shrub, herb, and moss layers, underlain by a series of distinctive soil horizons. Horizontal structure is often visible as a mosaic of patches, as in forests with gaps where trees have died and herbs and shrubs now flourish, or in bogs with hummocks and hollows supporting different kinds of plants. Often the horizontal structure is distinctly zoned, for instance around the shallow margin of a lake; and sometimes it is beautifully patterned, as in the vast peatlands of North America that reflect a very complicated hydrology. Ecosystems exhibit an interesting functional organization in their processing of energy and matter. Green plants, the primary producers of organic matter, are consumed by herbivores, which in turn are eaten by carnivores that may in turn be the prey of other carnivores. Moreover, all these animals may have parasites as another set of consumers. Such sequences of producers and successive consumers constitute a food chain, which is always part of a complicated, inter-linked food web along which energy and materials pass. At each step along the food chain some of the energy is egested, or passed through the organisms as feces. Much more is used for metabolic processes and—in the case of animals—for seeking food or escaping predators; such energy is released as heat. As a consequence only a small fraction (often of the order of 10%) of the energy captured at a given step in the food chain is passed along to the next step. There are two main types of food chains. One is made up of plant producers and animal consumers of living organisms, which constitute a grazing food chain. The other consists of organisms that break down and metabolize dead organic matter, such as earthworms, fungi, and bacteria. These constitute the detritus food chain. Humans rely chiefly on grazing food chains based on grasslands, whereas in a forest it is usual for more than 90% of the energy trapped by photosynthesis to pass along the detritus food chain. Whereas energy flows one way through ecosystems and is dispersed finally to the atmosphere as heat, materials
are partially and often largely recycled. For example, nitrogen in rain and snow may be taken up from the soil by roots, built into leaf protein that falls with the leaves in autumn, there to be broken down by soil microbes to ammonia and nitrate and taken up once again by roots. A given molecule of nitrogen may go through this nutrient cycle again and again before finally leaving the system in stream outflow. Other nutrients, and toxins such as lead and mercury, follow the same pathway, each with a different residence time in the forest ecosystem.
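A standard way to quantify such cycling, though not spelled out in the entry, is the mean residence time of a nutrient: the size of the pool divided by the annual loss from it. The numbers below are invented for illustration.

```python
# Sketch: mean residence time of a nutrient in an ecosystem pool,
# computed as pool size / annual loss. Values are invented examples.

def residence_time_years(pool_kg_per_ha: float, loss_kg_per_ha_yr: float) -> float:
    return pool_kg_per_ha / loss_kg_per_ha_yr

# Hypothetical forest: nitrogen cycles tightly, a toxic metal leaves slowly.
print(f"nitrogen: {residence_time_years(800.0, 4.0):.0f} years")
print(f"lead:     {residence_time_years(12.0, 0.05):.0f} years")
```

A long residence time relative to annual exports is what allows a single nitrogen atom to pass through the cycle many times before it is finally carried away in stream water.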
Mature ecosystems exhibit a substantial degree of stability, or dynamic equilibrium, as the endpoint of what is often a rather orderly succession of species determined by the nature of the habitat. Sometimes this successional process is a result of the differing life spans of the colonizing species; at other times it comes about because the colonizing species alter the habitat in ways that are more favorable to their competitors, as in an acid moss bog that succeeds a circumneutral sedge fen that has in its turn colonized a pond as a floating mat. Equilibrium may sometimes be represented on a large scale by a relatively stable mosaic of small-scale patches in various stages of succession, for instance in fire-dominated pine forests. On the millennial time scale, of course, ecosystems are not stable, changing very gradually owing to immigration and emigration of species and to evolutionary changes in the species themselves. The structure, function, and development of ecosystems are controlled by a series of partially independent environmental factors: climate, soil parent material, topography, the plants and animals available to colonize a given site, and disturbances such as fire and windthrow. Each factor is, of course, divisible into a variety of components, as in the case of temperature and precipitation under the general heading of climate. There are many ways to study ecosystems. Evelyn Hutchinson divided them into two main categories, holistic and meristic. The former treats an ecosystem as a “black box” and examines inputs, storages, and outputs, for example in the construction of a lake’s heat budget or a watershed’s chemical budget. This is the physicist’s or engineer’s approach to how ecosystems work. The meristic point of view emphasizes analysis of the different parts of the system and how they fit together in their structure and function, for example the various zones of a wetland or a soil profile, or the diverse components of food webs. This is the biologist’s approach to how ecosystems work. Ecosystem studies can also be viewed as a series of elements. The first is, necessarily, a description of the system: its location, boundaries, plant and animal communities, environmental characteristics, etc. Description may be followed by any or all of a series of additional elements, including: 1) a study of how a given ecosystem compares with others
locally, regionally, or globally; 2) how it functions in terms of hydrology, productivity, and biogeochemical cycling of nutrients and toxins; 3) how it has changed over time; and 4) how various environmental factors have controlled its structure, function, and development. Such studies involve empirical observations about relationships within and among ecosystems, experiments to test the causality of such relationships, and model-building to assist in forecasting what may happen in the future. The ultimate in ecosystem studies is a consideration of the structure, function, and development of the global or planetary ecosystem, with a view to understanding and mitigating the deleterious impacts upon it of current human activities. See also Biotic community [Eville Gorham Ph.D.]
RESOURCES
BOOKS
Hagen, J. B. An Entangled Bank: The Origins of Ecosystem Ecology. New Brunswick, NJ: Rutgers University Press, 1992.
PERIODICALS
Herbertson, A. J. “The Higher Units: A Geographical Essay.” Scientia 14 (1913): 199-212.
Tansley, A. G. “The Use and Abuse of Vegetational Concepts and Terms.” Ecology 16 (1935): 284-307.
Ecosystem health
Ecosystem health is a new concept that ecologists are examining as a tool for use in detecting and monitoring changes in the quality of the environment, particularly with regard to ecological conditions. Ecosystem health (and ecological integrity) is an indicator of the well-being and natural condition of ecosystems and their functions. These indicators are influenced by natural changes in environmental conditions and are related to such factors as climate change and disturbances such as wildfire, windstorms, and diseases. Increasingly, however, ecosystems are being affected by environmental stressors associated with human activities that cause pollution and disturbance, resulting in many changes in environmental conditions. Some species, communities, and ecological processes benefit from those environmental changes, but others suffer great damage. The notion of ecosystem health is intended to help distinguish between ecosystem-level changes that represent improvements and those that are considered to be degradations. In the sense meant here, ecosystem-level refers to responses occurring in ecological communities, landscapes, or seascapes. Effects on individual organisms or populations
do not represent an ecosystem-level response to changes in environmental conditions.
The notion of health
The notion of ecosystem health is analogous to that of medical health. In the medical sense, health is a term used to refer to the vitality or well-being of individual organisms. Medical health is a composite attribute, because it is characterized by a diversity of interrelated characteristics and conditions. These include blood pressure and chemistry, wounds and injuries, rational mental function, and many other relevant variables. Health is, in effect, a summation of all of these characters related to vitality and well-being. In contrast, a diagnosis of unhealthiness would focus on abnormal values for only one or several variables within the diverse congregation of health-related attributes. For example, an individual might be judged unhealthy because of a broken leg, or a high fever, or unusually high blood pressure, or abnormal behavioral traits, even though he or she is “normal” with regard to all other traits. The comparison between human and ecosystem health is, however, imperfect in some important respects. Health is a relative concept. It depends on what we consider “normal” at a particular stage of development. The aches and pains that are considered normal in a human at age 80 would be a serious concern in a 20-year-old. It is much more difficult, however, to say what is to be expected in an ecosystem. Ecosystems do not have a prescribed lifespan and generally do not die but rather change into some other form. Because of these problems, some ecologists prefer the notions of ecological or biological integrity rather than ecosystem health. It should also be pointed out that many ecologists like none of these notions (that is, ecosystem health, ecological integrity, or biological integrity). The reason is that, by their very nature, these concepts are imprecise and difficult to define. For these reasons, scientists have had difficulty in agreeing upon the specific variables that should be included when designing composite indicators of health and integrity in ecological contexts.
Ecosystem health
Ecosystem health is a summation of conditions occurring in communities, watersheds, landscapes, or seascapes. Ecosystem health conditions are higher-level components of ecosystems, in contrast with individual organisms and their populations. Although ecosystem health cannot be defined precisely, ecologists have identified a number of specific components that are important in this concept. These include the following indicators: (1) an ability of the system to resist changes in environmental conditions without displaying a large response (this is also known as resistance or tolerance); (2) an ability to recover when the intensity of environmental
stress is decreased (this is known as resilience); (3) relatively high degrees of biodiversity; (4) complexity in the structure and function of the system; (5) the presence of large species and top predators; (6) controlled nutrient cycling and a stable or increasing content of biomass in the system; and (7) domination of the system by native species and natural communities that can maintain themselves without management by humans. Higher values for any of these specific elements imply a greater degree of ecosystem health, while decreasing or lower values imply changes that reflect a less healthy condition. Ecologists are also working to develop composite indicators (or multivariate summations) that would integrate the most important attributes of ecosystem health into a single value. Indicators of this type are similar in structure to composite economic indicators such as the Dow-Jones Stock Market Index and the Consumer Price Index. Because they allow complex situations to be presented in a simple and direct fashion, composite indicators are extremely useful for communicating ecosystem health to the broader public.
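A composite indicator of the kind described above can be sketched as a weighted average of indicator scores. The indicator names, scores, and weights below are hypothetical; real indices differ in how they normalize and weight their components.

```python
# Hypothetical composite ecosystem-health index: each indicator is
# scored 0-1 (1 = healthiest) and the index is a weighted average.

indicators = {  # (score, weight) -- invented example values
    "resistance":            (0.7, 1.0),
    "resilience":            (0.6, 1.0),
    "biodiversity":          (0.8, 2.0),
    "structural complexity": (0.5, 1.0),
    "top predators present": (0.4, 1.0),
    "nutrient retention":    (0.9, 1.0),
    "native dominance":      (0.6, 2.0),
}

total_weight = sum(w for _, w in indicators.values())
index = sum(score * w for score, w in indicators.values()) / total_weight
print(f"composite ecosystem-health index = {index:.2f} (0 = degraded, 1 = healthy)")
```

Like the economic indices mentioned above, such a single number trades detail for communicability; the choice of weights is a value judgment, not a measurement.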
[Bill Freedman Ph.D.]
RESOURCES
BOOKS
Costanza, R., B. G. Norton, and B. D. Haskell. Ecosystem Health: New Goals for Environmental Science. Washington, D.C.: Island Press, 1992.
DiGiulio, R., and E. Monosson. Interconnections Between Human and Ecosystem Health. New York: Chapman and Hall, 1996.
Woodley, S., J. Kay, and G. Francis, eds. Ecological Integrity and the Management of Ecosystems. Boca Raton, FL: St. Lucie Press, 1993.
Ecosystem management
Ecosystem management (EM) is a concept that has germinated within the past 20 years and continues to increase in popularity across the United States and Canada. It is a concept that eludes one concise definition, however, because it embodies different meanings in different contexts and for different people and organizations. This can be witnessed in the multiple variations on its title, e.g., ecosystem-based management or collaborative ecosystem management. The definitions that have been given for EM, though varied, fall into two distinct groups. One group emphasizes long-term ecosystem integrity, while the other emphasizes an intention to address all concerns equally, be they economic, ecological, political, or social, by actively engaging and incorporating the multitude of stakeholders (literally, those who “hold a stake” in the issue) into the decision-making process. One usable though incomplete definition of EM is provided by R. Edward Grumbine: “Ecosystem management integrates scientific knowledge of ecological relationships
within a complex sociopolitical and values framework toward the general goal of protecting native ecosystem integrity over the long term.” Ultimately, EM is a new way to make decisions about how we humans should live with each other and with the environment that supports us. And it is best defined not only by articulating an ideal description of its contents, as Grumbine has done, but also through a rigorous analysis of actual EM examples.
EM—a new management style
Between the years 1992 and 1994, each of the four predominant federal land management agencies in the United States (the National Park Service, the Bureau of Land Management, the Forest Service, and the Fish and Wildlife Service) implemented EM as its operative management paradigm. Combined, these four agencies control 97% of the 650 million federally-owned acres (267 million ha) in the United States, or roughly 30% of the United States’ entire land area. EM has become the primary management style for these agencies because of a fact that became unavoidably apparent in the 1980s and 1990s: the traditional resource management style does not work. It is largely ineffective in addressing the loss and fragmentation of wild areas, the increasing number of threatened or endangered species, and the increased occurrence of environmental disputes. This ineffectiveness has been attributed to the traditional management style’s focus on mainly species with economic value, its exclusion of the public from the decision-making process, and its reliance on outdated ecological beliefs. This explicit acknowledgment that the traditional management style is inadequate has coalesced within state and federal agencies, academia, and environmental organizations, and has been bolstered by advances in other relevant fields, such as ecology and conflict management. If we break the traditional management style into individual components, we see that each ineffective attribute has a new or altered counterpart in EM. One of the best ways to describe what EM actually entails is to explicate this juxtaposition between traditional and new management styles. EM, as its name makes clear, concentrates on managing at the scale of an ecosystem. Alternately, traditional resource management has tended to focus on only one or a handful of species, especially those species that have a utilitarian, or more specifically economic, value. For example, the U.S. Forest Service has traditionally managed the national forests so as to produce a sustained yield of timber. This management style is often harmful to species other than the timber trees and can have negative effects on the entire ecosystem. In EM, all significant biotic and abiotic components of the ecosystem, as well as aspects such as economic factors, are, ideally, reviewed and the important ecological
data incorporated into the decision-making process. For example, review of a forest ecosystem may include an analysis of habitat for significant song birds, a description of the requirements needed to maintain a healthy black bear (Ursus americanus) population, and a discussion of acceptable levels of timber production. A major problem associated with using an ecosystem to define one’s management area is that boundaries of jurisdictional authority, or political boundaries, rarely follow ecological ones. This implies that by following political boundaries alone, ecological components may be left out of the management plan; one may be forced to manage only part of an ecosystem, that part which is within one’s political jurisdiction. For example, the Greater Yellowstone Ecosystem goes far beyond the boundaries of Yellowstone National Park. Therefore, a large-scale EM project for Yellowstone would require crossing several political boundaries (e.g., national park lands and national forest lands), which is a difficult task because it entails several political jurisdictions and political entities (e.g., state and federal agencies, and county governments). EM projects address this obstacle by forming decision teams that include, among others, representatives from all of the relevant jurisdictions. These decision-making bodies can either act as advisory committees without decision-making authority, or they can attempt to become vested with the power to make decisions. This collaborative process involves all of the stakeholders, whether that stakeholder is a logging company interested in timber production or a private citizen concerned with water quality. Such collaboration diverges from the traditional resource management method, which made most decisions “behind closed doors,” asking for and receiving little public input. These agencies traditionally shied away from actively engaging the public for three reasons: it is easier and faster to make decisions on one’s own than to ask for input from many sources; there has often been an antagonistic and distrustful relationship between state and federal agencies and the public; and there has been little institutional support (i.e., within the structure of the agency itself) encouraging the manager in the field to invite the public into the decision-making process. EM’s more collaborative and inclusive decision-making style ideally fosters a wiser and more effective decision. This happens because as the decision team works toward consensus, personal relationships are established, some trust may form between parties, and, ultimately, people are more likely to support a decision or plan they helped create. EM attempts to transcend the traditional antagonistic relationship between agency personnel and the public. Because 70% of the United States is privately owned, many environmental issues arise on private land—land that is only partially subject to federal and state natural resource legislation. EM allows
us to deal with these issues on private lands by establishing a dialogue between private and public decision makers. Finally, even though the EM decision-making style takes longer to conduct, time is saved in the end because the decision achieved is more agreeable to all interested parties. Having all parties agree to a particular management plan decreases the number of potential lawsuits which can arise and delay the plan’s implementation.
Nonequilibrium ecology and EM
A change in the dominant theories in ecology has encouraged this switch to EM and an ecosystem-level focus. The idea that environments achieve a climax state of homeostasis has been a significant theory in ecology since the early 1900s. This view, now discredited, was most vigorously articulated by Frederic Clements and holds that all ecosystems have a particular end point to which they each progress, and that ecosystems are closed systems. Disturbances such as fires or floods are considered only temporary setbacks on the ecosystem’s ultimate progression to a final state. This theory offers a certain level of predictable stability: the type of stability desired within traditional resource management. For example, if a forest is in its climax state, that condition can be maintained by eliminating disturbances such as forest fires, and a predictable level of harvestable timber can be extracted (hence, this theory contributed to the creation of “Smokey the Bear” and the national policy of stopping forest fires on public land). This teleological view of nature has waxed and waned in importance, but has lost favor especially within the past two decades. Ecologists, among others, have realized that from certain temporal and spatial points of view ecosystems may seem to be in equilibrium, but in the long term all ecosystems are in a state of nonequilibrium. That is, ecosystems always change. They change because their ecological structure and function are often regulated by dynamic external forces such as storms or droughts and because they comprise varied habitat types which change and affect one another. This acknowledgment of a changing ecosystem means that predictable stability does not really exist and that an adaptive management style is needed to meet the changing requirements of a dynamic ecosystem. EM is adaptive. After an EM decision team has formulated and implemented a management plan, the particular ecosystem is monitored. Significant biotic and abiotic factors are watched to see if and how they are altered by the management practices. For example, if logging produces changes which affect the fish in one of the ecosystem’s streams, the decision team could adapt to the new data and decide to relocate the logging. The ability to adaptively manage is a crucial aspect of EM, one that is not incorporated in traditional resource management.
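The monitor-and-adjust cycle of adaptive management can be expressed as a simple feedback loop. The sketch below is purely illustrative: the indicator, threshold, and responses are invented, and real EM monitoring programs are far richer.

```python
# Illustrative adaptive-management loop: monitor an indicator after
# implementing a plan, and adjust the practice when it degrades.
# Indicator values, threshold, and actions are invented examples.

THRESHOLD = 50          # hypothetical minimum acceptable fish count

def next_action(fish_count: int) -> str:
    # The feedback step: compare monitoring data against the goal
    # set by the decision team, then adapt the management practice.
    if fish_count < THRESHOLD:
        return "relocate logging away from the stream"
    return "continue current plan"

for year, fish_count in enumerate([80, 62, 45, 58], start=1):
    print(f"year {year}: {fish_count} fish -> {next_action(fish_count)}")
```

The point is the loop itself: monitoring feeds back into the plan, which is what distinguishes EM from the set-and-forget approach of traditional management.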
As stated, the traditional management style emphasizes one or a few species and assumes that ecosystems are mostly stable. So one only needs to observe how a particular species fares to determine the necessary management practices; little monitoring is conducted, and previous management practices are rarely altered. In EM, management practices are constantly reviewed and adjusted as a result of ongoing data gathering and the goals articulated by the decision team.
The future of EM
The future of EM in the United States and Canada looks stable, and other countries such as France are beginning to use EM. There are many impediments, though, to the successful implementation of EM. Institutions, such as the United States’ federal land management agencies, are often hesitant to actually change, and when they do change it happens very slowly; there are still many legal questions surrounding the legitimacy of implementing EM on federal, state, and private lands; and, even though we attempt to review the entire ecosystem in EM examples, we still lack significant understanding of how even the most basic ecosystems operate. Even given these impediments, it looks as if EM will be the primary land management style of the United States and Canada well into the next century.
[Paul Phifer Ph.D.]
RESOURCES
BOOKS
Gunderson, L. H., C. S. Holling, and S. S. Light. Barriers and Bridges to the Renewal of Ecosystems and Institutions. New York: Columbia University Press, 1995.
Lee, K. N. Compass and Gyroscope: Integrating Science and Politics for the Environment. Washington, D.C.: Island Press, 1993.
Yaffee, S. L., et al. Ecosystem Management in the United States: An Assessment of Current Experience. Washington, D.C.: Island Press, 1996.
PERIODICALS
Grumbine, R. E. “What Is Ecosystem Management?” Conservation Biology 8 (1994): 27-38.
Ecotage see Ecoterrorism; Monkey-wrenching
Ecoterrorism
In the wake of the terrorist attacks on the World Trade Center in New York City on September 11, 2001, the line between radical environmental protest (sometimes called ecoanarchism) and terrorism became blurred by strong emotions on all sides. Environmentalists in America have long held passionate beliefs about protecting the environment and saving threatened species from extinction, as well as
treating animals humanely and protesting destructive business practices. One of the first and greatest environmentalists, Henry David Thoreau (1817–1862), wrote about the doctrine of “civil disobedience,” or using active protest as a political tool. The author Edward Abbey (1927–1989) became a folk hero among environmentalists when he wrote the novel The Monkey Wrench Gang in 1975. In that book, a group of militant environmentalists practiced monkey-wrenching, or sabotaging machinery, in desperate attempts to stop logging and mining. Monkey-wrenching, in its destruction of private property, goes beyond civil disobedience and is unlawful. The American public has tended to view monkey-wrenchers as idealistic youth fighting for the environment and has not strongly condemned them for their actions. However, the U.S. government views monkey-wrenching as domestic terrorism, and since September 11, 2001, law enforcement activity concerning environmental groups has been significantly increased. Ecoanarchism is the philosophy of certain environmental or conservation groups that pursue their goals through radical political action. The name reflects both their relation to older anarchist revolutionary groups and their distrust of official organizations. Nuclear issues, social responsibility, animal rights, and grass-roots democracy are among the concerns of ecoanarchists. Ecoanarchists tend to view mainstream political and environmental organizations as too passive, and those who maintain them as compromising or corrupt. Ecoanarchists may resort to direct confrontation, direct action, civil disobedience, and guerrilla tactics to fight for the survival of wild places. Monkey-wrenchers perform sit-ins in front of bulldozers; they disable machinery in various ways, including pouring sand in a bulldozer’s gas tank; they ram whaling ships; and they spike trees by driving metal bars into them to discourage logging. Ecoanarchists may practice ecotage, which is sabotage for environmental ends, often of machines that alter the landscape. In the early 2000s, radical environmentalists burned 35 sport utility vehicles (SUVs) at a car dealership in Eugene, Oregon, to protest the gas-guzzling vehicles, and set fire to buildings at the University of Washington to protest genetic engineering, for example. Ecoanarchists do not necessarily view the destruction of machinery as out-of-bounds, but the U.S. government views it as terrorism. For instance, Earth First!, a radical environmental group whose motto is “No Compromise in Defense of Mother Earth,” takes a stand against violence against humans but does approve of monkey-wrenching. The Federal Bureau of Investigation (FBI) defines terrorism as “the unlawful use, or threatened use, of violence by a group or individual ... committed against persons or property to intimidate or coerce a government, the civilian population or any segment thereof, in furtherance of political or social
objectives.” The FBI reports that an estimated 600 criminal acts of ecoterrorism have occurred in the United States since 1996, with damages estimated at $43 million. Two groups associated with these acts of sabotage are the militant Earth Liberation Front (ELF) and the Animal Liberation Front (ALF). In 1998, ELF took credit for arson at Vail Ski Resort, which resulted in damages of $12 million. The act was a protest over the resort’s expansion onto mountain ecosystems. Most environmental groups are more peaceful and have voiced concern over radical environmentalists, as well as over law enforcement officials who view all environmental protesters as terrorists. For instance, Greenpeace activists have been known to steer boats into the path of whaling ships and throw paint on nuclear vessels as protest, and have been jailed for doing so, although no people were targeted or injured. People for the Ethical Treatment of Animals (PETA) members have thrown pies in the faces of business executives whom they accused of inhumane treatment of animals, considering the act a form of civil disobedience and not violence. The issues of ecoterrorism and ecoanarchism become more heated as environmental degradation worsens. Environmentalists become more desperate to protect rapidly disappearing endangered areas or species, while industry continually seeks new resources to replace those being used up. Caught in the middle are law enforcement officials, who must protect against violence and destruction of property and also uphold citizens’ basic rights to protest. The most famous ecoterrorist has been Theodore Kaczynski, also known as the Unabomber, who was convicted of murder in the mail-bombing of the president of the California Forestry Association. On the other end of the spectrum of environmental protest is Julia Butterfly Hill, whom ecoterrorists and ecoanarchists would do well to emulate. Hill is an activist who lived in a California redwood tree for two years to prevent it from being cut down by the Pacific Lumber Company. Hill practiced a nonviolent form of civil disobedience and endured what she perceived as violent actions from loggers and the timber company. Her peaceful protest brought national attention to the issue of logging in ancient growth forests, and a compromise was eventually reached between environmentalists and the timber company. Unfortunately, the tree in which Hill sat, which she named Luna, was damaged by angry loggers.
[Douglas Dupler]
RESOURCES
BOOKS
Abbey, Edward. The Monkey Wrench Gang. Salt Lake City: Dream Garden Press, 1990.
Hill, Julia Butterfly. The Legacy of Luna: The Story of a Tree, a Woman, and the Struggle to Save the Redwoods. San Francisco: Harper, 2000.
Thoreau, Henry David. Civil Disobedience, Solitude and Life Without Principle. Amherst, NY: Prometheus Books, 1998.
Whitaker, David J. The Terrorism Reader. New York: Routledge, 2001.
PERIODICALS
Chase, Alston. “Harvard and the Making of the Unabomber.” Atlantic Monthly, June 2000, 41.
Earth First! Journal. P.O. Box 3023, Tucson, AZ 85702. (520) 620-6900.
Richardson, Valerie. “FBI Targets Domestic Terrorists.” Insight on the News, 22 April 2002, 30.
Ecotone
The boundary between adjacent ecosystems is known as an ecotone. For example, the intermediary zone between a grassland and a forest constitutes an ecotone that has characteristics of both ecosystems. The transition between the two ecosystems may be abrupt or, more commonly, gradual. Because of the overlap between ecosystems, an ecotone usually contains a larger variety of species than is to be found in either of the separate ecosystems and often includes species unique to the ecotone. This effect is known as the edge effect. Ecotones may be stable or variable. Over a period of time, for example, a forest may invade a grassland. Changes in precipitation are an important factor in the movement of ecotones.
Ecotourism
Ecotourism is ecology-based tourism, focused primarily on natural or cultural resources such as scenic areas, coral reefs, caves, fossil sites, archeological or historical sites, and wildlife, particularly rare and endangered species. The successful marketing of ecotourism depends on destinations which have biodiversity, unique geologic features, and interesting cultural histories, as well as an adequate infrastructure. In the United States, national parks are perhaps the most popular destinations for ecotourism, particularly Yellowstone National Park, the Grand Canyon, the Great Smoky Mountains, and Yosemite National Park. In 1999, there were 300 million recreational visits to the national parks. Some of the leading ecotourist destinations outside the United States include the Galapagos Islands in Ecuador, the wildlife parks of Kenya, Tanzania, and South Africa, the mountains of Nepal, and the national parks and forest reserves of Costa Rica. Tourism is the second largest industry in the world, producing over $195 billion in domestic and international receipts and accounting for more than 7% of the world’s trade in goods and services. There were 693 million international tourists in 2001, creating 74 million tourism jobs. Adventure
tourism, which includes ecotourism, accounts for 10% of this market. In developing countries tourism can comprise as much as one-third of trade in goods and services, and much of this is ecotourism. Wildlife-based tourism in Kenya, for example, generates $350 million annually. Ecotourism is not a new phenomenon. In the late 1800s railroads and steamship companies were instrumental in the establishment of the first national parks in the United States, recognizing even then the demand for experiences in nature and profiting from transporting tourists to destinations such as Yellowstone and Yosemite. However, ecotourism has recently taken on increased significance worldwide. There has been a tremendous increase in demand for such experiences, with adventure tourism increasing at a rate of 30% annually. But there is another reason for the increased significance of ecotourism. It is a key strategy in efforts to protect cultural and natural resources, especially in developing countries, because resource-based tourism provides an economic incentive to protect resources. For example, rather than converting tropical rain forests to farms which may be short-lived, income can be earned by providing goods and services to tourists visiting the rain forests. Although ecotourism has the potential to produce a viable economic alternative to exploitation of the environment, it can also threaten it. Water pollution, litter, disruption of wildlife, trampling of vegetation, and mistreatment of local people are some of the negative impacts of poorly planned and operated ecotourism. To distinguish themselves from destructive tour companies, many reputable tour organizations have adopted environmental codes of ethics which explicitly state policies for avoiding or minimizing environmental impacts. In planning destinations and operating tours, successful firms are also sensitive to the needs and desires of the local people, for without native support, efforts in ecotourism often fail. Ecotourism can provide rewarding experiences and produce economic benefits that encourage conservation. The challenge upon which the future of ecotourism depends is the ability to carry out tours which clients find rewarding without degrading the natural or cultural resources upon which ecotourism is based. See also Earthwatch; National Park Service
[Ted T. Cable]
RESOURCES
BOOKS
Boo, E. Ecotourism: The Potentials and Pitfalls. 2 vols. Washington, DC: World Wildlife Fund, 1990.
Ocko, Stephanie. Environmental Vacations: Volunteer Projects to Save the Planet. 2nd ed. Santa Fe, NM: John Muir, 1992.
Whelan, T., ed. Nature Tourism: Managing for the Environment. Washington, DC: Island Press, 1991.
A group of tourists visits the Galapagos Islands. (Photograph by Anthony Wolff. Phototake. Reproduced by permission.)
Ecotoxicology Ecotoxicology is a field of science that studies the effects of toxic substances on ecosystems. It analyzes environmental damage from pollution and predicts the consequences of proposed human actions in both the short and long term. With more than 100,000 chemicals in commercial use and thousands more being introduced each year, the scale of the task is daunting. Ecotoxicologists have a variety of methods with which they measure the impact of harmful substances on people, plants, and animals. Toxicity tests measure the response of biological systems to a substance to determine if it is toxic. A test could study, for example, how well fish live, grow, and reproduce in various concentrations of industrial effluent. Another could evaluate the point at which metal contaminants in soil damage plants' ability to convert sunlight into food. Still another could measure how various concentrations of pesticides in agricultural runoff affect sediment and nutrient absorption in wetlands. Analyses of chemical fate (i.e., where a pollutant goes once it is released into the environment) can be combined with toxicity-test information to predict environmental response to pollution. Because toxicity is the interaction between a living system and a substance, only living plants, animals,
and systems can be used in these experiments. There is no other way to measure toxicity. Another tool used in ecotoxicology is the field survey. It describes ecological conditions in both healthy and damaged natural systems, including pollution levels. Surveys often focus on the number and variety of plants and animals supported by the ecosystem, but they can also characterize other valued attributes such as crop yield, commercial fishing, timber harvest, or aesthetics. Information from a number of field surveys can be combined for an overview of the relationship between pollution levels and the ecological condition. A logical question might be: why not merely measure the concentration of a toxic substance and predict what will happen from that? The answer is that chemical analysis alone cannot predict environmental consequences in most cases. Interactions between toxicants and the components of ecosystems are not well understood; in addition, ecotoxicologists have not yet developed simulation models that would allow them to make predictions based on chemical concentration alone. For example:
• An ecosystem's response to toxic materials is greatly influenced by environmental conditions. The concentration of zinc that will kill bluegill sunfish in Virginia's soft waters will not kill them in the much harder waters of Texas. Most of the relationships between environmental conditions and toxicity have not been established.
• Most pollution is a complex mixture of chemicals, not just a single substance. In addition, some chemicals are more harmful when combined with other toxicants.
• Some chemicals are toxic at concentrations too small to be measured.
• An organism's response to toxic materials can be influenced by other organisms in the community. For example, a fish exposed to pollution may be unable to escape from its predators.
Ecotoxicologists use all three kinds of information: field surveys, chemical analyses, and toxicity tests. Field surveys prove that some important characteristic of the ecosystem has been damaged, chemical measurements confirm the presence of a toxicant, and toxicity tests link a particular toxicant to a particular type of damage. The scope of environmental protection has broadened considerably over the years, and the types of ecotoxicological information required have also changed. Toxicity testing began as an interest in the effects of various substances on human health. Gradually this concern extended to the other organisms that were most obviously important to humans, domestic animals and crop plants, and finally spread to other organisms that are less apparent or universal in their importance. Hunters are interested in deer, fishers in fish,
bird watchers in eagles or pelicans, and beachcombers in loggerhead turtles. Keeping these organisms healthy requires studying the effects of pollution on them. In addition, the toxicant must not eliminate or taint the plants and/or animals upon which they feed, nor can it destroy their habitat. Indirect effects of toxicants, which can also be devastating, are difficult to predict. A chemical that is not toxic to an organism, but instead destroys the grasses in which it lays eggs or hides from predators, will be indirectly responsible for the death of that organism. Protecting all the species that people value, along with their food and habitat, is a small step toward universal protection. An ambitious goal is to prevent the loss of any existing species, regardless of its appeal or known value to human society. Because each one of the millions of species on this planet cannot be tested before a chemical is used, ecotoxicologists test a few "representative" species to characterize toxicity. If a pollutant could be found in rivers, testing might be done on an alga, an insect that eats algae, a fish that eats insects, and a fish that eats fish. Other representative species are chosen by their habitat: on the bottom of rivers, midstream, or in the soil. Regardless of the sampling scheme, however, thousands of organisms that will be affected by a pollutant will not be tested. Some prediction must be made about their response, nonetheless. Statistical techniques can predict the response of organisms in general from information on a few randomly selected organisms. Another approach tests the well-being of higher levels of biological organization. Since natural communities and ecosystems are composed of a large number of interacting species, the response of the whole reflects the responses of its many constituents. The health of a large and complex ecosystem cannot be measured in the same way as the health of a single species, however. Different attributes are important. For example, examining a single species like cattle or trout might require measuring respiration, reproduction, behavior, growth, or tissue damage. The condition of an ecosystem, on the other hand, might be determined by measuring production, nutrient spiralling, or colonization. Since people depend on ecosystems for food production, waste processing, and biodiversity, keeping them healthy is important. [John Cairns, Jr.]
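The response-versus-concentration data produced by toxicity tests of the kind described above are commonly summarized by a statistic such as the LC50, the concentration lethal to half the test organisms. A minimal sketch of that calculation, using hypothetical survival data and a simple two-parameter log-logistic model (neither the data nor the model choice comes from this entry):

```python
# Estimate an LC50 from hypothetical fish-survival data by fitting a
# two-parameter log-logistic dose-response curve (an assumed, common
# model choice, not one prescribed by this entry).
import numpy as np
from scipy.optimize import curve_fit

def survival(conc, lc50, slope):
    # Fraction surviving; equals 0.5 exactly when conc == lc50.
    return 1.0 / (1.0 + (conc / lc50) ** slope)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # effluent, mg/L (hypothetical)
surv = np.array([0.98, 0.95, 0.80, 0.45, 0.15, 0.02])  # fraction of fish surviving

(lc50, slope), _ = curve_fit(survival, conc, surv, p0=[3.0, 1.0])
print(f"Estimated LC50 = {lc50:.2f} mg/L (slope {slope:.2f})")
```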
RESOURCES BOOKS Carson, R. Silent Spring. Boston: Houghton Mifflin, 1962. Coˆte´, R. P., and P. G. Wells. Controlling Chemical Hazards. Boston: Unwin Hyman, 1991. Levin, S. A., et al., eds. Ecotoxicology: Problems and Approaches. New York: Springer-Verlag, 1989.
Environmental Encyclopedia 3 Wilson, E. O. Biodiversity. Washington, DC: National Academy Press, 1988.
Ecotype A recognizable geographic variety, population, or ecological race of a widespread species that is equivalent to a taxonomic subspecies. Typically, ecotypes are restricted to one habitat and are recognized by distinctive characteristics resulting from adaptations to local selective pressures and isolation. For example, a population or ecotype of species found at the foot of a mountain may differ in size, color, or physiology from a different ecotype living at higher altitudes, thus reflecting a sharp change in local selective pressures. Members of an ecotype are capable of interbreeding with other ecotypes within the same species without loss of fertility or vigor.
Ectopistes migratorius see Passenger pigeon
Edaphic Refers to the concept that soils have influence on living things, particularly plants. For example, soils with a low pH will more likely have plants growing on them that are adapted to this level of soil acidity. Animals living in the area will most likely eat these acid-loving plants. The more extreme a soil characteristic, the fewer kinds of plants and animals are able to adapt to the soil environment.
Edaphology The ecological study of soil, including its role, value, and management as a medium for plant growth and as a habitat for animals. This branch of soil science covers physical, chemical, and biological properties, including soil fertility, acidity, water relations, gas and energy exchanges, microbial ecology, and organic decay.
Eelgrass Eelgrass is the common name for a genus of perennial grasslike flowering plants referred to as Zostera. Zostera is from the Greek word zoster, meaning belt, which describes the dark green, long, narrow, ribbon shape of the leaves that range from 20 to 50 cm in length, but can grow up to 2 m. Eelgrass grows under water in estuaries and in shallow coastal areas. Eelgrass is a member of a group of land plants
that migrated into the sea in relatively recent geologic times, and is not a seaweed. Eelgrass grows by the spreading of rhizomes and by seed germination. Eelgrass flowers are hidden behind a transparent leaf sheath. Long filamentous pollen is released into the water, where it is spread by waves and currents. Both leaves and rhizomes contain lacunae, which are air spaces that provide buoyancy. Eelgrass grows rapidly in shallow waters and is highly productive, thus providing habitats and food for many marine organisms in its stems, roots, leaves, and rhizomes. Eelgrass communities (meadows) provide many important ecological functions, including:
• anchoring of sediments with the spreading of rhizomes, which prevents erosion and provides stability
• decreasing the impact of waves and currents, resulting in a calm environment where organic materials and sediments can be deposited
• providing food, breeding areas, shelter, and protective nurseries for marine organisms of commercial, recreational, and ecological importance
• concentrating nutrients from seawater that are then available for use in the food chain
• serving as food for waterfowl and other animals such as snails and sea urchins
• as detritus (decaying plant matter), providing nutrition to organisms within the eelgrass community, in adjoining marshes, and in offshore sinks at depths up to 30,000 feet
Eelgrass growth can be adversely impacted by human activities, including dredging, logging, shoreline or overwater construction, power plants, oil spills, pollution, and species invasion. Although transplantation projects designed to restore eelgrass meadows have been initiated on both coasts of the United States, creation of a large-scale meadow with the complex functions and relationships of a natural eelgrass system has not yet been achieved. [Judith L. Sims]
RESOURCES OTHER Gussett, Diana. Eelgrass. Port Townsend Marine Science Center. [cited May 27, 2002]. <www.ptmsc.org/html/eelgrass.html#lore>.
Effluent The etymological meaning of this term is "flowing forth out of." For geologists, this word refers to a wide range of situations, from lava flowing from a volcano to a river flowing out of a lake. However, effluent is now most commonly used by environmentalists in reference to the Clean
Water Act of 1977. In this act, effluent is a discharge from a point source, and the legislation specifies allowable
quantities of pollutants. These discharges are regulated under Section 402 of the act, and these standards must be met before these types of industrial and municipal wastes can be released into surface waters. See also Industrial waste treatment; Thermal pollution; Water pollution; Water quality
Effluent tax Effluent tax refers to the fee paid by a company to discharge
to a sewer. As originally proposed, the fee would have been paid for the privilege of discharging to the environment. However, there is presently no fee structure which allows a company, municipality, or person to contaminate the environment above the levels set by water quality criteria and effluent permits, unless fines levied by a regulatory agency could be deemed to be such fees. Fees are now charged on the basis of simply being connected to a sewer and the types and levels of materials discharged to the sewer. For example, a municipality might charge all sewer customers the same rate for domestic sewage discharges below a certain flowrate. Customers discharging wastewater at a higher strength and/or flowrate would be assessed an incremental fee proportional to the increased amount of contaminants and/or flow. This charge is often referred to as a sewer charge, fee, or surcharge. There are cases in which wastewater is collected and tested to ensure that it meets certain criteria (e.g., a maximum allowable level of oil or a toxic metal) before it is discharged to a sewer. If the criteria are exceeded, the wastewater would require pretreatment. Holding the water before discharge is generally only practical when flowrates are low. In other situations, it may not be possible to discharge to a sewer because the treatment facility is unable to treat the waste; for example, many hazardous materials must be managed in a different manner. Effluent taxes force dischargers to integrate environmental concerns into their economic plans and operational procedures. The fees cause firms and agencies to re-think water conservation policies, waste minimization, processing techniques and additives, and pollution control strategies. Effluent taxes, as originally proposed, might be instituted as an alternative to stringent regulation, but the sentiment of the current public and regulatory agencies is to block any significant degradation of the environment. It appears to be too risky to allow pollution on a fee basis, even when the fees are high. Thus, in the foreseeable future, effluents to the environment will continue to be controlled by means of effluent limits, water quality criteria, and fines.
However, the taxing of effluents to a sewer is a viable means of challenging industries to enhance pollution control measures and of stabilizing the performance of downstream treatment facilities. [Gregory D. Boardman]
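A minimal sketch of the surcharge arithmetic described above. All rates and base levels, and the use of BOD (biochemical oxygen demand) as the strength measure, are hypothetical illustrations, not figures from this entry:

```python
# Hypothetical sewer-surcharge calculation: a flat fee covers connection
# and domestic-strength discharge; excess flow and excess contaminant
# strength are billed incrementally, as the entry describes.
BASE_FEE = 40.00     # flat monthly fee, dollars (assumed)
BASE_FLOW = 800.0    # flow covered by the base fee, m^3/month (assumed)
BASE_BOD = 250.0     # domestic-strength BOD, mg/L (assumed)
FLOW_RATE = 0.50     # dollars per m^3 of excess flow (assumed)
BOD_RATE = 0.15      # dollars per kg of excess BOD load (assumed)

def sewer_charge(flow_m3: float, bod_mg_per_l: float) -> float:
    charge = BASE_FEE + max(flow_m3 - BASE_FLOW, 0.0) * FLOW_RATE
    # 1 mg/L equals 1 g/m^3, so (mg/L) * (m^3) gives grams; divide by 1000 for kg.
    excess_bod_kg = max(bod_mg_per_l - BASE_BOD, 0.0) * flow_m3 / 1000.0
    return charge + excess_bod_kg * BOD_RATE

# A high-strength, high-flow industrial customer:
print(f"${sewer_charge(2000.0, 600.0):.2f}")  # $40 base + $600 flow + $105 BOD = $745.00
```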
RESOURCES BOOKS Davis, M. L., and D. A. Cornwell. Introduction to Environmental Engineering. New York: McGraw-Hill, 1991. Peavy, H. S., D. R. Rowe, and G. Tchobanoglous. Environmental Engineering. New York: McGraw-Hill, 1985. Tchobanoglous, G., and E. D. Schroeder. Water Quality. Reading, MA: Addison-Wesley, 1985. Viessman, W., Jr., and M. J. Hammer. Water Supply and Pollution Control. 5th ed. New York: Harper Collins, 1993.
Eggshell thinning see Dichlorodiphenyltrichloroethane
EH A measure of the oxidation/reduction status of a natural water, sediment, or soil. It is a relative electrical potential, measured with a potentiometer (e.g., a pH meter adjusted to read in volts) using an inert platinum electrode and a reference electrode (calomel or silver/silver chloride). EH is reported in volts or millivolts, and is referenced to the potential for the oxidation of hydrogen gas to hydrogen ions (H+). This electron transfer reaction is assigned a potential of zero on the relative potential scale. The oxidation and reduction reactions in natural systems are pH dependent and the interpretation of EH values requires a knowledge of the pH. At pH 7, the EH in water in equilibrium with the oxygen in air is +0.76 V. The lowest possible potential is about -0.4 V, when oxygen and other electron acceptors are depleted and methane, carbon dioxide, and hydrogen gas are produced by the decay of organic matter. See also Electron acceptor and donor
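In practice, the potentiometer reading is taken against the calomel or silver/silver chloride reference electrode mentioned above and then corrected to the standard hydrogen scale. A minimal sketch of that correction; the reference-electrode potentials are standard textbook approximations near 25°C, not values from this entry:

```python
# Correct a platinum-electrode reading to EH (potential versus the
# standard hydrogen electrode, SHE) by adding the reference electrode's
# own potential. Values are standard approximations near 25 degrees C.
REFERENCE_POTENTIAL_V = {
    "saturated_calomel": 0.244,   # approximate, volts vs. SHE
    "ag_agcl_sat_kcl": 0.199,     # approximate, volts vs. SHE
}

def eh_volts(measured_v: float, reference: str) -> float:
    return measured_v + REFERENCE_POTENTIAL_V[reference]

# A reduced sediment reading of -0.350 V against Ag/AgCl:
print(f"EH = {eh_volts(-0.350, 'ag_agcl_sat_kcl'):+.3f} V")  # EH = -0.151 V
```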
Paul Ralph Ehrlich (1932 – ) American population biologist and ecologist Born in Philadelphia, Paul Ehrlich had a typical childhood during which he cultivated an early interest in entomology and zoology by investigating the fields and woods around his home. As he entered his teen years, Ehrlich grew to be an avid reader. He was particularly influenced by ecologist William Vogt's book, Road to Survival (1948), in which the author outlined the potential global consequences
of imbalance between the growing world population and the level of food supplies available. This concept is one Ehrlich has discussed and examined throughout his career. After high school, Ehrlich attended the University of Pennsylvania, where he earned his undergraduate degree in zoology in 1953. He received his master's degree from the University of Kansas two years later and continued at the university to receive his doctorate in 1957. His degrees led to post-graduate work on various aspects of entomological projects, including observing flies on the Bering Sea, the behavioral characteristics of parasitic mites, and (his favorite) the population control of butterfly caterpillars with ants rather than pesticides. Other related field projects have taken him to Africa, Alaska, Australia, the South Pacific and South East Asia, Latin America, and Antarctica. His travels enabled him to learn first-hand the ordeals endured by those in overpopulated regions. In 1954, he married Anne Fitzhugh Howland, a biological research associate, with whom he wrote the best-selling book, The Population Bomb (1968). In the book, the Ehrlichs focus on a variety of factors contributing to overpopulation and, in turn, world hunger. It is evident throughout the book that the words and warnings of Road to Survival continued to exert a strong influence on Ehrlich. The authors warned that birth and death rates worldwide need to be "brought into line" before nature intervenes and renders the human race extinct (through ozone layer depletion, global warming, and soil exhaustion, among other forms of environmental degradation). They argued that human reproduction, especially in highly developed countries like the United States, should be discouraged by levying taxes on diapers, baby food, and other related items, and that compulsory sterilization should be enacted among the populations of certain countries (the authors' feelings on compulsory sterilization have relaxed somewhat since 1968). Ehrlich himself underwent a vasectomy after the birth of the couple's first and only child. In 1968, Ehrlich founded Zero Population Growth, Inc., an organization established to create and rally support for balanced population levels and the environment. He has been a faculty member at Stanford University (California) since 1959 and currently holds a full professor position there in the Biological Sciences Department. In addition, Ehrlich has been a news correspondent for NBC since 1989. In 1993, he was awarded the Crafoord Prize in Population Biology and the Conservation of Biological Diversity from the Royal Swedish Academy of Sciences and received the World Ecology Medal from the International Center for Tropical Ecology. In 1994, Ehrlich received the United Nations Environment Programme Sasakawa Environment Prize. Among Ehrlich's published works are The Population Bomb (1968), The Cold and the Dark: The World After Nuclear
War (1984); The Population Explosion (1990); Healing the Planet (1991); Betrayal of Science and Reason (1996), which was written with his wife; and his most recent work, Human Natures: Genes, Cultures, and the Human Prospect, published in 2000. In 2001, Ehrlich was the recipient of the Distinguished Scientist Award. He continues his teaching as a professor of Population Studies at Stanford. [Kimberley A. Peterson]
RESOURCES BOOKS Ehrlich, P. R. The Population Bomb. New York: Ballantine Books, 1968. ———. Healing the Planet: Strategies for Solving the Environmental Crisis. Reading, MA: Addison-Wesley, 1992. ———. Human Natures: Genes, Cultures, and the Human Prospect. Washington, D.C.: Island Press/Shearwater Books, 2000. ———, and A. H. Ehrlich. The Population Explosion. New York: Simon & Schuster, 1990. ———, A. H. Ehrlich, and J. Holdren. Eco-Science: Population, Resources, and Environment. San Francisco: W. H. Freeman, 1970.
PERIODICALS Daily, G. C., and P. R. Ehrlich. "Population, Sustainability, and Earth's Carrying Capacity." BioScience 42 (November 1992): 761–71. "Distinguished Scientist Award." BioScience 51, no. 5 (May 2001): 416.
Eichhornia crassipes see Water hyacinth
EIS see Environmental Impact Statement
El Niño El Niño is the most powerful weather event on the earth, disrupting weather patterns across half the earth's surface. Its three- to seven-year cycle brings lingering rain to some areas and severe drought to others. El Niño develops when currents in the Pacific Ocean shift, bringing warm water eastward from Australia toward Peru and Ecuador. Heat rising off warmer water shifts patterns of atmospheric pressure, interrupting the high-altitude wind currents of the jet stream and causing climate changes. El Niño, "Christ child" or "the child" in Spanish, tends to appear in December. The phenomenon was first noted by Peruvian fishermen in the 1700s, who saw a warming of normally cold Peruvian coastal waters and a simultaneous disappearance of anchovy schools that provided their livelihood.
A recent El Niño began to develop in 1989, but significant warming of the Pacific did not begin until late in 1991, reaching its peak in early 1992 and lingering until 1995, the longest-running El Niño on record. Typically, El Niño results in unusual weather and short-term climate changes that cause losses in crops and commercial fishing. El Niño contributed to North America's mild 1992 winter, torrential flooding in southern California, and severe droughts in southeastern Africa. Wild animals in central and southern Africa died by the thousands, and 20 million people were plagued by famine. The dried prairie of Alberta, Canada, failed to produce wheat, and Latin America received record flooding. Droughts were felt in the Philippines, Sri Lanka, and Australia, and Turkey experienced heavy snowfall. The South Pacific saw unusual numbers of cyclones during the winter of 1992. El Niño's influence also seems to have suppressed some of the cooling effects of Mount Pinatubo's 1991 eruption. Scientists mapping the sea floor of the South Pacific near Easter Island found one of the greatest concentrations of active volcanoes on Earth. The discovery has intensified debate over whether undersea volcanic activity could change water temperatures enough to affect weather patterns in the Pacific. Some scientists speculate that periods of extreme volcanic activity underwater could trigger El Niño. El Niño ends when the warm water is diverted toward the North and South Poles, emptying the moving reservoir of stored energy. Before El Niño can develop again, the western Pacific must "refill" with warm water, which takes at least two years. See also Atmosphere; Desertification [Linda Rehkopf]
RESOURCES BOOKS Diaz, H. R., ed. El Niño: Historical and Paleoclimatic Aspects of the Southern Oscillation. New York: Cambridge University Press, 1993. Glynn, P. W., ed. Global Ecological Consequences of the El Niño-Southern Oscillation, 1982-1983. New York: Elsevier Science, 1990.
PERIODICALS Mathews, N. "The Return of El Niño." UNESCO Courier (July-August 1992): 44-46. Monastersky, R. "Once Bashful El Niño Now Refuses to Go." Science News 143 (23 January 1993): 53.
Electric automobiles see Transportation
Electric utilities Utilities neither produce energy like oil companies nor consume it like households, but convert it from one form to another. The electricity created is attractive because it is clean and versatile and because it can be moved great distances nearly instantaneously. Demand for electricity has grown even as demand for energy as a whole has contracted, with consumption of electricity increasing from one quarter of total energy consumption in 1973 to about a third. The major participants in the electric power industry are about 200 investor-owned utilities that generate 78% of the power and supply 76% of the customers. The industry is very capital intensive and heavily regulated, and it has a large impact on other industries including aluminum, steel, electronics, computers, and robotics. The electrical power industry is the largest consumer of primary energy in the United States: it consumes over one-third of the total national energy demand yet supplies only one-tenth of that demand, losing 65–75% of the energy in conversion, transmission, and distribution. The electrical industry has been subjected to pressures and uncertainties which have had a profound impact on its economic viability, forcing it to reexamine numerous assumptions which previously governed its behavior. In the period after World War II, the main strategy the industry followed was to "grow and build." During this period demand increased at a rate of over 7% per year; new construction was needed to meet the growing demand, and this yielded economies of scale, with greater efficiencies and declining marginal costs. Public utility commissions lowered prices, which stimulated additional demand. As long as prices continued to fall, demand continued to rise, and additional construction was necessary. New construction also occurred because the rate of return for the industry was regulated, and the only way for it to increase profits was to expand its rate base by building new plants and equipment. This period of industry growth came to an end in the 1970s, primarily as a result of the energy crisis. Economic growth slowed and fuel prices escalated, including the weighted average cost of all fossil fuels and the spot market price of uranium oxide. As fuel prices rose, operating costs went up, and maintenance costs also increased, including the costs of supplies and materials, labor, and administrative expenses. All this led to higher costs per kilowatt hour, and as the price of electricity went up, sales growth declined. The financial condition of the industry was further affected as capital costs for nuclear power and coal power plants increased. As the rate of inflation accelerated during this period, interest rates escalated. The rates utilities had to pay on bonds grew, and the costs of construction rose. The average cost of new generating capacity, as well as
installed capacity per kilowatt hour, went up. Net earnings and revenue per kilowatt hour went down, as both short-term and long-term debt escalated, and major generating units were cancelled and capital appropriations cut back. During this decade, many people also came to believe that coal and nuclear power plants were a threat to the environment. They argued that new options had to be developed and that conservation was important. The federal government implemented new environmental and safety regulations which further increased utility costs. The government affected utility operations in other ways. In 1978 it deregulated interstate power sales, and required utilities to purchase alternative power such as solar energy from qualifying facilities at fully avoided costs. But perhaps the greatest transformation took place in the relationship electric power companies had to the public utility commissions. Once friendly, it deteriorated under the many economic and environmental changes that were then taking place. The size and number of requests for rate increases grew, but the percentage of requests granted actually went down. By the end of the 1970s, the "grow and build" strategy was no longer tenable for the electric power industry. Since then, the industry has adopted many different strategies, with different segments following different courses based on divergent perceptions of the future. Almost all utilities have tried to negotiate long-term contracts which would lower their fuel-procurement costs, and attempts have also been made to limit the costs of construction, maintenance, and administration. Many utilities redesigned their rate structures to promote use when excess capacity was available and discourage use when it was not. Multiple rate structures for different classes of customers were also implemented for this purpose. A number of utilities (Commonwealth Edison, Long Island Lighting, Carolina Power and Light, the TVA) have pursued a modified "grow and build" strategy based on the perception that economic growth would recover and that conservation and renewable energy would not be able to handle the increased demand. Some utilities (Consolidated Edison, Duke Power, General Public Utilities, Potomac Electric Power) pursued an option of capital minimization. They were located in areas of the country that were not growing and where the demand for power was decreasing. In areas of rapidly growing energy demand where regulations discouraged nuclear and coal plant construction, utilities such as Southern California Edison and Pacific Gas and Electric have had no option but to rely on their strong internal research and development capabilities and their progressive leadership to explore alternative energy sources. They have become energy brokers, buying alternative power from third-party producers. Many utilities have also diversified, and the main attraction of diversification has been that
it frees these companies from the profit limitations imposed by the public utility commissions. Outside the utility business (in real estate, banking, and energy-related services), there was more risk but no limit on making money from profitable ventures. See also Alternative fuels; Economic growth and the environment; Energy and the environment; Energy conservation; Energy efficiency; Energy path, hard vs. soft; Energy policy; Geothermal energy; Wind energy [Alfred A. Marcus]
RESOURCES BOOKS Anderson, D. Regulatory Politics and Electric Utilities. Cambridge, Massachusetts: Auburn House, 1981. Navarro, P. The Dimming of America. Cambridge, Massachusetts: Ballinger, 1985. Thomas, S. D. The Realities of Nuclear Power. New York: Cambridge University Press, 1988. Three Mile Island: The Most Studied Nuclear Accident in History. Washington, DC: General Accounting Office, 1980. Zardkoohi, A. “Competition in the Production of Electricity.” In Electric Power, edited by J. Moorhouse. San Francisco: Pacific Research Institute, 1986.
PERIODICALS Joskow, D. “The Evolution of Competition in the Electric Power Industry.” Annual Review of Energy (1988): 215-238.
OTHER Three Mile Island: A Report to the Commissioners and to the Public, Volumes I and II, Parts 1, 2, and 3. Washington, DC: U.S. Nuclear Regulatory Commission, 1980.
Electromagnetic field Electromagnetic fields (EMFs) are low-level radiation generated by electrical devices, including power lines, household appliances, and computer terminals. They penetrate walls, buildings, and human bodies, and virtually all Americans are exposed to them, some to relatively high levels. Several dozen studies, conducted mainly over the last 15 years, suggest that exposure to EMFs at certain levels may cause serious health effects, including childhood leukemia, brain tumors, and damage to fetuses. But other studies show no such connections. EMFs are strongest around power stations, high-current electric power lines, subways, movie projectors, handheld radar guns and large radar equipment, and microwave power facilities and relay stations. Common sources of everyday exposure include electric razors, hair dryers, computer video display terminals (VDTs), television screens, electric power tools, electric blankets, cellular telephones, and appliances such as toasters and food blenders.
The electricity used in North American homes, offices, and factories is called alternating current (AC) because it alternates the direction of flow at 60 cycles a second, which is called 60 hertz (Hz) power. Batteries, in contrast, produce direct current (DC). The electric charges of 60 Hz power create two kinds of fields: electric fields, from the strength of the charge, and magnetic fields, from its motion. These fields, taken together, are called electromagnetic fields, and they are present wherever there is electric power. A typical home exposes its residents to electromagnetic fields of 1 or 2 milligauss (a unit of strength of the fields). But under a high voltage power line, the EMF can reach 100–200 milligauss. The electromagnetic spectrum includes several types of energy or radiation. The strongest and most dangerous are x-rays, gamma rays, and ultraviolet rays, all of which are types of ionizing radiation, the kind that contains enough energy to enter cells and atoms and break them apart. Ionizing radiation can cause cancer and even instant death at certain levels of exposure. The other forms of radiation are non-ionizing, and do not have enough energy to break up the chemical bonds holding cells together. Microwave radiation constitutes the middle frequencies of the electromagnetic spectrum and includes radio frequency (RF), radar, and television waves, visible light and heat, and infrared radiation. Microwave radiation is emitted by VDTs, microwave ovens, satellites and earth terminals, radio and television broadcast stations, CB radios, security systems, and sonar, radar, and telephone equipment. Because of the pervasive presence of microwave radiation, virtually the entire American population is routinely exposed to it at some level. It is known that microwave radiation has biological effects on living cells. The type of radiation generally found in homes is Extremely Low Frequency (ELF), and it has, until recently, not been considered dangerous, since it is non-ionizing and non-thermal. However, numerous studies over the last two decades provide evidence that the ELF range of EMFs can have serious biological effects on humans and can cause cancer and other health problems. Some scientists argue that the evidence is not yet conclusive or even convincing, but others contend that exposure to EMFs may represent a potentially serious public health problem. After reviewing much of this evidence and data, the U. S. Environmental Protection Agency (EPA) released a draft report in December 1990 which determined that significant documentation exists linking EMFs to cancer in humans, and called for additional research on the matter. The study concluded that “...several studies showing leukemia, lymphoma, and cancer of the nervous system in children exposed to magnetic fields from residential 60-Hz electrical power distribution systems, supported by similar findings in 440
adults in several occupational studies also involving electrical power frequency exposures, show a consistent pattern of response which suggests a causal link." The report went on to state that "evidence from a large number of biological test systems shows that ELF electric and magnetic fields induce biological effects that are consistent with several possible mechanisms of carcinogenesis...With our current understanding, we can identify 60-Hz magnetic fields from power lines and perhaps other sources in the home as a possible, but not proven, cause of cancer in humans." EPA cited nine studies of cancer in children as supporting the strongest evidence of a link between the disease and EMFs stating that "these studies have consistently found modestly elevated risks (some statistically significant) of leukemia, cancer of the nervous system, and...lymphomas," with occupational studies furnishing "additional, but weaker, evidence" of EMFs raising the risk of cancer. Concerning laboratory studies of the effects of EMFs on cells and the responses of animals to exposure, EPA found that "...there is reason to believe that the findings of carcinogenicity in humans are biologically plausible." EPA scientists further recommended classifying EMFs as a "class B-1 carcinogen," like cigarettes and asbestos, meaning that they are a probable source of human cancer. Several of the studies done on EMFs show that children living near high voltage power lines, and workers occupationally exposed to EMFs from power lines and electrical equipment, are more than twice as likely to contract cancer, especially leukemia, as are children and workers with average exposure. One study found a five-fold increase in childhood cancer among families exposed to strong EMFs, and another study even documented the leukemia rate among children increasing in direct proportion to the strength of the EMFs. There are suspicions that EMFs may also be linked to the apparent dramatic rise in fatal brain tumors over recent years, which sometimes occur in clusters near power substations and other areas where EMFs are high. There is also concern about hand-held cellular telephones, whose antennae emit EMFs very close to the brain. But none of this evidence is considered conclusive. Fetal damage has been cited as another possible effect of EMFs, with higher-than-normal rates of miscarriages reported among pregnant women using electric blankets, waterbeds with electric heaters, and VDTs for a certain number of hours a day. But other studies have failed to establish a link between EMFs and miscarriages. In October, 1996, the National Research Council of the National Academy of Sciences issued an important report on the feared dangers of power lines, examining and analyzing over 500 published studies conducted over the previous 17 years. In announcing the report's findings, the
Council's panel of experts stressed that it could find no discernible hazard to human health from EMFs. The announcement was hailed by the electric power industry, and widely reported in the news as disputing the alleged link between power lines and cancer. But what the report itself actually said was that there was "no conclusive and consistent evidence that shows that exposure to residential electric magnetic fields produce cancer...," a severe burden of proof indeed. While the study did not find proof that electrical lines pose health hazards to humans, it did conclude that the association between proximity to power lines and leukemia in children was "statistically significant" and "robust." It observed that a dozen or so scientific studies had determined that children living near power lines are 1.5 times more likely to develop childhood leukemia than are other children. Moreover, the study noted that 17 years of research had not identified any other factors that account for the increased cancer risk among children living near power lines. EMFs thus remain, in the minds of many, the most likely, albeit unproven, cause of these cancers. There are various theories to account for how EMFs may cause or promote cancer. Some scientists speculate that when cells are exposed to EMFs, normal cell division and DNA function can be disrupted, leading to genetic damage and increased cell growth and, thus, cancer. Indeed, research has shown that ELF fields can speed the growth of cancer cells, and make them more resistant to the body's immune system. The effects of EMFs on the cell membrane, on interaction and communication between groups of cells, and on the biochemistry of the brain are also being studied. Hormones may also be involved. Weak EMFs are known to lower production of melatonin, a strong hormone secreted by the pineal gland, a tiny organ near the center of the brain. Melatonin strengthens the immune system and depresses other hormones that help tumors grow. This theory might help explain the tremendous increase in female breast cancer in developed nations, where electrical currents are so pervasive, especially in the kitchen. The theory may also apply to electric razors, which operate in close proximity to the gland, and whose motors have relatively high EMFs. A 1992 study found that men with leukemia were more than twice as likely to have used an electric razor for over two-and-one-half minutes a day, compared with men who had not. The study also found weaker associations between leukemia and the use of hand-held massagers and hair dryers. Another hypothesis involves the motions of charged particles within EMFs, with calcium ions, for example, accelerating and damaging the structure of cell membranes under such conditions. With conflicting and inconclusive data being cited by scientists and advocates on both sides of the debate, it is
unlikely that the controversy over EMFs will be resolved any time soon. A major five-year study, by the National Cancer Institute, the University of Minnesota, and other childhood leukemia specialists, published in July 1997 in The New England Journal of Medicine, found no evidence linking leukemia in children to electric power lines. [Lewis G. Regenstein]
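The field strengths quoted in this entry are in milligauss; much of the literature reports microtesla instead. The conversion is a fixed unit relation (1 gauss = 10^-4 tesla), shown in a short sketch using the values cited above:

```python
# Convert magnetic flux density from milligauss (mG) to microtesla (uT).
# 1 gauss = 1e-4 tesla, so 1 mG = 0.1 uT (standard SI relation).
def milligauss_to_microtesla(mg: float) -> float:
    return mg * 0.1

for mg in (1, 2, 100, 200):  # typical home vs. under a high-voltage line
    print(f"{mg:>4} mG = {milligauss_to_microtesla(mg):5.1f} uT")
```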
RESOURCES BOOKS Brodeur, P. Currents of Death: Power Lines, Computer Terminals, and the Attempt to Cover Up Their Threat to Your Health. New York: Simon & Schuster, 1989. Sugarman, E. Warning: The Electricity Around You May Be Hazardous to Your Health. New York: Simon & Schuster, 1992. U.S. Environmental Protection Agency. Evaluation of the Potential Carcinogenicity of Electromagnetic Fields. Review Draft. Washington, DC: U. S. Government Printing Office, 1990.
PERIODICALS Savitz, D., and J. Chen. “Parental Occupation and Childhood Cancer: Review of Epidemiological Studies.” Environmental Health Perspectives 88 (1990): 325-337.
Electron acceptor and donor Electron acceptors are ions or molecules that act as oxidizing agents in chemical reactions. Electron donors are ions or molecules that donate electrons and are reducing agents. In the combustion reaction of gaseous hydrogen and oxygen to produce water (H2O), two hydrogen atoms donate their electrons to an oxygen atom. In this reaction, the oxygen is reduced to an oxidation state of -2 and each hydrogen is oxidized to +1. Oxygen is an oxidizing agent (electron acceptor) and hydrogen is a reducing agent (electron donor). In aerobic (with oxygen) biological respiration, oxygen is the electron acceptor accepting electrons from organic carbon molecules, and as a result oxygen is reduced to the -2 oxidation state in H2O and organic carbon is oxidized to +4 in CO2. In flooded soils, after oxygen is used up by aerobic respiration, nitrate and sulfate, as well as iron and manganese oxides, can act as electron acceptors for microbial respiration. Other common electron acceptors include peroxide and hypochlorite (household bleach), which are bleaching agents because they can oxidize organic molecules. Other common electron donors include antioxidants like sulfite.
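The hydrogen-oxygen example can be made explicit by splitting it into half-reactions; the notation below is standard chemistry bookkeeping rather than anything stated in the entry:

```latex
% Hydrogen is the electron donor (oxidized from 0 to +1); oxygen is the
% electron acceptor (reduced from 0 to -2).
\begin{align*}
\text{oxidation (donor):} \quad & 2\,\mathrm{H_2} \longrightarrow 4\,\mathrm{H^+} + 4\,e^- \\
\text{reduction (acceptor):} \quad & \mathrm{O_2} + 4\,e^- \longrightarrow 2\,\mathrm{O^{2-}} \\
\text{overall:} \quad & 2\,\mathrm{H_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{H_2O}
\end{align*}
```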
Electrostatic precipitation A technique for removing particulate pollutants from waste gases prior to their exhaustion to a stack. A system of thin wires and parallel metal plates is charged by a high-voltage direct current (DC), with the wires negatively charged and the plates positively charged. As waste gases containing fine particulate pollutants (i.e., smoke particles, fly ash, etc.) are passed through this system, electrical charges are transferred from the wire to the particulates in the gases. The charged particulates are then attracted to the plates within the device, where they are shaken off during short intervals when the DC current is interrupted. (Stack gases can be shunted to a second parallel device during this period.) They fall to a collection bin below the plates. Under optimum conditions, electrostatic precipitation is 99% efficient in removing particulates from waste gases.

An electrostatic precipitator. (McGraw-Hill Inc. Reproduced by permission.)
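The 99% figure can be related to design variables through the Deutsch-Anderson equation, a standard engineering relation for precipitator sizing that the entry itself does not state:

```latex
% Deutsch-Anderson estimate of collection efficiency, where w is the
% particle drift velocity toward the plates, A the total collecting-plate
% area, and Q the volumetric gas flow rate.
\eta = 1 - e^{-wA/Q}
% Reaching \eta = 0.99 requires wA/Q = \ln(100) \approx 4.6.
```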
Elemental analysis Chemists have developed a number of methods by which they can determine the kinds of elements present in a material and the amount of each element present. Nuclear magnetic resonance (NMR), flame spectroscopy, and mass spectrometry are examples of elemental analysis. These methods have been improved to a point where concentrations of a few parts per million of an element or less can be detected with relative ease. Elemental analysis is valuable in environmental work to determine the presence of a contaminant or pollutant. As an example, the amount of lead in a paint chip can be determined by means of elemental analysis.
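The parts-per-million figure mentioned above is, for a solid sample, simply a mass fraction scaled by one million. A minimal sketch with hypothetical numbers for the lead-paint example:

```python
# Parts per million as a mass fraction: ppm = (analyte mass / sample mass) * 1e6.
def ppm(mass_analyte_g: float, mass_sample_g: float) -> float:
    return mass_analyte_g / mass_sample_g * 1_000_000

# Hypothetical result: 0.9 mg of lead detected in a 1.5 g paint chip.
print(f"{ppm(0.0009, 1.5):.0f} ppm lead")  # 600 ppm
```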
Elephants The elephant is a large mammal with a long trunk and tusks. The trunk is an elongated nose used for feeding, drinking, bathing, blowing dust, and testing the air. The tusks are upper incisor teeth composed entirely of dentine (ivory) used for defense, levering trees, and scraping for water. Elephants are long-lived (50–70 years) and reach maturity at 12 years. They reproduce slowly (one calf every two to three years) due to a 21-month gestation period and an equally long weaning period. A newborn elephant stands 3 ft (1 m) at the shoulder and weighs 200 lb (90 kg). The Elephantidae includes two living species and various extinct relatives. Asian elephants (Elephas maximus) grow to 10 ft (3 m) high and weigh 4 tons. The trunk ends in a single lip, the forehead is high and domed, the back convex, and the ears small. Asian elephants are commonly trained as work animals. They range from India to southeast Asia. There are four subspecies, the most abundant of which is the Indian elephant (E. m. bengalensis) with a wild population of about 20,000. The Sri Lankan (E. m. maximus), Malayan (E. m. hirsutus), and the Sumatran elephants (E. m. sumatranus) are all endangered subspecies. In Africa, adult bush elephants (Loxodonta africana oxyotis) are the world’s largest land mammals, growing 11 ft (3.3 m) tall and weighing 6 tons. The trunk ends in a double lip, the forehead slopes, the back is hollow, and the ears are large and triangular. African elephants are also endangered and have never been successfully trained to work. The rare round-eared African forest elephant (L. a. cyclotis)
is smaller than the bush elephant and inhabits dense tropical rain forests.

Elephants at the Amboseli National Park in Kenya, Africa. (Photograph by Wolfgang Kaehler. Corbis-Bettmann. Reproduced by permission.)

Elephants were once abundant throughout Africa and Asia, but they are now threatened or endangered nearly everywhere because of widespread ivory poaching. In 1970 there were about 4.5 million elephants in Africa; by 1990 there were only 600,000. Protection from poachers and the 1990 ban on the international trade in ivory (which caused a drop in the price of ivory) are slowing the slaughter of African bush elephants. However, the relatively untouched forest elephants are now coming under increasing pressure. In West Africa recent hunting has reduced forest elephants to fewer than 3,000. Elephants are keystone species in their ecosystems, and their elimination could have serious consequences for other wildlife. For example, wandering elephants disperse fruit seeds in their dung, and the seeds of some plants must pass through elephants to germinate. Elephants are also "bulldozer herbivores," habitually trampling plants and uprooting small trees. In African forests elephants create open spaces that allow the growth of vegetation favored by gorillas
and forest antelope. In woodland savanna elephants convert wooded land into grasslands, thus favoring grazing animals. However, large populations of elephants confined to reserves can also destroy most of the vegetation in a region. Culling exploding elephant populations in reserves has been practiced in the past to protect the vegetation for other animals that depend on it. [Neil Cumberlidge Ph.D.]
RESOURCES BOOKS Douglas-Hamilton, I., and O. Douglas-Hamilton. Battle for the Elephants. New York: Viking, 1992. Martin, C. The Rainforests of West Africa. Boston: Birkhauser Verlag, 1991. Shoshani, J., ed. Elephants: Majestic Creatures of the Wild. Emmaus, PA: Rodale Press, 1992.
ELI see Environmental Law Institute 443
Charles Sutherland Elton (1900 – 1991) English ecologist A factual, accurate, complete history of ecology as a discipline has yet to be written. When that history is finally compiled, the British ecologist Charles Sutherland Elton will stand as one of the discipline's leading mentors of the twentieth century. Charles Elton was born March 29, 1900, in Manchester, England. His interest in what he later called "scientific natural history" was sparked early by his older brother Geoffrey. By the age of 19, Charles Elton was already investigating species relationships in ponds, streams, and sand-dune areas around Liverpool. His formal education in ecology was shaped by an undergraduate education at Oxford University, and by his participation in three scientific expeditions to Spitsbergen in the Arctic (in 1921, 1923, and 1924), the first one as an assistant to Julian Huxley. Even though an undergraduate, he was allowed to begin an ecological survey of animal life in Spitsbergen, a survey completed on the third trip. These experiences and contacts led him to a position as biological consultant to the Hudson's Bay Company, which he used to conduct a long-term study of the fluctuations of furbearing mammals, drawing on company records dating back to 1736. During this period, Elton became a member of the Oxford University faculty (in 1923), and was eventually elected a senior research fellow of Corpus Christi College. His whole academic career was spent at Oxford, from which he retired in 1967. He applied the skills and insights gained through the Spitsbergen and Hudson Bay studies to work on the fluctuations of mice and voles in Great Britain. To advance and coordinate this work, he started the Bureau of Animal Population at Oxford. This institution (and Elton's leadership of it) played a vital role in the shaping of research in animal ecology and in the training and education of numerous ecologists in the early twentieth century. Elton published a number of books, but four proved to be of particular significance in ecology. He published his first book, Animal Ecology, in 1927, a volume now considered a classic, its author one of the pioneers in the field of ecology, especially animal ecology. In the preface to a 1966 reissue, Elton suggested that the book "must be read as a pioneering attempt to see...the outlines of the subject at a period when our knowledge [of] terrestrial communities was of the roughest, and considerable effort was required to shake off the conventional thinking of an earlier zoology and enter upon a new mental world of populations, inter-relations, movements and communities—a world [of] direct study of natural processes..." Major topics in that book remain major topics
in ecology today: the centrality of trophic relationships; the significance of niche as a functional concept; ecological succession; the dynamics of dispersal; and the relationships critical to the fluctuation of animal populations, including interactions with habitat and physical environment. His years of work on small mammals in Spitsbergen, for the Hudson's Bay Company, and in British localities accessible to Oxford culminated in the publication, in 1942, of Voles, Mice and Lemmings. This work, still in print almost 60 years later, "brought together...his own work and a collection of observations from all over the world and from ancient history onward." Elton begins the book by establishing a context of "vole and mouse plagues" through history. A second section is on population fluctuations in north-west Europe, covering voles and mice in Britain as well as lemmings in Scandinavia. The other two sections focus on wildlife cycles in northern Labrador, including chapters on fox and marten, voles, foxes, the lemmings again, and caribou herds. In all this work, the emphasis is on the dynamics of change, on the constant interactions and subsequent fluctuations of these various populations in often stringent environments. Elton's 1958 book, The Ecology of Invasions by Animals and Plants, focused on a problem that is of even more concern today: the arrival and impact of exotic species introduced from other places, sometimes naturally, increasingly through the actions of humans. As always, Elton is careful to set the historical context by showing how biological "invaders" have been moving around the globe for a long time, but he also emphasizes that "we are living in a period of the world's history when the mingling of thousands of kinds of organisms from different parts of the world is setting up terrific dislocations in nature." The Pattern of Animal Communities, published in 1966, emerged from years of surveying species, populations, communities, and habitats in the Wytham Woods not far from Oxford. In this book, his primary intent was to describe and classify the diverse habitats available to terrestrial animals, and most of the chapters of the book are given to specific habitats for specialized kinds of organisms. Though not generally considered a theoretical ecologist, his early thinking did help to shape the field. In this book, late in his career, he summarized that thinking in a chapter titled "The Whole Pattern," in which he presents a set of fifteen "new concepts of the structure of natural systems," which he stated as a "series of propositions," though some reviewers labeled them "principles" of ecology. Always the pragmatist, Elton devoted considerable time to the practical, applied aspects of ecology. Nowhere is this better demonstrated than in the work he turned to early in World War II. One of his original purposes in establishing the Bureau of Animal Population was to better understand the role of disease in the fluctuations of animal
numbers. At the beginning of the war, he turned the research focus of the Bureau to the control of rodent pests, especially to help in controlling human disease and to contribute toward the reduction of crop losses to rodents. Elton was an early conservationist, stating in the preface to his 1927 text that "ecology is a branch of zoology which is perhaps more able to offer immediate practical help to mankind than any of the others [particularly important] in the present rather parlous state of civilization." Elton strongly advocated the preservation of biological diversity, and pressed hard for the prevention of extinctions; this is what he emphasizes in his chapter on "The Reasons for Conservation" in the Invasions book. But he also expanded his conception of conservation to mean looking for some "general basis for understanding what it is best to do" and "looking for some wise principle of co-existence between man and nature, even if it has to be a modified kind of man and a modified kind of nature." He even took the unusual step (unusual for professional ecologists in his time and still unusual today) of going into the broadcast booth to popularize the importance of ecology in helping to achieve those goals through environmental management as applied ecology. Elton's service to ecology as a learned discipline was enormous. In the early twentieth century, ecology was still in its formative years, so Elton's ideas and contributions came at a critical time. He took the infant field of animal ecology to maturity, building it to a status equal to that of the more established plant ecology. His research Bureau at Oxford fostered innovative research and survey methods, and provided early intellectual nurture and professional development to ecologists who went on to become major contributors to the field, one example being the American ecologist Eugene Odum. As its first editor, Elton was "in a very real sense the creator" of the Journal of Animal Ecology, serving in the position for almost twenty years. He was one of the founders of the British Ecological Society. Elton's books and ideas continue to influence ecologists today. [Gerald L. Young Ph.D.]
RESOURCES
BOOKS
Crowcroft, P. Elton’s Ecologists: A History of the Bureau of Animal Population. Chicago: University of Chicago Press, 1991.
Elton, C. S. Animal Ecology. London: Sidgwick & Jackson, 1927.
———. The Ecology of Invasions by Animals and Plants. London: Methuen, 1958.
———. The Pattern of Animal Communities. London: Methuen, 1966.
———. Voles, Mice and Lemmings: Problems in Population Dynamics. Oxford: The Clarendon Press, 1942.
PERIODICALS
Hardy, A. “Charles Elton’s Influence in Ecology.” The Journal of Animal Ecology 37, no. 1 (February 1968): 1–8.
Emergency Planning and Community Right-to-Know Act (1986) The Emergency Planning and Community Right-to-Know Act (EPCRA), also known as Title III, is a statute enacted by Congress in 1986 as a part of the Superfund Amendments and Reauthorization Act (SARA). It was enacted in response to public concerns raised by the accidental release of poisonous gas from a Union Carbide plant in Bhopal, India, which killed over 2,000 people. EPCRA has two distinct yet complementary sets of provisions. First, it requires communities to establish plans for dealing with emergencies created by chemical leaks or spills and defines the general structure these plans must assume. Second, it extends to communities the same kind of right-to-know provisions which were guaranteed to employees earlier in the 1980s. Overall, EPCRA is an important step away from crisis-by-crisis environmental enforcement toward a proactive or preventative approach. This proactive approach depends on government monitoring of potential environmental hazards, which is being accomplished by using computerized files of data submitted by businesses. Under the provisions of EPCRA, the governors of every state were required to establish a State Emergency Response Commission by 1988. Each state commission was required in turn to establish various emergency planning districts and to appoint a local emergency planning committee for each. Each committee was required to prepare plans for potential chemical emergencies in its community; these plans include the identities of covered facilities, the procedures to be followed in the event of a chemical release, and the identities of community emergency coordinators as well as a facility coordinator from each business subject to EPCRA. A facility is subject to EPCRA if it has a substance in a quantity equal to or greater than the threshold planning quantity specified on a list of about 400 extremely hazardous substances published by the Environmental Protection Agency. Also, after public notice and comment either the state governor or the State Emergency Response Commission may designate facilities to be covered outside of these guidelines. Each covered facility is required to provide facility notification information to the state commission and to designate a facility coordinator to work with the local planning committee. EPCRA requires these facilities to report immediately any accidental releases of hazardous material to the Community Coordinator of its local emergency committee. There are two classifications for such hazardous substances: the substance must be either on the EPA’s extremely hazardous substance list or defined under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA).
In addition to the initial emergency notice, follow-up notices and information are required. EPCRA’s second major set of provisions is designed to establish and implement a community right-to-know program. Information about the presence of chemicals at facilities within the community is collected from businesses and made available to public officials and the general public. Businesses must submit two sets of annual reports: the Hazardous Chemical Inventory and the Toxics Release Inventory (TRI), also known as the Chemical Release Form. For the Hazardous Chemical Inventory, each facility in the community must prepare or obtain a Material Safety Data Sheet for each chemical on its premises meeting the threshold quantity. This information is then submitted to the Local Emergency Planning Committee, the local fire department, and the State Emergency Response Commission. These data sheets are identical to those required under the Occupational Safety and Health Act’s worker right-to-know provisions. For each chemical reported in the Hazardous Chemical Inventory, a Chemical Inventory Report must be filed each year. The second set of annual reports required as part of the community right-to-know program is the Toxics Release Inventory (TRI). Releases reported on this form include even those made legally with permits issued by the EPA and its state counterparts. Releases made by the facility into air, land, and water during the preceding twelve months are summarized in this inventory. The form must be filed by companies having ten or more employees if that company manufactures, stores, imports, or otherwise uses designated toxic chemicals at or above threshold levels. The information submitted pursuant to both the emergency planning and the right-to-know provisions of EPCRA is available to the general public through the Local Emergency Planning Committees. In addition, in order to treat exposed individuals or protect potentially exposed individuals, health professionals may obtain access to specific chemical identities even if that information is claimed by the business to be a trade secret. During the late 1980s, the EPA and its state counterparts emphasized public awareness and education about the requirements of EPCRA, rather than enforcement. But Congress has provided stiff penalties for noncompliance, and these agencies have now begun to implement their enforcement tools. Civil penalties of up to $25,000 per day for a first violation and up to $75,000 per day for a second may be assessed against a business failing to comply with reporting requirements, and citizens have the right to sue companies that fail to report. Further, enforcement by the government may include criminal prosecution and imprisonment.
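To make the reporting trigger and penalty structure just described concrete, here is a minimal illustrative sketch in Python. It is not the statutory text: the chemical threshold and facility figures are hypothetical, and only the ten-employee TRI trigger and the $25,000/$75,000 per-day penalty caps come from this entry.

```python
# Illustrative sketch only; actual thresholds are chemical-specific in
# EPA regulations, and the 10,000-lb figure used below is hypothetical.

def must_file_tri(employee_count, chemical_amount_lb, threshold_lb):
    """A company files a TRI report if it has ten or more employees and
    manufactures, stores, imports, or otherwise uses a designated toxic
    chemical at or above its threshold level."""
    return employee_count >= 10 and chemical_amount_lb >= threshold_lb

def max_civil_penalty(days_in_violation, prior_violation):
    """Maximum civil penalty for failing to comply with reporting
    requirements: up to $25,000 per day for a first violation and up to
    $75,000 per day for a second."""
    per_day = 75_000 if prior_violation else 25_000
    return days_in_violation * per_day

# A 40-employee facility holding 12,000 lb of a chemical with a
# hypothetical 10,000-lb threshold must report; 30 days of noncompliance
# on a first violation risks up to $750,000 in civil penalties.
print(must_file_tri(40, 12_000, 10_000))             # True
print(max_civil_penalty(30, prior_violation=False))  # 750000
```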
In June of 1996, in response to community pressure, the U.S. Congress took up its first major vote on Community Right to Know since 1986. In the 1996 vote, the House of Representatives removed provisions from an EPA budget appropriation which would have made substantial cuts in funds allocated to the compiling of Toxics Release Inventories (TRI). In addition, Congress passed an EPA proposal that added seven industries to those which must report under TRI, thus bringing the total number of industries required to report to twenty-seven. Those twenty-seven include more than 31,000 facilities across the United States. These Congressional votes are viewed by environmentalists as victories for right to know. Looking ahead, EPCRA may be further strengthened through provisions included in The Children’s Environmental Protection and Right to Know Act of 1997, which was introduced to Congress in May of 1997 by Rep. Henry Waxman (D) and Rep. Jim Saxton (R). Studies have revealed that EPCRA has had far-reaching effects on companies and that industrial practices and attitudes toward chemical risk management are changing. Some firms have implemented new waste reduction programs or adapted previous programs. Others have reduced the potential for accidental releases of hazardous chemicals by developing safety audit procedures, reducing their chemical inventories, and using less hazardous chemicals in their operations. As information included in reports such as the annual Toxics Release Inventory has been disseminated throughout the community, businesses have found they must be concerned with risk communication. Various industry groups throughout the United States have begun making the information required by EPCRA readily available and helping citizens to interpret that information. For example, the Chemical Manufacturers Association has conducted workshops for its members on communicating EPCRA information to the community and on how to communicate about risk in general. Similar seminars are now made available to businesses and their employees through trade associations, universities, and other providers of continuing education. See also Chemical spills; Environmental monitoring; Environmental Monitoring and Assessment Program; Hazardous Materials Transportation Act; Toxic substance; Toxic Substances Control Act; Toxics use reduction legislation [Paulette L. Stenzel]
RESOURCES
PERIODICALS
Stenzel, P. L. “Small Business and the Emergency Planning and Community Right-to-Know Act.” Michigan Bar Journal (February 1990): 181–183.
Stenzel, P. L. “Toxics Use Reduction Legislation: An Important ‘Next Step’ After Right to Know.” Utah Law Review 76 (1991): 707–747.
OTHER
Emergency Planning and Community Right-to-Know Act of 1986, 42 U.S.C. Sec. 11001-11050 (1986).
Emergent diseases (human) Although many diseases such as measles, pneumonia, and pertussis (whooping cough) have probably afflicted humans for millennia, at least 30 new infectious diseases have appeared in the past two decades. In addition, many well-known diseases have recently reappeared in more virulent or drug-resistant forms. An emergent disease is one never known before or one that has been absent for at least 20 years. Ebola fever is a good example of an emergent disease. A kind of viral hemorrhagic fever, Ebola is extremely contagious and kills up to 90% of those it infects. The disease was unknown until about 20 years ago, but is thought to have been present in monkeys or other primates. Killing and eating chimps, gorillas, and other primates is thought to be the route of infection in humans. AIDS is another disease that appears to have suddenly moved from other primates to humans. How pathogens suddenly move across species barriers to become highly contagious and terribly lethal is one of the most important questions in environmental health. Some of the most devastating epidemics have occurred when travelers brought new germs to a naïve population lacking immunity. An example was the plague, or Black Death, which swept through Europe and Western Asia repeatedly in the fourteenth and fifteenth centuries. During the first, and worst, episode, between 1347 and 1355, about half the population of Europe died. In some cities the mortality rate was as high as 80%. It’s hard to imagine the panic and fear this disease caused. An even worse disaster may have occurred when Europeans brought smallpox, measles, and other infectious diseases to the Americas. By some calculations, up to 90% of the native people perished as diseases swept through their population. One reason European explorers thought the land was an empty wilderness was that these diseases spread out ahead of them, killing everyone in their path. Probably the largest loss of life from an individual disease in a single year was the great influenza pandemic of 1918. Somewhere between 30 and 40 million people succumbed to this virus in less than 12 months. This was more than twice the total number killed in all the battles of World War I, which was occurring at the time. Crowded, unsanitary troop ships carrying American soldiers to Europe started the epidemic. War refugees, soldiers from other nations returning home, and a variety of other travelers quickly spread the virus around the globe. Flu is especially contagious, spreading either by direct contact with an infected
person or by breathing airborne particles released by coughing or sneezing. Most flu strains are zoonotic (transmitted from an animal host to humans). Pigs, birds, monkeys, and rodents often serve as reservoirs from which viruses can jump to humans. Although new flu strains seem to appear nearly every year, no epidemic has been as deadly as that of 1918. Malaria, the most deadly of all insect-borne diseases, is an example of the return of a disease that once was thought nearly vanquished. Malaria now claims about 3 million lives every year—90% in Africa and most of them children. With the advent of modern medicines and pesticides, malaria had nearly been wiped out in many places but recently has had a resurgence. The protozoan parasite that causes the disease is now resistant to most antimalarial drugs, while the mosquitoes that transmit it have developed resistance to many insecticides. Spraying of DDT in India and Sri Lanka reduced malaria from millions of infections per year to only a few thousand in the 1950s and 1960s. Now South Asia is back to its pre-DDT level of some 2.5 million new cases of malaria every year. Other places that never had malaria or dengue fever now have them because of climate change and habitat alteration. Gulf coast states in the United States, for example, are now home to the Aedes aegypti mosquito that carries these diseases. Why have vectors such as mosquitoes and pathogens such as the malaria parasite become resistant to pesticides and antibiotics? Part of the answer is natural selection and the ability of many organisms to evolve rapidly. Another factor is the human tendency to use control measures carelessly. When we discovered that DDT and other insecticides could control mosquito populations, we spread them indiscriminately without much thought to ecological considerations. In the same way, antimalarial medicines such as chloroquine were given to millions of people, whether they showed symptoms or not. This was a perfect recipe for natural selection. Many organisms were exposed only minimally to control measures. This allowed those with natural resistance to outcompete others and spread their genes through the population. After repeated cycles of exposure and selection, many microorganisms and their vectors are insensitive to almost all our weapons against them. There are many examples of drug resistance in pathogens. Tuberculosis (TB), once the foremost cause of death in the world, had nearly been eliminated—at least from the developed world—by the end of the twentieth century. Drug-resistant varieties of TB are now spreading rapidly, however. One of the places these strains arise is in Russia, where prisons with poor sanitation, little medical care, gross overcrowding, and inadequate nutrition serve as a breeding ground for this deadly disease. Inmates who are treated with antibiotics rarely get a complete dose. Those
with TB aren’t segregated from healthy inmates. Patients with active TB are released from prison and sent home to spread the disease further. And migrants from Russia carry the disease to other countries. Another development is the appearance of drug-resistant strains of Staphylococcus aureus, the most common form of hospital-acquired infections. Staph A has many forms, some of which are extremely toxic—toxic-shock syndrome is one in which staphylococcus toxins spread through the body and can bring death in a matter of hours. Another strain of staphylococcus, sometimes called flesh-eating bacteria, causes massive necrosis (cell death) that destroys skin, connective tissue, and muscle. For 40 years vancomycin has been the last recourse against staph infections. Strains resistant to everything else could be controlled by this antibiotic. Now vancomycin-resistant staph strains are being reported in many places. A number of factors currently contribute to the appearance and spread of these highly contagious diseases. With 6 billion people now inhabiting the planet, human densities are much higher, enabling germs to spread further and faster than ever before. Expanding populations push into remote areas, encountering new pathogens and parasites. Environmental change on a larger scale, such as cutting forests, creating unhealthy urban surroundings, and causing global climate change, eliminates predators, while habitat changes favor disease-carrying organisms such as mice, rats, cockroaches, and mosquitoes. Another important factor in the spread of many diseases is the speed and frequency of modern travel. Millions of people go every day from one place to another by airplane, boat, train, or automobile. Very few places on earth are more than 24 hours by jet plane from any other place, while many highly virulent diseases take several days for symptoms to appear, so a traveler can carry an infection around the world before showing any sign of illness. Humans aren’t the only ones to suffer from new and devastating diseases. Domestic animals and wildlife also experience sudden and widespread epidemics, sometimes called emergent ecological diseases. In 1988, for example, a distemper virus killed half the seals in western Europe. It’s thought that toxic pollutants and hormone-disrupting environmental chemicals might have made seals and other marine mammals susceptible to infections. In 2002, more dead seals were found in Denmark, raising fears that distemper might be reappearing. Chronic wasting disease (CWD) is spreading through deer and elk populations in North America. Caused by a strange protein called a prion, CWD is one of a family of irreversible, degenerative neurological diseases known as transmissible spongiform encephalopathies (TSE) that include mad cow disease in cattle, scrapie in sheep, and Creutzfeldt-Jakob disease in humans. CWD probably started
when elk ranchers fed contaminated animal by-products to their herds. Infected animals were sold to other ranches, and now the disease has spread to wild populations. First recognized in 1967 in Colorado, CWD has been identified in wild deer populations and ranch operations in at least eight American states. No humans are known to have contracted TSE from deer or elk, but there is a concern that we might see something like the mad cow disaster that afflicted Europe in the 1990s. At least 100 people died, and nearly five million European cattle and sheep were slaughtered in an effort to contain that disease. One of the things all these diseases have in common is that human-caused environmental changes are stressing biological communities and upsetting normal ecological relationships. [William P. Cunningham Ph.D.]
RESOURCES
BOOKS
Diamond, Jared. Guns, Germs, and Steel: The Fates of Human Societies. New York: W. W. Norton & Company, 1999.
Drexler, Madeline. Secret Agents: The Menace of Emerging Infections. Joseph Henry Press, 2002.
Miller, Judith, Stephen Engelberg, and William J. Broad. Germs: Biological Weapons and America’s Secret War. New York: Simon & Schuster, 2000.
PERIODICALS
Daszak, P., et al. “Emerging Infectious Diseases of Wildlife: Threats to Biodiversity and Human Health.” Science 287 (2000): 443–449.
Hughes, J. M. “Emerging Infectious Diseases: A CDC Perspective.” Emerging Infectious Diseases 7 (2001): 494–496.
Osterholm, M. T. “Emerging Infections—Another Warning.” The New England Journal of Medicine 342 (2000): 4–5.
Emergent ecological diseases Emergent ecological diseases are relatively recent phenomena involving extensive damage to natural communities and ecosystems. In some cases, the specific causes of the ecological damage are known, but in others they are not yet understood. Examples of relatively well-understood ecological diseases mostly involve cases in which introduced, non-native pathogens are causing extensive damage. There are, unfortunately, many examples of this kind of damage caused by invasive organisms. One case involves the introduced chestnut blight fungus (Endothia parasitica), which has virtually eliminated the once extremely abundant American chestnut (Castanea dentata) from the hardwood forests of eastern North America. A similar ongoing pandemic involves the Dutch elm disease fungus (Ceratocystis ulmi), which is removing white elm (Ulmus americana) and other native elms from
North America. Some introduced insects are also causing important forest damage, including the effects of the balsam woolly adelgid (Adelges piceae) on Fraser fir (Abies fraseri) in the Appalachian Mountains. Other cases of ecological disease involve widespread, well-documented damage whose causes are not yet understood. One of them affects native forests of Hawaii dominated by the tree ohia (Metrosideros polymorpha). For some unknown reason, stands of ohia decline and then die when they reach maturity. This may be caused by the synchronous senescence of a cohort of trees that established following a stand-replacing disturbance, such as a lava flow, and then reached maximum longevity at about the same time. Other causal factors have, however, also been suggested, including nutrient dysfunction and pathogens. Another case is known as birch decline, which occurred over great regions of the northeastern United States and eastern Canada from the 1930s to the 1950s. The disease affected yellow birch (Betula alleghaniensis), paper birch (B. papyrifera), and grey birch (B. populifolia), which suffered mortality over a huge area. The specific cause of this extensive forest damage was never determined, but it could have involved the effects of freezing ground conditions during winters with little snow cover. Rather similar forest declines and diebacks have affected red spruce (Picea rubens) and sugar maple (Acer saccharum) in the same broad region of eastern North America during the 1970s to 1990s. Although the causes of this forest damage are not yet fully understood, it is thought that air pollution or acidifying atmospheric deposition may have played a key role. In western Europe, extensive declines of Norway spruce (Picea abies) and beech (Fagus sylvatica) are also thought to be somehow related to exposure to air pollution and acidification. In comparison, the damage caused by ozone to forests dominated by ponderosa pine (Pinus ponderosa) in California is a relatively well-understood kind of emergent ecological disease. In the marine realm, widespread damage to diverse species of corals has been documented in far-flung regions of the world. The phenomenon is known as coral “bleaching,” and it involves the corals expelling their symbiotic algae (known as zooxanthellae), often resulting in death of the coral. Coral bleaching is thought to be related to climate warming, although it can be triggered by unusually high or low water temperatures, changes in salinity, and other environmental stresses. Another unexplained ecological disease appears to be afflicting species of amphibians in many parts of the world. The amphibian declines involve severe population collapses, and have even caused the extinction of some species. The specific causes are not yet known, but they likely involve introduced microbial pathogens, or possibly
increased exposure to solar ultraviolet radiation, climate change, or some other factor. [Bill Freedman Ph.D.]
RESOURCES
BOOKS
Freedman, B. Environmental Ecology. San Diego, CA: Academic Press, 1995.
EMF see Electromagnetic field
Emission Release of material into the environment by either natural or human-caused processes. The term is used especially in describing air pollution, for volatile or suspended contaminants that result from processes such as burning fuel in an engine. Definitions of pollution are complicated by the fact that many of the materials that damage or degrade our atmosphere have both human and natural origins. Volcanoes emit ash, acid mists, hydrogen sulfide, and other toxic gases. Natural forest fires release smoke, soot, carcinogenic hydrocarbons, dioxins, and other toxic chemicals as well as large amounts of carbon dioxide. Do these emissions constitute pollution when they originate from human sources but not if released by natural processes? Is it reasonable to restrict human emissions if there are already very large natural sources of those same materials in the environment? An important consideration in answering these questions lies in the regenerative capacity of the environment to remove or neutralize contaminants. If we overload that capacity, a marginal additional emission may be important. Similarly, if there are thresholds for response, an incremental addition to ambient levels may be very important.
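The threshold argument can be made concrete with a minimal sketch in Python. The single-threshold model and all numbers here are hypothetical simplifications for illustration, not values from this entry:

```python
# Hypothetical illustration of the assimilative-capacity idea: the same
# small emission can be negligible or decisive depending on how close
# ambient levels already are to the environment's regenerative capacity.

def marginal_emission_matters(ambient_level, emission, capacity):
    """True when the added emission pushes the total contaminant load
    past the environment's capacity to remove or neutralize it."""
    return ambient_level + emission > capacity

# Well below the threshold, the incremental emission is absorbed...
print(marginal_emission_matters(ambient_level=80.0, emission=5.0,
                                capacity=100.0))   # False
# ...but near the threshold, the identical emission tips the system over.
print(marginal_emission_matters(ambient_level=98.0, emission=5.0,
                                capacity=100.0))   # True
```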
Emission standards Federal, state, and local stack and automobile exhaust emission limits that regulate the quantity, rate, or concentration of emissions. Emission standards can also regulate the opacity of plumes of smoke and dust from point and area emission sources. They can also specify the type and quality of fuel and the way the fuel is burned, hence the type of technology used. With the exception of plume opacity, such standards are normally applied to the specific type of source for a given pollutant. Federal standards include New Source Performance Standards (NSPS) and National Emission Standards for Hazardous Air Pollutants (NESHAPS).
Emission standards may include prohibitory rules that restrict existing and new source emissions to specific emission concentration levels, mass emission rates, plume opacity, and emission rates relative to process throughput. They may also require the most practical or best available technology in the case of new emissions in pristine areas. New sources and modifications to existing sources can be subject to new source permitting procedures which require technology-forcing standards such as Best Available Control Technology (BACT) and Lowest Achievable Emission Rate (LAER). However, these standards are designed to consider the changing technological and economic feasibility of ever more stringent emission controls. As a result, such requirements are not stable and are determined through a process involving discretionary judgments of appropriateness by the governing air pollution authority. See also Point source
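To show how the several kinds of prohibitory limits named above fit together, here is a minimal hypothetical sketch in Python. Every limit value and stack reading is invented for illustration; actual standards such as NSPS and NESHAPS set pollutant- and source-specific values:

```python
# Hypothetical compliance check covering the four limit types named in
# the entry: concentration, mass emission rate, plume opacity, and
# emissions relative to process throughput. All numbers are invented.

from dataclasses import dataclass

@dataclass
class StackReading:
    concentration_ppm: float        # pollutant concentration in exhaust
    mass_rate_lb_per_hr: float      # mass emission rate
    opacity_percent: float          # plume opacity
    throughput_tons_per_hr: float   # process throughput

def complies(r: StackReading) -> bool:
    return (r.concentration_ppm <= 250.0              # concentration limit
            and r.mass_rate_lb_per_hr <= 40.0         # mass-rate limit
            and r.opacity_percent <= 20.0             # opacity limit
            and r.mass_rate_lb_per_hr
                / r.throughput_tons_per_hr <= 0.5)    # per-throughput limit

print(complies(StackReading(180.0, 35.0, 15.0, 100.0)))  # True
print(complies(StackReading(180.0, 35.0, 35.0, 100.0)))  # False: opacity
```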
Emissions trading see Trade in pollution permits
Emphysema Emphysema is an abnormal, permanent enlargement of the air spaces responsible for gas exchange in the lungs. Primary emphysema is commonly linked to a genetic deficiency of α1-antitrypsin, which is a major component of α1-globulin, a plasma protein. Under normal conditions α1-antitrypsin inhibits the activity of many proteolytic enzymes, which break down proteins. A deficiency of this inhibitor therefore increases the likelihood of developing emphysema as a result of proteolysis (breakdown) of the lung tissues. Emphysema begins with destruction of the alveolar septa. This results in “air hunger” characterized by labored or difficult breathing, sometimes accompanied by pain. Although emphysema is genetically linked to deficiency in certain enzymes, the onset and severity of emphysemic symptoms have been definitively linked to irritants and pollutants in the environment. A significantly greater proportion of the individuals manifesting emphysemic symptoms is observed in smokers, populations clustered around industrial complexes, and coal miners. See also Asthma; Cigarette smoke; Respiratory diseases
Endangered species An “endangered species” under United States law (the Endangered Species Act [1973]) is a creature “in danger of extinction throughout all or a significant portion of its range.” A “threatened” species is one that is likely to become endangered in the foreseeable future.
For most people, the endangered species problem involves the plight of such well-known animals as eagles, tigers, whales, chimpanzees, elephants, wolves, and whooping cranes. However, literally millions of lesser-known or unknown species are endangered or becoming so, and the loss of these life forms could have even more profound effects on humans than that of large mammals with whom we more readily identify and sympathize. Most experts on species extinction, such as Edward O. Wilson of Harvard and Norman Myers, estimate current and projected annual extinctions at anywhere from 15,000 to 50,000 species, or 50 to 150 per day, mainly invertebrates such as insects in tropical rain forests. At this rate, 5–10% of the world’s species, perhaps more, could be lost in the next decade and a similar percentage in coming decades. The single most important common threat to wildlife worldwide is the loss of habitat, particularly the destruction of biologically rich tropical rain forests. Additional factors have included commercial exploitation, the introduction of non-native species, pollution, hunting, and trapping. Thus, we are rapidly losing a most precious heritage, the diversity of living species that inhabit the earth. Within one generation, we are witnessing the threatened extinction of between one fifth and one half of all species on the planet. Species of wildlife are becoming extinct at a rate that defies comprehension and threatens our own future. These losses are depriving this generation and future ones of much of the world’s beauty and diversity, as well as irreplaceable sources of food, drugs, medicines, and natural processes that are or could prove extremely valuable, or even necessary, to the well-being of our society. Today’s rate of extinction exceeds that of all of the mass extinctions in geologic history, including the disappearance of the dinosaurs 65 million years ago. It is impossible to know how many species of plants and animals we are actually losing, or even how many species exist, since many have never been “discovered” or identified. What we do know is that we are rapidly extirpating from the face of the earth countless unique life forms that will never again exist. Most of these species extinctions will occur—and are occurring—in tropical rain forests, which are the richest biological areas on earth and are being cut down at a rate of 1–2 acres (0.4–0.8 ha) a second. Although tropical forests cover only about 5–7% of the world’s land surface, they are thought to contain over half of the species on earth. There are more bird species in one Peruvian preserve than in the entire United States. There are more species of fish in one Brazilian river than in all the rivers of the United States. And a single square mile in lowland Peru or Amazonian Ecuador or Brazil may contain over 1500 species of butterflies, more than twice as many as are found in all of
the United States and Canada. Half an acre of Peruvian rain forest may contain over 40,000 species of insects. Erik Eckholm in Disappearing Species: The Social Challenge notes that when a plant species is wiped out, some 10–30 dependent species can also be jeopardized, such as insects and even other plants. An example of the complex relationships that have evolved among many tropical species is the 40 different kinds of fig trees native to Central America, each of which has a specific insect pollinator. Other insects, including pollinators for other plants, depend on certain of these fig trees for food. Thus, the extinction of one species can set off a chain reaction, the ultimate effects of which cannot be foreseen. As Eckholm puts it, “Crushed by the march of civilization, one species can take many others with it, and the ecological repercussions and rearrangements that follow may well endanger people.” The loss of so many unrecorded, unstudied species will deprive the world not only of beautiful and interesting life forms, but also much-needed sources of medicines, drugs, and food that could be of critical value to humanity. Every day, we could be losing plants that could provide cures for cancer or AIDS or could become food staples as important as rice, wheat, or corn. We will simply never know the value or importance of the untold thousands of species vanishing each year. As of spring of 2002, the U.S. Department of the Interior’s list of endangered and threatened species included 1,070 animals (mammals, birds, reptiles, amphibians, fish, snails, clams, crustaceans, insects, and arachnids) and 746 plants, for a total of 1,816 endangered or threatened species. Under the Endangered Species Act, the Department of the Interior is given general responsibility for listing and protecting endangered wildlife, except for marine species (such as whales and seals), which are the responsibility of the Commerce Department. In addition, the United States is subject to the provisions of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which regulates global commerce in rare species. But in many cases, the government has not been enthusiastic about administering and enforcing the laws and regulations protecting endangered wildlife. Conservationists have for years criticized the Interior Department for its slowness and even refusal to list hundreds of endangered species that, without government protection, were becoming extinct. Indeed, the Department admits that some three dozen species have become extinct while undergoing review for listing. In December 1992, the department settled a lawsuit brought by animal protection groups by agreeing to expedite the listing process for some 1300 species and to take a more comprehensive “multispecies, ecosystem approach” to protecting wildlife and their habitat. In October 1992, at
the national conference of the Humane Society of the United States held in Boulder, Colorado, Secretary of the Interior Bruce Babbitt, in his keynote address, lauded the Endangered Species Act as “an extraordinary achievement,” emphasized the importance of preserving endangered species and biological diversity, and noted: “The extinction of a species is a permanent loss for the entire world. It is millions of years of growth and development put out forever.” See also Biodiversity [Lewis G. Regenstein]
RESOURCES
BOOKS
Mitchell, G. J. World on Fire: Saving an Endangered Earth. New York: Charles Scribner’s Sons, 1991.
Myers, N. The Sinking Ark: A New Look at Disappearing Species. Oxford: Pergamon Press, 1979.
Porritt, J. Save the Earth. Atlanta: Turner Publishing, 1991.
Raven, P. H. “Endangered Realm.” In The Emerald Realm. Washington, DC: National Geographic Society, 1990.
Wilson, E. O., ed. Biodiversity. Washington, DC: National Academy Press, 1988.
OTHER
“Endangered and Threatened Wildlife and Plants.” U.S. Department of the Interior. Federal Register (29 August 1992).
Endangered Species Act (1973) The Endangered Species Act (ESA) is a law designed to save species from extinction. What began as an informal effort to protect several hundred North American vertebrate species in the 1960s has expanded into a program that could involve hundreds of thousands of plant and animal species throughout the world. As of May 2002, 1,816 species were listed as endangered or threatened, 1,258 in the United States and 558 in other countries. The law has become increasingly controversial as it has been viewed by commercial interests as a major impediment to economic development. This issue recently came to a head in the Pacific Northwest, where the northern spotted owl has been listed as threatened. This action has had significant effects on the regional forest products industry. The ESA was due to be re-authorized in 1992, but this was postponed due to that year’s election. Although it expired on October 1, 1992, Congress has allotted enough funds to keep the ESA active. Government action to protect endangered species began in 1964, with the formation of the Committee on Rare and Endangered Wildlife Species within the Bureau of Sport Fisheries and Wildlife (now the Fish and Wildlife Service [FWS]) in the U.S. Department of the Interior. In 1966, this committee issued a list of 83 native species (all vertebrates)
A chart of the estimated annual rate of species loss from 1700–2000. (Beacon Press. Reproduced by permission.)
that it considered endangered. That same year, the first act designed to protect species in danger of extinction, the Endangered Species Preservation Act of 1966, was passed. The Secretary of the Interior was to publish a list, after consulting the states, of native vertebrates that were endangered. This law directed federal agencies to protect endangered species when it was “practicable and consistent with the primary purposes” of these agencies. The taking of listed endangered species was prohibited only within the national wildlife refuge system; that is, species could be killed almost anywhere in the United States. Finally, the law authorized the acquisition of critical habitat for these endangered species. In 1969, the Endangered Species Conservation Act was passed, which included several significant amendments to the 1966 Act. Species could now be listed if they were threatened with worldwide extinction. This substantially broadened the scope of species to be covered, but it also limited the listing of specific populations that might be endangered in some parts of the United States but not in danger elsewhere (e.g., grizzly bears, bald eagles, timber wolves, all of which flourish in Canada and Alaska). The 1969 law stated that mollusks and crustaceans could now be included on the list, further broadening the scope of the law. Finally, trade in illegally taken endangered species was prohibited. This substantially increased the protection offered such species, compared to the 1966 law. The Endangered Species Act of 1973 built upon and strengthened the previous laws. The impetus for the law was a call by President Nixon in his State of the Union message for further protection of endangered species and the concern in Congress that the previous acts were not working well enough. The goal of the ESA was to protect all endangered species through the use of “all methods and procedures necessary to bring any endangered or threatened species to the point at which the measures provided pursuant to [the] Act are no longer necessary.” In other words, the goal was to bring endangered species to full recovery. This goal, like others included in environmental legislation at the time, was unrealistic. The ESA also expanded the number of species that could be considered for listing to all animals (except those considered pests) and plants. It stipulated that the listing of such species should be based on the best scientific data available. Additionally, it included a provision that allowed groups or individuals to petition the government to list or de-list a species. If the petition contained reasonable support, the agency had to respond to it. The law created two levels of concern: endangered and threatened. An endangered species was “in danger of extinction throughout all or a significant portion of its range.” A threatened species was “likely to become an endangered species within the foreseeable future throughout all
or a significant portion of its range.” Also, the species did not have to face worldwide extinction before it could be listed. No taking of any kind was allowed for endangered species; limited taking could be allowed for threatened species. Thus, the distinction between “endangered” and “threatened” species allowed for some flexibility in the program. The 1973 Act divided jurisdiction of the program between the FWS and the National Marine Fisheries Service (NMFS), an agency of the National Oceanic and Atmospheric Administration in the Department of Commerce. The NMFS would have responsibility for species that were primarily marine; responsibility for marine mammals (whales, dolphins, etc.) was shared by the two agencies. The law also provided for the establishment of cooperative agreements between the federal government and the states on endangered species protection. This has not proved very successful, due to a lack of funds to entice the states to participate and due to frequent conflict between the states (favoring development and hunting) and the FWS. The most controversial aspect of the Act was Section 7, which required that no action by a federal agency, such as the destruction of critical habitat, jeopardize any endangered species. So, before undertaking, funding, or granting a permit for a project, federal agencies had to consult with the FWS as to the effect the action might have on endangered species. This provision proved to have enormous consequences, as many federal developments could be halted due to their effect on endangered species. The most famous and controversial application of this provision involved the snail darter and Tellico Dam in Tennessee. The Tennessee Valley Authority (TVA) had nearly completed the dam when the snail darter was listed as endangered. Its only known habitat would be destroyed if the dam was completed. The TVA challenged the FWS evidence, but the TVA was itself soon challenged in the courts by environmentalists. In a case that was appealed through the Supreme Court, TVA v. Hill, the courts ruled that the ESA was clear: no federal action could take place that would jeopardize an endangered species. The dam could not be completed. In response to the conflicts that followed the passage of the ESA, especially the swelling estimates of the number of endangered species and the snail darter-Tellico Dam issue, the 1978 re-authorization of the ESA included heated debate and a few significant changes in the law. Perhaps the most important change was the creation of the Endangered Species Committee, sometimes referred to as the “God Committee.” Created in response to the snail darter controversy and the TVA v. Hill decision, this committee could approve federal projects that were blocked due to their harmful effects on endangered species. If an agency’s actions were blocked due to an endangered species, the agency could appeal to this
committee for an exemption from the ESA. The committee, which consists of three cabinet secretaries, the administrators of the EPA and NOAA, the chair of the Council of Economic Advisers, and the governor of the state in which the project is located, weighs the advantages and disadvantages of the project and then makes a decision on the appeal. Ironically, the committee heard the appeal for Tellico Dam and rejected it. The committee has only been used three times, and only once, regarding the northern spotted owl and 1,700 acres (689 ha) of land in Oregon, has it approved an appeal. Nonetheless, the creation of the “God Committee” demonstrated that values beyond species survival had to be weighed into the endangered species equation. Additionally, the 1978 amendments mandated: increased public participation and hearings when species were proposed for listing; a five-year review of all species on the list (to determine if any had improved to the point that they could be removed from the list); a requirement that the critical habitat of a species must be specified at the time the species is listed and that an economic assessment of the critical habitat designation must be done; the mandatory development of a recovery plan for each listed species; and a time limit between when a species was proposed for listing and when the final rule listing the species must be issued. These amendments were designed to do two things: to provide a loophole for projects that might be halted by the ESA and to speed up the listing process. Despite this latter goal, the many new requirements included in the amendments led to a further slowing of the listing process. It should also be noted that these amendments passed with overwhelming majorities in both the House and the Senate; there was still strong support for protecting endangered species, at least in the abstract, in Congress. The 1982 re-authorization of the ESA did not lead to significant changes in the Act. The law was to have been re-authorized again in 1985, but opposition in the Senate prevented re-authorization until 1988. This demonstrated the growing uneasiness in Congress about the economic repercussions of the ESA. In addition to re-authorizing spending to implement the ESA through 1992, the 1988 amendments also increased the procedural requirements for recovery plans. This further underscored the tension between the desire for public participation at all stages of ESA implementation and the need for the government to move quickly to protect endangered species. Overall, the implementation of the ESA has not been successful. The law has suffered from two main problems: poor administrative capacity and fragmentation. The FWS has suffered from its lack of stature within the bureaucracy, its conflicting institutional mission, a severe lack of funds and personnel, and limited public support. Species assigned to the NMFS have fared even worse, as the agency has shown little interest in the ESA.
Fragmentation is demonstrated by the division of responsibilities between the FWS and NMFS, the conflict with other federal agencies due to Section 7, and the federal-state conflicts over jurisdictional responsibility. [Christopher McGrory Klyza]
RESOURCES
BOOKS
Harrington, W., and A. C. Fisher. “Endangered Species.” In Current Issues in Natural Resource Policy, edited by P. R. Portney. Baltimore: Johns Hopkins University Press, 1982.
Tobin, R. The Expendable Future: U.S. Politics and the Protection of Biological Diversity. Durham, NC: Duke University Press, 1990.
PERIODICALS
Egan, T. “Strongest U.S. Environmental Law May Become Endangered Species.” New York Times (26 May 1992): A1.
Endemic species Endemic species are plants and animals that exist only in one geographic region. Species can be endemic to large or small areas of the earth: some are endemic to a particular continent, some to part of a continent, and others to a single island. Usually an area that contains endemic species is isolated in some way, so that species have difficulty spreading to other areas, or it has unusual environmental characteristics to which endemic species are uniquely adapted. Endemism, or the occurrence of endemic animals and plants, is more common in some regions than in others. In isolated environments such as the Hawaiian Islands, Australia, and the southern tip of Africa, as many as 90% of naturally occurring species are endemic. In less isolated regions, including Europe and much of North America, the percentage of endemic species can be very small. Biologists who study endemism do not only consider species, the narrowest classification of living things; they also look at higher-level classifications of genus, family, and order. These hierarchical classifications are nested so that, in most cases, an order of plants or animals contains a number of families, each of these families includes several genera (plural of “genus”), and each genus has a number of species. These levels of classification are known as “taxonomic” levels. Species is the narrowest taxonomic classification, with each species closely adapted to its particular environment. Therefore species are often endemic to small areas and local environmental conditions. Genera, a broader class, are usually endemic to larger regions. Families and orders more often spread across continents. As an example, the order Rodentia, or rodents, occurs throughout the world. Within this order, the family Heteromyidae occurs only in western North America and the northern edge of South America.
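As a minimal sketch, the nesting of ranks just described can be pictured as nested mappings; the following Python fragment uses the rodent example from this entry, and its genus and species entries anticipate the kangaroo-rat example completed in the next sentences. The ranges quoted in the comments come from the entry itself:

```python
# Nested mappings model the order -> family -> genus -> species hierarchy
# described in the entry; each lower rank occupies a smaller range.
taxonomy = {
    "Rodentia": {                    # order: occurs throughout the world
        "Heteromyidae": {            # family: western North America and
                                     #   the northern edge of South America
            "Dipodomys": [           # genus: several western states
                                     #   and part of Mexico
                "Dipodomys ingens",  # species: a small portion of the
                                     #   California coast only
            ],
        },
    },
}

def species_in_order(tree, order):
    """List every species nested under a given order."""
    return [species
            for family in tree[order].values()    # families -> genera
            for species_list in family.values()   # genera -> species lists
            for species in species_list]

print(species_in_order(taxonomy, "Rodentia"))  # ['Dipodomys ingens']
```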
One member of this family, the genus Dipodomys, or kangaroo rats, is restricted to several western states and part of Mexico. Finally, the species Dipodomys ingens occurs only in a small portion of the California coast. Most often endemism is considered on the lowest taxonomic levels of genus and species. Animals and plants can become endemic in two general ways. Some evolve in a particular place, adapting to the local environment and continuing to live within the confines of that environment. This type of endemism is known as “autochthonous,” or native to the place where it is found. An “allochthonous” endemic species, by contrast, originated somewhere else but has lost most of its earlier geographic range. A familiar autochthonous endemic species is the Australian koala, which evolved in its current environment and continues to occur nowhere else. A well-known example of allochthonous endemism is the California coast redwood (Sequoia sempervirens), which millions of years ago ranged across North America and Eurasia, but today exists only in isolated patches near the coast of northern California. Another, simpler term for allochthonous endemics is “relict,” meaning something that is left behind. In addition to geographic relicts (plants or animals that have greatly restricted ranges today), there are what are known as “taxonomic relicts.” These are species or genera that are sole survivors of once-diverse families or orders. Elephants are taxonomic relicts: millions of years ago the family Elephantidae had 25 different species (including woolly mammoths) in five genera. Today only two species remain, one living in Africa (Loxodonta africana) and the other in Asia (Elephas maximus). Horses are another familiar group whose family once had many more branches. Ten million years ago North America alone had at least 10 genera of horses. Today only a few Eurasian and African species remain, including the zebra and the ass. Common horses, all members of the species Equus caballus, returned to the New World only with the arrival of Spanish conquistadors. Taxonomic relicts are often simultaneously geographic relicts. The ginkgo tree, for example, was one of many related species that ranged across Asia 100 million years ago. Today the order Ginkgoales contains only one genus, Ginkgo, with a single species, Ginkgo biloba, that occurs naturally in only a small portion of eastern China. Similarly, the coelacanth, a rare fish found only in deep waters of the Indian Ocean near Madagascar, is the sole remnant of a large and widespread group that flourished hundreds of millions of years ago. Where living things become relict endemics, some sort of environmental change is usually involved. The redwood, the elephant, the ginkgo, and the coelacanth all originated in the Mesozoic era, 245–65 million years ago, when the earth was much warmer and wetter than it is today. All of these species managed to survive catastrophic environmental
change that occurred at the end of the Cretaceous period, changes that eliminated dinosaurs and many other terrestrial and aquatic animals and plants. The end of the Cretaceous was only one of many periods of dramatic change; more recently, two million years of cold ice ages and warmer interglacial periods in the Pleistocene substantially altered the distribution of the world’s plants and animals. Species that survive such events to become relicts do so by adapting to new conditions or by retreating to isolated refuges where habitable environmental conditions remain. When endemics evolve in place, isolation is a contributing factor. A species or genus that finds itself on a remote island can evolve to take advantage of local food sources or environmental conditions, or its characteristics may simply drift away from those of related species because of a lack of contact and interbreeding. Darwin’s Galapagos finches, for instance, are isolated on small islands, and on each island a unique species of finch has evolved. Each finch is now endemic to the island on which it evolved. Expanses of water isolated these evolving finch species, but other sharp environmental gradients can contribute to endemism as well. The humid southern tip of Africa, an area known as the Cape region, has one of the richest plant communities in the world. A full 90% of the Cape’s 18,500 plant species occur nowhere else. Separated from similar habitat for millions of years by an expanse of dry grasslands and desert, local families and genera have divided and specialized to exploit unique local niches. Endemic speciation, or the evolution of locally unique species, has also been important in Australia, where 32% of genera and 75% of species are endemic. Because of its long isolation, Australia even has family-level endemism, with 40 families and sub-families found only on Australia and a few nearby islands. Especially high rates of endemism are found on long-isolated islands, such as St. Helena, New Caledonia, and the Hawaiian chain. St. Helena, a volcanic island near the middle of the Atlantic, has only 60 native plant species, but 50 of these exist nowhere else. Because of the island’s distance from any other landmass, few plants have managed to reach or colonize St. Helena. Speciation among those that have reached the remote island has since increased the number of local species. Similarly, Hawaii and its neighboring volcanic islands, colonized millions of years ago by a relatively small number of plants and animals, now have a wealth of locally-evolved species, genera, and sub-families. Today’s 1,200–1,300 native Hawaiian plants derive from about 270 successful colonists; 300–400 arthropods that survived the journey to these remote islands have produced over 6,000 descendant species today. Ninety-five percent of the archipelago’s native species are endemic, including all ground birds. New Caledonia, an island midway between Australia and Fiji, consists partly of continental rock, suggesting that
at one time the island was attached to a larger landmass and its resident species had contact with those of the mainland. Nevertheless, because of long isolation, 95% of native animals and plants are endemic to New Caledonia. Ancient, deep lakes are like islands because they can retain a stable and isolated habitat for millions of years. Siberia’s Lake Baikal and East Africa’s Lake Tanganyika are two notable examples. Lake Tanganyika occupies a portion of the African Rift Valley, 0.9 mi (1.5 km) deep and perhaps 6 million years old. Fifty percent of the lake’s snail species are endemic, and most of its fish are only distantly related to the fish of nearby Lake Nyasa. Siberia’s Lake Baikal, another rift valley lake, is 25 million years old and 1 mi (1.6 km) deep. Eighty-four percent of the lake’s 2,700 plants and animals are endemic, including the nerpa, the world’s only freshwater seal. Because endemic animals and plants by definition have limited geographic ranges, they can be especially vulnerable to human invasion and habitat destruction. Island species are especially vulnerable because islands commonly lack large predators, and many island endemics evolved without defenses against predation. Cats, dogs, and other carnivores introduced by sailors have decimated many island endemics. The flora and fauna of Hawaii, exceptionally rich before Polynesians arrived with pigs, rats, and agriculture, were severely depleted because their range was limited and they had nowhere to retreat as human settlement advanced. Tropical rain forests, with extraordinary species diversity and high rates of endemism, are also vulnerable to human invasion. Many of the species eliminated daily in Amazonian rain forests are locally endemic, so that their entire range can be eliminated in a short time. [Mary Ann Cunningham Ph.D.]
RESOURCES
BOOKS
Berry, E. W. “The Ancestors of the Sequoias.” Natural History 20 (1920): 153–155.
Brown, J. H., and A. C. Gibson. Biogeography. St. Louis: Mosby, 1983.
Cox, G. W. Conservation Biology. Dubuque, IA: William C. Brown, 1993.
Kirch, P. “The Impact of the Prehistoric Polynesians on the Hawaiian Ecosystem.” Pacific Science 36 (1982): 1–14.
Nitecki, M. W., ed. Extinctions. Chicago: University of Chicago Press, 1984.
Endocrine disruptors In recent years, scientists have proposed that chemicals released into the environment may be disrupting the endocrine system of humans and wildlife. The endocrine system is a network of glands and hormones that regulates many of the body’s functions, such as growth, development, behavior,
and maturation. The endocrine glands include the pituitary, thyroid, adrenal, thymus, pancreas, and the male and female gonads (testes and ovaries). These glands secrete regulated amounts of hormones into the bloodstream, where they act as chemical messengers as they are carried throughout the body to control and regulate many of the body’s functions. The hormones bind to specific cell sites called receptors. By binding to the receptors, the hormones trigger various responses in the tissues that contain the receptors. An endocrine disruptor is an external agent that interferes in some way with the role of the hormones in the body. The agent might disrupt the endocrine system by affecting any of the stages of hormone production and activity, such as preventing the synthesis of a hormone, directly binding to hormone receptors, or interfering with the breakdown of a natural hormone. Disruption of endocrine function during highly sensitive prenatal periods is especially critical, as small changes in endocrine functions may have delayed consequences that become evident later in adult life or in a subsequent generation. Adverse effects that might be a result of endocrine disruption include the development of cancers, reproductive and developmental effects, neurological effects (effects on behavior, learning and memory, sensory function, and psychomotor development), and immunological effects (immunosuppression, with resulting disease susceptibility). Exposure to suspected endocrine disruptors may occur through direct contact with the chemicals, through ingestion of contaminated water or food, or through breathing contaminated air. Suspected endocrine disruptors can enter air or water from chemical and manufacturing processes and through incineration of products. Industrial workers may be exposed in work settings. Documented examples of health effects of humans exposed to endocrine disrupting chemicals include shortened penises in the sons of women exposed to dioxin-contaminated rice oil in Taiwan and reduced sperm count in workers exposed to high doses of Kepone in a Virginia pesticide factory. Diethylstilbestrol (DES), a synthetic estrogen, was used in the 1950s and 1960s by pregnant women to prevent miscarriages. Unfortunately it did not prevent miscarriages, and the teenage daughters of women who had taken DES suffered high rates of vaginal cancers, birth defects of the uterus and ovaries, and immune system suppression. These health effects were traced to their mothers’ use of DES. A variety of chemicals, including some pesticides, have been shown to result in endocrine disruption in animal laboratory studies. However, except for the instances of endocrine disruption due to chemical exposures in the workplace and to the use of DES, causal relationships between exposure to specific environmental agents and adverse health effects in humans due to endocrine disruption have not yet been firmly established.
There is stronger evidence that the endocrine systems of fish and wildlife have been affected by chemical contamination in their habitats. Groups of animals that have been affected by endocrine disruption include snails, oysters, fish, alligators and other reptiles, and birds, including gulls and eagles. Whether effects on individuals of a particular species also affect populations of that organism is difficult to prove. Whether endocrine disruption is confined to specific areas or is more widespread is also not known. In addition, proving that a specific chemical causes a particular endocrine effect is difficult, as animals are exposed to a variety of chemicals and non-chemical stressors. However, some persistent organic chemicals such as DDT (dichlorodiphenyltrichloroethane), PCBs (polychlorinated biphenyls), dioxin, and some pesticides have been shown to act as endocrine disruptors in the environment.

Adverse effects that may be caused by endocrine-disrupting mechanisms include abnormal thyroid function and development in fish and birds; decreased fertility in shellfish, fish, birds, and mammals; decreased hatching success in fish, birds, and reptiles; demasculinization and feminization of fish, birds, reptiles, and mammals; defeminization and masculinization of gastropods, fish, and birds; and alteration of immune and behavioral function in birds and mammals.

Many potential endocrine-disrupting chemicals are persistent, bioaccumulate in the fatty tissues of organisms, and increase in concentration as they move up through the food web. Because of this persistence and mobility, they can accumulate and harm organisms far from their original source.

More information is needed to define the ecological and human health risks of endocrine-disrupting chemicals. Epidemiological investigations, exposure assessments, and laboratory testing of a wide variety of both naturally occurring and synthetic chemicals are the tools being used to determine whether these chemicals, as environmental contaminants, have the potential to disrupt hormonally mediated processes in humans and animals. [Judith L. Sims]
RESOURCES
BOOKS
Colborn, Theo, Dianne Dumanoski, and John Peterson Myers. Our Stolen Future: Are We Threatening Our Fertility, Intelligence, and Survival? A Scientific Detective Story. New York, NY: Penguin Books USA, 1997.
Guillette, Louis J., and D. Andrew Crain, eds. Environmental Endocrine Disruptors. London, England: Taylor & Francis Group, 2000.
Krimsky, Sheldon, and Lynn Goldman. Hormonal Chaos: The Scientific and Social Origins of the Environmental Endocrine Hypothesis. Baltimore, MD: Johns Hopkins University Press, 1999.
Weyer, Peter, and David Riley. Endocrine Disruptors and Pharmaceuticals in Drinking Water. Denver, CO: American Water Works Association, 2001.
PERIODICALS
Kavlock, Robert J., et al. “Research Needs for the Risk Assessment of Health and Environmental Effects of Endocrine Disruptors: A Report of the U.S. EPA-Sponsored Workshop.” Environmental Health Perspectives 104 (1996): 715–740.
OTHER
Committee on Environment and Natural Resources, National Science and Technology Council. Endocrine Disruptors Research Initiative. U.S. Environmental Protection Agency, Washington, DC, June 29, 1999 [cited June 1, 2002].
Endocrine Disruptors. Natural Resources Defense Council, November 25, 1998 [cited June 1, 2002].
Endocrine Disruptors. World Wildlife Fund [cited June 1, 2002].
Technical Panel, Office of Research and Development, Office of Prevention, Pesticides, and Toxic Substances. Special Report on Environmental Endocrine Disruption: An Effects Assessment and Analysis. EPA/630/R-96/012. U.S. Environmental Protection Agency, Washington, DC, 1997.
Energy and the environment
Energy is a prime factor in environmental quality. Extraction, processing, shipping, and combustion of coal, oil, and natural gas are the largest sources of air pollutants, thermal and chemical pollution of surface waters, accumulation of mine tailings and toxic ash, and land degradation caused by surface mining in the United States. On the other hand, a cheap, inexhaustible source of energy would allow people to eliminate or repair much of the environmental damage already done and to improve the quality of the environment in many ways. Often, the main barrier to reclaiming degraded land, cleaning up polluted water, destroying wastes, restoring damaged ecosystems, or remedying most other environmental problems is that solutions are expensive—and much of that expense is energy cost. Given a clean, sustainable, environmentally benign energy source, people could create a true utopia and extend its benefits to everyone.

Our ability to use external energy to do useful work is one of the main characteristics that distinguish humans from other animals. Clearly, technological advances based on this ability have made our lives much more comfortable and convenient than those of our early ancestors. They have also allowed us to make bigger mistakes, faster than ever before. A large part of our current environmental crisis is that our ability to modify our environment has outpaced our capacity to use energy and technology wisely.

In the United States, fossil fuels supply about 85% of the commercial energy. This situation cannot continue for very long because the supplies of these fuels are limited and their environmental effects are unacceptable. Americans now get more than half of their oil from foreign sources at great economic and political cost. At current rates of use,
known, economically extractable world supplies of oil and natural gas will probably last only a century or so. Reserves of coal are much larger, but coal is the dirtiest of all fuels. Its contribution of greenhouse gases that cause global warming is reason enough to curtail our coal use. In addition, coal burning is the largest single source in the United States of sulfur dioxide and nitrogen oxides (which cause respiratory health problems, ecosystem damage, and acid precipitation). Paradoxically, coal-burning power plants also release radioactivity, since radioactive minerals such as uranium and thorium are often present in low concentrations in coal deposits.

Nuclear power was once thought to be an attractive alternative to fossil fuels. Billed as “the clean energy alternative” and as an energy source “too cheap to meter,” nuclear power was promoted in the 1960s as the energy source for the future. The disastrous consequences of accidents in nuclear plants, such as the explosion and fire at Chernobyl in the Ukraine in 1986, problems with releases of radioactive materials in the mining and processing of fuels, and the inability to find a safe, acceptable permanent storage site for nuclear waste have made nuclear power seem much less attractive in recent years. Between seventy and ninety percent of the citizens of most European and North American countries now regard nuclear power as unacceptable. The United States government once projected that 1,500 nuclear plants would be built. In 2002, only 105 plants were in operation, and no new construction had been undertaken since 1975. Many of these aging plants are now reaching the end of their useful lives. There will be enormous costs and technical difficulties in dismantling them and disposing of the radioactive debris. Some reactor designs are inherently safer than those now in operation, but public confidence in nuclear power technology is at such a low level that it seems unlikely the technology will ever supply much energy.

Damming rivers to create hydroelectric power from spinning water turbines has the attraction of providing a low-cost, renewable, air-pollution-free energy source. Only a few locations remain in the United States, however, where large hydroelectric projects are feasible. Many more sites are available in Canada, Brazil, India, and other countries, but the social and ecological effects of building large dams, flooding valuable river valleys, and eliminating free-flowing rivers are such that opposition to this energy source is mounting. An example of the ecological and human damage done by large hydroelectric projects is seen in the James Bay region of northern Quebec. A series of huge dams and artificial lakes has flooded thousands of square miles of forest. Migration routes of caribou are disrupted, the habitat for game on which indigenous people depended is destroyed, and decaying vegetation has acidified waters, releasing mercury from the bedrock and raising mercury concentrations in fish
to toxic levels. The hunting and gathering way of life of the local Cree and Inuit people has probably been destroyed forever. This kind of tragedy has been repeated many times around the world by ill-conceived hydro projects.

There are several sustainable, environmentally benign energy sources that should be developed. Among these are wind power, biomass (burning renewable energy crops such as fast-growing trees or shrubs), small-scale hydropower (low-head or run-of-the-river turbines), passive-solar space heating, active-solar water heaters, photovoltaic energy (direct conversion of sunlight to electricity), and ocean tidal or wave power. There may be unwanted environmental consequences of some of these sources as well, but in aggregate they seem much better than current energy sources. A big disadvantage is that most of these alternative energy sources are diffuse and not always available when or where we want to use energy. We need ways to store and ship energy generated from these sources. There have been many suggestions that a breakthrough in battery technology could be on the horizon. Other possibilities include converting biomass into methane or methanol fuels or using electricity to generate hydrogen gas through electrolysis of water. These fuels would be easily storable and transportable, and they could be used with current technology without great alterations of existing systems. It is estimated that some combination of these sustainable energy sources could supply all of America’s energy needs while utilizing only a small fraction (perhaps less than one percent) of United States land area. If means are available to move this energy efficiently, these energy farms could be located in remote areas with little other value.

Clearly, the best way to protect the environment from damage associated with energy production is to use energy more efficiently. Many experts estimate that people could enjoy the same comfort and convenience using only half as much energy if they practiced energy conservation with currently available technology. This would not require great sacrifices in economic well-being or lifestyle. See also Acid rain; Air pollution; Greenhouse effect; Photovoltaic cell; Solar energy; Thermal pollution; Wind energy [William P. Cunningham Ph.D.]
RESOURCES
BOOKS
Davis, G. R. Energy for Planet Earth. New York: W. H. Freeman, 1991.
PERIODICALS
Weinberg, C. J., and R. H. Williams. “Energy from the Sun.” Scientific American 263 (September 1990): 146–55.
Energy conservation
Energy conservation was a concept largely unfamiliar to America—and to much of the rest of the world—prior to 1973. Certainly some thinkers before that date had thought about, written about, and advocated a more judicious use of the world’s energy supplies. But in a practical sense, it seemed that the world’s supply of coal, oil, and natural gas was virtually unlimited. In 1973, however, the Organization of Petroleum Exporting Countries (OPEC) placed an arbitrary limit on the amount of petroleum that non-producing nations could buy from its members. Although the OPEC embargo lasted only a short time, the nations of the world were suddenly forced to consider the possibility that they might have to survive on a reduced and ultimately finite supply of fossil fuels.

In the United States, the OPEC embargo set off a flurry of administrative and legislative activity designed to ensure a dependable supply of energy for the nation’s future needs. Out of this activity came acts such as the Energy Policy and Conservation Act of 1975, the Energy Conservation and Production Act of 1976, and the National Energy Act of 1978.

An important feature of the nation’s (and the world’s) new outlook on energy was the realization of how much energy is wasted in transportation, residential and commercial buildings, and industry. When energy supplies appeared to be without limit, waste was a matter of small concern. However, when energy shortages began to be a possibility, conservation of energy sources assumed a high priority. Energy conservation is certainly one of the most attainable goals the federal government can set for the United States. Almost every way we use energy results in enormous waste. Only about 20% of the energy content of gasoline, for example, is actually put to productive work in an automobile. Each time we use electricity, we produce waste: coal is burned to heat water to drive a turbine to operate a generator to make electricity. It is no wonder that more than 90% of the energy in this electrical process is wasted. Fortunately, a vast array of conservation techniques is available in each of the major categories of energy use: transportation, residential and commercial buildings, and industry.

In the area of transportation, conservation efforts focus on the nation’s use of the private automobile for most personal travel. Certainly, the private automobile is an enormously wasteful way to move people from one place to another. It is hardly surprising, therefore, that conservationists have long argued for the development of alternative means of transportation: bicycles, motorcycles, mopeds, carpools and van-pools, dial-a-rides, and various forms of mass transit. The amount of energy needed to move a single
individual by bus is, on average, about one-third of what it is in a private car. One need only compare the relative energy cost per passenger for eight people traveling in a commuter van-pool to the cost for a single individual in her private automobile to see the advantages of some form of mass transit. For a number of reasons, however, mass transit systems in the United States are not very popular. While the number of new cars sold continues to rise year after year, investment in and use of heavy and light rail systems, trolley systems, subways, and various types of pools remain modest.

Many authorities believe that the best hope for energy conservation in the field of transportation is to make private automobiles more efficient or to increase the tax on their use. Some experts argue that the technology already exists for the construction of 100-mi-per-gal (42.5 km/l) automobiles, if industry will make use of that technology. They also argue for additional research on electric cars as an energy-saving and pollution-reducing alternative to internal combustion vehicles. Increasing the cost of using private automobiles has also been explored. One approach is to raise the tax on gasoline to a point where commuters begin to consider mass transit an economical alternative. Increased parking fees and more aggressive traffic enforcement have also been tried. Such approaches often fail—or are never attempted—because public officials are reluctant to anger voters. Other methods that have been suggested for increasing energy efficiency in automobiles include the design and construction of smaller, lighter cars, extending the useful life of a car, improving the design of cars and tires, and encouraging the design of more efficient cars through federal grants or tax credits.

In buildings, a number of well-known techniques could be used to conserve energy, whether in large structures or small two-room cottages. Thorough insulation of floors, walls, and ceilings, for example, can save up to 80% of the cost of heating and cooling a building. In addition, buildings can be designed and constructed to take advantage of natural heating and cooling factors in the environment. A home in Canada, for example, should be oriented with its windows facing south to take advantage of the sun’s heating rays. A home in Mexico might have quite a different orientation. One of the most extreme examples of environment-friendly building is construction at least partially underground. The earthen walls of such buildings provide a natural cooling effect in the summer and excellent insulation during the winter. The kind, number, and placement of trees around a building can also contribute to energy efficiency. Trees that
lose their leaves in the winter will allow sunlight to heat a building during the coldest months but will shield the building from the sun during the hot summer months.

Energy can also be conserved by modifying the appliances used within a building. Prior to 1973, consumers had become enamored of all kinds of electrical devices, from electric toothbrushes to electric shoe-shine machines to trash compactors. As convenient as these appliances may be, they are energy-wasteful and do not always meet a basic human need. Even items as simple as light bulbs can become a factor in energy conservation programs. Fluorescent light bulbs use at least 75% less energy than do incandescent bulbs, and they often last 20 times longer. Although many commercial buildings now use fluorescent lighting exclusively, it still tends to be relatively less popular in private homes.

As the largest single user of energy in American society, industry is a prime candidate for conservation measures. Always sensitive to possible money-saving changes, industry has begun to develop and implement energy-saving devices and procedures. One such idea is cogeneration, the use of waste heat from an industrial process to generate electricity. Another approach is the expanded use of recycling by industry. In many cases, reusing a material requires less energy than producing it from raw materials. Finally, researchers are continually testing new equipment designs that will allow that equipment to operate on less energy.

Governments and utilities have two primary methods by which they can encourage energy conservation. One approach is to penalize individuals and companies that use too much energy. For example, an industry that uses large amounts of electricity might be charged a higher rate per kilowatt-hour than one that uses less electricity, a policy just the opposite of that now in practice in most places. A more positive approach is to encourage energy conservation through techniques such as tax credits. Those who insulate their homes might, for example, be given cash bonuses by the local utility or a tax deduction by state or federal government.

In recent years, another side of energy conservation has come to the fore: its environmental advantages. Obviously, the less coal, oil, and natural gas that humans use, the fewer pollutants are released into the environment. Thus conservation, a practice that is energy-wise, can also provide environmental benefits. Those concerned with global warming and climate change have been especially active in this area. They point out that reducing our use of fossil fuels will reduce both our consumption of energy and our release of carbon dioxide to the atmosphere. We can take a step toward heading off climate change, they point out, by taking the wise step of wasting less energy.
Energy conservation does not yet appear to have won the hearts of most Americans. The general public concern about energy waste engendered by the 1973 OPEC oil embargo eventually dissolved into complacency. To be sure, some of the sensitivity to energy conservation created by that event has not been lost. Many people have switched to more energy-efficient forms of transportation, think more carefully about leaving house lights on all night, and take energy efficiency into consideration when buying major appliances. But some of the more aggressive efforts to conserve energy have stalled. Higher taxes on gasoline, for example, are still certain to raise an enormous uproar among the populace. And energy-saving construction steps that might well be mandated by law remain optional and frequently ignored. In an era of apparently renewed confidence in an endless supply of fossil fuels, many people are no longer convinced that energy conservation is very important, or lack the will to act on their suspicion that it is. And governments, reflecting the will of the people, do not take leadership action to change that trend. [David E. Newton]
RESOURCES
BOOKS
Fardo, S. Energy Conservation Guidebook. Englewood Cliffs, NJ: Prentice Hall, 1993.
PERIODICALS
Reisner, M. “The Rise and Fall and Rise of Energy Conservation.” Amicus Journal 9 (Spring 1987): 22–31.
Energy crops see Biomass
Energy efficiency
The utilization of energy for human purposes is a defining characteristic of industrial society. The conversion of energy from one form to another and the efficient production of mechanical work from heat energy have been studied and improved for centuries. The science of thermodynamics deals with the relationship between heat and work and is based on two fundamental laws of nature, the first and second laws of thermodynamics. The utilization of energy and the conservation of critical, nonrenewable energy resources are governed by these laws and by technological improvements in the design of energy systems.

The First Law of Thermodynamics states the principle of conservation of energy: energy can be neither created nor
destroyed by ordinary chemical or physical means, but it can be converted from one form to another. Stated another way, in a closed system the total amount of energy is constant. An interesting example of energy conversion is the incandescent light bulb, in which electrical energy is used to heat a wire (the bulb filament) until it is hot enough to glow. The bulb works satisfactorily except that the great majority (95%) of the electrical energy supplied to it is converted to heat rather than light; the incandescent bulb is not very efficient as a source of light. In contrast, a fluorescent bulb uses electrical energy to excite atoms in a gas, causing them to give off light, a process at least four times more efficient than the incandescent bulb. Both light sources, however, conform to the First Law in that no energy is lost: the total amount of heat and light energy produced is equal to the amount of electrical energy flowing to the bulb.

The Second Law of Thermodynamics states that whenever heat is used to do work, some heat is lost to the surrounding environment; the complete conversion of heat into work is not possible. This is not the result of inefficient engineering design or implementation but a fundamental, theoretical thermodynamic limitation. The maximum theoretically possible efficiency for converting heat into work depends solely on the operating temperatures of the heat engine and is given by the equation E = 1 − T2/T1, where T1 is the absolute temperature at which heat energy is supplied and T2 is the absolute temperature at which heat energy is exhausted. The maximum possible thermodynamic efficiency of a four-cycle internal combustion engine is about 54%; for a diesel engine, the limit is about 56%; and for a steam engine, the limit is about 32%. The actual efficiency of real engines, which suffer from mechanical inefficiencies and parasitic losses (e.g., friction and drag), is significantly lower than these levels.

Although thermodynamic principles limit maximum efficiency, substantial improvements in energy utilization can be obtained through further development of existing equipment such as power plants, refrigerators, and automobiles and through the development of new energy sources such as solar and geothermal power. Experts have estimated the efficiency of other common energy systems. The most efficient of these appear to be electric power generating plants (33% efficient) and steel plants (23% efficient). Among the least efficient systems are those for heating water (1.5–3%), for heating homes and buildings (2.5–9%), and refrigeration and air-conditioning systems (4–5%). It has been estimated that about 85% of the energy available in the United States is lost to inefficiency.
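The formula E = 1 − T2/T1 is easy to apply. The following is a minimal sketch in Python; the supply and exhaust temperatures are assumed values chosen purely for illustration, not figures from this entry.

```python
def carnot_limit(t_supply_k: float, t_exhaust_k: float) -> float:
    """Maximum fraction of heat convertible to work: E = 1 - T2/T1.

    Both temperatures must be absolute (kelvins); T1 is the supply
    temperature, T2 the exhaust temperature.
    """
    if t_exhaust_k <= 0 or t_supply_k <= t_exhaust_k:
        raise ValueError("require T1 > T2 > 0 (absolute temperatures)")
    return 1.0 - t_exhaust_k / t_supply_k

# Hypothetical engine supplied with heat at 650 K and exhausting at
# 300 K: at most about 54% of the heat can become work, comparable
# to the four-cycle engine limit quoted above. Real engines fall
# well short of this because of friction and other parasitic losses.
print(f"{carnot_limit(650.0, 300.0):.0%}")  # -> 54%
```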
The predominance of low-efficiency systems reflects the fact that such systems were invented and developed when energy costs were low and there was little customer demand for energy efficiency. It made more sense then to build appliances that were inexpensive rather than efficient, because the cost to operate them was so low. Since the 1973 oil embargo by the Organization of Petroleum Exporting Countries (OPEC), that philosophy has been carefully reexamined. Experts began to point out that more expensive appliances could be designed and built if they were also more efficient; the additional cost to the manufacturer, industry, and homeowner could usually be recovered within a few years through savings in fuel costs.

The concept of energy efficiency suggests a new way of looking at energy systems: examining the total lifetime energy use and cost of the system. Consider the common light bulb. The total cost of using a light bulb includes both its initial price and the cost of operating it throughout its lifetime. When energy was cheap, this second factor was small, and there was little motivation to make a bulb more efficient when the life-cycle savings from its operation were minimal. But as the cost of energy rises, that argument no longer holds true. An inefficient light bulb costs more and more to operate as the cost of electricity rises. Eventually, it makes sense to invent and produce more efficient light bulbs. Even if these bulbs cost more to buy, they pay back that cost in long-term operating savings. Thus, consumers might balk at spending $25 for a fluorescent light bulb unless they knew that the bulb would last ten times as long as an incandescent bulb that costs $3.75. Similar arguments can be and have been used to justify the higher initial cost of energy-saving refrigerators, solar-heating systems, household insulation, improved internal combustion engines, and other energy-efficient systems and appliances.

Governmental agencies, utilities, and industries are gradually beginning to appreciate the importance of increasing energy efficiency. The 1990 amendments to the Clean Air Act encourage industries and utilities to adopt more efficient equipment and procedures. Certain leaders in the energy field, such as Pacific Gas and Electric and Southern California Edison, have already implemented significant energy efficiency programs. [David E. Newton and Richard A. Jeryan]
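The life-cycle comparison described in this entry can be made concrete with a short sketch. The purchase prices ($25 fluorescent, $3.75 incandescent) and the ten-to-one lifetime ratio come from the entry; the wattages, bulb lifetime, and electricity price are illustrative assumptions.

```python
ELECTRICITY_PRICE = 0.10  # assumed $/kWh
SERVICE_HOURS = 10_000    # assumed fluorescent lifetime, in hours

def life_cycle_cost(purchase_usd: float, watts: float,
                    hours: float = SERVICE_HOURS) -> float:
    """Purchase price plus electricity cost over `hours` of use."""
    return purchase_usd + watts / 1000.0 * hours * ELECTRICITY_PRICE

# One fluorescent bulb versus the ten incandescents needed to cover
# the same hours (the entry says the fluorescent lasts ten times as
# long); 15 W vs. 60 W reflects the "75% less energy" figure cited
# under Energy conservation.
fluorescent = life_cycle_cost(25.00, watts=15)
incandescent = life_cycle_cost(10 * 3.75, watts=60)
print(f"fluorescent:  ${fluorescent:.2f}")   # -> $40.00
print(f"incandescent: ${incandescent:.2f}")  # -> $97.50
```

Under these assumptions, the efficient bulb costs less than half as much over its lifetime despite the much higher purchase price, which is the entry’s point about life-cycle accounting.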
RESOURCES
BOOKS
Miller, G. T., Jr. Energy and Environment: The Four Energy Crises. 2nd ed. Belmont, CA: Wadsworth Publishing Company, 1980.
Sears, F. W., and M. W. Zemansky. University Physics. 2nd ed. Reading, MA: Addison-Wesley Publishing, 1957.
OTHER
Council on Environmental Quality. Environmental Quality, 21st Annual Report. Washington, DC: U.S. Government Printing Office, 1990.
Energy flow
Understanding energy flow is vital to many environmental issues. One can describe the way ecosystems function by saying that matter cycles and energy flows. This description is based on the laws of conservation of matter and energy and on the second law of thermodynamics, the law of energy degradation. Energy flow is strictly one-way: from higher to lower, from hotter to colder. Objects cool only by loss of heat. All cooling units, such as refrigerators and air conditioners, are based on this principle: they are essentially heat pumps, absorbing heat in one place and expelling it in another.

This heat flow is explained by the laws of radiation, as seen in the colors of fire. All objects emit radiation, losing heat, but the hotter the object, the greater the amount of radiation and the shorter and more energetic the wavelength. As energy intensities rise and wavelengths shorten, the radiation changes from infrared to red, then orange, yellow, green, blue, violet, and ultraviolet. A blue flame, for example, is desired for gas appliances. A well-developed wood fire is normally yellow, but as the fire dies down and cools, the color gradually changes to orange, then red, then black. Black coals may still be very hot, giving off invisible infrared radiation. These varying wavelengths are the main differences among regions of the electromagnetic spectrum. All chemical reactions and radioactivity emit heat as a by-product. Because this heat radiates away from the source, the basis of the second law of thermodynamics, one can never achieve 100% energy efficiency. There will always be a heat-loss tax. One can slow the rate of heat loss with insulating devices, but never stop it: as insulators absorb heat, their temperatures rise, and they in turn lose heat.

There are three main applications of energy flow to environmental concerns. First, only about 10% of the food energy passed up the food chain/web is retained as body mass; 90% flows to the atmosphere as heat (a brief numerical sketch of this rule follows the entry). In terms of caloric efficiency, more calories are obtained by eating plant food than meat. Since fats are more likely to be part of the 10% retained as body mass, pesticides dissolved in fat are subject to bioaccumulation and biomagnification. This explains the high levels of DDT in birds of prey like the peregrine falcon (Falco peregrinus) and the brown pelican (Pelecanus occidentalis).
Second, the percentage of waste heat is an indicator of energy efficiency. In incandescent light bulbs, 5% of the energy produces light and 95% heat, just the opposite of the highly efficient firefly. Electrical generation from fossil fuels or nuclear power produces vast amounts of waste heat.

Third, control of heat flow is a key to comfortable indoor air and to addressing global warming. Well-insulated buildings retard heat flow, reducing energy use. Atmospheric greenhouse gases, such as anthropogenic carbon dioxide and methane, retard heat flow to space, which theoretically should cause global temperatures to rise. Policies that reduce these greenhouse gases allow a more natural flow of heat back to space. See also Greenhouse effect [Nathan H. Meleen]
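As a brief illustration of the 10% rule mentioned above, the following Python sketch steps energy up a food chain. The starting energy value and the number of levels are arbitrary placeholders, not data from this entry.

```python
TRANSFER_FRACTION = 0.10  # roughly 10% retained at each trophic level

def energy_by_level(base_kcal: float, levels: int) -> list[float]:
    """Energy retained as body mass at each successive trophic level."""
    return [base_kcal * TRANSFER_FRACTION**n for n in range(levels)]

# Hypothetical chain: plants -> herbivores -> carnivores -> birds of prey
for level, kcal in enumerate(energy_by_level(10_000.0, 4)):
    print(f"trophic level {level}: {kcal:>8,.0f} kcal")
# trophic level 0:   10,000 kcal
# trophic level 1:    1,000 kcal
# trophic level 2:      100 kcal
# trophic level 3:       10 kcal
```

The same compounding works in reverse for fat-soluble pesticides: concentrations multiply rather than shrink at each step, which is the entry’s point about biomagnification in birds of prey.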
RESOURCES
PERIODICALS
“Energy.” National Geographic 159 (February 1981): 2–23.
Energy Information Administration see U.S. Department of Energy
Energy path, hard vs. soft
What will energy use patterns in the year 2100 look like? Such long-term predictions are difficult, risky, and perhaps impossible. Could an American citizen in 1860 have predicted the pattern of today’s energy use? Yet there are reasons to believe that some dramatic changes in the ways we use energy may be in store over the next century. Most importantly, the world’s supplies of nonrenewable energy—especially coal, oil, and natural gas—continue to decrease. Critics have been warning for at least two decades that time is running out for the fossil fuels and that we cannot count on using them as prolifically as we have in the past.

For at least two decades, experts have debated the best way to structure our energy use patterns in the future. The two most common themes have been described (originally by physicist Amory Lovins) as the “hard path” and the “soft path.” Proponents of the hard path argue essentially that we should continue to operate in the future as we have in the past, only more efficiently. They point out that predictions from the 1960s and 1970s that our oil supplies would be depleted by the end of the century have been proved wrong. If anything, our reserves of fossil fuels may actually have increased as economic incentives have encouraged further exploration.
Our energy future, the hard-pathers say, should focus on further incentives to develop conventional energy sources such as fossil fuels and nuclear power. Such incentives might include tax breaks and subsidies for coal, uranium, and petroleum companies. When our supplies of fossil fuels do begin to be depleted, our emphasis should shift to a greater reliance on nuclear power. An important feature of the hard energy path is the development of huge, centralized coal-fired and nuclear-powered plants for the generation of electricity. One characteristic of most hard energy proposals, in fact, is the emphasis on very large, expensive, centralized systems. For example, one would normally think of solar energy as part of the soft energy path, but one proposal developed by the National Aeronautics and Space Administration (NASA) calls for a gigantic solar power station to be placed in orbit around the earth. The station would then transmit power via microwaves to centrally located receiving stations at various points on the earth’s surface.

Those who favor a soft energy path have a completely different scenario in mind. Fossil fuels and nuclear power must diminish as sources of energy as soon as possible, they say. In their place, alternative sources of power such as hydropower, geothermal energy, wind energy, and photovoltaic cells must be developed. In addition, the soft-pathers say, we should encourage conservation to extend coal, oil, and natural gas supplies as long as possible. Also, since electricity is one of the most wasteful of all forms of energy, its use should be curtailed. Most importantly, soft-path proponents maintain, energy systems of the future should be designed for small-scale use. The development of more efficient solar cells, for example, would make it possible for individual facilities to generate a significant portion of the energy they need.

Underlying the debate between hard- and soft-pathers is a fundamental question of how society should operate. On the one hand are those who favor concentrating control of resources in the hands of a relatively small number of large corporations. On the other hand are those who prefer to see that control decentralized to individual communities, neighborhoods, and families. The choice made between these two competing philosophies will probably determine which energy path the United States and the world will ultimately follow. See also Alternative energy sources; Alternative fuels; Energy and the environment [David E. Newton]
RESOURCES
BOOKS
Lovins, A. Soft Energy Paths. San Francisco: Friends of the Earth, 1977.
Energy policy
Energy policies are the actions governments take to affect the demand for energy as well as the supply of it. These actions include the ways in which governments cope with energy supply disruptions and their efforts to influence energy consumption and economic growth. The energy policies of the United States government have often worked at cross purposes, both stimulating and suppressing demand. Taxes are perhaps the most important kind of energy policy, and energy taxes are much lower in the U.S. than in other countries. This is partially responsible for the fact that energy consumption per capita is higher than elsewhere, and there is less incentive to invest in conservation or alternative technologies. Following the 1973 Arab oil embargo, the federal government instituted price controls that kept energy prices lower than they would otherwise have been, thereby stimulating consumption. Yet the government also instituted policies at the same time, such as fuel-economy standards for automobiles, which were designed to increase conservation and lower energy use. Thus, policies in the period after the embargo were contradictory: what one set of policies encouraged, the other discouraged.

The United States government has a long history of intervening in energy markets. The Natural Gas Act of 1938 gave the Federal Power Commission the right to control prices and limit new pipelines from entering the market. In 1954 the Supreme Court extended price controls to field production. Before 1970, the Texas Railroad Commission effectively controlled oil output in the United States through prorationing regulations that provided multiple owners with rights to underground pools. The federal government provided tax breaks in the form of intangible drilling expenses and gave the oil companies a depletion allowance. A program was also in place from 1959 to 1973 that limited oil imports and protected domestic producers from cheap foreign oil. The ostensible purpose of this policy was maintaining national security, but it contributed to the depletion of national reserves.

After the oil embargo, Congress passed the Emergency Petroleum Allocation Act, giving the federal government the right to allocate fuel in a time of shortage. In 1973 President Richard Nixon announced Project Independence, which was designed to eliminate dependence on foreign imports. Congress passed the Federal Non-Nuclear Research and Development Act in 1974 to focus government efforts on non-nuclear research. Finally, in 1977 Congress approved the creation of the cabinet-level U.S. Department of Energy (DOE), which had at its disposal a series of direct and indirect policy approaches designed to encourage and coerce both the energy industry and the commercial and residential sectors of the country to make changes. After
Ronald Reagan became president, many DOE programs were abolished, though DOE continued to exist, and the net impact has probably been to increase economic uncertainty.

Energy policy issues have always been highly political. Different segments of the energy industry have often been affected differently by policy changes, and various groups have long proposed divergent solutions. The energy crisis, however, intensified these conflicts. Advocates of strong government action called for policies that would alter consumption habits, reducing dependence on foreign oil and the nation’s vulnerability to an oil embargo. They were opposed by proponents of free markets, some of whom considered the government itself responsible for the crisis. Few issues were subject to such intensive scrutiny and fundamental conflicts over values as energy policies were during this period. Interest groups representing causes from energy conservation to nuclear power mobilized. Business interests also expanded their lobbying efforts. An influential advocate of the period was Amory Lovins, who helped create the renewable energy movement. His book Soft Energy Paths: Toward a Durable Peace (1977) argued that energy problems existed because large corporations and government bureaucracies had imposed expensive centralized technologies like nuclear power on society. Lovins argued that the solution lay in small-scale, dispersed technologies. He believed that the “hard path” imposed by corporations and the government led to an authoritarian, militaristic society, while the “soft path” of small-scale dispersed technologies would result in a diverse, peaceful, self-reliant society.

Because coal was so abundant, many in the 1970s considered it a solution to American dependence on foreign oil, but this expectation proved mistaken. During the 1960s, the industry had been controlled by an alliance between management and the union, but this alliance disintegrated by the time of the energy crisis, and wildcat strikes hurt productivity. Productivity also declined because of the need to address safety problems following passage of the 1969 Coal Mine Health and Safety Act. Environmental issues also hurt the industry following passage of the National Environmental Policy Act of 1969, the Clean Air Act of 1970, the Clean Water Act of 1972, and the 1977 Surface Mining Control and Reclamation Act. Worker productivity in the mines dropped sharply from 19 tons per worker-day to 14 tons, and this decreased the advantage coal had over other fuels. The 1974 Energy Supply and Environmental Coordination Act and the 1978 Fuel Use Act, which required utilities to switch to coal, had little effect on how coal was used because so few new plants were being built.

Other energy-consuming nations responded to the energy crises of 1973–74 and 1979–80 with policies that were
different from those of the United States. Japan and France, although via different routes, made substantial progress in decreasing their dependence on Mideast oil. Great Britain was the only major industrialized nation to become completely self-sufficient in energy production, but this did not greatly aid its ailing economy. When energy prices declined and then stabilized in the 1980s, many consuming nations eliminated the conservation incentives they had put in place.

Japan is the most heavily petroleum-dependent industrialized nation. To pay for a high level of energy and raw material imports, Japan must export the goods it produces. When energy prices increased after 1973, it was forced to expand exports. The rate of economic growth in Japan began to decline: annual growth in GNP averaged nearly 10% from 1963 to 1973, but from 1973 to 1983 it was just under 4%, although the association between economic growth and energy consumption has weakened. The Energy Rationalization Law of 1979 was the basis for Japan’s energy conservation efforts, providing for the financing of conservation projects and a system of tax incentives. It has been estimated that over 5% of total Japanese national investment in 1980 was for energy-saving equipment. In the cement, steel, and chemical industries, over 60% of total investment was for energy conservation. Japanese society shifted from petroleum to a reliance on other forms of energy, including nuclear power and liquefied natural gas.

In France, energy resources at the time of the oil embargo were extremely limited. The country possessed some natural gas, coal, and hydropower, but together these sources constituted only 0.7% of the world’s total energy production. By 1973, French dependence on foreign energy had grown to 76.2%: oil made up 67% of the total energy used in France, up from 25% in 1960. France had long been aware of its dependence on foreign energy and had taken steps to overcome it. Political instability in the Mideast and North Africa had led the government to take a leading role in the development of civilian nuclear power after World War II. In 1945 Charles de Gaulle set up the French Atomic Energy Commission to develop military and peaceful uses for nuclear power. The nuclear program proceeded at a very slow pace until the 1973 embargo, after which there was rapid growth in France’s reliance on nuclear power. By 1990, more than 50 reactors had been constructed, and over 70% of France’s electricity came from nuclear power. France now exports electricity to nearly all its neighbors, and its rates are among the lowest in Europe. Starting in 1976 the French government also subsidized 3,100 conservation projects at a cost of more than 8.4 billion francs, and these subsidies were particularly effective in encouraging energy conservation.
Concerned about oil supplies during World War I, the British government had taken a majority interest in British Petroleum and tried to play a leading role in the search for new oil. After World War II, the government nationalized the coal, gas, and electricity industries, creating, for ideological reasons as well as for postwar reconstruction, the National Coal Board, British Gas Corporation, and Central Electricity Generating Board. After the discovery of oil reserves in the North Sea in the 1970s, the government established the British National Oil Company. This government corporation produced about 7% of North Sea oil and ultimately handled about 60% of the oil produced there. All the energy sectors in the United Kingdom were thus either partially or completely nationalized.

Government relations with the nationalized industries were often difficult, because the two sides had different interests. The government intervened to pursue macroeconomic objectives such as price restraint, and it attempted to stimulate investment at times of unemployment. The electric and gas industries had substantial operating profits and could finance their capital requirements from their revenues, but profits in the coal industry were poor, the work force was unionized, and opposition to the closure of uneconomic mines was great. Decision-making was highly politicized in this nationalized industry, and the government had difficulty addressing its problems. It was estimated that 90% of mining losses came from 30 of the 190 pits in Great Britain, but only since 1984–85 has there been rapid mine closure and enhanced productivity. New power-plant construction was also poorly managed, and comparable coal-fired power stations cost twice as much in Great Britain as in France or Italy.

The Conservative Party proposed that the nationalized energy industries be privatized. With the exception of coal, however, these energy industries had natural monopoly characteristics: economies of scale and the need to prevent duplicate investment in fixed infrastructure. The Conservative Party called for regulation after privatization to deal with these characteristics, and it took many steps toward privatization. In only one area, however, did it carry its program to completion, abolishing the British National Oil Company and transferring its assets to private companies. See also Alternative energy sources; Corporate Average Fuel Efficiency Standards; Economic growth and the environment; Electric utilities; Energy and the environment; Energy efficiency; Energy path, hard vs. soft [Alfred A. Marcus]
RESOURCES
BOOKS
Marcus, A. A. Controversial Issues in Energy Policy. Phoenix, AZ: Sage Press, 1992.
Energy recovery
A fundamental fact about energy use in modern society is that huge quantities are lost or wasted in almost every field and application. For example, the series of processes by which nuclear energy is used to heat a home with electricity results in a loss of about 85 percent of all the energy originally stored in the uranium used in the nuclear reactor (a brief sketch of how such conversion losses compound appears at the end of this entry). Industry, utilities, and individuals could use energy far more efficiently if they could find ways to recover and reuse the energy that is now lost or wasted.

One such approach is cogeneration, the use of waste heat for some useful purpose. For example, a factory might be redesigned so that steam from its operations could be used to run a turbine and generate electricity. The electricity could then be used elsewhere in the factory or sold to power companies. Cogeneration in industry can result in savings of between 10 and 40 percent of the energy that would otherwise be wasted. Cogeneration can also work in the opposite direction. Hot water produced in a utility plant can be sold to industries that can use it for various processes. Proposals have been made to use the waste heat from electricity plants to grow flowers and vegetables in greenhouses, to heat water for commercial fish and shellfish farms, and to maintain warehouses at constant temperatures. The total energy efficiency resulting from this sharing is much greater than it would be if the utility’s water were simply discarded.

Another possible method of recovering energy is generating or capturing natural gas from biomass. For example, as organic materials decay naturally in a landfill, one of the products released is methane, the primary component of natural gas. Collecting methane from a landfill is a relatively simple procedure: vertical holes are drilled into the landfill, porous pipes are sunk into the holes, and methane diffusing into the pipes is drawn off by pumps. The recovery system at the Fresh Kills landfill on Staten Island, New York, for example, produces enough methane to heat 10,000 homes. Biomass can also be treated in a variety of ways to produce methane and other combustible materials. Sewage, for example, can be subjected to anaerobic digestion, the primary product of which is methane. Pyrolysis is a process in which organic wastes are heated to high temperatures in the absence of oxygen; the products of this reaction are solid, liquid, and gaseous hydrocarbons whose composition is similar to that of petroleum and natural gas. Perhaps the best-known example of this approach is the manufacture of methanol from biomass. When the methanol is mixed with gasoline, a new fuel, gasohol, is obtained.

Energy can also be recovered from biomass simply by combustion. The waste materials left after sugar is extracted from sugar cane, known as bagasse, have long been used as
a fuel for the boilers in which the sugar extraction occurs. The burning of garbage has also been used as an energy source in a wide variety of applications, such as the heating of homes in Sweden, the generation of electricity to run streetcars and subways in Milan, Italy, and the operation of a desalination plant in Hempstead, Long Island.

The recovery of energy that would otherwise be lost or wasted has a secondary benefit. In many cases, that wasted energy might cause pollution of the environment. For example, the wasted heat from an electric power plant may result in thermal pollution of a nearby waterway. Or the escape of methane into the atmosphere from a landfill could contribute to air pollution. Capture and recovery of the waste energy not only increases the efficiency with which energy is used, but may also reduce some pollution problems.
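The entry’s opening example, in which about 85 percent of the energy in nuclear fuel is lost on the way to heating a home, reflects the way losses multiply along a conversion chain. The sketch below shows the arithmetic; the stage efficiencies are illustrative placeholders, not figures from this entry.

```python
from math import prod

# Hypothetical stage efficiencies for an electricity-to-home-heat
# chain; each value is an assumption chosen for illustration only.
stages = {
    "fuel mining and processing": 0.95,
    "generation (heat to electricity)": 0.33,
    "transmission and distribution": 0.90,
    "electric resistance heating": 0.98,
}

overall = prod(stages.values())
print(f"fraction delivered as useful heat: {overall:.0%}")  # about 28%
# The entry's figure of roughly 85% lost (15% delivered) implies
# still larger losses at one or more stages than assumed here.
```

The point is structural: because stage efficiencies multiply, recovering waste energy at any single stage, as in cogeneration, raises the efficiency of the whole chain.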
[David E. Newton]
RESOURCES
BOOKS
Franke, R. G., and D. N. Franke. Man and the Changing Environment. New York: Holt, Rinehart and Winston, 1975.
Moran, J. M., M. D. Morgan, and J. H. Wiersma. Introduction to Environmental Science. 2nd ed. New York: W. H. Freeman, 1986.
Energy Reorganization Act (1973)
Passed in 1974 during the Ford administration, this act created the Energy Research and Development Administration (ERDA) and the Nuclear Regulatory Commission (NRC). The purpose of the act was to begin an extensive non-nuclear federal research program and to separate the regulation of nuclear power from research functions. Regulation was carried out by the NRC, while nuclear power research was carried out by ERDA. The passage of the act ended the existence of the Atomic Energy Commission, which had been the main instrument for implementing nuclear policy. In 1977 ERDA was incorporated into the newly created U.S. Department of Energy. See also Alternative energy sources; Energy policy
Energy Research and Development Administration
This agency was created in 1974 from the non-regulatory parts of the Atomic Energy Commission (AEC), and it existed until 1977, when it was incorporated into the U.S. Department of Energy. In its short life span, the Energy Research and Development Administration (ERDA) started to diversify U.S. energy research outside of nuclear power. Large-scale demonstration projects were begun in numerous areas. These included projects to convert coal and solid wastes into liquid and gaseous fuels; experiments on methods to extract and process oil shale and tar sands; and an effort to develop a viable breeder reactor that would ensure a virtually inexhaustible source of uranium for electricity. The agency also supported research on solar energy for space heating, industrial process heat, and electricity. In the short time available, basic problems could not be solved, and the achievements of many of the demonstration projects were disappointing. Nevertheless, many important advances were made in commercializing cost-effective technologies for energy conservation, such as energy-efficient lighting systems, improved heat pumps, and better heating systems. The agency also conducted successful research in environmental, safety, and health areas.
Energy taxes
The main energy tax levied in the United States is the one on petroleum, though the United States tax is half the amount levied in other major industrialized nations. As a result, gasoline prices in the United States are much lower than elsewhere, and both environmentalists and others have argued that this encourages energy consumption and environmental degradation and causes national and international security problems. In 1993, the House passed a Btu tax while the Senate passed a more modest tax on transportation fuels. A Btu tax would restrict the burning of coal and other fossil fuels, and proponents maintain that this would be both environmentally and economically beneficial. Every barrel of oil and every ton of coal that is burned adds greenhouse gases to the atmosphere, increasing the likelihood that future generations will face a global climatic calamity. United States dependence on foreign oil, much of it from potentially unstable nations like Iraq, now approaches 50%. A Btu tax would create incentives for energy conservation, and it would help stimulate the search for alternatives to oil. It would also help reduce the burgeoning trade deficit, of which foreign petroleum and petroleum-based products now constitute nearly 40%.

President Bill Clinton urged Americans to support higher energy taxes because of the considerable effect they could have on the federal budget deficit. For instance, if the government immediately raised gasoline prices to levels commonly found in other industrial nations (about $4.15 a gallon), the budget deficit would almost be eliminated. It is estimated that every penny increase in gasoline taxes yields a billion dollars in revenue for the federal treasury; in June 2002 the national debt was estimated to be about $6 trillion.
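The revenue rule of thumb above lends itself to a quick sketch. The billion-dollars-per-cent figure comes from this entry; the 50-cent increase below is an arbitrary example, not a policy figure from the entry.

```python
REVENUE_PER_CENT = 1_000_000_000  # dollars per year for each 1-cent
                                  # tax increase, per the estimate above

def annual_revenue(tax_increase_cents: float) -> float:
    """Estimated yearly federal revenue from a gasoline tax increase."""
    return tax_increase_cents * REVENUE_PER_CENT

# A hypothetical 50-cent-per-gallon increase:
print(f"${annual_revenue(50) / 1e9:.0f} billion per year")  # -> $50 billion
```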
Of course, to raise gasoline taxes immediately to these levels is utterly impractical, as the effects on the economy would be catastrophic. It would devastate the economies of rural and western states. Inflation across the country would soar, and job losses would skyrocket. Supporters of increasing energy taxes agree that the increases must be gradual and predictable, so people can adjust. Many believe they should take place over a 15-year period, after which energy prices in the United States would be roughly equivalent to those in other industrial nations.

Many economists emphasize that the positive effects of higher energy taxes will be felt only if there are no increases in government spending. It is, they believe, ultimately a question of how Americans want to be taxed. Do they want wages, profits, and savings to be taxed, as they are now, or their use of energy? In the former case, the government is taxing desirable activity that should be encouraged for the sake of job creation and economic expansion. In the latter case, it is taxing undesirable activity that should be discouraged for the sake of protecting the environment and preserving national security. See also Energy policy; Environmental economics [Alfred A. Marcus]
RESOURCES
BOOKS
Marcus, A. A. Controversial Issues in Energy Policy. Phoenix, AZ: Sage Press, 1992.
Enhydra lutris see Sea otter
Eniwetok Atoll see Bikini atoll
Enteric bacteria
Enteric bacteria are bacteria that reside in the intestines of animals. Members of the family Enterobacteriaceae, enteric bacteria are important because some of them symbiotically aid the digestion of their hosts, while other, pathogenic species cause disease or death in their host organisms. The pathogenic members of this family include species from the genera Escherichia, Salmonella, Shigella, Klebsiella, and Yersinia. All of these pathogens are closely associated with fecal contamination of foods and water. In North America, the reported incidence of salmonellosis exceeds the combined occurrence of all other reportable diseases caused by enteric bacteria.

With the exception of Shigella, most enteric pathogens must be ingested in large numbers to infect immunocompetent adults. Symptoms include gastrointestinal distress and diarrhea. The enteric bacteria related to Escherichia coli are known as the coliform bacteria. Coliform bacteria are used as indicators of pathogenic enteric bacteria in drinking and recreational waters. See also Sewage treatment
Entrainment see Air pollution
Environment
When people say “I am concerned about the environment,” what do they mean? What does the use of the definite article mean in such a statement? Is there such a thing as “the” environment?

Environment is derived from the French words environ or environner, meaning “around,” which in turn originated from the Old French virer and viron (together with the prefix en), which mean “a circle, around, the country around, or circuit.” Etymologists frequently conclude that, in English usage at least, environment is the total of the things or circumstances around an organism—including humans—though environs is limited to the “surrounding neighborhood of a specific place, the neighborhood or vicinity.” Even a brief etymological encounter with the word environment provokes two persuasive suggestions for structuring a contemporary definition. First, the word environment is identified with a totality, the everything that encompasses each and all of us, and this association is well enough established that it cannot be lightly dismissed. The very notion of “environment,” as Anatol Rapoport indicated, suggests the partitioning of a “portion of the world into regions, an inside and an outside.” The environment is the outside. Second, the word’s origin in the phrase “to environ” indicates a process derivative, one that alludes to some sort of action or interaction, at the very least implying that the encompassing is active, in some sense reciprocal: the environment, whatever its nature, is not simply an inert phenomenon to be impacted without response or without affecting the organism in return. Environment must be a relative word, because it always refers to something “environed” or enclosed.

Ecology as a discipline is focused on studying the interactions between an organism of some kind and its environment. So ecologists must be concerned with what H. L. Mason and J. H. Langenheim described as a “key concept in the structure of ecological knowledge,” but a concept with which ecologists continue to have problems of confusion
between ideas and reality—the concept of environment. Mason and Langenheim’s article “Language Analysis and the Concept Environment” continues to be the definitive statement on the use of the word environment in experimental ecology. The results of their analysis were essentially four-fold: 1) they limited environmental phenomena “in the universal sense” to only those phenomena that have an operational relation with any organism; other phenomena present that do not enter a reaction system are excluded, or dismissed as not “environmental phenomena”; 2) they restricted the word environment itself to mean “the class composed of the sum of those phenomena that enter a reaction system of the organism or otherwise directly impinge upon it,” so that physical exchange or impingement becomes the clue to a new and limited definition; 3) they specifically noted that their definition does not allude to the larger meaning implicit in the etymology of the word; and 4) they designated their limited concept as the operational environment, but stated that even when the word environment is used without qualification it still refers to the operational construct, establishing that “‘environment’ per se is synonymous with ‘operational environment’.”
This definition allows a prescribed and limited conception of environment and might work for experimental ecology, but it is much too limited for general usage. Environmental phenomena of relevance to the aforementioned concern for “the” environment must incorporate a multitude of things other than those that physically impinge on each human being. And the environment is much more interactive and overlapping than a restricted definition would have people believe. To better understand contemporary human interrelationships with the world around them, environment must be an incorporative, holistic term and concept. Thinking about the environment in the comprehensive sense—with the implication that everything is the environment, with each entity connected to each of a multitude of others—makes environment what David Currie, in a book of case studies and material on pollution, described as “not a modest concept.” But such scope and complexity, difficult as they are to resolve, intensify rather than eliminate the very real need for a kind of transcendence.
The assumption seems valid that human consciousness regarding environment needs to be raised, not restricted. Humans need increasingly to comprehend and care about what happens in faraway places and to people they do not know, but that does affect them, does impact even their localized environments, does impinge on their individual well-being. And they need to incorporate the reciprocal idea that their actions impact people and environments outside the immediate in place and time: in the world today, environmental impacts
transcend the local. Thus it is necessary that human awareness of those impacts also be transcendent. It is doubtful that confining the definition of environment to operationally narrow physical impingement could advance this goal. One suspects instead that it would significantly retard it, a retardation that contemporary human societies can ill afford. Internalization of a larger environment, including an understanding of common usages of the word, might on the other hand aid people in caring about, and assuming responsibility for, what happens to that environment and to the organisms in it. An operational definition can help people find the mechanisms to deal with problems immediate and local, but it can, if they are not careful, limit them to an unacceptably mechanistic and unfeeling approach to problems in the environment-at-large.
Acceptance of either end of the spectrum—a limited operational definition or an incorporative holistic definition—as the only definition creates more confusion than clarification. Both are needed. Outside the laboratory, however, in the study of the interactional, interdependent world of contemporary humankind, the holistic definition must have a place. A sense of the comprehensive “out there,” of the totality of world and people as a functionally significant, interacting unit, should be seeping into the consciousness of every person. Carefully chosen qualifiers can help deal with the complexity: “natural,” “built,” or “perceptual” all specify aspects of human surroundings more descriptive and less incorporative than “environment” used alone, without adjectives. Other nouns can also pick up some of the meanings of environment, though none are direct synonyms: habitat, milieu, mise en scène, and ecumene all designate specified and limited aspects of the human environment, but none except “environment” is incorporative of the whole complexity of human surroundings.
An understanding of environment must not be limited to an abstract concept that relates to daily life only in terms of whether to recycle cans or walk to work. The environment is the base for all life, the source of all goods. Poor people in underdeveloped nations know this; their day-to-day survival depends on what happens in their local environments. Whether it rains or does not, whether commercial seiners move into local fishing grounds or leave them alone, and whether local forest products are lost to the cause of world timber production all affect these people directly. What they, like so many other humans around the world, may not recognize is that “environment” now extends far beyond the bounds of the local: environment is both the intimate enclosure of the individual or a local human population and the global domain of the human species.
The Brundtland report Our Common Future recognized this with a healthy, modern definition: “The environment does not exist as a sphere separate from human actions, ambitions, and needs, and attempts to defend it in isolation from human concerns have given the word ‘environment’ a connotation of naivety in some political circles.” The report goes on to note that “the ‘environment’ is where we all live...and ‘development’ is what we all do in attempting to improve our lot within that abode. The two are inseparable.”
Each human being lives in a different environment from any other human, because every one screens their surroundings through their own individual experience and perceptions. Yet all human beings live in the same environment, an external reality that all share, draw sustenance from, and excrete into. So understanding environment becomes a dialectic, a resolution and synthesis of individual characteristics and shared conditions. Solving environmental problems depends on the intelligence exhibited in that resolution. [Gerald L. Young Ph.D.]
RESOURCES
BOOKS
Bates, M. The Human Environment. Berkeley: University of California, School of Forestry, 1962.
Dubos, R. “Environment.” In Dictionary of the History of Ideas, edited by P. P. Wiener. New York: Charles Scribner’s Sons, 1973.
PERIODICALS
Mason, H. L., and J. H. Langenheim. “Language Analysis and the Concept Environment.” Ecology 38 (April 1957): 325–340.
Patten, B. C. “Systems Approach to the Concept of Environment.” Ohio Journal of Science 78 (July 1978): 206–222.
Young, G. L. “Environment: Term and Concept in the Social Sciences.” Social Science Information 25 (March 1986): 83–124.
Environment Canada
Environment Canada is the agency with overall responsibility for the development and implementation of policies related to environmental protection, monitoring, and research within the government of Canada. Parts of this mandate are shared with other federal agencies, including those responsible for agriculture, forestry, fisheries, and nonrenewable resources such as minerals. Environment Canada also works with the environment-related agencies of Canada’s 10 provincial and two territorial governments through such groups as the Canadian Council of Ministers of the Environment. The head of Environment Canada is a minister of the federal cabinet, who is “responsible for policies and actions to preserve and enhance the quality of the environment for
the benefit of present and future generations of Canadians.”
In 1990, following a lengthy and extensive consultation process organized by Environment Canada, the Government of Canada released Canada’s Green Plan for a Healthy Environment, which details the broader goals, as well as many specific objectives, to be pursued towards achieving a state of ecologically sustainable economic development in Canada. The first and most general of the national objectives under the Green Plan is to “secure for current and future generations a safe and healthy environment, and a sound and prosperous economy.” The Green Plan is intended to set a broad environmental framework for all government activities and objectives, including the development of policies. The government of Canada has specifically committed to working toward the following priority objectives: (1) clean air, water, and land; (2) sustainable development of renewable resources; (3) protection of special places and species; (4) preserving the integrity of northern Canada; (5) global environmental security; (6) environmentally responsible decision making at all levels of society; and (7) minimizing the effects of environmental emergencies. Environment Canada will play the lead role in implementing the vision of the Green Plan and in coordinating the activities of the various agencies of the government of Canada.
To integrate the dual challenges of a new environmental agenda (set by the expectations of Canadians in general and the federal government in particular) and the need to continue to deliver traditional programs, Environment Canada is moving from a three-program to a one-program administrative structure. Under that single program, six activities are coordinated: (1) the Atmospheric Environment Service activity, through which information is provided and research conducted on weather, climate, oceanic conditions, and air quality; (2) the Conservation and Protection Service activity, which focuses on special species and places, global environmental integrity, the integrity of Canadian ecosystems, environmental emergencies, and ecological and economic interdependence; (3) the Canadian Parks Service activity, concentrating on the ecological and cultural integrity of special places, as well as on environmental and cultural citizenship; (4) the Corporate Environmental Affairs activity, dealing with environmentally responsible decision making and ecosystem-science leadership; (5) the State of the Environment Reporting activity, through which credible and comprehensive environmental information, linked with socio-economic considerations, is provided to Canadians; and (6) the Administration activity, covering corporate management and services. See also Environment; Future generations [Bill Freedman Ph.D.]
RESOURCES
BOOKS
Canada’s Green Plan for a Healthy Environment. Ottawa: Government of Canada, 1990.
Environment Canada. Annual Report, 1988–1990. Ottawa: Government of Canada, 1990.
Environmental accounting
A system of national or business accounting in which environmental assets such as air, water, and land are not treated as free and abundant resources but instead as scarce economic assets. Any environmental damage caused by the production process must be treated as an economic expense and entered on the balance sheet accordingly. It is important that this framework include the full environmental cost occurring over the full life cycle of a product: not only the environmental costs incurred in the production process, but also those resulting from the use, recycling, and disposal of the product. This is also known as the cradle-to-grave approach. See also Ecological economics
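As a minimal illustration of the idea, the following Python sketch totals environmental costs over each life-cycle stage and charges them against conventional profit. The stage names and figures are invented for the example, not drawn from any actual accounting standard.

```python
# Minimal sketch of cradle-to-grave environmental accounting.
# All cost categories and figures are hypothetical illustrations.

LIFE_CYCLE_STAGES = ("production", "use", "recycling", "disposal")

def environmentally_adjusted_profit(conventional_profit, env_costs):
    """Subtract environmental costs over the whole life cycle from profit.

    env_costs: dict mapping a life-cycle stage to its estimated
    environmental cost (same currency units as the profit figure).
    """
    total_env_cost = sum(env_costs.get(stage, 0.0) for stage in LIFE_CYCLE_STAGES)
    return conventional_profit - total_env_cost

if __name__ == "__main__":
    costs = {"production": 40.0, "use": 15.0, "recycling": 5.0, "disposal": 10.0}
    print(environmentally_adjusted_profit(200.0, costs))  # -> 130.0
```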
Environmental aesthetics
In his journal, Henry David Thoreau asked in 1859: “In what book is this world and its beauty described? Who has plotted the steps toward the discovery of beauty?” Almost 100 years later, in his book A Sand County Almanac, ecologist Aldo Leopold addressed Thoreau’s question by advocating what he called a “conservation esthetic” as the door to appreciating the richness of the natural world, and to a resulting conservation ethic. Leopold suggested that increased ecological awareness would more finely tune people’s perception of the world around them. The word aesthetics is, after all, derived from the Greek word aisthesis, literally “perception by the senses.” Leopold claimed that perception, “like all real treasures of the mind, can be split into infinitely small fractions without losing its quality,” a necessity if we are to revive our appreciation of the richness and diversity of the world that surrounds us. Instead, he thought that most recreationists are like “motorized ants who swarm the continents before learning to see [their] own back yard,” so that recreational development becomes “a job not of building roads into lovely country, but of building receptivity into the still unlovely human mind.”
Despite the fact that Thoreau and Leopold are both widely read by environmentalists and others, aesthetics has remained largely the domain of philosophy, and environmental aesthetics was until recently a neglected domain in general.
In philosophy, aesthetics has developed quite narrowly, as an esoteric subdiscipline focused on theories of the arts but exclusive of nature and environment. An environmental philosophy has emerged in recent years, but it has developed mainly from the ethics tradition in philosophy and has largely neglected the aesthetic tradition (though Sepänmaa and some other philosophers have tried to encourage an environmental aesthetics at the intersection of traditions: on the environment as an aesthetic object, and on environmental aesthetics as the philosophy of environmental criticism). While an extensive literature has emerged on environmental ethics, the gap left by philosophy for environmental aesthetics has been filled by writers in a number of disciplines, notably psychology and geography, especially through empirical research on perception. Yi-Fu Tuan’s pioneering works (e.g., Topophilia, in 1974) broadened the study of aesthetics, transcended his own discipline of geography, and continue to shape the debate today about the content and concepts appropriate to an environmental aesthetic.
In classical aesthetics, sight and hearing are considered the primary senses, since they are most immediately involved in the contemplation and appreciation of art and music. Arguably, the other three senses (touch, smell, and taste) must also be brought to bear for a true sensing of all the arts. Certainly, all five of the basic senses are central to an environmental aesthetic, an aesthetic sense, as Berleant notes, of all “the continuities that join integrated human persons with their natural and cultural condition.” To achieve this, Berleant suggests, requires what he calls “an integrated sensorium,” a “recognition of synaesthesia,” i.e., a “fusion of the sense modalities.” He claims that “perception is not passive but an active, reciprocal engagement with environment”; environmental aesthetics is what Berleant calls “an aesthetics of engagement.” Aesthetic engagement with environment involves not only the five senses but also the individual, behavioral, and personal history of each person, as well as the cultural context and continuity in which that person has developed. An environmental aesthetic is “always contextual, mediated by the [long and complex] variety of conditions and influences that shape all [human] experience.”
An environmental aesthetic is also contingent on place. A sense of place, or a developed, knowledgeable understanding of the place in which one lives, is central to a well-developed environmental aesthetic. This can lead to greater engagement with place, and more involvement with local planning and land-use decisions, among other issues. An understanding of environmental aesthetics cannot be attained without some notion of how the word and concept “environment” itself is defined. Before recent publications on environmental aesthetics, most writers on the subject focused
on the beauty of nature, for example as experienced in the national parks of the United States. As a more formal environmental aesthetics has emerged, the definition of environment (and environmental beauty) has enlarged. Sepänmaa, for example, defines “environmental aesthetics [as] the aesthetics of the real world” and includes in this “all of the observer’s external world: the natural environment, the cultural environment, and the constructed environment.” Berleant also claims that “the idea of an aesthetic environment is a new concept that enlarges the meaning of environment.” But he considers environment not only as everything “out there,” but as “everything that there is; it is all-inclusive, a total, integrated, continuous process.” This conception is hard to grasp but “nonetheless soberly realistic, for it recognizes that ultimately everything affects everything else, that humans along with all other things inhabit a single intraconnected realm.” Berleant’s conception of environment “does not differentiate between the human and the natural and...interprets everything as part of a single, continuous whole.” He advocates “the largest idea of environment [which] is the natural process as people live it, however they live it. Environment is nature experienced, nature lived.”
An environmental aesthetic, then, depends on education in human ecology: on a more complete understanding of how people connect with other people, and of how they interact with a wide range of environments, local to global. Better understanding is both a result and a cause of greater engagement with place and surroundings. The resulting connections and commitment have ramifications for the design and planning of the built environment, and for exploiting and managing the natural environment. Aesthetics, in this ecological, integrated sense, can contribute to cost-benefit decisions about how we use the environments in which we live, including city and regional planning, and to design at all levels, from the individual artifact to our grasp of the global realities of the contemporary world.
Berleant argues that environmental perception becomes an aesthetic only through a complete understanding of the totality. Though that is not possible for any one person, an appreciation of the richness and diversity of the earth’s natural and human domain is necessary to achieve such an aesthetic. An environmental aesthetic, then, is necessarily complex, but it can also provide simple commandments for everyday life: as Thoreau suggested, “to affect the quality of the day, that is the highest of arts.” [Gerald L. Young Ph.D.]
RESOURCES
BOOKS
Berleant, A. The Aesthetics of Environment. Philadelphia, PA: Temple University Press, 1992.
Nasar, J. L., ed. Environmental Aesthetics: Theory, Research, and Applications. New York: Cambridge University Press, 1988.
Environmental auditing
The environmental auditing movement gained momentum in the early 1980s as companies, beset by new liabilities associated with Superfund and old hazardous waste sites, wanted to ensure that their operations were adhering to federal and local policies and to company procedures. Most audits were initiated to avoid legal conflicts, and many companies brought in outside consultants to do the audits. The audits served many useful functions, including increasing management and employee awareness of environmental issues and initiating data collection and central monitoring of matters previously not watched as carefully. In many companies environmental auditing played a useful role in organizing information about the environment. It paved the way for the pollution prevention movement, which had a great impact on company environmental management in the late 1980s, when some companies started to view all of their pollution problems comprehensively rather than in isolation from one another.
Environmental chemistry
Environmental chemistry refers to the occurrence, movements, and transformations of chemicals in the environment. Environmental chemistry deals with naturally occurring chemicals such as metals, other elements, organic chemicals, and biochemicals that are the products of biological metabolism. It also deals with synthetic chemicals that have been manufactured by humans and dispersed into the environment, such as pesticides, polychlorinated biphenyls (PCBs), dioxins, furans, and many others.
The occurrence of chemicals refers to their presence and quantities in various compartments of the environment and ecosystems. For example, in a terrestrial ecosystem such as a forest, the most important compartments to consider are the mineral soil; the water and air present in spaces within the soil; the above-ground atmosphere; dead biomass within the soil and lying on the ground as logs and other organic debris; and living organisms, the most abundant of which are trees. Each of these components of the forest ecosystem contains a wide variety of chemicals, in some concentration and in some amount. Chemicals move between all of these compartments, as fluxes that represent elements of nutrient and mineral cycles. The movements of chemicals within and among compartments often involve a complex of transformations among
potential molecular states. There may also be changes in physical state, such as the evaporation of liquids or the crystallization of dissolved substances. The transformations of chemicals among molecular states can be illustrated by reference to the environmental cycling of sulfur. Sulfur (S) is commonly emitted to the atmosphere as the gases sulfur dioxide (SO2) or hydrogen sulfide (H2S), which are transformed by photochemical reactions into the negatively charged sulfate ion (SO42-). The sulfate may eventually be deposited with precipitation to a terrestrial ecosystem, where it may be absorbed along with soil water by tree roots and later used to synthesize biochemicals such as proteins and amino acids. Eventually, the plant may die and its biomass be deposited on the soil surface as litter. Microorganisms can then metabolize the organic matter as a source of energy and nutrients, eventually releasing simple inorganic compounds of sulfur, such as sulfate or hydrogen sulfide, into the environment. Alternatively, the plant biomass may be harvested by humans and used as a fuel, with the organic sulfur being oxidized during combustion and emitted to the atmosphere as sulfur dioxide. Organic and mineral forms of sulfur also occur in fossil fuels such as petroleum and coal, and the combustion of those materials also results in an emission of sulfur dioxide to the atmosphere.
Contamination and pollution
Contamination and pollution both refer to the presence of chemicals in the environment, but it is useful to distinguish between the two conditions. Contamination refers to the presence of one or more chemicals in concentrations higher than normally occur in the ambient environment, but not high enough to cause biological or ecological damage. In contrast, pollution occurs when chemicals occur in the environment in concentrations high enough to cause damage to organisms. Pollution results in toxicity and ecological changes, but contamination does not cause those damages. Chemicals that are commonly involved in pollution include the gases sulfur dioxide and ozone; diverse kinds of pesticides; elements such as arsenic, copper, mercury, nickel, and selenium; and some naturally occurring biochemicals. In addition, large concentrations of nutrients such as phosphate and nitrate can cause eutrophication, a type of pollution associated with excessive ecological productivity. Although any of these chemicals can cause pollution in certain situations, they most commonly occur in concentrations too small to cause toxicity or other ecological damage. Modern analytical chemistry has become extremely sophisticated, and this allows trace contamination by potentially toxic chemicals to be measured at levels much smaller than what is required to cause demonstrable physiological or ecological damage.
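The atmospheric leg of the sulfur cycle described above can be summarized with a few overall reactions. These are simplified net stoichiometries (the actual photochemistry proceeds through radical intermediates), so they should be read as a schematic rather than a full mechanism:

```latex
\begin{align*}
2\,\mathrm{H_2S} + 3\,\mathrm{O_2} &\longrightarrow 2\,\mathrm{SO_2} + 2\,\mathrm{H_2O} \\
2\,\mathrm{SO_2} + \mathrm{O_2} &\longrightarrow 2\,\mathrm{SO_3} \\
\mathrm{SO_3} + \mathrm{H_2O} &\longrightarrow \mathrm{H_2SO_4} \longrightarrow 2\,\mathrm{H^+} + \mathrm{SO_4^{2-}}
\end{align*}
```

The final dissociation step is what links atmospheric sulfur emissions to the sulfate deposited in precipitation and, in high enough concentrations, to acidic rain.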
Environmental chemistry of the atmosphere
Nitrogen gas (N2) comprises about 79% of Earth’s atmosphere by volume, while about 20% is oxygen (O2), 0.9% argon (Ar), and 0.035% carbon dioxide (CO2), with the remainder composed of a variety of trace gases. The atmosphere also contains variable concentrations of water vapor, which can range from 0.01% in frigid arctic air to 5% in humid tropical air.
The atmosphere can also contain high concentrations of gases, vapors, or particulates that are potentially harmful to people, other animals, or vegetation, or that damage buildings, art, or other materials. The most important gaseous air pollutants (listed alphabetically) are ammonia (NH3), carbon monoxide (CO), fluoride (F, usually occurring as HF), nitric oxide and nitrogen dioxide (NO and NO2, together known as oxides of nitrogen, or NOx), ozone (O3), peroxyacetyl nitrate (PAN), and sulfur dioxide (SO2). Vapors of elemental mercury and of hydrocarbons can also be air pollutants. Particulates with tiny diameters (less than 1 µm) can also be important, including dusts containing such toxic elements as arsenic, copper, lead, nickel, and vanadium; organic aerosols emitted as smoke during combustion (including toxins known as polycyclic aromatic hydrocarbons); and non-reactive minerals such as silicates. Some so-called “trace toxics” also occur in the atmosphere in extremely small concentrations. The trace toxics include persistent organochlorine chemicals such as the pesticides DDT and dieldrin, polychlorinated biphenyls (PCBs), and the dioxin TCDD. Other, less persistent pesticides may also be air pollutants close to places where they are used.
Environmental chemistry of water
Earth’s surface waters vary enormously in their concentrations of dissolved and suspended chemicals. Apart from the water itself, the chemistry of oceanic water is dominated by sodium chloride (NaCl), which has a typical concentration of about 3.5%, or 35 g/l. Also important are sulfate (2.7 g/l), magnesium (1.3 g/l), and potassium and calcium (both 0.4 g/l). Some saline lakes can have much larger concentrations of dissolved ions; Great Salt Lake in Utah, for example, contains more than 20% salts.
Fresh waters are much more dilute in ions, although the concentrations are variable among waterbodies. The most important cations in typical fresh waters are calcium (Ca2+), magnesium (Mg2+), sodium (Na+), ammonium (NH4+), and hydrogen ion (H+; this is only present in acidic waters; otherwise the hydroxide ion, OH-, occurs). The most important anions are bicarbonate (HCO3-), sulfate (SO42-), chloride (Cl-), and nitrate (NO3-). Some fresh waters have high concentrations of dissolved organic compounds, known
as humic substances, which can stain the water a tea-like color. Typical concentrations of major ions in fresh water are: calcium 15 mg/l, sulfate 11 mg/l, chloride 7 mg/l, silica 7 mg/l, sodium 6 mg/l, magnesium 4 mg/l, and potassium 3 mg/l.
The water of clean precipitation is considerably more dilute than that of surface waters such as lakes. For example, precipitation at a remote place in Nova Scotia contained 1.6 mg/l of sulfate, 1.3 mg/l chloride, 0.8 mg/l sodium, 0.7 mg/l nitrate, 0.13 mg/l calcium, 0.08 mg/l ammonium, 0.08 mg/l magnesium, and 0.08 mg/l potassium. Because that site is about 31 mi (50 km) from the Atlantic Ocean, its precipitation is influenced by sodium and chloride originating with sea spray. In comparison, a more central location in North America had a sodium concentration of 0.09 mg/l and a chloride concentration of 0.15 mg/l.
Pollution of surface waters is most often associated with the dumping of human or industrial sewage, nutrient inputs from agriculture, acidification caused by acidic precipitation or by acid-mine drainage, and industrial inputs of toxic chemicals. Eutrophication is caused when nutrient inputs produce large increases in aquatic productivity, especially in fresh waters and shallow marine waters into which sewage is dumped or that receive runoff containing agricultural fertilizers. In general, marine ecosystems become eutrophic when they are fertilized with nitrate, and freshwater systems when fertilized with phosphate. Only 35–100 µg/l or more of phosphate is enough to significantly increase the productivity of most shallow lakes, compared with a background concentration of about 10 µg/l or less.
Freshwater ecosystems can become acidified by receiving drainage from bogs, by the deposition of acidifying substances from the atmosphere (such as acidic rain), and by acid-mine drainage. Atmospheric depositions have caused a widespread acidification of surface waters in eastern North America, Scandinavia, and other places. Surface waters acidified by atmospheric depositions commonly develop pHs of about 4.5–5.5. Tens of thousands of lake and running-water ecosystems have been damaged in this way. Acidification has many biological consequences, including toxicity to many species of plants and animals, including fish.
Some industries emit metals to the environment, and these may pollute fresh and marine waters. For instance, lakes near large smelters at Sudbury, Ontario, have been polluted by sulfuric acid, copper, nickel, and other metals, which in some cases occur in concentrations large enough to cause toxicity to aquatic plants and animals.
Mercury contamination of fish is also a significant problem in many aquatic environments. This phenomenon is significant in almost all large fish and sharks, which accumulate mercury progressively during their lives and commonly have residues in their flesh that exceed 0.5 ppm (this
is the criterion set by the World Health Organization for the maximum concentration of mercury in fish intended for human consumption). It is likely, however, that the oceanic mercury is natural in origin, and not associated with human activities. Many freshwater fish also develop high concentrations of mercury in their flesh, commonly exceeding the 0.5 ppm criterion as well. This phenomenon has been demonstrated in many remote lakes. The source of the mercury may be mostly natural, or it may originate with industrial sources whose emissions are transported over long distances in the atmosphere before they are deposited to the surface. Severe mercury pollution has also occurred near certain factories, such as chlor-alkali plants and pulp mills. The most famous example occurred at Minamata, Japan, where industrial discharges led to the pollution of marine organisms, and then to the poisoning of fish-eating animals and people.
Environmental chemistry of soil and rocks
The most abundant elements in typical soils and rocks are oxygen (47%), silicon (28%), aluminum (8%), and iron (3–4%). Virtually all of the other stable elements are also present in soil and rocks, and all of these can occur in a great variety of molecular forms and minerals. Under certain circumstances, some of these chemicals can occur in relatively high concentrations, sometimes causing ecological damage. This can happen naturally, as in the case of soils influenced by so-called serpentine minerals, which can contain hundreds to thousands of ppm of nickel. In addition, industrial emissions of metals from smelters have caused severe pollution; soils near Sudbury, for example, can contain nickel and copper concentrations of up to 5,000 ppm each. Even urban environments can be severely contaminated by certain metals. Soils collected near urban factories that recycle old automobile batteries can contain lead in concentrations in the percent range, while the edges of roads can contain thousands of ppm of lead emitted through the use of leaded gasoline.
Trace toxics
Some chemicals occur in minute concentrations in water and other components of the environment, yet still manage to cause significant damage. These chemicals are sometimes referred to as trace toxics. The best examples are the numerous compounds known as halogenated hydrocarbons, particularly chlorinated hydrocarbons such as the insecticides DDT, DDD, and dieldrin; the dielectric fluids PCBs; and the chlorinated dioxin TCDD. These chemicals are not easily degraded by ultraviolet radiation or by metabolic reactions, so they are persistent in the environment. In addition, chlorinated hydrocarbons are virtually insoluble in water but are highly soluble in lipids such as fats and oils. Because most lipids in ecosystems occur within the bodies of organisms, chlorinated hydrocarbons have a marked tendency to bioaccumulate (i.e., to occur
preferentially in organisms rather than in the non-living environment). This, coupled with the persistence of these chemicals, results in their strong tendency to accumulate through food chains and webs, or biomagnify (i.e., to occur in their largest concentrations in top predators).
Fish-eating birds are examples of top predators that have been poisoned by exposure to chlorinated hydrocarbons in the environment. Some examples of species that have been affected by this type of ecotoxicity include the peregrine falcon (Falco peregrinus), bald eagle (Haliaeetus leucocephalus), osprey (Pandion haliaetus), brown pelican (Pelecanus occidentalis), double-crested cormorant (Phalacrocorax auritus), and western grebe (Aechmophorus occidentalis). Concentrations of chlorinated hydrocarbons in the water of the aquatic habitats of these birds are generally less than 1 µg/l (part per billion, or ppb), and less than 1 ng/l (part per trillion, or ppt) in the case of TCDD. However, some of the chlorinated hydrocarbons can biomagnify to tens or hundreds of mg/kg (ppm) in the fatty tissues of fish-eating birds. This can cause severe toxicity, characterized by reproductive failures and even the deaths of adult birds, both of which can cause populations to collapse.
Other trace toxics also cause ecological damage. For example, although it is only moderately persistent in aquatic environments, the insecticide carbofuran can accumulate in acidic standing water in recently treated fields. If geese, ducks, or other birds or mammals utilize those temporary aquatic habitats, they can be killed by the carbofuran residues. Large numbers of wildlife have been killed this way in North America.
Petroleum
Water pollution can also result from the occurrence of hydrocarbons in large concentrations, especially after spills of crude oil or its refined products. Oil pollution can result from accidental spills of petroleum from wrecked tankers, offshore drilling platforms, and broken pipelines, and from spills during warfare, as occurred during the Gulf War of 1991. Other important sources of oil pollution include operational discharges from tankers disposing of oily bilge waters, and chronic releases from oil refineries and urban runoff. The concentration of natural hydrocarbons in seawater is about 1 ppb, mostly due to releases from phytoplankton and bacteria. Beneath a slick of petroleum spilled at sea, however, the concentration of dissolved hydrocarbons can exceed several ppm, enough to cause toxicity to some organisms. There are also finely suspended droplets of petroleum in the water beneath slicks, a result of wave action on the floating oil. The slick and the sub-surface emulsion of oil-in-water are highly damaging to organisms that become coated with these substances.
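The degree of biomagnification described above is often expressed as a simple ratio of tissue concentration to water concentration. The Python sketch below works through the arithmetic using the order-of-magnitude figures quoted in this entry (around 1 µg/l in water, tens of mg/kg in fatty tissue); the specific numbers are illustrative, not measured data. Note the unit conversion, since the two concentrations are reported in different units.

```python
# Minimal sketch of a biomagnification (bioaccumulation) factor,
# using order-of-magnitude figures from the text; not measured data.

def biomagnification_factor(tissue_mg_per_kg, water_ug_per_l):
    """Ratio of tissue to water concentration, unitless.

    Converts both values to mg/kg, treating 1 liter of water as 1 kg,
    so 1 ug/l = 0.001 mg/kg.
    """
    water_mg_per_kg = water_ug_per_l / 1000.0
    return tissue_mg_per_kg / water_mg_per_kg

if __name__ == "__main__":
    # ~50 mg/kg in fatty tissue vs. ~1 ug/l in water
    print(biomagnification_factor(50.0, 1.0))  # -> 50000.0
```

A ratio on the order of tens of thousands is what allows water concentrations far below any toxicity threshold to still poison top predators.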
[Bill Freedman Ph.D.]
RESOURCES
BOOKS
Freedman, B. Environmental Ecology. 2nd ed. San Diego: Academic Press, 1995.
Hemond, H. F., and E. J. Fechner. Chemical Fate and Transport in the Environment. San Diego: Academic Press, 1994.
Manahan, S. Environmental Chemistry. 6th ed. Boca Raton, FL: Lewis Publishers, 1994.
Environmental Defense
Environmental Defense is a public interest group founded in 1967 and concerned primarily with the protection of the environment and the concomitant improvement of public health. Originally called the Environmental Defense Fund, the name was shortened to end confusion related to the use of the word fund. The organization began when a group of Long Island scientists organized to oppose local spraying of the pesticide DDT, and Environmental Defense is still staffed by scientists as well as lawyers and economists. Over time the group has expanded its interests to include air quality, energy, solid waste, water resources, agriculture, wildlife, habitats, and international environmental issues. Environmental Defense presently has a membership of approximately 300,000, an annual budget of $39.1 million, and a staff of 200 working out of eight regional offices.
Environmental Defense seeks to protect the environment by initiating legal action in environment-related matters and also by conducting public service and educational campaigns. It publishes a newsletter detailing the organization’s activities, as well as occasional books, reports, and monographs. Environmental Defense also conducts and encourages research relevant to environmental issues and promotes administrative, legislative, and corporate actions and policies in defense of the environment.
Environmental Defense’s strategies and orientation have changed somewhat over the years from the early days, when the group’s motto was “Sue the bastards!” At about the time that Frederic D. Krupp became its executive director in 1984, Environmental Defense began to view environmental problems more in light of economic needs. As Krupp put it, the practical effectiveness of the environmental movement in the future would depend on its realization that behind environmental problems “there are nearly always legitimate social needs—and that long-term solutions lie in finding alternative ways to meet those underlying needs.” With this in mind, Krupp proposed a “third stage of environmentalism,” which combined direct opposition to environmentally harmful practices with proposals for realistic, economically viable alternatives. This strategy was first applied successfully to large-scale power production in California, where utilities were planning a massive expansion of
generating capacity. Environmental Defense demonstrated that this expansion was largely unnecessary, thereby saving the utilities and their customers a considerable amount of money while also protecting the environment, by showing that the use of existing and well-established technology could greatly reduce the need for new capacity without affecting the utilities’ customers. Environmental Defense also showed that it was economically effective to buy power generated from renewable energy resources, including wind energy.
Environmental Defense worked with the McDonald’s Corporation in 1991 on a task force to reduce the fast-food giant’s estimated two million lb (907,184 kg) per day of waste. One of the most widely publicized results of these efforts was that McDonald’s was convinced to stop packaging its hamburgers in polystyrene containers. Combined with other strategies, McDonald’s estimates that the task force’s recommendations will eventually reduce its waste flow by 75%.
More recently, in 2000, Environmental Defense joined with eight companies to reduce the amount of greenhouse gases in the environment. In 2001, the Action Network (which has 750,000 members) and For My World projects were set up to inform the public on environmental and health issues. Scorecard.org rates area pollutants based on the most recent findings by the Environmental Protection Agency (EPA).
Environmental Defense continues to search for ways to harness economic forces so as to destroy incentives to degrade the environment. This approach has made Environmental Defense one of the most respected and heeded environmental groups among United States corporations. Environmental Defense virtually ghost-wrote the Bush Sr. Administration’s acid rain bill, which makes considerable use of market-oriented strategies such as the issuing of tradable emissions permits. These permits allow for a set amount of pollution per firm, based on ceilings set for entire industries. Companies can buy, sell, and trade these permits, thus giving them a profit motive to reduce harmful emissions. [Lawrence J. Biskowski]
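The permit mechanism can be made concrete with a small sketch. In the Python fragment below, each firm holds allowances under an industry-wide cap, and a firm that cuts emissions below its allowance can sell the surplus to one that has not; the firm names, tonnages, and price are hypothetical, not taken from the actual acid rain program.

```python
# Minimal sketch of tradable emissions permits (cap-and-trade).
# Firm names, cap, and price are hypothetical illustrations.

class Firm:
    def __init__(self, name, allowance_tons, emissions_tons):
        self.name = name
        self.allowance = allowance_tons   # permits held, in tons of pollutant
        self.emissions = emissions_tons   # actual emissions, in tons

    def surplus(self):
        """Unused permits (positive) or shortfall (negative)."""
        return self.allowance - self.emissions

def trade(seller, buyer, tons, price_per_ton):
    """Transfer permits from a firm with surplus to one with a shortfall."""
    assert seller.surplus() >= tons, "seller lacks surplus permits"
    seller.allowance -= tons
    buyer.allowance += tons
    return tons * price_per_ton  # payment owed to the seller

if __name__ == "__main__":
    clean = Firm("clean-coal-co", allowance_tons=100, emissions_tons=60)
    dirty = Firm("old-boiler-co", allowance_tons=100, emissions_tons=130)
    payment = trade(clean, dirty, tons=30, price_per_ton=150.0)
    print(payment, clean.surplus(), dirty.surplus())  # -> 4500.0 10 0
```

Because the cap fixes the total number of allowances, any tonnage one firm buys is tonnage another firm cannot emit, which is what creates the profit motive described above.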
RESOURCES
ORGANIZATIONS
Environmental Defense, 257 Park Avenue South, New York, NY USA 10010; (212) 505-2100; Fax: (212) 505-2375; Email: webmaster@environmentaldefense.org
Environmental Defense Fund see Environmental Defense
Environmental degradation
Degradation is the act or process of reducing something in value or worth. Environmental degradation, therefore, is the devaluing of, and damage to, the environment by natural or anthropogenic causes. The loss of biodiversity, habitat destruction, the depletion of energy or mineral sources, and the exhaustion of groundwater aquifers are all examples of environmental degradation. Presently there are four major areas of global concern due to environmental degradation: the marine environment, the ozone layer, smog and air pollution, and the vanishing rain forest.
Pollution, at some level, is found throughout the world’s oceans, which cover two-thirds of the planet’s surface. Marine debris, farm runoff, industrial waste, sewage, dredge material, stormwater runoff, and atmospheric deposition all contribute to marine pollution. The level of degradation varies from region to region, but its effects are seen in such remote places as Antarctica and the Bering Sea. Issues of waste management and disposal have had a large degrading impact on these areas. There are major national and international efforts to control pollution from shipping (including oil spills and general pollution due to ship ballast) and from direct ocean or estuary discharges. Cleanup has begun in some areas, with some initial success.
Another major problem facing the world is the depletion of the ozone layer, which is linked to the use of a group of chemicals called chlorofluorocarbons (CFCs). These chemicals are widely used by industry as refrigerants and in polystyrene products. Once released into the air they rise to the stratosphere and eat away at the ozone layer. This layer is important because it protects us from harmful ultraviolet radiation, the chief cause of skin cancer.
Smog in urban areas and air quality in general have become crucial issues in the last few decades. Acid rain occurs when sulfur dioxide and nitrogen oxides—emitted from power plants and industries—change in the atmosphere to form harmful compounds that fall to earth in rain, fog, and snow. Acid rain damages lakes and streams as well as buildings and monuments. Air visibility is curtailed, and the health of humans as well as of plants and trees can be affected. Automobile emissions of nitrogen oxides, carbon monoxide, and volatile organic compounds—although much reduced in the United States—also contribute to smog and acid rain. Carbon monoxide continues to be a problem in cities such as Los Angeles where there is heavy automobile congestion.
The vanishing rain forest is also of major global concern. The degradation of the rain forest—with its extensive logging, deforestation, and massive destruction of habitat—has threatened the survival of many species of plants
and animals, as well as disrupting climate and weather patterns locally and globally. Although tropical rain forests cover only about five to seven percent of the world’s land surface, they contain about one-half to two-thirds of all species of plants and animals, some of which have never been studied for their medicinal or food properties.
The problem of environmental degradation has been addressed by various environmental organizations throughout the world. Environmentalists are no longer solely concerned with the local region, and efforts to stop, or at least slow down, environmental degradation have taken on a global significance. [James L. Anderson]
RESOURCES
BOOKS
The Global Ecology Handbook: What You Can Do About the Environmental Crisis. Boston: Beacon Press, 1990.
Our Common Future. World Commission on Environment and Development. New York: Oxford University Press, 1987.
Preserving Our Future Today. U.S. Environmental Protection Agency. Washington, DC: U.S. Government Printing Office, 1991.
Silver, C. S., and R. S. DeFries. One Earth, One Future: Our Changing Global Environment. Washington, DC: National Academy Press, 1990.
Triedjell, S. T. “Soil and Vegetative Systems.” In Contemporary Problems in Geography. Oxford: Clarendon Press, 1988.
Environmental design
Environmental design is a new approach to planning consumer products and industrial processes that are ecologically intelligent, sustainable, and healthy for both humans and the environment. Based on the work of innovative thinkers such as architect Bill McDonough, chemist Michael Braungart, physicist Amory Lovins, Swedish physician Dr. Karl-Henrik Robèrt, and business executive Paul Hawken, this movement is an effort to rethink our whole industrial economy.
During the first Industrial Revolution 200 years ago, raw materials such as lumber, minerals, and clean water seemed inexhaustible, while nature was regarded as a hostile force to be tamed and civilized. We used materials to make the things we wanted, then discarded them when they were no longer useful. The slogan “dilution is the solution to pollution” suggests that if we just spread our wastes out in the environment widely enough, no one will notice. This approach has given us an abundance of material things, but it also has produced massive pollution and environmental degradation. It also is incredibly wasteful: on average, for every truckload of products delivered in the United States, 32 truckloads of waste are produced along the way.
The automobile is a typical example. Industrial ecologist Amory Lovins calculates that for every 100 gallons
(380 l) of gasoline burned in a car engine, only 1% (1 gal or 3.8 l) actually moves the passengers inside. All the rest is used to move the vehicle itself. The wastes produced—carbon dioxide, nitrogen oxides, unburned hydrocarbons, rubber dust, heat—are spread through the environment, where they pollute air, water, and soil. And when the vehicle wears out after only a few years of service, thousands of pounds of metal, rubber, plastic, and glass become part of our rapidly growing waste stream.
This isn’t the way things work in nature, environmental designers point out. In living systems, almost nothing is discarded or unused. The wastes from one organism become the food of another. Industrial processes, to be sustainable over the long term, should be designed on similar principles, designers argue. Rather than following current linear patterns, in which we try to maximize the throughput of materials and minimize labor, products and processes should be designed to be energy efficient and to use renewable materials. They should create products that are durable and reusable or easily dismantled for repair and remanufacture, and that are non-polluting throughout their entire life cycle. We should base our economy on renewable solar energy rather than fossil fuels. Rather than measuring our economic progress by how much material we use, we should evaluate productivity by how many people are gainfully and meaningfully employed. We should judge how well we are doing by how many factories have no smokestacks or dangerous effluents. We ought to produce nothing that will require constant vigilance from future generations.
Inspired by how ecological systems work, Bill McDonough proposes three simple principles for designing processes and products:
• Waste equals food. This principle encourages elimination of the concept of waste in industrial design. Every process should be designed so that the products themselves, as well as leftover chemicals, materials, and effluents, can become “food” for other processes.
• Rely on current solar income. This principle has two benefits: first, it diminishes, and may eventually eliminate, our reliance on hydrocarbon fuels; second, it means designing systems that sip energy rather than gulping it down.
• Respect diversity. Evaluate every design for its impact on plant, animal, and human life. What effects do products and processes have on the identity, independence, and integrity of humans and natural systems? Every project should respect the regional, cultural, and material uniqueness of its particular place.
According to McDonough, our first question about a product should be whether it is really needed. Could we obtain the same satisfaction, comfort, or utility in another way that would have less environmental and social impact? Can the things we design be restorative and regenerative: that is,
can they help reduce the damage done by earlier, wasteful approaches, and can they help nature heal rather than simply adding to existing problems? McDonough invites us to reinvent our businesses and institutions to work with nature, and to redefine ourselves as consumers, producers, and citizens so as to promote a new sustainable relationship with the Earth. In an eco-efficient economy, he says, products might be divided into three categories:
• Consumables are products like food, natural fabrics, or paper that are produced from renewable materials and can go back to the soil as compost.
• Service products are durables such as cars, televisions, and refrigerators. These products should be leased to the customer to provide their intended service, but would always belong to the manufacturer. Eventually, they would be returned to the maker, who would be responsible for recycling or remanufacturing them.
• Unmarketables are materials like radioactive isotopes, persistent toxins, and bioaccumulative chemicals. Ideally, no one would make or use these products. But because eliminating their use will take time, McDonough suggests that for now these materials should belong to the manufacturer and be molecularly tagged with the maker’s mark. If they are discovered to have been discarded illegally, the manufacturer would be liable.
Following these principles, McDonough Braungart Design Chemistry (MBDC) has created nontoxic, easily recyclable, healthy materials for buildings and for consumer goods. Rather than designing products for a “cradle to grave” life cycle, MBDC aims for a fundamental conceptual shift to “cradle to cradle” processes, whose materials perpetually circulate in closed systems that create value and are inherently healthy and safe. Among some important examples are carpets designed to be recycled at the end of their useful life, paints and adhesives that are non-toxic and non-allergenic, and clothing that is both healthy for the wearer and of minimal environmental impact in its production.
In his architecture firm, McDonough + Partners, these new design models and environmentally friendly materials have been used in a number of innovative building projects. Two notable examples are The Gap Inc. offices in California and the Environmental Studies building at Oberlin College in Ohio.
Built in 1994, The Gap building in San Bruno, California, is designed to maintain the unique natural features of the site while providing comfortable, healthy, and flexible office spaces. Intended to promote employee well-being and productivity as well as eco-efficiency, The Gap building has high ceilings; open, airy spaces; a natural ventilation system including operable windows; a full-service fitness center (including a pool); and a landscaped atrium for each office bay that brings the outside in. Skylights in the roof deliver
daylight to interior offices and vent warm, stale air. Warm interior tones and natural woods (all wood used in the building was harvested by certified sustainable methods) give a friendly feel. Paints, adhesives, and floor coverings are low in toxicity, to maintain a healthy indoor environment. A pleasant place to work, the offices help recruit top employees and improve both effectiveness and retention.
The roof of The Gap building is planted with native grasses and wildflowers that absorb rainwater and help improve ambient environmental quality. The grass roof also is beautiful and provides thermal and acoustic insulation. At night, cool outdoor air is flushed through the building to provide natural cooling. By providing abundant daylight, high-efficiency fluorescent lamps, fresh-air ventilation, and other energy-saving measures, this pioneering building is more than 30% more energy efficient than required by California law. Operating savings within the first four to eight years of occupancy are expected to repay the initial costs of these design innovations.
An even more environmentally friendly building was built at Oberlin College in 2001 to house its Environmental Studies Program. Under the leadership of Dr. David Orr, the Adam Joseph Lewis Center is planned around the concept of ecological stewardship and is intended to be both “restorative” and “regenerative,” rather than merely non-damaging to the environment. The building is designed to be a net energy exporter, generating more power from renewable sources than it consumes annually. More than 3,700 sq ft (roughly 350 sq m) of photovoltaic panels on the roof are expected to generate 75,000 kilowatt-hours of energy per year. The building also draws on geothermal wells for heating and cooling, and features the use of natural daylight and ventilation to maintain interior comfort levels and a healthy interior environment. High-efficiency insulation in the walls and windows is expected to make energy consumption nearly 80% lower than that of standard academic buildings in the area. The Lewis Center also incorporates an innovative “living machine” for internal wastewater treatment, a constructed wetland for storm water management, and a landscape that provides social spaces, learning opportunities with live plants, and habitat restoration. It is expected that all water used in the building will be returned to the environment in as good quality as when it entered. The water produced by natural cleaning processes should be of high enough quality for drinking, although doing so isn’t planned at present.
Taken together, these restorative and regenerative environmental design approaches could bring about a new industrial revolution. The features of environmental design are incorporated in McDonough’s “Hannover Principles,” prepared for the 2000 World’s Fair in Hannover, Germany. This manifesto for green design urges us to recognize how humans
interact with and depend on the natural world. According to McDonough, we need to recognize even distant effects and consider all aspects of human settlement, including community, dwelling, industry, and trade, in terms of the existing and evolving connections between spiritual and material consciousness. We should accept responsibility for the consequences of design decisions upon human well-being and the viability of natural systems. We have to understand the limitations of design: no human creation lasts forever, and design does not solve all problems. Those who create and plan should practice humility in the face of nature. We should treat nature as a model and mentor, not an inconvenience to be evaded and controlled. If we can incorporate these ecologically intelligent principles in our practice, we may be able to link long-term, sustainable considerations with ethical responsibility, and to reestablish the integral relationship between natural processes and human activity. [William P. Cunningham Ph.D.]
RESOURCES
BOOKS
Hawken, Paul, Amory Lovins, and L. Hunter Lovins. Natural Capitalism: Creating the Next Industrial Revolution. Back Bay Books, 2000.
Hawken, Paul. The Ecology of Commerce: A Declaration of Sustainability. New York: HarperBusiness, 1994.
Hutchison, Colin. Building to Last: The Challenge for Business Leaders. London: Earthscan, 1997.
McDonough, William. The Hannover Principles. 2000 [cited July 9, 2002].
McDonough, William, and Michael Braungart. Cradle to Cradle: A Blueprint for the Next Industrial Revolution. San Francisco, CA: North Point Press, 2002.
Environmental dispute resolution
Environmental dispute resolution (EDR), or alternative dispute resolution (ADR) as it is more generally known, is an out-of-court alternative to litigation for resolving disputes between parties. Although ADR can be used with virtually any legal dispute, it is often used to resolve environmental disputes. There are several types of ADR, ranging from the least formal to the most formal process: (a) negotiation, (b) mediation, (c) adjudication, (d) arbitration, (e) minitrial, and (f) summary jury trial.
Negotiation: Negotiation is the simplest and most often practiced form of ADR. The parties do not enter the judicial system; rather, settlements are reached in an informal setting and then reduced to written terms.
Mediation: Mediation is an extension of the direct negotiation process. The term is loosely used and is often confused with arbitration or with informal processes in general. Mediation is a process in which a neutral third party
to help disputants reach a voluntary settlement. The mediator has no authority to force the parties to reach an agreement. Mediation is often the most appropriate technique for environmental disputes because the parties often have no prior negotiating relationship and, because there are often many technical and scientific uncertainties, the assistance of a qualified professional is helpful. Mediation is also used with varying success in environmental policy-making, standard setting, determination of development choices, and the enforcement of environmental standards. Many states explicitly recognize mediation as the primary method for initially dealing with environmental disputes, and mediation procedures are written into federal environmental policy, specifically in the regulations dealing with the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) and the Resource Conservation and Recovery Act (RCRA). Mediation is not appropriate, however, for all environmental disputes, because some environmental laws were designed to encourage a slower examination of issues that impact society. Adjudication: Adjudication is sometimes referred to as “private judging.” It is an ADR process in which the parties give their evidence and arguments to a neutral third party who then renders an objective, binding decision. It is a voluntary and private procedure unless one party seeks judicial enforcement or review after the decision is made. The parties must agree on the adjudicator and the procedural rules for the process, and each side is contractually bound for the length of the proceeding. The advantage of adjudication is that a law- and/or environment-trained third party renders an objective decision based on the presented facts and legal arguments. The parties set their own rules, so an adjudicator is not bound by the legal principles of any particular jurisdiction. Private organizations provide adjudication services for fees, but these services can be expensive. Arbitration: Arbitration is a process whereby a private judge, or arbitrator, hears the arguments of the parties and renders a judgment. The process works much like a court except that the parties choose the arbitrator and the substantive law he or she should apply. The arbitrator also has much more latitude in creating remedies that are fair to both parties. People often confuse the responsibilities of arbitrators and mediators: arbitrators are passive functionaries who determine right or wrong; mediators are active functionaries who attempt to move the parties to reconciliation and agreement, regardless of who is right or wrong. Parties cannot be forced into arbitration unless the contract in question includes an arbitration clause or the parties consented to enter into arbitration after the dispute developed. Since arbitration is a contractual remedy, the
arbitrator can consider only those disputes and remedies which the parties agreed to submit to arbitration. Minitrial: A minitrial is a private process in which the parties voluntarily agree to seek a negotiated settlement. They present their cases in summary form before a panel of designated representatives of each party. The panel offers non-binding conclusions on the probable outcome of the case were it to be litigated. The parties may then use the results to assist with negotiation and settlement. Summary Jury Trial: A summary jury trial is similar to a minitrial except that the evidence is presented to a nonexpert, impartial jury rather than to a panel chosen by the parties; the jury then prepares non-binding conclusions on each of the issues in dispute. The parties may then use the jury's assessment, or “verdict,” to help with negotiation and settlement. [Kevin Wolf]
RESOURCES PERIODICALS Kubasek, N., and G. Silverman. “Environmental Mediation.” American Business Law Journal 26 (Fall 1988): 533-555. Loew, W. R., and A. M. Ramirez. “Resolving Environmental Disputes with ADR.” The Practical Real Estate Lawyer 8 (May 1992): 15-23.
Environmental economics
Environmental economics is a relatively new field, but its roots go back to the end of the nineteenth century, when economists first discussed the problem of externality. Economic transactions have external effects that are not captured by the price system; prime examples of these externalities are air pollution and water pollution. In economic theory, the absence of a price for nature's capacity to absorb wastes has an obvious remedy: economists advocate the use of surrogate prices in the form of pollution taxes and discharge fees. The non-priced aspect of the transaction then has a price, which sends a signal to producers to economize on their use of the resource. In addition to the theory of externalities, economists have recognized that certain goods, such as those provided by nature, are common property. Lacking a discrete owner, they are likely to be over-utilized and ultimately depleted, with few left for future generations, unless common property goods like the air and water are protected. Besides pollution taxes and discharge fees, economists have explored the use of marketable emission permits as a means of rectifying the market imperfection caused by pollution. Rather than establishing a unit charge for pollution, government would issue permits equivalent to an agreed-upon environmental standard. Holders of the permits would have the right to sell them to the highest bidder. The advantage of this system, wherein a market for pollution rights has been established, is that it achieves environmental quality standards directly; under a charge system, trial-and-error tinkering would be necessary to achieve the standards.
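The contrast between charges and permits can be made concrete with a small numerical sketch. The example below is an editorial illustration, not part of the original article: the two firms, their quadratic abatement-cost parameters, and the cap are all hypothetical. Whatever the cost parameters turn out to be, total emissions equal the number of permits issued, which is the sense in which a permit market achieves the standard without trial-and-error tinkering:

    # Hypothetical two-firm permit market. Each firm i has abatement cost
    # c_i * a_i**2, so its marginal abatement cost is 2 * c_i * a_i, and at
    # permit price p it abates until 2 * c_i * a_i = p.
    baseline = {"firm_A": 80.0, "firm_B": 80.0}  # uncontrolled emissions
    cost = {"firm_A": 1.0, "firm_B": 4.0}        # hypothetical cost parameters
    cap = 100.0                                  # permits issued (the standard)

    required_abatement = sum(baseline.values()) - cap  # 60 units overall
    # The market clears where total abatement, sum of p / (2 * c_i) over the
    # firms, meets the requirement; this gives the permit price directly.
    price = required_abatement / sum(1.0 / (2.0 * c) for c in cost.values())

    emissions = {f: baseline[f] - price / (2.0 * cost[f]) for f in baseline}
    print(price)                    # 96.0: both firms face the same marginal cost
    print(sum(emissions.values()))  # 100.0: the cap binds by construction

The low-cost abater (firm_A) does most of the cutting, which is the efficiency argument for trading; a pollution tax could reach the same allocation only if the regulator happened to set the rate at the market-clearing price.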
Besides discharge fees and markets for pollution rights, economists have advocated the use of cost-benefit analysis in environmental decision making. Since control costs are much easier to measure than pollution benefits, economists have concentrated on how best to estimate the benefits of a clean environment, relying on two primary methods. First, they infer from the actual decisions people make in the marketplace what value they place on a clean and healthy environment. Second, they directly ask people to make trade-off choices. The inference method might rely on residential property values, decomposing the price of a house into individual attributes including air quality, or it might rely on the wage premium risky jobs command. Despite many advances, the problem of valuing environmental benefits remains controversial, with special difficulties surrounding the quantification of the value of a human life, recreational benefits, and ecological benefits, including species and habitat survival. For instance, the question of how much a life is worth can seem repellent and even absurd, since human worth cannot truly be captured in monetary terms. Nonetheless, because the costs of reducing pollution are often immediate and apparent while the benefits are far-off and hard to determine, it is important to try to gauge what those benefits might be worth. Economists call for a more rational ordering of risks. The funds for risk reduction are not limitless, and the costs keep mounting, so risks should be viewed in a detached and analytical way. Polls suggest that Americans worry most about such dangers as oil spills, acid rain, pesticides, nuclear power, and hazardous wastes, but scientific risk assessments show that these are only low- or medium-level dangers. The greater hazards come from radon, lead, indoor air pollution, and fumes from chemicals such as benzene and formaldehyde. Radon, the odorless gas that naturally seeps up from the ground and is found in people's homes, causes as many as 20,000 lung cancer deaths per year, while hazardous waste dumps cause at most 500 cancer deaths. Yet the Environmental Protection Agency (EPA) spends over $6 billion a year to clean up hazardous waste sites while it spends only $100 million a year for radon protection.
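This disproportion can be restated as spending per estimated annual death, using only the figures quoted in this article; the division below is an editorial illustration, not a formal value-of-life calculation:

    # Crude ratio of annual federal spending to estimated annual cancer
    # deaths, using only the figures quoted in the article.
    programs = {
        "hazardous waste cleanup": (6_000_000_000, 500),
        "radon protection": (100_000_000, 20_000),
    }
    for name, (dollars, deaths) in programs.items():
        print(f"{name}: ${dollars / deaths:,.0f} per estimated annual death")
    # hazardous waste cleanup: $12,000,000 per estimated annual death
    # radon protection: $5,000 per estimated annual death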
To test a home for radon costs about $25, and to clean up a home found to be contaminated costs about $1,000. To make the entire national housing stock free from radon would cost a few billion dollars. In contrast, projected spending for cleaning up hazardous waste sites is likely to exceed $500 billion, despite the fact that only about 11 percent of such sites pose a measurable risk to human health. Greater rationality would mean that less attention would be paid to some risks and more attention to others. For instance, scientific risk assessment suggests that sizable new investments will be needed to address the dangers of ozone layer depletion and greenhouse warming. Ozone depletion is likely to result in 100,000 more cases of skin cancer by the year 2050. Global warming has the potential to cause massive catastrophe. For businesses, risk assessment provides a way to allocate costs efficiently, and they are increasingly using it as a management tool. To avoid another accident like the one at Bhopal, India, Union Carbide has set up a system by which it rates its plants “safe,” “made safer,” or “shut down.” Environmentalists, on the other hand, generally see risk assessment as a tactic of powerful interests used to prevent regulation of known dangers or to permit the building of facilities where there will be known fatalities. Even if the chances of someone contracting cancer and dying are only one in a million, someone will still perish, as the studies by risk assessors themselves document. Among particularly vulnerable groups of the population (allergy sufferers exposed to benzene, for example) the risks are likely to be much greater, perhaps as great as one fatality for every 100 persons. Environmentalists conclude that the way economists present their findings is too conservative: by treating everyone alike, they overlook the real danger to particularly vulnerable people. Risk assessment should not be used as an excuse for inaction. Environmentalists have also criticized environmental economics for its emphasis on economic growth without consideration of the unintended side-effects. Economists need to supplement estimates of the economic costs and benefits of growth with estimates of the effects of that growth that cannot be measured in economic terms. Many environmentalists also believe that the burden of proof should rest with new technologies, in that they should not be allowed simply because they advance material progress. In affluent societies especially, economic expansion is not necessary. Growth is promoted for many reasons: to restore the balance of payments, to make the nation more competitive, to create jobs, to reduce the deficit, to provide for the old and sick, and to lessen poverty. The public is encouraged to focus on statistics on productivity, balance of payments, and growth, while ignoring the obvious costs. Environmental groups, on the other hand, have argued for a steady-state economy in which population and per capita resource consumption stabilize. It is an economy with a constant number of people and goods, maintained at the lowest feasible flows of matter and energy.
Human services would play a large role in a steady-state economy because they do not require much energy or material throughput and yet contribute to economic growth. Environmental clean-up and energy conservation also would contribute, since they add to economic growth while also having a positive effect on the environment. Growth can continue, according to environmentalists, but only if the forms of growth are carefully chosen. Free time, in addition, would have to be a larger component of an environmentally acceptable future economy. Free time removes people from potentially harmful production, and it provides them with the time needed to implement alternative production processes and techniques, including organic gardening, recycling, public transportation, and home and appliance maintenance for the purposes of energy conservation. Another requirement of an environmentally acceptable economy is that people accept a new frugality, a concept that has also been labeled joyous austerity, voluntary simplicity, and conspicuous frugality. Economists represent the environment's interaction with the economy as a materials balance model. The production sector, which consists of mines and factories, extracts materials from nature and processes them into goods and services. Transportation and distribution networks move and store the finished products before they reach the point of consumption. The environment provides the material inputs needed to sustain economic activity and carries away the wastes generated by it. People have long recognized that nature is a source of material inputs to the economy, but they have been less aware that the environment plays an essential role as a receptacle for society's unwanted by-products. Some wastes are recovered by recycling, but most are absorbed by the environment. They are dumped in landfills, treated in incinerators, and disposed of as ash; they end up in the air, water, or soil. The ultimate limits to economic growth do not come only from the limited availability of raw materials in nature; nature's limited capacity to absorb wastes also constrains the economy's ability to produce. Energy plays a role in this process. It helps make food, forest products, chemicals, petroleum products, metals, and structural materials such as stone, steel, and cement. It supports materials processing by providing electricity, heating, and cooling services, and it aids in transportation and distribution. According to the laws of conservation of mass and energy, the material inputs and energy that enter the economy cannot be destroyed. Rather, they change form, finding their way back to nature in a disorganized state as unwanted and perhaps dangerous by-products.
Environmentalists use the laws of physics (the notion of entropy) to show how society systematically dissipates low-entropy, highly concentrated forms of energy, converting them to high-entropy, diffuse wastes that cannot be used again except at very high cost. They project current resource use and environmental degradation into the future to argue that civilization is running out of critical resources and that the earth cannot tolerate additional contaminants. In this view, human intervention in the form of technological innovation and capital investment, complemented by substantial human ingenuity and creativity, is insufficient to prevent this outcome unless drastic steps are taken soon. Nearly every economic benefit has an environmental cost, and the sum total of those costs in an affluent society often exceeds the benefits. The notion of carrying capacity is used to show that the earth has a limited ability to tolerate the disposal of contaminants and the depletion of resources. Economists counter these claims by arguing that limits to growth can be overcome by human ingenuity, that the benefits afforded by environmental protection have a cost, and that government programs to clean up the environment are as likely to fail as the market forces that produce pollution. The traditional economic view is that production is a function of labor and capital and, in theory, that resources are not necessary, since labor and/or capital are infinitely substitutable for resources. Impending resource scarcity results in price increases, which lead to technological substitution of capital, labor, or other resources for those that are in scarce supply. Price increases also create pressures for efficiency in use, leading to reduced consumption. Thus, resource scarcity is reflected in the price of a given commodity: as resources become scarce, their prices rise accordingly. Increases in price induce substitution and technological innovation, as people turn to less scarce resources that fulfill the same basic technological and economic needs provided by the resources no longer available in large quantities. To a large extent, the energy crises of the 1970s (the 1973 price shock induced by the Arab oil embargo and the 1979 price shock following the Iranian Revolution) were alleviated by these very processes: higher prices led to the discovery of additional supply and to conservation, and by 1985 energy prices in real terms were lower than they were in 1973. Humans respond to signals about scarcity and degradation, economists argue, so extrapolating past consumption patterns into the future without considering the human response is likely to be a futile exercise. As far back as the end of the eighteenth century, thinkers such as Thomas Malthus have made predictions about the limits to growth, but the lesson of modern history is one of technological innovation and substitution in response to price and other societal signals, not one of calamity brought about by resource exhaustion. In general, the prices of natural resources have been declining despite increased production and demand. Prices have fallen because of discoveries of new resources and because of innovations in the extraction and refinement process.
See also Greenhouse effect; Trade in pollution permits; Tragedy of the Commons [Alfred A. Marcus]
RESOURCES BOOKS Ekins, P., M. Hillman, and R. Hutchinson. The Gaia Atlas of Green Economics. New York: Doubleday, 1992. Kneese, A., R. Ayres, and R. D’Arge. Economics and the Environment: A Materials Balance Approach. Washington, DC: Resources for the Future, 1970. Marcus, A. A. Business and Society: Ethics, Government, and the World Economy. Homewood, IL: Irwin Publishing, 1993.
PERIODICALS Cropper, M. L., and W. E. Oates. “Environmental Economics.” Journal of Economic Literature (June 1992): 675-740.
Environmental education
Environmental education is fast emerging as one of the most important disciplines in the United States and in the world. Merging the ideas and philosophy of environmentalism with the structure of formal education systems, it strives to increase awareness of environmental problems as well as to foster the skills and strategies for solving those problems. Environmental issues have traditionally fallen to state, federal, and international policymakers, scientists, academics, and legal scholars. Environmental education (often referred to simply as “EE”) shifts the focus to the general population. In other words, it seeks to empower individuals with an understanding of environmental problems and the skills to solve them.
Background
The first seeds of environmental education were planted roughly a century ago and are found in the works of such writers as George Perkins Marsh, John Muir, Henry David Thoreau, and Aldo Leopold. Their writings served to bring the country's attention to the depletion of natural resources and the often detrimental impact of humans on the environment. In the early 1900s, three related fields of study arose that eventually merged to form present-day environmental education. Nature education expanded the teaching of biology, botany, and other natural sciences out into the natural world, where students learned through direct observation. Conservation education took root in the 1930s, as the importance of long-range, “wise use” management of resources intensified. Numerous state and federal agencies were created to tend public lands, and citizen organizations began forming in earnest to protect a favored animal, park, river, or other resource. Both governmental and citizen entities included
an educational component to spread their message to the general public. Many states required their schools to adopt conservation education as part of their curriculum, and teacher training programs were developed to meet the increasing demand. The Conservation Education Association formed to consolidate these efforts and help solidify citizen support for natural resource management goals. The third pillar of modern EE is outdoor education, which refers more to the method of teaching than to the subject taught. The idea is to hold classes outdoors; the topics are not restricted to environmental issues but include art, music, and other subjects. With the burgeoning of industrial output and natural resource depletion following World War II, people began to glimpse the potential environmental disasters looming ahead. The environmental movement exploded upon the public agenda in the late 1960s and early 1970s, and the public reacted emotionally and vigorously to isolated environmental crises and events. Yet it soon became clear that the solution would involve nothing short of fundamental changes in values, lifestyles, and individual behavior, and that would mean a comprehensive educational approach. In August 1970, the newly created Council on Environmental Quality called for a thorough discussion of the role of education with respect to the environment. Two months later, Congress passed the Environmental Education Act, which called for EE programs to be incorporated in all public school curricula. Although the act received little funding in the following years, it energized EE proponents and prompted many states to adopt EE plans for their schools. In 1971, the National Association for Environmental Education formed, as did a myriad of state and regional groups.
Definition
What EE means depends on one's perspective. Some see it as a teaching method or philosophy to be applied to all subjects, woven into the teaching of political science, history, economics, and so forth. Others see it as a distinct discipline, something to be taught on its own. As defined by federal statute, it is the “education process dealing with people's relationships with their natural and manmade surroundings, and includes the relation of population, pollution, resource allocation and depletion, conservation, transportation, technology and urban and rural planning to the total human environment.” One of the early leaders of the movement is William Stapp, a former professor at the University of Michigan's School of Natural Resources and the Environment. His three-pronged definition has formed the basis for much subsequent thought: “Environmental education is aimed at producing a citizenry that is knowledgeable concerning the biophysical environment and its associated problems, aware
of how to help solve these problems, and motivated to work toward their solution.” Many environmental educators believe that programs covering kindergarten through twelfth grade are necessary to instill in students an environmental ethic and a comprehensive understanding of environmental issues, so that they are prepared to deal with environmental problems in the real world. Further, an emphasis is placed on problem-solving, action, and informed behavioral changes. In its broadest sense, EE is not confined to public schools but includes efforts by governments, interest groups, universities, and news media to raise awareness. Each citizen should understand the environmental issues of his or her own community: land-use planning, traffic congestion, economic development plans, pesticide use, water pollution and air pollution, and so on.
International level
Concurrently with the emergence of EE in this country, other nations began pushing for a comprehensive approach to environmental problems within their own borders and on a global scale. In 1972, at the United Nations Conference on the Human Environment in Stockholm, the need for an international EE effort was clearly recognized and emphasized. Three years later, an International Environmental Education Workshop was held in Belgrade, from which emerged an eloquent, urgent mandate for the drastic reordering of national and international development policies. The “Belgrade Charter” called for an end to the military arms race and for a new global ethic in which “no nation should grow or develop at the expense of another nation.” It called for the eradication of poverty, hunger, illiteracy, pollution, exploitation, and domination. Central to this impassioned plea for a better world was the need for environmental education of the world's youth. That same year, the UN approved a $2 million budget to facilitate the research, coordination, and development of an international EE program among dozens of nations.
Effectiveness
There has been criticism over the last 15 years that EE too often fails to educate students and makes little difference in their behavior concerning the environment. Researchers and environmental educators have formulated a basic framework for how to improve EE: 1) Reinforce individuals for positive environmental behavior over an extended period of time. 2) Provide students with positive, informal experiences outdoors to enhance their “environmental sensitivity.” 3) Focus instruction on the concepts of “ownership” and “empowerment.” The first concept means that the learner has some personal interest or investment in the environmental issues being discussed; perhaps the student can relate more readily to concepts of solid waste disposal if there is a landfill in the neighborhood. Empowerment gives
learners the sense that they can make changes and help resolve environmental problems. 4) Design an exercise in which students thoroughly investigate an environmental issue and then develop a plan for citizen action to address the issue, complete with an analysis of the social, cultural, and ecological consequences of the action. Despite the efforts of environmental educators, the movement has a long way to go. The scope and number of critical environmental problems facing the world today far outweigh the successes of EE. Further, most countries still do not have a comprehensive EE program that prepares students, as future citizens, to make ecologically sound choices and to participate in cleaning up and caring for the environment. Lastly, educators, including the media, are largely focused on explaining the problems but fall short of offering possible solutions. The notion of “empowerment” is often absent.
Recent developments and successes in the United States
Project WILD, based in Boulder, Colorado, is a K–12 supplementary conservation and environmental education program emphasizing wildlife protection, sponsored by fish and wildlife agencies and environmental educators. The project sets up workshops in which teachers learn about wildlife issues. They in turn teach children and help students understand how they can act responsibly on behalf of wildlife and the environment. The program, begun in 1983, has grown tremendously in terms of the number of educators reached and the monetary support from states, which, combined, are spending about $3.6 million annually. The Global Rivers Environmental Education Network (GREEN), begun at the University of Michigan under the guidance of William Stapp, has likewise been enormously successful, perhaps more so. Teachers all over the world take their students down to their local river and show them how to monitor water quality, analyze watershed usage, and identify socioeconomic sources of river degradation. Lastly, and most importantly, the students then present their findings and recommendations to the local officials. These students also exchange information with other GREEN students around the world via computers. Another promising development is the National Consortium for Environmental Education and Training (NCEET), also based at the University of Michigan. A partnership of academic institutions, non-profit organizations, and corporations, NCEET was established in 1992 with a three-year, $4.8 million grant from the Environmental Protection Agency (EPA). Its main purpose is to dramatically improve the effectiveness of environmental education in the United States. The program has attacked its mission from several angles: to function as a national clearinghouse for K–12 teachers, to make available top-quality EE materials for teachers, to conduct research on effective approaches to EE, to survey and assess the EE needs of all 50 states, to establish a computer network for teachers needing access to information and resources, and to develop a teacher training manual for conducting EE workshops around the country. [Cathryn McCue]
RESOURCES BOOKS Gerston, R. Just Open the Door: A Complete Guide to Experiencing Environmental Education. Danville, IL: Interstate Printers and Publishers, 1983. Swan, M. “Forerunners of Environmental Education.” In What Makes Education Environmental?, edited by N. McInnis and D. Albrect. Louisville, KY: Data Courier and Environmental Educators, 1975.
PERIODICALS Hungerford, H. R., and T. L. Volk. “Changing Learner Behavior Through Environmental Education.” Journal of Environmental Education 21 (Spring 1990): 8-21. “The Belgrade Charter.” Connect: UNESCO-UNEP Environmental Education Newsletter 1 (January 1976).
Environmental enforcement
Environmental enforcement is the set of actions that a government takes to achieve full implementation of environmental requirements (compliance) within the regulated community and to correct or halt situations or activities that endanger the environment or public health. Experience with environmental programs has shown that enforcement is essential to compliance, because many people and institutions will not comply with a law unless there are clear consequences for noncompliance. Enforcement by the government usually includes inspections to determine the compliance status of the regulated community and to detect violations; negotiations with individuals or facility managers who are out of compliance, to develop mutually agreeable schedules and approaches for achieving compliance; legal action, when necessary, to compel compliance and to impose consequences for violating the law or for posing a threat to public health and the environment; and compliance promotion, such as educational programs, technical assistance, and subsidies, to encourage voluntary compliance. Nongovernmental groups may become involved in enforcement by detecting noncompliance, negotiating with violators, and commenting on governmental enforcement actions. They may also, if the law allows, take legal action against a violator for noncompliance or against the government for not enforcing environmental requirements. The banking and insurance industries may be indirectly involved with enforcement by requiring assurance of compliance with
environmental requirements before they issue a loan or an insurance policy to a facility. Strong social sanctions for noncompliance with environmental requirements can also be effective in ensuring compliance; for example, the public may choose to boycott a product if they believe the manufacturer is harming the environment. Environmental enforcement is based on environmental laws. An environmental law provides the vision, scope, and authority for environmental protection and restoration. Some environmental laws contain requirements, while others specify a structure and criteria for establishing requirements, which are then developed separately. Requirements may be general, in which case they apply to a group of facilities, or facility-specific. Examples of environmental enforcement programs include those that govern the ambient environment, performance, technology, work practices, dissemination of information, and product or use bans. Ambient standards (media quality standards) are goals for the quality of the ambient environment, that is, air and water quality. Ambient standards are usually written in units of concentration, and they are used to plan the levels of emissions that can be accommodated from individual sources while still meeting an area-wide goal. Ambient standards can also be used as triggers: when a standard is exceeded, monitoring or enforcement efforts are increased. Enforcement of these standards involves relating an ambient measurement to emissions or activities at a specific facility, which can be difficult. Performance standards, widely used for regulations, permits, and monitoring requirements, limit the amount or rate of particular chemicals or discharges that a facility can release into the environment in a given period of time. These standards allow sources to choose which technologies they will use to meet the standards. Performance standards are often based on the output that can be achieved by using the best available control technology. Some standards allow a source with multiple emission points to vary its emissions from each stack as long as the total sum of emissions does not exceed the permitted total (this aggregate test is illustrated in the sketch below). Compliance with emission standards is verified by sampling and monitoring, which in some cases may be difficult and/or expensive.
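The multi-stack provision reduces to a simple aggregate test. The sketch below is an editorial illustration with hypothetical stack readings and permitted totals, not a regulatory formula:

    # Hypothetical "bubble" compliance check: individual stacks may vary as
    # long as the facility-wide total stays within the permitted sum.
    def within_bubble(stack_emissions, permitted_total):
        """Return True if total emissions do not exceed the permitted total."""
        return sum(stack_emissions) <= permitted_total

    readings = [42.0, 18.5, 31.0]  # tons/year from three stacks (hypothetical)
    print(within_bubble(readings, permitted_total=100.0))  # True: 91.5 <= 100
    print(within_bubble(readings, permitted_total=90.0))   # False: 91.5 > 90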
Technology standards require the regulated community to use a particular type of technology (i.e., “best available technology”) to control and/or monitor emissions. Technology standards are effective if the equipment specified is known to perform well under the range of conditions experienced by the source. Compliance is measured by whether the equipment is installed and operating properly; proper operation over a long period of time, however, is more difficult to monitor. The use of technology standards can also inhibit technological innovation. Practice standards require or prohibit work activities that may have environmental impacts (e.g., a prohibition on carrying hazardous liquids in uncovered containers). Regulators can easily inspect for compliance and take action against noncomplying sources, but ongoing compliance is not easy to ensure. Dissemination of information and product or use bans are also governed by environmental enforcement programs. Information standards require a source of potential pollution (e.g., a manufacturer or facility involved in generating, transporting, storing, treating, and disposing of hazardous wastes) to develop and submit information to the government. For example, a source generating pollution may be required to monitor, maintain records, and report on the level of pollution generated and whether or not the source exceeds performance standards. Information requirements are also used when a potential pollution source is a product such as a new chemical or pesticide; the manufacturer may be required to test and report on the potential of the product to cause harm if released into the environment. Finally, product or use bans prohibit a product outright (banning its manufacture, sale, and/or use) or prohibit particular uses of a product. An effective environmental law should include the authority or power necessary for its own enforcement. An effective authority should govern implementation of environmental requirements, inspection and monitoring of facilities, and legal sanctions for noncompliance. One type of authority is guidance for the implementation of environmental laws through the issuance of regulations, permits, licenses, and/or guidance policies. Regulations establish, in greater detail than is specified by law, general requirements that must be met by the regulated community. Some regulations are directly enforced, while others provide criteria and procedures for developing facility-specific requirements, utilizing permits and licenses to provide the basis of enforcement. Permits are used to control activities related to the construction or operation of facilities that generate pollutants. Requirements in permits are based on specific criteria established in laws, regulations, and/or guidance. General permits specify what a class of facilities is required to do, while a facility-specific permit specifies requirements for a particular facility, often taking into account the conditions there. Licenses are permits to manufacture, test, sell, and/or distribute a product that may pose an environmental or public health risk if improperly used. Licenses may be general or facility-specific. Written guidance and policies, which are prepared by the regulator, are used to interpret and implement requirements to ensure consistency and fairness. Guidance may be necessary because not all applications of requirements can be anticipated, or when regulation is achieved by the use of facility-specific permits or licenses.
Authority is also required to provide for inspection and monitoring of facilities, with legal sanctions for noncompliance. Requirements may either be waived or prepared for facility-specific conditions. The authority will inspect regulated facilities and gain access to their records and equipment to determine whether they are in compliance. Authority is necessary to ensure that the regulated community monitors its own compliance, maintains records of its compliance activities and status, reports this information periodically to the enforcement program, and provides information during inspections. An effective law should also include the authority to take legal action against noncomplying facilities, imposing a range of monetary penalties and other sanctions on facilities that violate the law, as well as criminal sanctions on those facilities or individuals who deliberately violate the law (e.g., facilities that knowingly falsify data). Also, power should be granted to correct situations that pose an immediate and substantial threat to public health and/or the environment. The range and types of environmental enforcement response mechanisms available depend on the number and types of authorities provided to the enforcement program by environmental and related laws. Enforcement mechanisms may be designed to return violators to compliance, impose a sanction, or remove the economic benefits of noncompliance. Enforcement may require that specific actions be taken to test, monitor, or provide information. Enforcement may also correct environmental damages and modify internal company management problems. Enforcement response mechanisms include informal responses such as phone calls, site visits and inspections, warning letters, and notices of violation, which are more formal than warning letters; these provide the facility manager with a description of the violation, what should be done to correct it, and by what date. Informal responses do not penalize but can lead to more severe responses if ignored. The more formal enforcement mechanisms are backed by law and are accompanied by procedural requirements to protect the rights of the individual. Authority to use formal enforcement mechanisms for a specific situation must be provided in the applicable environmental law. Civil administrative orders are legal, independently enforceable orders issued directly by enforcement program officials that define the violation, provide evidence of the violation, and require the recipient to correct the violation within a specified time period. If the recipient violates the order, program managers can take further legal action, using additional orders or the court system to force compliance with the order. Further legal action includes the use of field citations, which are administrative orders issued by inspectors in the field; they require the violator to correct a clear-cut violation and pay a small monetary fine. Field citations are used to
handle more routine types of violations that do not pose a major threat to the environment. Legal action may also lead to civil judicial enforcement actions, which are formal lawsuits before the courts. These actions are used to require action to reduce immediate threats to public health or the environment, to enforce administrative orders that have been violated, and to make final decisions regarding orders that have been appealed. Finally, a criminal judicial response is used when a person or facility has knowingly and willfully violated the law or has committed a violation for which society has chosen to impose the most serious legal sanctions available. This response involves criminal sanctions, which may include monetary penalties and imprisonment. The criminal response is the most difficult type of enforcement, requiring intensive investigation and case development, but it can also create a significant deterrent. Environmental enforcement must include processes to balance the rights of individuals with the government's need to act quickly. A notice of violation should be issued before any action is taken, so that the finding of violation can be contested or the violation corrected before further government action. Appeals should be allowed at several stages in the enforcement process so that the finding of violation, the required remedial action, or the severity of the proposed sanction can be reviewed. There should also be dispute resolution processes for negotiations between program officials and the violator, which may include face-to-face discussions, presentations before a judge or hearing examiner, or the use of third-party mediators, arbitrators, or facilitators. [Judith Sims]
RESOURCES BOOKS Wasserman, C. E. Principles of Environmental Enforcement. Washington, D.C.: U.S. Environmental Protection Agency, 1992.
Environmental engineering
The development of environmental engineering as a discipline is a reflection of the modern need to maintain public health by providing safe drinking water and sanitation, and by treating and disposing of sewage, municipal solid waste, and pollution. Originally, sanitary engineering, a limited subdiscipline of civil engineering, performed some of these functions. But with the growth of concern for protecting the environment and the passage of laws regulating the disposal of wastes, environmental engineering has grown into a discrete discipline encompassing a wide range of activities, including “proper disposal or recycling of wastewater and solid wastes, adequate drainage of urban and rural areas
for proper sanitation, control of water, soil and atmospheric pollution and the social and environmental impact of these solutions.” Education for environmental engineers requires that they be “well informed concerning engineering problems in the field of public health, such as control of insect-borne diseases, the elimination of industrial health hazards, and the provision of adequate sanitation in urban, rural and recreational areas, and the effect of technological advance on the environment.” More broadly, environmental engineering is defined by W. E. Gilbertson as “that branch of engineering which is concerned with the application of scientific principles to (1) the protection of human populations from the effects of adverse environmental factors, (2) the protection of environments, both local and global, from the potentially deleterious effects of human activities, and (3) the improvement of environmental quality of man's health and well-being.” The American Academy of Environmental Engineers (AAEE) has defined environmental engineering as “the application of engineering principles to the management of the environment for the protection of human health; for the protection of nature's beneficial ecosystems and for environment-related enhancement of the quality of human life.” Degree-granting institutions in the United States do not necessarily consider environmental engineering a separate discipline. A report by the U.S. Engineering Manpower Commission found that only 192 baccalaureate environmental engineering degrees were granted in 1988; however, C. Robert Baillod estimates that at least 10% of the 8,800 annual graduates from baccalaureate civil engineering programs are educated to function as environmental engineers. If similar estimates are made for chemical, mechanical, geological, and other engineers who function as environmental engineers, 1,000–2,000 graduates are entering the profession each year. Data collected by Baillod indicate that this supply of environmental engineers will satisfy half the demand for 2,000–5,000 new environmental engineering graduates per year for the next decade. From 1970 to 1985, an increasing number of environmental statutes were passed at the federal and state levels in the United States, while parallel legislation was being established internationally to regulate and control environmental pollution. In the United States, the establishment of the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA)—known as Superfund for short—has provided the impetus for significant activity in remediation as well as providing industries and municipalities with
incentives (such as liability for environmental damage) to clean up and avoid pollution. In order to comply with environmental laws and also to maintain good business practice, corporations are including impacts on the environment in planning for their process engineering. A serious potential
for fines or costly liability suits exists if the design of processes is not carefully conducted with environmental safeguards. In addition to employment by the private sector, state and federal governments also employ environmental engineers. The primary function of environmental engineers in government is research and development for the implementation of regulations and their enforcement. At the federal level, agencies such as the Environmental Protection Agency (EPA), as well as the Departments of Commerce, Energy, Interior and Agriculture, employ environmental engineers. Environmental engineering is proving crucial in addressing an array of environmental needs. Techniques recently conceived by environmental engineers include:
● The development of an oil-absorbing, floating sponge-like material as a re-usable first-response material for remediating oil spills on open bodies of water.
● Design and operation of a plant to process electrolytic plating wastes collected from a large urban area, to reuse the metals and detoxify the cyanide as a means of avoiding the discharge of these wastes into the sewer system.
● Development of a process for reusing the lead, zinc, and cadmium which would otherwise be lost as a fume in the remelting of automotive scrap to form steel products.
● Use of naturally occurring bacterial agents in the cleanup of underground aquifers contaminated by prior discharges of creosote, a substance used to preserve wood products.
● Development of processes for the removal and destruction of PCBs and other hazardous organic agents spilled into soil.
● Development of sensing techniques which enable tracing of pollution to point sources and determination of the degree of pollution that has occurred, for the application of legal remedies.
● Development of process design and control instrumentation in nuclear reactors to prevent, contain, and avoid nuclear releases.
● Design and development of feedlots for animals in which the waste products are made into reusable agricultural products.
● Development of sterilization, incineration, and gas cleanup systems for the treatment of hospital wastes.
● Certification of properties to verify the absence of factors which would make new owners liable to environmental litigation (for example, absence of asbestos or of underground storage tanks for hazardous materials).
● Redesign of existing chemical plants to recycle or eliminate waste streams.
● Development of processes for recycling wastes (for example, processes for de-inking and reuse of newsprint or reuse of plastics).
These wide-ranging examples are typical of the solutions being developed by a new generation of technically trained individuals. In an increasingly populated and industrialized world, environmental engineers will continue to play a pivotal role in devising the technologies needed to minimize the impact of humans on the earth's resources. [Malcolm T. Hepworth]
RESOURCES BOOKS American Academy of Environmental Engineers. AAEE Bylaws. Annapolis, MD: 1990. Cartledge, B., ed. Monitoring the Environment. New York: Oxford University Press, 1992. Corbitt, Robert A. Standard Handbook of Environmental Engineering. New York: McGraw-Hill, 1989. Jacobsen, J., ed. Human Impact on the Environment: Ancient Roots, Current Challenges. Boulder, CO: Westview Press, 1992.
PERIODICALS Crucil, C. “Environmentally Sound Buildings Now Within Reach.” Alternatives 19 (January-February 1993): 9-10.
OTHER Gilbertson, W. E. “Environmental Quality Goals and Challenges.” In Proceedings of the Third National Environmental Engineering Education Conference, edited by P. W. Purdon. American Academy of Environmental Engineers and the Association of Environmental Engineering Professors, Drexel University, 1973.
Environmental estrogens
The United States Environmental Protection Agency (EPA) defines an environmental endocrine disruptor—the term the Agency uses for environmental estrogens—as “an exogenous agent that interferes with the synthesis, secretion, transport, binding, action, or elimination of natural hormones in the body that are responsible for the maintenance of homeostasis, reproduction, development, and/or behavior.” Dr. Theo Colborn, a zoologist and senior scientist with the World Wildlife Fund, and the person most credited with raising national awareness of the issue, describes these chemicals as “hand-me-down poisons” that are passed from mothers to offspring and may be linked to a wide range of adverse effects, including low sperm counts, infertility, genital deformities, breast and prostate cancer, neurological disorders in children such as hyperactivity and attention deficits, and developmental and reproductive disorders in wildlife. Colborn discusses these effects in her 1996 book, Our Stolen Future—co-authored with Dianne Dumanoski and John Peterson Myers—which asks: “Are we threatening our fertility, intelligence, and survival?” Some other names used for the same class of chemicals are hormone disruptors, estrogen
mimics, endocrine disrupting chemicals, and endocrine modulators. While EPA takes the position that it is “aware of and concerned” about data indicating that exposure to environmental endocrine disruptors may cause adverse impacts on human health and the environment, the Agency at present does not consider endocrine disruption to be “an adverse endpoint per se.” Rather, it is “a mode or mechanism of action potentially leading to other outcomes”—such as the health effects Colborn described, drawing on the extensive research of numerous scientists—but, in EPA's view, the link to human health effects remains an unproven hypothesis. For Colborn and a significant number of other scientists, however, enough is known to support prompt and far-reaching action to reduce exposures to these chemicals and to the myriad products that are manufactured using them. Foods, plastic packaging, and pesticides are among the sources of exposure Colborn raises concerns about in her book. Ultimately, the environmental estrogens issue is about whether these chemicals are present in the environment at high enough levels to disrupt the normal functioning of wildlife and human endocrine systems and thereby cause harmful effects. The endocrine system is one of at least three important regulatory systems in humans and other animals (the nervous and immune systems are the other two) and includes such endocrine glands as the pituitary, thyroid, pancreas, adrenal, and the male and female gonads, or testes and ovaries. These glands secrete hormones into the bloodstream, where they travel in very small concentrations and bind to specific sites called “cell receptors” in target tissues and organs. The hormones affect development, reproduction, and other bodily functions. The term “endocrine disruptors” includes not only estrogens but also antiandrogens and other agents that act on the endocrine system. The question of whether environmental endocrine disruptors may be causing effects in humans has arisen over the past decade based on a growing body of evidence about effects in wildlife exposed to dichlorodiphenyltrichloroethane (DDT), polychlorinated biphenyls (PCBs), and other chemicals. For instance, field studies have proven that tributyltin (TBT), which is used as an antifouling paint on ships, can cause “imposex” in female snails, which are now commonly found with male genitalia, including a penis and vas deferens, the sperm-transporting tube. TBT has also been shown to cause decreased egg production by the periwinkle (Littorina littorea). As early as 1985, concerns arose among scientists and the public in the United Kingdom over the effects of synthetic estrogens from birth control pills entering rivers, a concern that was heightened when anglers reported catching fish with both male and female characteristics. Other studies have found Great Lakes salmon to invariably have thyroids that were abnormal in appearance,
even when there were no overt goiters. Herring gulls (Larus argentatus) throughout the Great Lakes have also been found with enlarged thyroids. In the case of the salmon and gulls, no agent has been determined to be causing these effects. But other studies have linked DDT exposure in the Great Lakes to eggshell thinning and breakage among bald eagles and other birds. In Lake Apopka, Florida, male alligators (Alligator mississippiensis) exposed to a mixture of dicofol, DDT, and dichlorodiphenyldichloroethylene (DDE) have been “demasculinized,” with phalluses one-half to one-fourth the normal size. Red-eared turtles (Trachemys scripta) in the lake have also been demasculinized. One 1988 study reported that four of 15 female black bears (Ursus americanus) and one of four female brown bears (Ursus arctos) had, to varying degrees, male sex organs. These and nearly 300 other peer-reviewed studies have led EPA—in conjunction with the multi-agency White House Committee on Environment and Natural Resources—to develop a “framework for planning” and an extensive research agenda to answer questions about the effects of endocrine disruptors. The goal is to better understand the potential effects of such chemicals on human beings before implementing regulatory actions. The federal research agenda has been evolving through a series of workshops. As early as 1979, the National Institute of Environmental Health Sciences (NIEHS), based in Research Triangle Park, North Carolina, held an “Estrogens in the Environment” conference to evaluate the chemical properties and diverse structures among environmental estrogens. NIEHS held a second conference in 1985 that addressed numerous potential toxicological and biological effects from exposure to these chemicals. NIEHS's third conference, held in 1994, focused on detrimental effects in wildlife. At an April 1995 EPA-sponsored workshop on “Research Needs for the Risk Assessment of Health and Environmental Effects of Endocrine Disruptors,” a number of critical research questions were discussed: What do we know about the carcinogenic effects of endocrine-disrupting agents in humans and wildlife? What are the research needs in this area, including the highest-priority research needs? Similar questions were discussed for reproductive effects, neurological effects, immunological effects, and a variety of risk assessment issues. Drawing on the preceding conferences and workshops, in February 1997 EPA issued a Special Report on Environmental Endocrine Disruption: An Effects Assessment and Analysis that recommended key research needs to better understand how environmental endocrine disruptors may be causing the variety of specific effects in human beings and wildlife hypothesized by some scientists. For instance, male reproductive research should include tests that evaluate both the quantity and quality of sperm produced. Furthermore, when testing the endocrine-disrupting potential of chemicals, it is important to test for both estrogenic and
antiandrogenic activity, because new data suggest that it is possible that the latter, antiandrogenic activity, rather than estrogenic activity, is causing male reproductive effects. In the area of ecological research, EPA's special report highlighted the need for research on such issues as which chemicals or classes of chemicals can be considered genuine endocrine disruptors and what dose is needed to cause an effect. Even before environmental estrogens received a place on the federal environmental agenda as a priority concern, Colborn and other scientists first met in July 1991 in Racine, Wisconsin, to discuss their misgivings about the prevalence of estrogenic chemicals in the environment. From that meeting came the landmark “Wingspread Consensus Statement” of 21 leading researchers. The statement asserted that the scientists were certain that a large number of human-made chemicals that have been released into the environment, as well as a few natural ones, “have the potential to disrupt the endocrine system of animals, including humans,” and that many wildlife populations are already affected by these chemicals. Furthermore, the scientists expressed certainty that the effects may be entirely different in the embryo, fetus, or perinatal organism than in the adult; that effects are more often manifested in offspring than in exposed parents; that the timing of exposure in the developing organism is crucial; and that, while embryonic development is the critical exposure period, “obvious manifestations may not occur until maturity.” Besides these and other “certain” conclusions, the scientists estimated with confidence that “some of the developmental impairments reported in humans today are seen in adult offspring of parents exposed to synthetic hormone disruptors (agonists and antagonists) released in the environment” and that “unless the environmental load of synthetic hormone disruptors is abated and controlled, large scale dysfunction at the population level is possible.” The Wingspread Statement included numerous other consensus views on what models predict and the judgment of the group on the need for much greater research and a comprehensive inventory of these chemicals. The Food Quality Protection Act of 1996 (FQPA) and the Safe Drinking Water Act Amendments of 1996 require EPA to develop a screening program to determine whether pesticides or other substances cause effects in humans similar to effects produced by naturally occurring estrogens and other endocrine effects. The FQPA requires pesticide registrants to test their products for such effects and submit reports, and it requires that registrations be suspended if registrants fail to comply. Besides the EPA screening program, the United Nations Environment Programme is pursuing a multinational effort to manage “persistent organic pollutants,” including DDT and PCBs, which, though banned in the United States, are still used elsewhere and can persist in the environment and be
Environmental Encyclopedia 3
Environmental ethics
ported long-distance. In February 1997, Illinois became the first state to issue a strategy for endocrine disruptors that requires every Illinois EPA program to assess its current activities affecting these chemicals and to begin monitoring a list of known, probable, and suspected chemicals in case further action is needed in the future. [David Clarke]
RESOURCES
BOOKS
Colborn, T., D. Dumanoski, and J. P. Myers. Our Stolen Future. New York: Penguin Books, 1996.
"Estrogens in the Environment." Environmental Health Perspectives Supplements 3, supplement 7. Research Triangle Park, NC, 1995.
National Science and Technology Council, Committee on Environment and Natural Resources. The Health and Ecological Effects of Endocrine Disrupting Chemicals: A Framework for Planning. Washington, D.C., 1996.
U.S. Environmental Protection Agency. Special Report on Environmental Endocrine Disruption: An Effects Assessment and Analysis. EPA/630/R-96/012. Washington, D.C.: GPO, 1997.
Environmental ethics

Ethics is a branch of philosophy that deals with morals and values. Environmental ethics refers to the moral relationships between humans and the natural world. It addresses such questions as: Do humans have obligations or responsibilities toward the natural world, and if so, how are those responsibilities balanced against human needs and interests? Are some interests more important than others?

Efforts to answer such ethical questions have led to the development of a number of schools of ethical thought. One of these is utilitarianism, a philosophy associated with the English philosopher Jeremy Bentham and later modified by his godson John Stuart Mill. In its most basic terms, utilitarianism holds that an action is morally right if it produces the greatest good for the greatest number of people. The early conservationist Gifford Pinchot was inspired by utilitarian principles and applied them to conservation, proposing that the purpose of conservation is to protect natural resources to produce "the greatest good for the greatest number for the longest time." Although utilitarianism is a simple, practical approach to human moral dilemmas, it can also be used to justify reprehensible actions. For example, in the nineteenth century many white Americans believed that the extermination of native peoples and the appropriation of their land was the right thing to do. Most would now conclude, however, that whatever good white Americans derived from these actions does not justify the genocide and displacement of native peoples.

The tenets of utilitarian philosophy are presented in terms of human values and benefits, a clearly anthropocentric world view. Many philosophers argue that only humans are capable of acting morally and of accepting responsibility for their actions. Not all humans, however, have this capacity to be moral agents. Children, the mentally ill, and others are regarded not as moral agents but as moral subjects. They still have rights of their own—rights that moral agents have an obligation to respect. In this context, moral subjects have intrinsic value independent of the beliefs or interests of others.

Although humans have long recognized the value of non-living objects, such as machines, minerals, or rivers, the value of these objects is seen in terms of money, aesthetics, cultural significance, and so on. The important distinction is that these objects are useful or inspiring to some person; they are not ends in themselves but means to some other end. Philosophers term this instrumental value, since such objects are instruments for the satisfaction of some moral agent. This reasoning has also been applied to living things, such as domestic animals, which have often been treated simply as the means to some humanly desired end, without any inherent rights or value of their own.

Aldo Leopold, in his famous essay on environmental ethics, pointed out that not all humans have always been considered to have inherent worth and intrinsic rights. As examples he points to children, women, foreigners, and indigenous peoples—all of whom were once regarded as less than full persons, as objects or the property of an owner who could do with them whatever he wished. Most societies now recognize that all humans have intrinsic rights, and these intrinsic rights have also been extended to such entities as corporations, municipalities, and nations.

Many environmental philosophers argue that we must also extend recognition of inherent worth to all other components of the natural world, both living and non-living. In their view, the anthropocentric outlook, which considers components of the natural world to be valuable only as means to some human end, is the primary cause of environmental degradation. As an alternative, they propose a biocentric view that gives inherent value to all of the natural world regardless of its potential for human use. Paul Taylor outlines four basic tenets of biocentrism in his book Respect for Nature: 1) humans are members of earth's living community in the same way and on the same terms as all other living things; 2) humans and other species are interdependent; 3) each organism is a unique individual pursuing its own good in its own way; 4) humans are not inherently superior to other living things. These tenets also underlie the philosophy developed by the Norwegian philosopher Arne Naess known as deep ecology.

From this biocentric philosophy Taylor developed three principles of ethical conduct: 1) do not harm any natural entity that has a good of its own; 2) do not try to manipulate, control, modify, manage, or interfere with the normal functioning of natural ecosystems, biotic communities, or individual wild organisms; 3) do not deceive or mislead any animal capable of being deceived or misled. These principles led Taylor to call for an end to hunting, fishing, and trapping, to espouse vegetarianism, and to seek the exclusion of human activities from wilderness areas. However, Taylor did not extend intrinsic rights to non-living natural objects, and he assigned only limited rights to plants and domestic animals. Others argue that all natural objects, living or not, have rights.

Regardless of the appeal that certain environmental philosophies may have in the abstract, it is clear that humans must make use of the natural world if they are to survive. They must eat other organisms and compete with them for all the essentials of life, and they seek to control or eliminate harmful plants or animals. How is this intervention in the natural world justified? Stewardship is a principle that philosophers use to justify such interference. Stewardship holds that humans have a unique responsibility to care for domestic plants and animals and all other components of the natural world. In this view, humans, their knowledge, and the products of their intellect are an essential part of the natural world, neither external to it nor superfluous. Stewardship calls for humans to respect and cooperate with nature to achieve the greatest good. Because of their superior intellect, humans can improve the world and make it a better place, but only if they see themselves as an integral part of it.

Ethical dilemmas arise when two different courses of action each have valid ethical underpinnings. A classic ethical dilemma occurs when any course of action taken will cause harm, either to oneself or to others. Another sort of dilemma arises when two parties have equally valid but incompatible ethical interests. To resolve such competing ethical claims Taylor suggests five guidelines: 1) it is usually permissible for moral agents to defend themselves; 2) basic interests, those necessary for survival, take precedence over other interests; 3) when basic interests are in conflict, the least amount of harm should be done to all parties involved; 4) whenever possible, the disadvantages resulting from competing claims should be borne equally by all parties; 5) the greater the harm done to a moral agent, the greater the compensation required.

Ecofeminists do not find that utilitarianism, biocentrism, or stewardship provide adequate direction for solving environmental problems or guiding moral actions. In their view, these philosophies come out of a patriarchal system based on domination—of women, children, minorities, and nature. As an alternative, ecofeminists suggest a pluralistic, relationship-oriented approach to human interactions with the environment. Ecofeminism is concerned with nurturing, reciprocity, and connectedness, rather than with rights, responsibilities, and ownership. It challenges humans to see themselves as related to others and to nature. Out of these connections, then, will flow ethical interactions among individuals and with the natural world.

See also Animal rights; Bioregionalism; Callicott, J. Baird; Ecojustice; Environmental racism; Environmentalism; Future generations; Humanism; Intergenerational justice; Land stewardship; Rolston, Holmes; Speciesism

[Christine B. Jeryan]
RESOURCES
BOOKS
Devall, B., and G. Sessions. Deep Ecology. Layton, UT: Gibbs M. Smith, 1985.
Odell, R. Environmental Awakening: The New Revolution to Protect the Earth. Cambridge, MA: Ballinger, 1980.
Olson, S. Reflections From the North Country. New York: Knopf, 1980.
Plant, J. Healing the Wounds: The Promise of Ecofeminism. Santa Cruz, CA: New Society Publishers, 1989.
Rolston, H. Environmental Ethics. Philadelphia: Temple University Press, 1988.
Taylor, P. Respect for Nature. Princeton, NJ: Princeton University Press, 1986.
Environmental health

Environmental health is concerned with the medical effects of chemicals, pathogenic (disease-causing) organisms, or physical factors in our environment. Because our environment affects nearly every aspect of our lives in some way, environmental health is related to virtually every branch of medical science. The special focus of this discipline, however, tends to be the health effects of polluted air and water, contaminated food, and toxic or hazardous materials in our environment. Concern about these issues makes environmental health one of the most compelling reasons to be interested in environmental science.

For the majority of humans, the most immediate environmental health threat has always been pathogenic organisms. Improved sanitation, nutrition, and modern medicine in the industrialized countries have reduced or eliminated many of the communicable diseases that once threatened us. But for people in the less developed countries, where nearly 80% of the world's population lives, bacteria, viruses, fungi, parasites, worms, flukes, and other infectious agents remain major causes of illness and death. Hundreds of millions of people suffer from major diseases such as malaria, gastrointestinal infections (diarrhea, dysentery, cholera), tuberculosis, influenza, and pneumonia spread through the air, water, or food. Many of these terrible diseases could be eliminated or greatly reduced by a cleaner environment, inexpensive dietary supplements, and better medical care.

For the billion or so richest people in the world—including most of the population of the United States and Canada—diseases related to lifestyle or longevity tend to be much greater threats than conventional environmental concerns such as dirty water or polluted air. Heart attacks, strokes, cancer, depression and hypertension, traffic accidents, trauma, and AIDS lead as causes of sickness and death in wealthy countries. These diseases are becoming increasingly common in the developing world as people live longer, exercise less, eat a richer diet, and use more drugs, tobacco, and alcohol. Epidemiologists predict that by the middle of the twenty-first century these diseases of affluence will be leading causes of sickness and death everywhere.

Although a relatively minor cause of illness compared to the factors above, toxic or hazardous synthetic chemicals in the environment are becoming an increasing source of concern as industry uses more and more exotic materials to manufacture the goods we all purchase. There are many of these compounds to worry about. Somewhere around five million different chemical substances are known, about 100,000 are used in commercial quantities, and about 10,000 new ones are discovered or invented each year. Few of these materials have been thoroughly tested for toxicity. Furthermore, the process of predicting our chances of exposure and potential harm from those released into the environment remains highly controversial.

Toxins are poisonous, meaning that they react specifically with cellular components or interfere with unique physiological processes. A particular chemical may be toxic to one organism but not another, or dangerous in one type of exposure but not in others. Because of this specificity, toxins may be harmful even in very dilute concentrations. Ricin, for instance, a protein found in castor beans, is one of the most toxic materials known. Three hundred picograms (trillionths of a gram) injected intravenously is enough to kill an average mouse, and a single molecule can kill an individual cell. If humans were as sensitive as mice, a few teaspoons of this compound, divided evenly and distributed uniformly, could kill everyone in the world. This example also shows that not all toxins are produced by industry; many natural products are highly toxic.
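The arithmetic behind this claim can be checked with a rough back-of-the-envelope calculation. The sketch below is illustrative only: the 20-g mouse, 70-kg human, six-billion world population, and 5-g teaspoon are assumed round numbers, not figures given in the text.

```python
# Rough check of the ricin scaling claim above.
# Assumed values (not from the text): 20-g mouse, 70-kg human,
# ~6 billion people, ~5 g of material per teaspoon.

MOUSE_LETHAL_DOSE_G = 300e-12   # 300 picograms, from the text
MOUSE_MASS_G = 20.0             # assumed average mouse mass
HUMAN_MASS_G = 70_000.0         # assumed average human mass (70 kg)
WORLD_POPULATION = 6e9          # assumed world population
GRAMS_PER_TEASPOON = 5.0        # assumed, at roughly unit density

# Scale the mouse dose to a human by body mass -- the
# "if humans were as sensitive as mice" assumption.
human_dose_g = MOUSE_LETHAL_DOSE_G * (HUMAN_MASS_G / MOUSE_MASS_G)
total_g = human_dose_g * WORLD_POPULATION

print(f"Per-person dose: {human_dose_g * 1e6:.2f} micrograms")
print(f"World total: {total_g:.1f} g (~{total_g / GRAMS_PER_TEASPOON:.1f} teaspoons)")
```

Under these assumptions the per-person dose is about one microgram and the world total is on the order of 6 g, roughly a teaspoon and a half, consistent with the "few teaspoons" figure in the text.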
Toxins that have chronic (long-lasting) or irreversible effects are of special concern. Important examples include neurotoxins (which attack nerve cells), mutagens (which cause genetic damage), teratogens (which cause birth defects), and carcinogens (which cause cancer). Many pesticides and metals such as mercury, lead, and chromium are neurotoxins; the loss of even a few critical neurons can be highly noticeable or even lethal, making this category of great importance. Chemicals or physical factors such as radiation that damage genetic material can harm not only cells in the exposed individual but the offspring of those individuals as well.

Among the most dreaded characteristics of these chronic environmental health threats is that the initial exposure may be so small, or its results so unnoticeable, that the victim does not know that anything has happened until years later. Furthermore, the results may be catastrophic and irreversible once they do appear. These are among our worst fears and are powerful reasons that we are so apprehensive about environmental contaminants. For some chemicals, there may be no exposure—no matter how small—that is absolutely safe. Because of these fears, we often demand absolute protection from some of the most dreaded contaminants. Unfortunately, this may not be possible. There may be no way to ensure that we are never exposed to any amount of some hazards. Our only recourse may be to ask how we can reduce our exposure or mitigate the consequences of that exposure.

Despite the dangers of chronic effects from minute exposures to certain materials or factors, not all pollutants are equally dangerous, nor is every exposure an unacceptable risk. Our fear of unknown and unfamiliar industrial chemicals can lead to hysterical demands for zero exposure to risks. The fact is that life is risky. Furthermore, some materials are extremely toxic while others are only moderately or slightly so. This is expressed in the adage of the German physician Paracelsus, who said in 1540 that "the dose makes the poison." It has become a basic principle of toxicology that nearly everything is toxic at some concentration, but most materials have some lower level at which they present an insignificant risk. Sodium chloride (table salt), for instance, is essential for human life in small doses. If you were forced to eat a kilogram all at once, however, it would make you very sick; a similar amount injected all at once into your bloodstream would be lethal. How a material is delivered—at what rate, through which route of entry, in what form—is often as important as what the material is.

The movement, distribution, and fate of materials in the environment are important aspects of environmental health. Solubility is one of the most important characteristics in determining how, when, and where a material will travel through the environment and into our bodies. Chemicals that are water soluble move more rapidly and extensively but are also easier to wash off, excrete, or eliminate. Oil- or fat-soluble chemicals may not move through the environment as easily as water-soluble materials, but they may penetrate very efficiently through the skin and into tissues and organs, and they are more likely to be concentrated and stored permanently in fat deposits in the body.
The most common route of entry into the body for many materials is ingestion and absorption in the gastrointestinal (GI) tract. The GI tract and the urinary system are also the main routes of excretion of dangerous materials. Not surprisingly, the cells and tissues most intimately and continuously in contact with dangerous materials are among those most likely to be damaged. Ulcers, infections, lesions, or tumors of the mouth, esophagus, stomach, intestine, colon, kidney, bladder, and associated glands are among the most common manifestations of environmental toxins. Other common routes of entry for toxins are the respiratory system and the skin, which are also important routes for excreting or discharging unwanted materials.

Some of our most convincing evidence about the toxicity of particular chemicals in humans has come from experiments in which volunteers (students, convicts, or others) were deliberately given measured doses under controlled conditions. Because such experiments on living humans are now considered unethical, we are forced to depend on proxy experiments using computer models, tissue cultures, or laboratory animals. These proxy tests are difficult to interpret, since we cannot be sure that their results can be extrapolated to how living humans would react. The most commonly used laboratory animals in toxicity tests are rodents such as rats and mice, but different species can react very differently to the same compound. Of some 200 chemicals shown to be carcinogenic in either rats or mice, for instance, about half caused cancer in one species but not the other. How should we interpret these results? Should we assume that we are as sensitive as the most susceptible animal, as resistant as the least sensitive, or somewhere in between?

It is especially difficult to determine responses to very low levels of particular chemicals, especially when they are not highly toxic. The effects of random events, chance, and unknown complicating factors become troublesome, often resulting in a high level of uncertainty in predicting risk. The case of the sweetener saccharin is a good example of the complexities and uncertainties of risk assessment. Studies in the 1970s suggested a link between saccharin and bladder cancer in male rats. Critics pointed out that humans would have to drink 800 cans of soft drink per day to get a dose equivalent to that given to the rats; furthermore, they argued, most people are not merely large rats. The Food and Drug Administration uses a range of estimates of the probable toxicity of saccharin in humans. At current rates of consumption, the lower estimate predicts that only one person in the United States will get cancer every 1,000 years from saccharin. That is clearly inconsequential considering the advantages of reduced weight, fewer cases of diabetes, and other benefits of this sugar substitute. The upper estimate, however, suggests that 3,640 people will die each year from this same exposure. That is most certainly a risk worth worrying about.
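The gulf between these two estimates is easier to see when both are expressed as implied per-capita risks. A minimal sketch follows, assuming a U.S. population of roughly 250 million; the population figure is an illustrative assumption, not a value given in the text.

```python
# Back-calculate the per-capita annual risks implied by the two
# saccharin estimates quoted above. The ~250 million U.S.
# population is an assumed, illustrative figure.

US_POPULATION = 250e6

lower_cases_per_year = 1 / 1000.0   # one cancer every 1,000 years
upper_cases_per_year = 3640.0       # 3,640 deaths per year

lower_risk = lower_cases_per_year / US_POPULATION   # ~4e-12 per person-year
upper_risk = upper_cases_per_year / US_POPULATION   # ~1.5e-5 per person-year

print(f"Lower estimate: {lower_risk:.1e} per person per year")
print(f"Upper estimate: {upper_risk:.1e} per person per year")
print(f"Spread: about {upper_risk / lower_risk:.1e}-fold")
```

The two estimates differ by more than six orders of magnitude, which is why the same animal data can support both the "clearly inconsequential" and the "worth worrying about" conclusions.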
An emerging environmental health concern with a similarly high level of uncertainty but potentially dire consequences is the disruption of endocrine hormone functions by synthetic chemicals. About ten years ago, wildlife biologists began to report puzzling evidence of reproductive failures and abnormal development in certain wild animal populations. Alligators in a lake in central Florida, for instance, were reported to have a 90% decline in egg hatching and juvenile survival, along with feminization of adult males, including abnormally small penises and lack of sperm production. Similar reproductive problems and developmental defects were reported for trout in the Great Lakes, seagulls in California, panthers in Florida, and a number of other species. Even humans may be affected, if reports of a global reduction in sperm counts and increases in hormone-dependent cancers prove to be true. Both laboratory and field studies point to a possible role of synthetic chemicals in these problems. More than 50 chemicals, if present in high enough concentrations, are now known to mimic or disrupt the signals conveyed by naturally occurring endocrine hormones, which control almost every aspect of development, behavior, immune function, and metabolism. Among these chemicals are dioxin, polychlorinated biphenyls (PCBs), and several persistent pesticides. This field of research promises to be of great concern in the next few years because it combines dread factors of great emotional power: undetectable exposure, threat to future generations, unknown or delayed consequences, and involuntary or inequitable distribution of risk.

In spite of the seriousness of these concerns, the Environmental Protection Agency warns that we need to take a balanced view of environmental health. Allowable levels of certain organic solvents in drinking water, or of some pesticides in food, are thought to carry a risk of less than one cancer per million people in a lifetime. Many people are outraged about being exposed to this risk, yet they cheerfully accept risks thousands of times higher from activities they enjoy, such as smoking, driving a car, or eating an unhealthy diet. According to the EPA, the most important things we as individuals can do to improve our health are to reduce smoking, drive safely, eat a balanced diet, exercise reasonably, lower stress in our lives, avoid dangerous jobs, lower indoor pollutants, practice safe sex, avoid sun exposure, and prevent household accidents. Many of these factors over which we have control are much more risky than the unknown, uncontrollable environmental hazards we fear so much.

[William P. Cunningham Ph.D.]
RESOURCES
BOOKS
Foster, H. D. Health, Disease and the Environment. Boca Raton: CRC Press, 1992.
Moeller, D. W. Environmental Health. Cambridge: Harvard University Press, 1992.
Morgan, M. T. Environmental Health. Madison: Brown & Benchmark, 1993.
PERIODICALS
Hall, J. V., et al. "Valuing the Health Benefits of Clean Air." Science 255 (February 14, 1992): 812–17.
Environmental history

Much of human history has been a struggle for food, shelter, and survival in the face of nature's harshness. Three major turning points have been the use of fire, the development of agriculture, and the invention of tools and machines. Each of these advances has brought benefits to humans, but often at the cost of environmental degradation. Agriculture, for instance, increased food supplies but also brought soil erosion, population growth, and the rise of sedentary living and urban life. It was the Industrial Revolution, however, that gave humankind the power to conquer and devastate our environment. Jacob Bronowski called it an energy revolution, with power as the prime goal. As he noted, it is an ongoing revolution, with the fate of literally billions of people hanging on the outcome. The Industrial Revolution, with its initial dependence on the steam engine, iron works, and heavy use of coal, made possible our modern lifestyle with its high consumption of energy and material resources. With it, however, have come devastating levels of air, water, land, and chemical pollution.

In essence, environmental history is the story of the growing recognition of our negative impact upon nature and the corresponding public interest in correcting these abuses. Cunningham and Saigo describe four stages of conservation history and environmental activism: 1) pragmatic resource conservation; 2) moral and aesthetic resource preservation; 3) growing concern over the impact of pollution on health and ecosystems; and 4) global environmental citizenship.

Environmental history, like all history, is very much a study of key individuals and events. Included here are Thomas Robert Malthus, George Perkins Marsh, Theodore Roosevelt, and Rachel Carson. Writing at the end of the eighteenth century, Malthus was the first to develop a coherent theory of population, arguing that growth in the food supply could not keep up with the much larger growth in population. Of cruel necessity, population growth would inevitably be limited by famine, pestilence, disease, or war. Modern supporters are labeled "neo-Malthusians" and include the notable spokespersons Paul Ehrlich and Lester Brown.

In his 1864 book Man and Nature, George Perkins Marsh was the first to attack the American myth of superabundance and inexhaustible resources. Citing many examples from Mediterranean lands and the United States, he described the devastating impact of land abuse through deforestation and soil erosion. Lewis Mumford called this book "the fountainhead of the conservation movement," and Stewart Udall described it as the beginning of land wisdom in this country. Marsh's work led to forest preservation and influenced President Theodore Roosevelt and his chief forester, Gifford Pinchot.

Effective forest and wildlife protection began during Theodore Roosevelt's presidency (1901–1909), which has been called "The Golden Age of Conservation." His administration established the first wildlife refuges and national forests. At this time key differences emerged between proponents of conservation and preservation. Pinchot's policies were utilitarian, emphasizing the wise use of resources. By contrast, preservationists led by John Muir argued for leaving nature untouched. A key battle was fought over the Hetch Hetchy Reservoir in Yosemite National Park, a proposed water supply for San Francisco, California. Although Muir lost, the Sierra Club (founded in 1892) gained national prominence. Similar battles are now being waged over petroleum extraction in Alaska's Arctic National Wildlife Refuge and mining permits on federal lands, including wilderness areas.

Rachel Carson gained widespread fame through her battle against the indiscriminate use of pesticides. Her 1962 book Silent Spring has been hailed as the "fountainhead of the modern environmental movement." It has been translated into over 20 languages and is still a best seller. She argued against pesticide abuse and for the right of ordinary citizens to be safe from pesticides in their own homes. Though vigorously opposed by the chemical industry, her views were vindicated by her overpowering reliance on scientific evidence, some given surreptitiously by government scientists. In effect, Silent Spring was the opening salvo in the battle of ecologists against chemists, and much of the current mistrust of chemicals stems from her work.

Several historical events are relevant to environmental history. The closing of the American frontier at the end of the nineteenth century gave political strength to the Theodore Roosevelt presidency. The 1908 White House Conference on Conservation, organized and chaired by Gifford Pinchot, is perhaps the most prestigious and influential meeting ever held in the United States. During the 1930s, the drought in the American Dust Bowl awakened the country to the soil erosion concerns first voiced by Marsh.
The establishment of the Soil Conservation Service in 1935 was a direct response to this national tragedy. In 1955, an international symposium entitled "Man's Role in Changing the Face of the Earth" was held at Princeton University. An impressive assemblage of scholars led by the geographer Carl Ortwin Sauer, the zoologist Marston Bates, and the urban planner Lewis Mumford documented the history of human impact on the earth, the processes of human alteration of the environment, and the prospects for future habitability.

The Apollo moon voyages, especially Apollo 8 in December 1968 and the dramatic photos taken of Earth from space, awakened the world to the concept of "spaceship earth." It was as though the entire human community gave one collective gasp at the small size and fragile beauty of this one planet we call home. The two energy price shocks of the 1970s, spawned by the 1973 Arab-Israeli War and the 1979 Iranian Islamic Revolution, fundamentally altered American energy habits. Our salvation came in part from our horrific waste: simple energy conservation measures and more fuel-efficient automobiles, predominantly of foreign manufacture, produced enough savings to cause a mid-1980s crash in petroleum prices. Nonetheless, many efficiencies begun during the 1970s remain. The Montreal Protocol of 1987 is notable for being the first international agreement to phase out a damaging class of chemicals, the chlorofluorocarbons (CFCs). This was a direct response to evidence from satellite data over Antarctica that chlorine-based compounds were destroying the stratospheric ozone shield, which provides vital protection against damaging ultraviolet radiation.

Key accidents and the corresponding media coverage of environmental concerns have powerfully influenced public opinion. Common themes during the 1960s and 1970s included the "death of Lake Erie," the ravages of strip mining for coal, the confirmation of automobile exhaust as a key source of photochemical smog, destruction of the ozone layer, and the threat of global warming. The ten-hour Annenberg CPB project, Race to Save the Planet, is now common fare in environmental science telecourses.

Media coverage of specific accidents or sites has had some of the greatest impact on public awareness of environmental problems. Oil spills have provided especially vivid and troubling images. The wreck of the Torrey Canyon in 1967 off the southwestern coast of England was the first involving a supertanker, and a harbinger of even larger disasters to come, such as the Exxon Valdez spill in Alaska. The blowout of an oil well in California's Santa Barbara Channel played nightly on network television news programs. Scenes of oil-covered birds and muddy shorelines were powerful images in the battle for environmental awareness and commitment.
The Cuyahoga River in Cleveland was so polluted with petroleum waste that it actually caught fire on several occasions. Love Canal, a forgotten chemical waste dump in Niagara Falls, New York, became the inspiration for passage of the Superfund Act, a tax on chemical companies to pay for cleanup of abandoned hazardous waste sites; it was also personal vindication for Lois Marie Gibbs, leader of the Love Canal Homeowners Association. One air pollution episode in New York City was blamed for the deaths of about 300 people, and in response to another in Birmingham, Alabama, a federal judge ordered the temporary shutdown of local steel mills. The loudest alarms against the growing use of nuclear power in the United States were sounded by the combination of the 1979 near-meltdown at the Three Mile Island Nuclear Reactor in Pennsylvania and the subsequent (1986) disaster at the Chernobyl Nuclear Power Station in Ukraine.

Media coverage and growing public awareness of pollution problems have led to widespread support for corrective legislation, and a large body of such legislation was passed between 1968 and 1980. Especially notable were the National Environmental Policy Act (which created the Council on Environmental Quality), the clean air and water acts, Superfund, and the Surface Mining Control and Reclamation Act. The latter required miners to reshape the land to near its original contour and to replace topsoil, both essential keys to the long-term recovery of the land. Earlier noteworthy legislation includes the establishment in 1872 of Yellowstone as the world's first national park; the establishment of national wildlife refuges, forests, and parks, and the agencies to oversee them; and the creation of the Soil Conservation Service. The Wilderness Act of 1964 sought to set aside government land for nondestructive uses only.

Much has been accomplished, and environmental issues now command widespread public attention; a list of books, journals, environmental organizations, and relevant government agencies now fills six pages of small print in one popular environmental textbook. Nonetheless, important challenges lie ahead in the pursuit of a quality environment that will tax environmental organizations, government policymakers, and voters. Some key issues for the future can be grouped into four categories:
1) Rapidly increasing costs as control standards reach higher and higher levels. The inexpensive and easy solutions have mostly been tried; solving the air pollution problems within the Los Angeles basin is a prime example of this challenge.
2) Control of phosphates and nitrates in our waterways, which will require an increasing commitment to tertiary (or chemical) sewage treatment plants. We may also find it necessary to reroute all urban runoff through such plants.
3) Solutions to the global warming problem, if supported by ongoing scientific research, which will require alternative energy strategies, especially as the large, newly emerging economies of China, India, Brazil, and other countries seek a growing share of the total energy pie.
4) A growing conservative trend and related hostility to environmental concerns among the younger population. Consequently, the need for meaningful environmental education and dialogue will only continue to increase.
[Nathan H. Meleen]
RESOURCES
BOOKS
The American Experience. "Rachel Carson's Silent Spring." Boston: Public Broadcasting System, 1993.
Cunningham, W. P., and B. W. Saigo. Environmental Science: A Global Concern. 4th ed. Dubuque, IA: Wm. C. Brown, 1997.
Marsh, G. P. Man and Nature, or Physical Geography as Modified by Human Action. 1864. Reprint, Cambridge, MA: Harvard University Press, 1965.
Miller Jr., G. T. Living in the Environment. 9th ed. Belmont, CA: Wadsworth, 1996.
Thomas Jr., W. L., ed. Man's Role in Changing the Face of the Earth. Chicago: University of Chicago Press, 1956.
Environmental impact assessment

An environmental impact assessment is a written analysis or process that describes and details the probable and possible effects of planned industrial or civil project activities on the ecosystem, resources, and environment. The National Environmental Policy Act (NEPA) first promulgated guidelines for environmental impact assessments, with the intention that the environment receive proper emphasis among social, economic, and political priorities in governmental decision-making. The act required environmental impact assessments for major federal actions affecting the environment, and many states now have similar requirements for state and private activities. Such written assessments are called Environmental Impact Statements, or EISs.

EISs range from brief statements to extremely detailed multi-volume reports that require many years of data collection and analysis. In general, the environmental impact assessment process requires consideration and evaluation of the proposed project, its impacts, alternatives to the project, and mitigating strategies designed to reduce the severity of adverse effects. The assessments are completed by multidisciplinary teams in government agencies and consulting firms. The experience of the United States Army Corps of Engineers in detailing the impacts of projects such as dams and waterways is particularly noteworthy, as the Corps has developed comprehensive methodologies to assess the impacts of such major and complex projects, including evaluation of direct environmental impacts as well as social and economic ramifications.

The content of the assessments generally follows guidelines in the National Environmental Policy Act. Assessments usually include the following sections:
• Background information describing the affected population and the environmental setting, including archaeological and historical features, public utilities, cultural and social values, topography, hydrology, geology and soil, climatology, natural resources, and terrestrial and aquatic communities;
• A description of the proposed action detailing its purpose, location, time frame, and relationship to other projects;
• The environmental impacts of the proposed action on natural resources, ecological systems, population density, distribution and growth rate, land use, and human health. These impacts should be described in detail and include primary and secondary impacts, beneficial and adverse impacts, short- and long-term effects, the rate of recovery, and, importantly, measures to reduce or eliminate adverse effects;
• Adverse impacts that cannot be avoided, described in detail, including their magnitude and implications;
• Alternatives to the project, described and evaluated, which must include the "no action" alternative. A comparative analysis of alternatives permits the assessment of environmental benefits, risks, financial benefits and costs, and overall effectiveness;
• The reason for selecting the proposed action, justified as a balance among risks, impacts, costs, and other factors relevant to the project;
• The relationship between short-term uses and long-term maintenance of the environment, with the intent of detailing short- and long-term gains and losses;
• Reversible and irreversible impacts;
• A description of public participation in the process;
• Finally, a discussion of problems and issues raised by interested parties, such as specific federal, state, or local agencies, citizens, and activists.

The environmental impact assessment process provides a wealth of detailed technical information, and it has been effective in stopping, altering, or improving some projects. However, serious questions have been raised about the adequacy and fairness of the process. For example, assessments may be too narrow or may lack sufficient depth. The alternatives considered may reflect the judgment of decision-makers who specify the objectives, the study design, and the alternatives considered. Difficult and important questions exist regarding the balance of environmental, economic, and other interests.
Finally, these issues often play out in a politicized and highly charged atmosphere that may not be amenable to negotiation. Despite these and other limitations, environmental impact assessments help to provide a systematic approach to sharing information that can improve public decision-making.

See also Risk assessment

[Stuart Batterman]
RESOURCES
BOOKS
Rau, J., and D. G. Wooten, eds. Environmental Impact Analysis Handbook. New York: McGraw-Hill, 1980.
OTHER
The National Environmental Policy Act of 1969, as Amended. P.L. 91-190 (1 January 1970); amended by P.L. 94-83 (9 August 1975).
Environmental Impact Statement

The National Environmental Policy Act (1969) made all federal agencies responsible for analyzing any activity of theirs "significantly affecting the quality of the human environment." Environmental Impact Statements (EISs) are the assessments stipulated by this act, and these reports are required for all large projects initiated, financed, or permitted by the federal government. In addition to examining the damage a particular project might do to the environment, federal agencies are also expected to review ways of minimizing or alleviating these adverse effects, a review that can include consideration of the environmental benefits of abandoning the project altogether. The agency compiling an EIS is required to hold public hearings; it is also required to submit a draft to public review, and it is forbidden from proceeding until it releases a final version of the statement.

The NEPA has been called "the first comprehensive commitment of any modern state toward the responsible custody of its environment," and the EIS is considered one of the most important mechanisms for its enforcement. It is often difficult to identify environmental damages with remedies that can be pursued in court, but the filing of an EIS and the standards the document must meet are clear and definite requirements for which federal agencies can be held accountable. These requirements have allowed environmental groups to focus legal challenges on the adequacy of the report, contesting the way an EIS was prepared or identifying environmental effects that were not taken into account. The expense and the delays involved in defending against these challenges have often given such groups powerful leverage for convincing a company or an agency to change or omit particular elements of a project. Many environmental organizations have taken advantage of these opportunities; between 1974 and 1983, over 100 such suits were filed every year.
Although litigation over impact statements can have a decisive influence on a wide range of decisions in government and business, the legal status of these reports and the legal force of the NEPA itself are not as strong as many environmentalists believe they should be. The act does not require agencies to limit or prevent the potential environmental damage identified in an EIS. The Supreme Court upheld this interpretation in 1989, deciding that agencies are "not constrained by NEPA from deciding that other values outweigh the environmental costs." The government, in other words, is required only to identify and evaluate the adverse impacts of proposed projects; it is not required, at least by NEPA, to do anything about them. Environmentalists have long argued that environmental protection needs a stronger legal grounding than this act provides; some, such as Lynton Caldwell, who was originally involved in the drafting of the NEPA, maintain that only a constitutional amendment will serve this purpose.

In addition to the controversies over what should be included in these reports and what should be done about the information, there have also been a number of debates over who is required to file them. Environmental groups have filed suit in the Pacific Northwest alleging that the government should require logging companies to file impact statements. And many people have observed that an EIS is not actually required of all government agencies; the U.S. Department of Agriculture, for instance, is not required to file such reports on its commodity support programs.

Impact statements have been opposed by business and industrial groups since they were first introduced. An EIS can be extremely costly to compile, and the process of filing and defending one can take years. Businesses can be left in limbo over projects in which they have already invested large amounts of money, and the uncertainties of the process itself have often stopped development before it has begun. In the debate about these statements, many advocates for business interests have pointed out that environmental regulation accounts for 23% of the $400 billion the federal government spends on regulation each year. They argue that impact statements restrict the ability of the United States to compete in international markets by forcing American businesses to spend money on compliance that could be invested in research or capital improvements. Many people believe that impact statements seriously delay many aspects of economic growth, and business leaders have questioned the priorities of many environmental groups, who seem to value conservation over social benefits such as high levels of employment.

In July 1993, a judge in a federal district court ruled that the North American Free Trade Agreement (NAFTA) could not be submitted to Congress for approval until the Clinton Administration had filed an EIS on the treaty.
The controversy over whether an EIS should be required for NAFTA is a good example of the battle between those who want to extend the range of the EIS and those who want to limit it, as well as of the practical problems with the positions held by both sides. Environmentalists fear the consequences of free trade in North America, particularly free trade with Mexico. They believe that most industries would not take any precautions to protect the environment unless forced to do so. Environmental protection in Mexico, when it exists, is corrupt and inefficient; if NAFTA is approved by Congress, many believe that businesses in the United States will move south of the border to escape environmental regulations. This could have devastating consequences for the environment in Mexico, as well as an adverse impact on the United States economy. It is also possible that an extensive economic downturn, if perceived to be the result of such relocations, could affect the future of environmental regulation in this country as the United States begins to compete with Mexico over the incentives it can offer industry.

Opponents of the decision to require an EIS for NAFTA insist that such a document would be almost impossible to compile. The statement would have to be enormously complex; it would have to consider a range of economic as well as environmental factors, projecting the course of economic development in Mexico before predicting the impact on the environment. Extending the range of impact statements and the NEPA would cause the same expensive delays for this treaty that these statutes have caused for projects within the United States, and critics have focused mainly on the effect such an extension would have on American international competitiveness. They argue that this decision, if upheld, could have broad ramifications for American foreign policy. An EIS could be required for every treaty the government signs with another country, including negotiations over fishing rights, arms control treaties, and other trade agreements. Foreign policy decisions could then be subject to extensive litigation over the adequacy of the EIS filed by the appropriate agency. Many environmentalists would view this as a positive development, but others believe it could prevent the United States from assuming a leadership role in international affairs.

Carol Browner, the former chief administrator of the Environmental Protection Agency (EPA), announced that the agency was determined to reduce some of the difficulties of complying with environmental regulations. She was especially concerned with increasing efficiency and limiting the delays and uncertainties for business. But whatever changes she was able to make, the process of compiling an EIS will never seem cost-effective to business, at least in the short term, and the controversy over these statements continues.

See also Economic growth and the environment; Environmental auditing; Environmental economics; Environmental impact assessment; Environmental Monitoring and Assessment Program; Environmental policy; Life-cycle assessment; Risk analysis; Sustainable development

[Douglas Smith]
RESOURCES
PERIODICALS
Burck, C. "Surprise Judgement on NAFTA." Fortune 128 (July 26, 1993): 12.
Davies, J. "Suit Threatens Washington State Industry." Journal of Commerce 390 (October 21, 1991): 9A.
Dentzer, S. "Hasta la Vista in Court, Baby." U.S. News and World Report 115 (July 12, 1993): 47.
Ember, L. "EPA's Browner to Take Holistic Approach to Environmental Protection." Chemical and Engineering News 71 (March 1, 1993): 19.
Gregory, R., R. Keeney, and D. von Winterfeldt. "Adapting the Environmental Impact Statement Process to Inform Decisionmakers." Journal of Policy Analysis and Management (Winter 1992): 58.
Environmental labeling
see Blue Angel; Eco Mark; Green Cross; Green Seal
Environmental law

Environmental law has been defined as the law of planetary housekeeping. It is concerned with protecting the planet and its people from activities that upset the earth and its life-sustaining capabilities, and it is aimed at controlling or regulating human activity toward that end. Until the 1960s, most environmental legal issues in the United States involved efforts to protect and conserve natural resources, such as forests and water, and public debate focused on who had the right to develop and manage those resources. In the succeeding decades, lawyers, legislators, and environmental activists increasingly turned their attention to the growing and pervasive problem of pollution. In both instances, environmental law—a term not coined until 1969—evolved mostly from a grassroots movement that forced Congress to pass sweeping legislation, much of which contained provisions for citizen suits. As a result, the courts were thrust into a new era of judicial review of administrative processes and of scientific uncertainty.

Initially, environmental law formed around the principles of common law, which is law created by courts and judges and rests upon a foundation of judicial precedents. However, environmental law soon moved into the arena of administrative and legislative law, which encompasses most of today's environmental law. The following discussion looks at both areas of law, reviews some of the basic issues involved in environmental law, and outlines some landmark cases.

Generally speaking, common law is based on the notion that one party has done harm to another, in legal terms called a tort. There are three broad types of torts, all of which have been applied in environmental law with varying degrees of success. Trespass is the physical invasion of one's property, which has been interpreted to include situations such as air pollution, runoff of liquid wastes, or contamination of groundwater. Closely associated with trespass are the torts of private and public nuisance. Private nuisance is interference with the use of one's property; environmental examples include noise pollution, odors and other air pollution, and water pollution. The operation of a hazardous waste site can constitute a private nuisance, where the threat of personal discomfort or disease interferes with the enjoyment of one's home. A public nuisance adversely affects the safety or health of the public or causes substantial annoyance or inconvenience to the public. In these situations, the courts tend to balance the plaintiff's interest against the social and economic need for the defendant's activity. Lastly, negligence involves the defendant's conduct. To prove negligence it must be shown that the defendant was obligated to exercise due care, that the defendant breached that duty, that the plaintiff suffered actual loss or damages, and that there is a reasonable connection between the defendant's conduct and the plaintiff's injury.

These common law remedies have not been very effective in protecting the overall quality of our environment. The lawsuits and resulting decisions were fragmented and site-specific as opposed to issue-oriented. Further, they rely heavily on a level of hard scientific evidence that is elusive in environmental issues. For instance, a trespass action must be based on a visible or tangible invasion, which is difficult if not impossible to prove in pollution cases. Common law presents other barriers to action: plaintiffs must prove actual physical injury (so-called "aesthetic injuries" do not count) and a causal relationship to the defendant's activity, which, again, is a difficult task in environmental issues.

In the early 1970s, environmental groups, aided by the media, focused public attention on the broad scope of the environmental crisis, and Congress reacted. It passed a host of comprehensive laws, including amendments to the Clean Air Act (CAA), the Endangered Species Act (ESA), the National Environmental Policy Act (NEPA), the Resource Conservation and Recovery Act (RCRA), the Toxic Substances Control Act (TSCA), and others. These laws, or statutes, are implemented by federal agencies, which gain their authority through "organic acts" passed by Congress or by executive order.
As environmental problems grew more complicated, legislators and judges increasingly deferred to the agencies' expertise on issues such as the health risk from airborne lead, the threshold at which a species should be considered endangered, or the engineering aspects of a hazardous waste incinerator. Environmental and legal activists then shifted their focus toward administrative law—challenging agency discretion and procedure as opposed to specific regulations—in order to be heard. Hence, most environmental law today falls into the administrative category.

Most environmental statutes provide for administrative appeals, by which interest groups may challenge agency decisions through the agency hierarchy. If no solution is reached, the federal Administrative Procedures Act provides that any person aggrieved by an agency decision is entitled to judicial review. The court must first grant the plaintiff "standing," the right to be a party to legal action against an agency; under this doctrine, plaintiffs must show they have been injured or harmed in some way. The court must then decide the level of judicial review based on one of three issues—interpretation of applicable statutes, the factual basis of agency action, or agency procedure—and apply a different level of scrutiny in each instance. Generally, courts are faced with five basic questions when reviewing agency action: Is the action or decision constitutional? Did the agency exceed its statutory authority or jurisdiction? Did it follow legal procedure? Is the decision supported by substantial evidence in the record? Is the decision arbitrary or capricious? Depending on the answers, the court may uphold the decision, modify it, remand it (send it back to the agency to redo), or reverse it.

By far the most important statute that cracked open the administrative process to judicial review is NEPA. Passed in 1969, the law requires all agencies to prepare an Environmental Impact Statement (EIS) for all major federal actions, including construction projects and the issuing of permits. Environmental groups have used this law repeatedly to force agencies to consider the environmental consequences of their actions, attacking various procedural aspects of EIS preparation. For example, they often claim that a given agency failed to consider alternative actions to the proposed one, which might reduce environmental impact.

In filing a lawsuit, plaintiffs might seek an injunction against a certain action, say, to stop an industry from dumping toxic waste into a river, or to stop work on a public project such as a dam or a timber sale that they claim causes environmental damage. They might seek compensatory damages, to make up for a loss of property or for health costs, and punitive damages, money awards above and beyond repayment of actual losses.
Environmental Encyclopedia 3 and punitive damages, money awards above and beyond repayment of actual losses. Boomer v. Atlantic Cement Co. (1970) is a classic common law nuisance case. The neighbors of a large cement plant claimed they had incurred property damage from dirt, smoke and vibrations. They sued for compensatory damages and to enjoin or stop the polluting activities, which would have meant shutting down the plant, a mainstay of the local economy. The New York court rejected a long-standing practice and denied the injunction. Further, in an unusual move, the court ordered the company to pay the plaintiffs for present and future economic loss to their properties. A dissenting judge said the rule was a virtual license for the company to continue the nuisance so long as it paid for it. Sierra Club v. Morton (1972) opened the way for environmental groups to act on behalf of the public interest, and of nature, in the courtroom. The Sierra Club challenged the U.S. Forest Service’s approval of Walt Disney Enterprises’ plan to build a $35 million complex of motels, restaurants, swimming pools and ski facilities that would accommodate up to 14,000 visitors daily in Mineral King Valley, a remote, relatively undeveloped national game refuge in the Sierra Nevada Mountains of California. The case posed the now-famous question: Do trees have standing? The Supreme Court held that the Sierra Club was not “injured in fact” by the development and therefore did not have standing. The Sierra Club reworded its petition, gained standing and stopped the development. Citizens to Preserve Overton Park v. Volpe (1971) established the so-called “hard look” test to which agencies must adhere even during informal rule making. It opened the way for more intense judicial review of the administrative record to determine if an agency had made a “clear error of judgment.” The plaintiffs, local residents and conservationists, sued to stop the U.S. Department of Transportation from approving a six-lane interstate through a public park in Memphis, Tennessee. The court found that Secretary Volpe had not carefully reviewed the facts on record before making his decision and had not examined possible alternative routes around the park. The case was sent back to the agency, and the road was never built. Tennessee Valley Authority v. Hill (1978) was the first major test of the Endangered Species Act and gained the tiny snail darter fish fame throughout the land. The Supreme Court authorized an injunction against completion of a multi-million dollar dam in Tennessee because it threatened the snail darter, an endangered species. The court balanced the act against the money that had already been spent and ruled that Congress’s intent in protecting endangered species was paramount. Just v. Marinette County (1972) involved wetlands, the public trust doctrine and private property rights. The
plaintiffs claimed that the county’s ordinance against filling in wetlands on their land was unconstitutional, and that the restrictions amounted to taking their property without compensation. The county argued it was exercising its normal police powers to protect the health, safety and welfare of citizens by protecting its water resources through zoning measures. The Wisconsin appellate court ruled in favor of the defendant, holding that the highest and best use of land does not always equate to monetary value, but includes the natural value. The opinion reads, “...we think it is not an unreasonable exercise of that [police] power to prevent harm to public rights by limiting the use of private property to its natural uses.” Although some progress was made in curbing environmental degradation through environmental law in the 1970s, environmental legislation was significantly weakened by the Supreme Court in the 1980s. [Cathryn McCue]
RESOURCES BOOKS Anderson, F., D. R. Mandelker, and A. D. Tarlock. Environmental Protection: Law and Policy. New York: Little, Brown, 1984. Findley, R., and D. Farber. Environmental Law in a Nutshell. St. Paul, MN: West Publishing Co., 1988. Plater, Z., R. Abrams, and W. Goldfarb. Environmental Law and Policy: Nature, Law and Society. St. Paul, MN: West Publishing Co., 1992.
Environmental Law Institute
Environmental Law Institute (ELI) is an independent research and education center involved in developing environmental laws and policies at both national and international levels. The institute was founded in 1969 by the Public Law Education Institute and the Conservation Foundation to conduct and promote research on environmental law. In the ensuing years it has maintained a strong and effective presence in forums ranging from college courses to law conferences. For example, ELI has organized instructional courses at universities for both federal and non-governmental agencies. In addition, it has sponsored conferences in conjunction with such bodies as the American Bar Association, the American Law Institute, and the Smithsonian Institution.

Within the field of environmental law, ELI provides a range of educational programs and services. In 1991, for instance, the institute helped develop an environmental law course for practicing judges in the New England area. Through funding and endowments, the institute has since managed to expand this particular judicial education program into other regions. A similar program enables ELI to offer
training courses to federal judges currently serving in district, circuit, and even bankruptcy courts. ELI also offers various workshops to the general public. In New Jersey, the institute provided a course designed to guide citizens through the state’s environmental laws and thus enable them to better develop pollution-prevention programs in their communities. Broader right-to-know guidance has since been provided—in collaboration with the World Wildlife Fund—at the international level.

ELI’s endeavors at the federal level include various interactions with the Environmental Protection Agency (EPA). The two groups worked together to develop the National Wetlands Protection Hotline, which answers public inquiries on wetlands protection and regulation, and to assess the dangers of exposure to various pollutants.

Since its inception, ELI has evolved into a formidable force in the field of environmental law. In 1991, it drafted a statute to address the continuing problem of lead poisoning in children. The institute has also worked—in conjunction with federal and private groups, including scientists, bankers, and even realtors—to address health problems attributable to radon gas.

ELI has compiled and produced several publications. Among the leading ELI books are Law of Environmental Protection, a two-volume handbook (updated annually) on pollution control law, and Practical Guide to Environmental Management, a resource book on worker health and safety. In addition, ELI has worked with the EPA in producing Environmental Investments: The Cost of a Clean Environment. The institute’s principal periodical is Environmental Law Reporter, which provides analysis and coverage of topics ranging from courtroom decisions to regulation developments. ELI also publishes Environmental Forum—a policy journal intended primarily for individuals in environmental law, policy, and management—and National Wetlands Newsletter, which reports on ongoing developments—legal, scientific, regulatory—related to wetlands management. [Les Stone]
RESOURCES ORGANIZATIONS Environmental Law Institute, 1616 P St., NW, Suite 200, Washington, D.C. USA 20036, (202) 939-3800, Fax: (202) 939-3868, Email: [email protected]
Environmental liability
Environmental liability refers primarily to civil and criminal responsibility for hazardous substances that threaten to endanger public health. Compliance with the standards issued by the U.S. Environmental
Protection Agency (EPA) became a major issue following the December 11, 1980, enactment by Congress of the original Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). In 1986, the Superfund Amendments and Reauthorization Act (SARA) amended CERCLA. The initial legislation created a tax on chemical and petroleum companies, and gave the federal government authority to handle releases or threatened releases of hazardous waste. That tax raised $1.6 billion over the first five years of the act, placed into a trust fund to cover the costs of cleaning up abandoned or uncontrolled hazardous waste sites. SARA’s changes and additions to the original “Superfund” legislation reflected the experience gained from the first years of administering the program; it also raised the trust fund to $8.5 billion. The complex issue of liability grew more complex after 1986, when regulations increased state involvement and encouraged greater citizen participation in decisions about site cleanup. In addition to civil liability for claims brought by federal, state, and local governments, the possibility of criminal liability emerged as a matter of particular concern.

CERCLA defines four categories of individuals and corporations against whom judgment could be rendered, referred to as potentially responsible parties (PRPs):
• current owners or operators of a specific piece of real estate
• past owners, if they owned or operated the property at the time of the hazardous contamination
• generators and possessors of hazardous substances who arranged for disposal, treatment, or transport
• certain transporters of hazardous substances

In acting under EPA-expanded powers, some states have provided exemptions from liability in certain cases. For example, the state of Wisconsin has provided that the person responsible for the discharge of a hazardous substance is the one who is required to report it, investigate it, and clean up the contamination. According to Wisconsin Department of Natural Resources information, the state defines the responsible person as the one who “causes, possesses, or controls” the contamination—the one who owns the property with a contaminant discharge or owns a container that has ruptured. Other Wisconsin exemptions include limiting liability for parties who voluntarily remediate contaminated property; limiting liability for lenders and representatives, such as banks, credit unions, mortgage bankers, and similar financial institutions, or insurance companies, pension funds, or government agencies engaged in secured lending; limiting liability for local government units; and limiting liability for property affected by off-site discharge.

Courts have also found persons responsible who are not specifically listed as PRPs:
• lessees of contaminated property
• lessors of contaminated property, for contamination caused by their lessees
• landlords and lessees, for contamination caused by their sub-lessees
• corporate officers, in their personal capacity
• shareholders
• parent corporations, liable for their subsidiaries
• trustees and personal representatives, personally liable for contaminated property owned by a trust or estate
• successor corporations
• donees
• lenders who foreclose on and subsequently manage contaminated property

In an environmentally aware, health-conscious society, both in America and throughout the world, environmental liability continues into the early twenty-first century to be a matter of grave concern, not only on land but also in maritime issues.
[Jane E. Spear]
RESOURCES PERIODICALS American Insurance Association. Asbestos and Environmental Liability. [cited June 2002]. . Amos, Bruce. “A Free Market for Environmental Liability.” Pollution Engineering Online 1998 [June 2002]. . Battelle. Managing Corporate Environmental Liability. 2001 [June 2002]. . Environmental Liability. 1993 [cited July 2002]. . Goldstein, Michael R., and Howard D. Rosen. “Environmental Liability in the 90s.” Asset Protection News 2, no. 3 (April/May 1993). Wisconsin Department of Natural Resources. Environmental Liability Exemptions. April 12, 2002 [June 2002]. .
ORGANIZATIONS U.S. Environmental Protection Agency, 1200 Pennsylvania Avenue, NW, Washington, DC USA 20460, (202) 260-2090
Environmental literacy and ecocriticism
Environmental literacy and ecocriticism refer to the work of educators, scholars, and writers to foster a critical understanding of environmental issues. Environmental literacy includes educational materials and programs designed to provide lay citizens and students with a broad understanding of the relationship between humans and the natural world, borrowing from the fields of science, politics, economics, and the arts. Environmental literacy also seeks to develop the knowledge and skills citizens and students may need to
identify and resolve environmental crises, individually or as a group. Ecocriticism is a branch of literary studies that offers insights into the underlying philosophies in literature that address the theme of nature and have been catalysts for change in public consciousness concerning the environment.

Americans have long turned to literature and popular culture to develop, discuss, and communicate various ideals about the natural world and their relationship to how Americans see themselves and function together. This literature has also made people think about the idea of progress: what constitutes advancement in culture, what are the goals of a healthy society, and how nature would be considered and treated by such a society. In contemporary times, the power and visibility of modern media in influencing these debates is also widely recognized. Given this trend, understanding how these forms of communication work and developing them further to broaden public participation, which is a task of environmental literacy and ecocriticism, is vital to the environmental movement.

Educators and ecocritics take diverse approaches to the task of raising consciousness about environmental issues, but they share a collective concern for the global environmental crisis and begin with the understanding that nature and human needs require rebalancing. In that, they become emissaries, as writer Barry Lopez suggests in Orion magazine, who have to “reestablish good relations with all the biological components humanity has excluded from its moral universe.” For Lopez, as with many generations of nature writers, including Henry David Thoreau, John Muir, Edward Abbey, Terry Tempest Williams, and Annie Dillard, the lessons to be imparted are learned from long experience with and observation of nature. Lopez suggests another pervasive theme, that observing the ever-changing natural world can be a humbling experience, when he writes of “a horizon rather than a boundary for knowing, toward which we are always walking.”

The career of Henry David Thoreau was one of the most influential early models for being a student of the natural world and for the development of an environmental awareness through attentive participation within nature. Thoreau also made a fundamental contribution to Americans’ identification with the ideals of individualism and self-sufficiency. His most important work, Walden, was a book developed from his journal written during a two-and-a-half-year experiment of living alone and self-sufficiently in the woods near Concord, Massachusetts. Thoreau describes his process of education as an awakening to a deep sense of his interrelatedness to the natural world and to the sacred power of such awareness. This is contrasted with the human society from which he isolated himself, of whose utilitarianism, materialism, and consumerism he was extremely critical.
Thoreau famously writes in Walden: “I went to the woods because I wished to live deliberately, to front only the essential facts of life, and see if I could not learn what it had to teach, and not, when I came to die, discover that I had not lived.” For Thoreau, living with awareness of the greater natural world became a matter of life and death.

Many educators have also been influenced by two founding policy documents in the field of environmental literacy, created by commissions of the United Nations. The Belgrade Charter (UNESCO-UNEP, 1976) and the Tbilisi Declaration (UNESCO, 1978) share the goal “to develop a world population that is aware of, and concerned about, the environment and its associated problems.” Later governmental bodies such as the Brundtland Commission (Brundtland, 1987), the United Nations Conference on Environment and Development in Rio (UNCED, 1992), and the Thessaloniki Declaration (UNESCO, 1997) have built on these ideas.

One of the main goals of environmental literacy is to provide learners with knowledge and experience to assess the health of an ecological system and to develop solutions to problems. Models for environmental literacy include curriculums that address key ecological concepts, provide hands-on opportunities, foster collaborative learning, and establish an atmosphere that strengthens a learner’s belief in responsible living. Environmental literacy in such programs is seen as more than the ability to read or write. As in nature writing, it is also about a sensibility that views the natural world with a sense of wonder and experiences nature through all the senses. The element of direct experience of the natural world is seen as crucial in developing this sensibility. The Edible Schoolyard program in the Berkeley, California, school district, for example, integrates an organic garden project into the curriculum and lunch program, where students become involved in the entire process of farming, while learning to grow and prepare their own food. The program aims to promote participation in and awareness of the workings of the natural world, and also to awaken all the senses to enrich the process of an individual’s development.

Public interest in environmental education came to the forefront in the 1970s. Much of the impetus, as well as the funding, for integrating environmental education into school curriculums comes from non-profit foundations and educators’ associations such as the Association for Environmental and Outdoor Education, the Center for Ecoliteracy, and The Institute for Earth Education. In 1990, the United States Congress created the National Environmental Education and Training Foundation (NEETF), whose efforts include expanding environmental literacy among adults and providing funding opportunities for school districts to advance their environmental curriculums. The National Environmental Education Act of 1990 directed the Environmental Protection Agency (EPA) to provide national leadership
in the environmental literacy arena. To that end, the EPA established several initiatives, including the Environmental Education Center, a resource for educators, and the Office of Environmental Education, which provides grants, training, fellowships, and youth awards. The Public Broadcasting System also plays an active role in the promotion of environmental literacy, as evidenced by the partnership of the Annenberg Foundation and the Corporation for Public Broadcasting to create and disseminate educational videos for students and teachers, and grant programs such as that sponsored by New York’s Channel 13/WNET Challenge Grants.

A common thread woven through these organizations is a definition of environmental learning that goes beyond simple learning to an appreciation of nature. However, appreciation is measured differently by each organization, and segments of the American population differ on which aspects of the environment should be preserved. At the end of the 1990s, the George C. Marshall Institute directed an independent commission to study whether the goals of environmental education were being met. The Commission’s 1997 report found that curricula and texts vary widely on many environmental concepts, including what constitutes conservation. Although thirty-one states have academic standards for environmental education, a national cohesiveness is lacking. Thus, the main challenges to environmental literacy are the lack of unifying programs that would bring together the many approaches to environmental education, and the fact that there is inconsistent support for these programs from the government and public school system.

Observers of environmental literacy movements suggest that the new perspectives that learners gain may often be at odds with the concerns and ethics of mainstream society, issues that writers such as Thoreau grappled with. For instance, consumerism and conservationism may be at opposite ends of the spectrum of how people interact with the natural world and its resources. To be effective, literacy initiatives must address these dilemmas and provide tools to solve them. Environmental literacy is thus about providing new ways of seeing the world, providing language tools to address these new perceptions, and providing ethical frameworks through which people can make informed choices on how to act.

Ecocriticism develops the tools of literary criticism to understand how the relationship of humans to nature is addressed in literature, as a subject, a character, or a component of the setting. Ecocritics also highlight the ways in which literature is a vehicle to create environmental consciousness. For critic William Rueckert, the scholar who coined the term ecocriticism in 1978, poetry and literature are the “verbal equivalent of fossil fuel, only renewable,”
through which abundant energy is transferred between nature and the reader. Ecocritics highlight aspects of nature described in literature, whether frontiers, rivers, regional ecosystems, cities, or garbage, and ask what the purposes of these descriptions are. Their interests have included understanding how historical movements such as the Industrial Revolution have changed the relationship between human society and nature, giving people the false illusion that they can completely control nature, for instance. Ecocriticism also brings together perspectives from various academic disciplines and draws attention to their shared purposes. Studies in ecology and cellular biology, for example, echo the theme of interconnectedness of the individual and the natural world seen in poetry, by demonstrating how the life of all organisms is dependent upon their on-going interactions with the environment around them.

Although nature writers have expressed their philosophies of nature and reflected on their modes of communication since the nineteenth century, ecocriticism’s history as a self-conscious practice did not begin until the late 1970s. By the 1990s, it had gained wide currency. In his 1997 article “Wild Things,” published in Utne Reader, Gregory McNamee notes that courses in environmental literature are available at colleges across the nation and that “‘ecocriticism’ has become something of an academic growth industry.” In 1992, the Association for the Study of Literature and Environment (ASLE) was founded with the mission “to promote the exchange of ideas and information about literature and other cultural representations that consider human relationships with the natural world.” Nature’s role in theatre and film is also a popular ecocriticism topic for academic study, in the form of seminars on, for example, The Nature of Shakespeare, and suggested lists of commercial films for class discussion that include Chinatown, Deliverance, The China Syndrome, Silkwood, A Civil Action, and Jurassic Park.

Ecocritics Carl Herndl and Stuart Brown suggest that there are three underlying philosophies in evaluating nature in modern society. The language used by institutions that make government policies usually regards nature as a resource to be managed for greater social welfare. This is described as an ethnocentric perspective, which begins with the idea that one opinion or way of looking at the world is superior to others. Thus, environmental benefits are always measured against various political and social interests, and not seen as important simply in themselves. Another viewpoint is the anthropocentric perspective, wherein human perspectives are central in the world and are the ultimate source of meaning. The specialized language of the sciences, which treats nature as an object of study, is an example of this. The researcher is seen as existing outside
of or above nature, and science is grounded on the faith that humans can come to know all of nature’s secrets. In contrast, poetry often describes nature in terms of its beauty and emotional and spiritual power. This language sees man as part of the natural world and seeks to harmonize human values and actions with a respect for nature. This is the ecocentric perspective, which means putting nature and ecology at the central viewpoint when considering the various interactions in the world, including human ones. That is, this perspective acknowledges that humans are part and parcel of nature and ultimately depend for survival upon the complex living interactions of ecological systems.

Scholars make a distinction between environmental writing and other kinds of literature that use images of nature in some fashion or another. Environmental writing explores ecocentric perspectives at length. Such works include discussions of human ethical responsibility toward the natural world, as in Aldo Leopold’s A Sand County Almanac, considered one of the best explorations of environmental ethics. Many ecocritics also share a concern for the environment, and one aim of ecocriticism is to raise awareness within the literary world about the environmental movement and nature-centered perspectives in understanding human relationships and cultural practices. In Silent Spring, a major text in the field of environmental literacy and ecocriticism, Rachel Carson writes that society faces two choices: to travel as we now do on a superhighway at high speed but ending in disaster, or to walk the less traveled “other road,” which offers the chance to preserve the earth. The challenge of ecocriticism is to spread the word of the “other road,” and simultaneously to offer constructive criticism to the environmental movement from within. [Douglas Dupler]
RESOURCES BOOKS Carson, Rachel. Silent Spring. New York: Houghton Mifflin, 1994. Finch, Robert, and John Elder, eds. The Norton Book of Nature Writing. New York: W.W. Norton & Co., 1990. Herndl, Carl, and Stuart Brown, eds. Green Culture: Environmental Rhetoric in Contemporary America. Madison, WI: University of Wisconsin Press, 1996. Leopold, Aldo. A Sand County Almanac. New York: Oxford University Press, 1966. Rueckert, William. “Literature and Ecology: An Experiment in Ecocriticism.” The Ecocriticism Reader, edited by Cheryll Glotfelty and Harold Fromm. Athens, GA: University of Georgia Press, 1996. Snyder, Gary. Practice of the Wild. San Francisco: North Point Press, 1990. Thoreau, Henry David. Walden. 1854; Reprint, Boston: Beacon Press, 1997. Williams, Terry Tempest. Refuge: An Unnatural History of Family and Place. New York: Pantheon, 1991.
PERIODICALS Lopez, Barry. “The Naturalist.” Orion Autumn 2001 [cited June 2002]. . McNamee, Gregory. “Wild Things.” Utne Reader November-December 1997 [cited July 2002]. .
OTHER Association for the Study of Literature and Environment. [cited July 2002]. . Center for Ecoliteracy. [cited June 2002]. . Environmental Education Page U.S. Environmental Protection Agency. [cited July 2002]. . Institute for Earth Education. [cited July 2002]. . National Environmental Education and Training Foundation. [cited July 2002]. . North American Association for Environmental Education. [cited June 2002]. .
Environmental mediation and arbitration see Environmental dispute resolution
Environmental monitoring
Environmental monitoring detects changes in the health of an ecosystem and indicates whether conditions are improving, stable, or deteriorating. Ecosystem health is too large to gauge as a whole, so it is assessed by measuring indicators, which represent more complex characteristics. The concentration of sulfur dioxide, for example, is an indicator that reflects the presence of other air pollutants. The abundance of a predator indicates the health of the larger environment. Other indicators include metabolism, population, community, and landscape. All changes are compared to an ideal, pristine ecosystem.

The SER (stressor-exposure-response) model, a simple but widely used tool in environmental monitoring, classifies indicators as one of three related types:
• Stressors, which are agents of change associated with physical, chemical, or biological constraints on environmental processes and integrity. Many stressors are caused by humans, such as air pollution, the use of pesticides and other toxic substances, or habitat change caused by forest clearing. Stressors can also be natural processes, such as wildfire, hurricanes, volcanoes, and climate change.
• Exposure indicators, which link a stressor’s intensity at any point in time to the cumulative dose received. Concentrations or accumulations of toxic substances are exposure indicators; so are clear-cutting and urbanization.
• Response indicators, which show how organisms, communities, processes, or ecosystems react when exposed to a
stressor. These include changes in physiology, productivity, or mortality, as well as changes in species diversity within communities and in rates of nutrient cycling.

The SER model is useful because it links ecological change with exposure to environmental stress. Its effectiveness is limited, however. The model is a simple one, so it cannot be used for complex environmental situations. Even with smaller-scale problems, the connections between stressor, exposure, and response are not understood in many cases, and additional research is required.

Environmental monitoring programs are usually one of two types, extensive or intensive. Extensive monitoring occurs at permanent, widely spaced locations, sometimes using remote-sensing techniques. It provides an overview of changes in the ecological character of the landscape, often detecting regional trends. It measures the effects of human activities like farming, forestry, mining, and urbanization. Information from extensive monitoring is often collected by the government to determine such variables as water and air quality, calculate allowable forest harvests, set bag limits for hunting and fishing, and establish the production of agricultural commodities. Extensive monitoring usually measures stressors (such as emissions) or exposure indicators (concentrations of pollutants in the air). Response indicators, if measured at all in these programs, almost always have some economic importance (damage to forest or agricultural crops). Distinct species or ecological processes that do not have economic value are not usually assessed in extensive-monitoring programs, even though these are the most relevant indicators of ecological integrity.

Intensive monitoring is used for detailed studies of structural and functional ecology. Unlike extensive monitoring, it uses a relatively small number of sites to provide information on stressors such as climate change and acid rain. Intensive monitoring is also used to conduct experiments in which stressors are manipulated and the responses studied, for example by acidifying or fertilizing lakes, or by conducting forestry over an entire watershed. This research, aimed at understanding the dynamics of ecosystems, helps develop ecological models that distinguish between natural and anthropogenic change.

Support for ecological monitoring of either kind has been weak, although more countries are beginning programs and establishing networks between monitoring sites. The United States has founded the Long-Term Ecological Research (LTER) network to study extensive ecosystem function, but little effort is directed toward understanding environmental change. The Environmental Monitoring and Assessment Program (EMAP) of the Environmental Protection Agency (EPA) studies intensive environmental change, but its activities are not integrated with LTER. In
comparison, an ecological-monitoring network being designed by the government of Canada to study changes in the environment will integrate both extensive and intensive monitoring.

Communication between the two types of monitoring is important. Intensive information provides a deeper understanding of the meaning of extensive-monitoring indicators. For example, it is much easier to measure decreases in surface-water pH and alkalinity caused by acid rain than to monitor resulting changes in fish or other biological variables. These criteria can, however, be measured at intensive-monitoring sites, and their relationships to pH and alkalinity used to predict effects on fish and other fauna at extensive sites where only pH and alkalinity are monitored.

The ultimate goal of environmental monitoring is to measure, anticipate, and prevent the deterioration of ecological integrity. Healthy ecosystems are necessary for healthy societies and sustainable economic systems. Environmental monitoring programs can accomplish these goals, but they are expensive and require a substantial commitment by government. Much has yet to be accomplished.
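To make the calibration idea concrete, the following minimal sketch (in Python, with entirely hypothetical numbers; real programs would use far richer statistical models than a straight line) fits a pH-to-fish-richness relationship at a handful of intensive-monitoring sites and then applies it at extensive sites where only pH is measured:

    # Hypothetical illustration of intensive-to-extensive calibration.
    # A relationship fitted where both water chemistry and fish are
    # surveyed is used to predict the biological response at sites
    # where only pH is monitored. All numbers are invented.

    def fit_line(xs, ys):
        """Ordinary least-squares fit; returns (slope, intercept)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    # Intensive sites: measured lake pH and observed fish species counts.
    ph_intensive = [4.5, 5.0, 5.5, 6.0, 6.5, 7.0]
    fish_intensive = [0, 1, 3, 5, 8, 9]
    slope, intercept = fit_line(ph_intensive, fish_intensive)

    # Extensive sites: only pH is monitored; fish richness is predicted.
    for ph in [4.8, 5.7, 6.8]:
        predicted = max(0.0, slope * ph + intercept)
        print(f"pH {ph}: predicted fish species ~ {predicted:.1f}")

The point of the sketch is only the division of labor: the expensive biological survey happens at a few intensive sites, while the inexpensive chemical measurement carries the prediction to the many extensive sites.

[Bill Freedman and Cynthia Staicer]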
RESOURCES BOOKS Freedman, B., C. Staicer, and N. Shackell. A Framework for a National Ecological-Monitoring Program for Canada. Ottawa: Environment Canada, 1992.
PERIODICALS Franklin, J. F., C. S. Bledsoe, and J. T. Callahan. “Contributions of the Long-term Ecological Research Program.” Bioscience 40 (1990): 509–524. Odum, E. P. “Trends Expected in Stressed Ecosystems.” Bioscience 35 (1985): 419–422. Schindler, D. W. “Experimental Perturbations of Whole Lakes as Tests of Hypotheses Concerning Ecosystem Structure and Function.” Oikos 57 (1990): 25–41.
Environmental Monitoring and Assessment Program
The Environmental Monitoring and Assessment Program (EMAP), established in 1990 by the Environmental Protection Agency (EPA), is a federal project designed to create a continually updated survey of ecological resources in the United States. This comprehensive survey monitors and links resource data from several U.S. agencies, including the National Oceanic and Atmospheric Administration, the Fish and Wildlife Service, and the U.S. Department of Agriculture. Research from the program is intended to illustrate changes in specific ecosystems in the U.S. and to determine
if those changes could have resulted from “human-induced stress.”
RESOURCES ORGANIZATIONS Environmental Monitoring and Assessment Program, Email: [email protected]
Environmental policy
Strictly, an environmental policy can be defined as a government’s chosen course of action or plan to address issues such as pollution, wildlife protection, land use, energy production and use, waste generation, and waste disposal. In reality, the way a particular government handles environmental problems is most often not the result of a conscious choice from a set of alternatives. More broadly, then, a government’s environmental policy may be characterized by examining the overall orientation of its responses to environmental challenges as they occur, or by defining its policy as the sum of plans for, and reactions to, environmental issues made by any number of different arms of government. A society’s environmental policy will be shaped by the actions of its leaders in relation to the five following questions:
• Should government intervene in the regulation of the environment or leave resolution of environmental problems to the legal system or the market?
• If government intervention is desirable, at what level should that intervention take place? In the United States, for example, how should responsibility for resolution of environmental problems be divided between and among federal, state, and local governments, and who should have primary responsibility?
• If government intervenes at some level, how much protection should it give? How safe should the people be, and what are the economic trade-offs necessary to ensure that level of safety?
• Once environmental standards have been set, what are the methods to attain them? How does the system control the sources of environmental destruction so that the environmental goals are met?
• Finally, how does the system monitor the environment for compliance with standards, and how does it punish those who violate them?

Policy in the United States
The United States has no single, overarching environmental policy, and its response to environmental issues—subject to conflicting political, corporate, and public influence, economic limitation, and scientific uncertainty—is
rarely monolithic. American environmental policies are an amalgamation of Congressional, state, and local laws, regulations and rules formulated by agencies to implement those laws, judicial decisions rendered when those rules are challenged in court, programs undertaken by private businesses and industry, as well as trends in public concerns.

In Congress, many environmental policies were originally formed by what are commonly known as “iron triangles.” These involve three groups of actors who form a powerful coalition: the Congressional committee with jurisdiction over the issue; the relevant federal agency handling the problem; and the interest group representing the particular regulated industry. For example, the key actors in forming policy on clear-cutting in the national forests are the House subcommittee on Forests, Family Farms and Energy, the U.S. Forest Service (USFS), and the National Forest Products Association, which represents many industries dependent on timber.

For more than a century, conservation and environmental groups worked at the fringes of the traditional “iron triangle.” Increasingly, however, these public interest groups—which derived their financial support and sense of mission from an increasing number of citizen members—began gaining more influence. Scientists, whose studies and research today play a pivotal role in decision-making, also began to emerge as major players.

The Watershed years
Catalyzed by vocal, energetic activists and organizations, the emergence of an “environmental movement” in the late 1960s prompted the government to grant environmental protection greater priority and visibility. 1970, the year of the first celebration of Earth Day, saw the federal government’s landmark passage of the Clean Air Act and the National Environmental Policy Act, as well as Richard Nixon’s creation of the Environmental Protection Agency (EPA), which was given control of many environmental policies previously administered by other agencies. In addition, some of the most serious problems, such as DDT and mercury contamination, began to be addressed between 1969 and 1972.

Yet environmental policies in the 1970s developed largely in an adversarial setting, pitting environmental groups against the traditional iron triangles. The first policies that came out of this era were designed to clean up visible pollution—clouds of industrial soot and dust, detergent-filled streams, and so forth—and employed “end-of-pipe” solutions to target point sources, such as wastewater discharge pipes, smokestacks, and other easily identifiable emitters. An initial optimism generated by improvements in air and water quality was dashed by a series of frightening environmental episodes at Times Beach, Missouri; Three Mile Island; Love Canal, New York; and other locations.
Such incidents (as well as memory of the devastation caused by the recently-banned DDT) shifted the focus of public concern to specific toxic agents. By the early 1980s, a fearful public led by environmentalists had steered governmental policy toward tight regulation of individual, invisible toxic substances—dioxin, PCBs and others—by backing measures limiting emissions to within a few parts per million. Without an overall governmental framework for action, the result has been a multitude of regulations and laws that address specific problems in specific regions that sometimes conflict and often fail to protect the environment in a comprehensive manner. “It’s been reactionary, and so we’ve lost the integration of thought and disciplines that is essential in environmental policy making,” says Carol Browner, administrator of the U.S. EPA. One example of policy-making gone awry is the 1980 Comprehensive Environmental Response, Compensation and Liability Act (CERCLA), or Superfund toxic waste
program. The law grew as much out of the public’s perception and fear of toxic waste as it did from crude scientific knowledge of actual health risks. Roughly $2 billion a year has been spent cleaning up a handful of the nation’s worst toxic sites to near-pristine condition. EPA officials now believe the money could have been better spent cleaning up more sites, although to a somewhat lesser degree.

Current trends in environmental policy
Today, governmental bodies and public interest groups are drawing back from “micro management” of individual chemicals, individual species, and individual industries to focus more on the interconnections of environmental systems and problems. This new orientation has been shaped by several (sometimes conflicting) forces, including: (1) industrial and public resistance to tight regulations, fostered by fears that such laws impact employment and economic prosperity; (2) financial limitations that prevent government from carrying out tasks related to specific contaminants, such as cleaning up waste sites or closely monitoring toxic discharges; (3) a perception that large-scale, global problems such as the greenhouse effect, ozone layer depletion, habitat destruction, and the like should receive priority; and (4) the emergence of a “preventative” orientation on the part of citizen groups that attempts to link economic prosperity with environmental goals. This approach emphasizes recycling, efficiency, and environmental technology and stresses the prevention of problems rather than their remediation after they reach a critical stage. This strategy also marks an attempt by some citizen organizations to take a more conciliatory stance toward industry and government.

This new era of environmental policy is underscored by the election of Bill Clinton and Albert Gore, who made the environment a cornerstone of their campaign. In all likelihood, the Clinton administration will transform the
EPA into the cabinet-level position of Department of the Environment, giving the agency more stature and power. The EPA, the USFS, and other federal environmental agencies have announced a new “ecosystem” approach to resource management and pollution control. In a bold first move, Congressional Democratic leaders are simultaneously reviewing four major environmental statutes (the Resource Conservation and Recovery Act [RCRA], Clean Water Act [CWA], Endangered Species Act [ESA], and Superfund) in the hopes of integrating the policies into a comprehensive program. See also Pollution Prevention Act [Cathryn McCue, Kevin Wolf, and Jeffrey Muhr]
RESOURCES BOOKS Lave, Lester B. The Strategy of Social Regulation. Washington, DC: Brookings Institution, 1981. Logan, Robert, Wendy Gibbons, and Stacy Kingsbury. Environmental Issues for the ’90s: A Handbook for Journalists. Washington DC: The Media Institute, 1992. Portney, Paul R., ed. Public Policies for Environmental Protection. Washington, DC: Resources for the Future, 1991. Wolf Jr., Charles. Markets or Government. Cambridge, Massachusetts: MIT Press, 1988. World Resources Institute. 1992 Environmental Almanac. Boston: Houghton Mifflin Co., 1992.
PERIODICALS Schneider, Keith. “What Price Clean Up?” New York Times, March 21– 26, 1993. Smith, Fred. “A Fresh Look at Environmental Policy.” SEJ Journal 3 (Winter 1993).
OTHER Browner, Carol. Administrator of U.S. Environmental Protection Agency, comments during a press conference in Ann Arbor, MI. March 23, 1993. Environmental and Energy Study Institute. Special Report. October 14, 1992.
Environmental Protection Agency (EPA)
The Environmental Protection Agency (EPA) was established in July of 1970, a landmark year for environmental concerns, having been preceded by the passing of the National Environmental Policy Act in January and the first Earth Day celebrations in April. President Richard Nixon and Congress, working together in response to the growing public demand for cleaner air, land, and water, sought to create a new agency of the federal government structured to make a coordinated attack on the pollutants that endanger human health and degrade the environment. The EPA was charged with repairing the damage already done to the
environment and with instituting new policies designed to maintain a clean environment. The EPA’s mission is “to protect human health and to safeguard the natural environment.”

At the time the EPA was formed, at least fifteen programs in five different agencies and cabinet-level departments were handling environmental policy issues. For the EPA to work effectively, it was necessary to consolidate the environmental activities of the federal government into one agency. Air pollution control, solid waste management, radiation control, and the drinking water program were transferred from the U.S. Department of Health, Education and Welfare (currently known as the U.S. Department of Health and Human Services). The water pollution control and pesticides research programs were acquired from the U.S. Department of the Interior. Registration and regulation of pesticides was transferred from the U.S. Department of Agriculture, and the responsibility for setting tolerance levels for pesticides in food was acquired from the Food and Drug Administration. The EPA also took over from the Atomic Energy Commission the responsibility for setting some environmental radiation protection standards and assumed some of the duties of the Federal Radiation Council.

For some environmental programs, the EPA works with other agencies: for example, the United States Coast Guard and the EPA work together on flood control, shoreline protection, and dredging and filling activities. And, since most state governments in the United States have their own environmental protection departments, the EPA delegates the implementation and enforcement of many federal programs to the states.

The EPA’s headquarters is in Washington DC, and there are ten EPA regional offices and field laboratories. The main office develops national environmental policy and programs, oversees the regional offices and laboratories, requests an annual budget from Congress, and conducts research. The regional offices implement national policies, oversee the environmental programs that have been delegated to the states, and review Environmental Impact Statements for federal actions. The field laboratories conduct research, the data from which are used to develop policies, provide analytical support for monitoring and enforcement of EPA regulations, and support the administration of permit programs.

The administrator of the EPA is appointed by the President, subject to approval by the Senate. The same procedure is used to appoint a deputy administrator, who assists the administrator, and nine assistant administrators, who oversee programs and support functions. Other posts include the chief financial officer, who manages the EPA’s budget and funding operations; the inspector general, who is
responsible for investigating environmental crimes; and a general counsel, who provides legal support. In addition to the administrative offices, the EPA is organized into the following program offices: the Office of Air and Radiation, the American Indian Environmental Office, the Office of Enforcement and Compliance Assurance, the Office of Environmental Justice, the Office of Environmental Information, the History Office, the Office of International Affairs, the Office of Prevention, Pesticides and Toxic Substances, the Office of Research and Development, the Science Policy Council, the Office of Solid Waste and Emergency Response, and the Office of Water.

The current EPA Administrator, appointed by President George W. Bush, is Christie Whitman, formerly Governor of New Jersey, who was sworn in on January 31, 2001. Whitman’s official administrative philosophy is that environmental goals are compatible with and connected to economic goals, and that relationships between citizens, policy makers, and the private sector must be strengthened. The nomination of Linda J. Fisher to the post of EPA Deputy Administrator was confirmed by the U.S. Senate on May 24, 2001. Fisher, who formerly practiced law in Washington DC and has served as Vice President and Corporate Officer at the Monsanto Co. in St. Louis MO, is Whitman’s top managerial and policy assistant.

One of the major activities of the EPA is the management of Superfund sites. For many years, uncontrolled dumping of hazardous chemical and industrial wastes in abandoned warehouses and landfills continued without concern for the potential impact on public health and the environment. Concern over the extent of the hazardous-waste-site problem led Congress to establish the Superfund Program in 1980 to locate, investigate, and clean up the worst such sites. The EPA’s Office of Emergency and Remedial Response (OERR) oversees management of the program in cooperation with individual states.

When a hazardous-waste site is discovered, the EPA is notified. The EPA makes a preliminary assessment of the site and gives it a numerical score according to the Hazard Ranking System (HRS), which determines whether the site is placed on the National Priorities List (NPL); a simplified sketch of this scoring arithmetic appears later in this entry. As of May 29, 2002, 1,221 sites were listed on the final NPL, 74 new sites were proposed, 812 sites were reported to have completed construction, and 258 sites had been deleted from the list. The final NPL lists Superfund sites at which the clean-up plan is under construction or ongoing. NPL proposed sites include sites for which the HRS indicates that placement on the final NPL is appropriate. Among currently proposed NPL sites are Air Force Plant 85 near Columbus OH (coal deposits leaching sulfuric acid, ammonia, and heavy metals), Blackbird Mine in Lemhi ID (high levels of arsenic, copper, cobalt, and nickel in surface water and sediments of Meadow and Blackbird
Creeks downstream from mining tunnels and waste-rock piles), the Omaha Lead site in Omaha, NE (lead-contaminated soil near populated areas and wetlands), and the Libby Asbestos site in Libby, MT (heavy asbestos exposures and chromium, copper, and nickel deposits near wetlands and fisheries). A final NPL site is deleted from the list when it is determined that no further clean up is needed to protect human health or the environment. Finally, under Superfund’s redevelopment program, former hazardous-waste sites have been remade into office buildings, parking lots, or even golf courses, to be re-integrated as productive parts of the community.

The offices and programs of the EPA recognize a set of main objectives, or “core functions.” These core functions help define the agency’s mission and provide a common focus for all agency activities. The core functions are:
• Pollution Prevention—taking measures to prevent pollution from being created rather than only cleaning up what has already been released, also known as source reduction
• Risk Assessment and Risk Reduction—identifying problems that pose the greatest risk to human health and the environment and taking measures to reduce those risks
• Science, Research and Technology—conducting research that will help in developing environmental policies and promoting innovative technologies to solve environmental problems
• Regulatory Development—developing requirements such as operating procedures for facilities and standards for emissions of pollutants
• Enforcement—assuring compliance with established regulations
• Environmental Education—developing educational materials, serving as an information clearinghouse, and providing grant assistance to local educational institutions

Many EPA programs are established by legislation enacted by Congress. For example, many of the activities carried out by the Office of Solid Waste and Emergency Response originated in the Resource Conservation and Recovery Act (RCRA). Among other laws that form the legal basis for the programs of the EPA are the National Environmental Policy Act (NEPA) of 1969, which represents the basic national charter of the EPA, the Clean Air Act (CAA) of 1970, the Occupational Safety and Health Act (OSHA) of 1970, the Endangered Species Act (ESA) of 1973, the Safe Drinking Water Act (SDWA) of 1974, the Toxic Substances Control Act (TSCA) of 1976, the Clean Water Act (CWA) of 1977, the Superfund Amendments and Reauthorization Act (SARA) of 1986, and the Pollution Prevention Act (PPA) of 1990. It is through such legislation that the EPA obtains authority to develop and enforce regulations. Environmental regulations drafted by
the agency are subjected to intense review before being finalized. This process includes approval by the President’s Office of Management and Budget and input from the private sector and from other government agencies.

Public concern over the environment changes with time, and the EPA alters its policy priorities in response. For example, in answer to growing concern regarding environmental impacts on children’s health, the Office of Children’s Health Protection (OCHP) was created in May 1997. On February 14, 2002, President George W. Bush announced the Clear Skies Initiative, which contains the most far-reaching legislative changes to the Clean Air Act since 1990. Growing public concern about water pollution led to a landmark piece of legislation, the Federal Water Pollution Control Act of 1972, amended in 1977 and commonly known as the Clean Water Act. The Clean Water Act gives the EPA the authority to administer pollution control programs and to set water quality standards for contaminants of surface waters. October 18, 2002, marks the thirtieth anniversary of the enactment of the Clean Water Act. In continuing support of the goals of this act, Congress has proclaimed 2002 as the “Year of Clean Water.”

EPA organization chart. (Photograph by Tom Pantages. Reproduced by permission.)
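The arithmetic behind the Hazard Ranking System score mentioned earlier in this entry can be sketched briefly. In the final HRS step, four pathway scores (ground water, surface water, soil exposure, and air migration, each on a 0 to 100 scale and derived from detailed worksheets not reproduced here) are combined as a root-mean-square, and a site scoring 28.5 or higher qualifies for proposed NPL listing. A minimal Python illustration, with hypothetical pathway scores:

    import math

    NPL_THRESHOLD = 28.5  # minimum HRS site score for proposed NPL listing

    def hrs_site_score(ground_water, surface_water, soil_exposure, air):
        """Root-mean-square of the four pathway scores (0-100 each)."""
        pathways = [ground_water, surface_water, soil_exposure, air]
        return math.sqrt(sum(s * s for s in pathways) / len(pathways))

    # Hypothetical pathway scores for an imaginary site:
    score = hrs_site_score(ground_water=80.0, surface_water=30.0,
                           soil_exposure=10.0, air=5.0)
    print(f"HRS site score: {score:.1f}")  # about 43.1
    print("Eligible for proposed NPL listing:", score >= NPL_THRESHOLD)

Because the pathway scores are combined quadratically, a single badly contaminated pathway can push a site over the threshold even when the other pathways are negligible.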
As part of its mission as an information clearinghouse, the EPA maintains an excellent web site, with countless links to supporting information of all kinds. The web site also includes Spanish translations of many documents, and links to children’s activities. [Teresa C. Donkin]
RESOURCES BOOKS Environmental Management. Washington, DC: U.S. Environmental Protection Agency, October 1991. Keating, B., and D. Russell. “Inside the EPA: Yesterday and Today...Still Hazy After All These Years.” E Magazine 3 (August 1992): 30–37.
OTHER U.S. Environmental Protection Agency Laws and Regulations. Clean Water Act. March 26, 2002 [cited July 10, 2002]. . U.S. Environmental Protection Web Site. [cited July 10, 2002]. .
Environmental racism
The term environmental racism was coined in a 1987 study conducted by the United Church of Christ that examined the location of hazardous waste dumps and found an “insidious form of racism.” Concern had surfaced five years before, when opposition to a polychlorinated biphenyl (PCB) landfill prompted Congress to examine the location of hazardous waste sites in the Southeast, the Environmental Protection Agency (EPA)’s Region IV. That examination found that three of the four facilities in the area were in communities primarily inhabited by people of color. Subsequent studies, such as Ben Goldman’s The Truth about Where You Live, have contended that exposure to environmental risks is significantly greater for racial and ethnic minorities than for nonminority populations. However, an EPA study contends that there is not enough data to draw such broad conclusions.

The National Law Journal found that official response to environmental problems may be racially biased. According to their study, penalties for environmental crimes were higher in white communities. They also found that the EPA takes 20% longer to place a hazardous waste site in a minority community on the Superfund National Priorities List (NPL). And, once assigned to the NPL, these cleanups are more likely to be delayed.

Advocates also contend that environmentalists and regulators have tried to solve environmental problems without regard for the social impact of the solutions. For example, the Los Angeles Air Quality Management District wanted to require businesses to set up programs that would discourage their employees from driving to work. As initially conceived, employers could have simply charged fees for parking spaces without helping workers set up car pools. The Labor/Community Strategy Center, a local activist group, pointed out that this would have disproportionately affected people who could not afford to pay for parking spaces. As a compromise, regulators will now review employers’ plans and only approve those that mitigate any unequal effects on poor and minority populations.

In response to the concern that traditional environmentalism does not recognize the social and economic components of environmental problems and solutions, a national movement for “environmental and economic justice” has spread across the country. Groups like the Southwest Network for Environmental and Economic Justice attempt to frame the environment as part of the fight against racism and other inequalities. In addition, the federal government has begun to address the debate over environmental racism. In 1992, the EPA established an Environmental Equity office. In addition, several bills that advocate environmental justice have been introduced. See also Comprehensive Environmental
Response, Compensation, and Liability Act (CERCLA); Environmental economics; Environmental law; Hazardous waste siting; South [Alair MacLean]
RESOURCES BOOKS Alston, D., ed. We Speak for Ourselves: Social Justice, Race and Environment. Washington, DC: The Panos Institute, 1990. Goldman, B. A. The Truth About Where You Live: An Atlas for Action on Toxins and Mortality. New York: Times Books, 1991. Lee, C. Toxic Wastes and Race in the United States: A National Report on the Racial and Socio-Economic Characteristics of Communities with Hazardous Waste Sites. New York: United Church of Christ, Commission for Racial Justice, 1987. U.S. Environmental Protection Agency. Environmental Equity: Reducing Risk for All Communities: Report to the Administrator from the EPA Environmental Equity Workgroup. Washington, DC: U.S. Government Printing Office, 1992. U.S. General Accounting Office. Siting of Hazardous Waste Landfills and Their Correlation with Racial and Economic Status of Surrounding Communities. Washington, DC: U.S. Government Printing Office, 1983.
PERIODICALS
Russell, D. “Environmental Racism.” Amicus Journal 11 (Spring 1989): 22–32.
Environmental refugees

The term environmental refugee was coined in the late 1980s by the United Nations Environment Programme and refers to people who are forced to leave their community of origin because the land can no longer support them. Environmental factors such as soil erosion, drought, or floods, often coupled with poor socioeconomic conditions, cause this loss of stability and security. Many environmental influences may trigger such displacement, including the deterioration of agricultural land, natural or “unnatural” disasters, climate change, the destruction resulting from war, and environmental scarcity.

Environmental scarcity can be supply-induced, demand-induced, or structural. Supply-induced scarcity refers to the depletion of agricultural resources, as in the erosion of cropland or overgrazing. Demand-induced scarcity occurs when consumption of a resource increases or when the population grows, as in countries such as the Philippines, Kenya, and Costa Rica. Structural scarcity results from the unequal social distribution of a resource within a community. These causes can occur simultaneously and in combination, as seen in South Africa during the years of apartheid. Approximately 60% of the cropland in South Africa is marked by low organic content, and half of the country receives less than 19.5 in (500 mm) of annual precipitation.
When these factors were coupled with the rapid soil erosion, overcrowding, and unequal social distribution of resources experienced at this time, environmental scarcity resulted.

Other environmental influences, such as climate change and natural disasters, can greatly compound the problems related to scarcity. The countries most vulnerable to these additional influences are those already experiencing the precursors of scarcity, for example, highly populated countries such as Egypt and Bangladesh. In Haiti, per capita grain production is half of what it was only 40 years ago, and residents get only about 80% of their minimum nutritional needs. Environmental problems place an added burden on a situation that is already under pressure. When this combination occurs in societies without strong social ties or political and economic stability, the people often have no choice but to relocate.

Environmental refugees tend to come from rural areas and developing countries, those most vulnerable to the influences of scarcity, climate change, and natural disasters. According to the Centers for Disease Control and Prevention (CDC), since the early 1960s most emergencies involving refugees have taken place in less developed countries, where resources are inadequate to support the population during times of need. In 1995, the CDC directed 45 relief missions to countries such as Angola, Bosnia, Haiti, and Sierra Leone.

The number of displaced people is rising worldwide, and the number forced to migrate because of economic and environmental conditions is growing more rapidly than the number of refugees from political strife. According to Dr. Norman Myers at the University of Oxford, there are 25 million environmental refugees today, compared with 20 million officially recognized refugees migrating due to political, religious, or ethnic problems. It has been predicted that by the year 2010, this number could rise to 50 million. The number of migrants seeking environmental refuge is grossly underestimated because many do not actually cross borders but are forced to wander within their own country. As global warming causes major climate changes, these numbers could increase even more: climate change alone may displace 150 million more people by the middle of the next century. Not only would global climate change increase the number of refugees, it could also depress agricultural production, seriously limiting the food surpluses available to help displaced people.

Although approximately three out of every five refugees are fleeing environmental hardships, this group is not legally recognized. According to the 1951 Convention on the Status of Refugees as modified by the 1967 Protocol, a legal refugee is a person who escapes a country and cannot re-enter due to fear of persecution for reasons of race, religion, nationality, social affiliation, or political opinion.
This definition requires both the element of persecution and cross-border migration. Because of these two requirements, states are not legally compelled to recognize environmental refugees; as mentioned above, many of these refugees never leave their own country, and it is unreasonable to expect them to prove fear of persecution. Environmental refugees are often forced to enter a country illegally since they cannot be granted protection or asylum. Many of the Mexican immigrants who enter the United States are escaping the sterile, unproductive land they have been living on. Over 60% of the land in Mexico is degraded, with soil erosion rendering over 494,000 more acres (200,000 ha) unproductive every year. Many of the people considered to be economic migrants are actually environmental refugees. Those that are recognized as political refugees often must live in overcrowded refugee camps, which are no more prepared to sustain the population than the land they are escaping from. Two thousand Somali refugees forced to live in such camps on the border of Kenya were displaced once more when flooding in 1994 ended a long drought but destroyed relief food. Many environmental refugees never resettle and must live the rest of their lives migrating from place to place, looking for land that can sustain them.

Researchers are currently working on ways to predict where the next large migrations will come from and how to prevent them from occurring. Predictive models are extremely difficult to produce because of the interaction between the socioeconomic status of the people and the environmental influences on the land. Stuart Liederman, an environmental scientist at the University of New Hampshire, is developing a model to predict which areas are at risk of producing environmental refugees. This model is a mathematical formula that could be used with any population. One side of the equation combines the rate of environmental decay, the amount of time over which this deterioration will take place, and the susceptibility of the people and the environment. The other side of the equation combines the restoration rate, the time it would take to reestablish the environment, and the potential for recovery of the people and the land.
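The published description gives only the ingredients of Liederman’s formula, not its exact form, so the following is a purely illustrative sketch; the variable names and the way the terms are combined are assumptions, not the model itself.

```python
# Illustrative only: one way to combine the quantities named above.
# Liederman's actual formula is not specified in this entry.

def displacement_risk(decay_rate, decay_time, susceptibility,
                      restore_rate, restore_time, recovery_potential):
    """Ratio of degradation pressure to recovery capacity.

    A value above 1 would flag a population at risk of producing
    environmental refugees; a value below 1 would suggest the land
    and people can recover faster than conditions deteriorate.
    """
    degradation = decay_rate * decay_time * susceptibility
    recovery = restore_rate * recovery_potential / restore_time
    return degradation / recovery
```

Written this way, the two sides of the equation become numerator and denominator, and the comparison reduces to whether the ratio exceeds one.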
Such predictions would allow preventive measures to be taken, preparations to be made for future refugees, and the devastated homelands of past migrants to be restored. Creation of a working model may also help convince policymakers that those escaping unlivable environmental conditions need to be legally recognized as refugees. Until environmental refugees are granted legal status, these individuals will have no protection or compensation.

The most effective way to deal with the growing number of displaced persons is to concentrate on the reasons they are being forced to leave their homelands. The increasing number of people forced to leave their homeland due to ecological strife is an indicator of environmental quality. Environmental protection is necessary to prevent the situation in many countries from getting worse. In the meantime, measures must be taken to accommodate the needs of these refugees, and the environment must be recognized as a legitimate source of conflict for individuals seeking protection in other lands.

[Jennifer L. McGrath]
RESOURCES
BOOKS
Jacobson, J. L. Environmental Refugees: A Yardstick of Habitability. Worldwatch Institute, 1988.
Lindahl-Kiessling, K., and H. Landberg, eds. Population, Economic Development, and the Environment. New York: Oxford University Press, 1994.
Tickell, C. “Environmental Refugees: The Human Impact of Global Environmental Change.” In Greenhouse Glasnost: The Crisis of Global Warming, edited by T. Minger. New York: Ecco Press, 1990.
PERIODICALS
Fell, N. “Outcasts from Eden.” New Scientist 151, no. 2045 (August 1996): 24–7.
Homer-Dixon, T. F. “Environmental Change and Economic Decline in Developing Countries.” International Studies Notes 16, no. 1 (Winter 1991): 18.
Myers, N. “Environmental Refugees in a Globally Warmed World.” BioScience 43, no. 11 (December 1993): 752–61.
OTHER
Centers for Disease Control. Famine-Affected, Refugee, and Displaced Populations: Recommendations for Public Health Issues. MMWR 1992; 41 (No. RR-13).
Environmental resources

An environmental resource is any material, service, or information from the environment that is valuable to society. This can refer to anything that people find useful in their environs, or surroundings. Food from plants and animals; wood for cooking, heating, and building; metals; coal; and oil are all environmental resources. Clean land, air, and water are environmental resources, as are the abilities of land, air, and water to absorb society’s waste products. Heat from the sun, transportation and recreation in lakes, rivers, and oceans, a beautiful view, or the discovery of a new species are all environmental resources.

The environment provides a vast array of materials and services that people use to live. Often these resources have competing uses and values. A piece of land, for instance, could be used as a farm, a park, a parking lot, or a housing development. It could be mined or used as a garbage dump. The topic of environmental resources, then, raises the question: what do people find valuable in their environment, and how do people choose to use the resources that their environment provides?
Some resources are renewable, or infinite, and some are non-renewable, or finite. Renewable resources like energy from the sun are plentiful and will be available for a long time. Finite resources, like oil and coal, are non-renewable because once they are extracted from the earth and burned they cannot be used again. These resources are in limited supply and need to be used carefully. Many resources are becoming more and more limited, especially as population and industrial growth place increasing pressure on the environment. Before the Industrial Revolution, for example, people relied on their own strength and their animals for work and transportation. The spread of steam engines in the eighteenth and nineteenth centuries radically altered people’s ability to do work and to consume energy. Today we have transformed our environment with machines, cars, and power plants, and in the process we have burned extraordinary amounts of coal, oil, and natural gas. Some predict that world coal deposits will last another 200 years, while oil and natural gas reserves will last another one hundred years at current rates of consumption. This rate of use is clearly not sustainable, as the calculation sketched below illustrates. The terms finite and infinite are important because they indicate how much of a given resource is available, and how fast people can use that resource without limiting future supplies.

Some resources that were once taken for granted are now becoming more valuable. One of these is the environment’s ability to absorb the waste that people produce. In Jakarta, Indonesia, people living in very close quarters in small shanties along numerous tidal canals use their only water supply for bathing, washing clothes, drinking water, fishing, and as a toilet. It is common to see people bathing just downstream from other people who are defecating directly into the river. This scene illustrates a central problem in environmental resource management. These people have only one water source and many needs in order to live. The demands that they place on this resource seriously affect the health and quality of life of all the people, but all of the needs must be met in some way. Thoughtful management of these environmental resources, like building latrines, could alleviate some of the strain on the river and improve other uses of the same resource. People all over the world have taken the valuable resources of air, land, and water quality for granted, so that many rivers are undrinkable and unswimmable because they contain raw sewage, chemical fertilizers, and industrial wastes. As people make decisions about what they will take from their environment, they also must be conscious of what they intend to put back into that environment.
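Claims such as “coal will last another 200 years at current rates of consumption” rest on a simple division of reserves by annual use, and the estimate shrinks sharply once consumption grows. The sketch below shows both calculations; the numbers are placeholders, not actual reserve figures.

```python
import math

def static_lifetime(reserves, annual_use):
    """Years remaining if consumption never changes."""
    return reserves / annual_use

def exponential_lifetime(reserves, annual_use, growth_rate):
    """Years remaining if consumption grows by growth_rate per year.

    Solves reserves = annual_use * (e**(g*t) - 1) / g for t.
    """
    return math.log(1 + growth_rate * reserves / annual_use) / growth_rate

# A hypothetical resource with a 200-year supply at today's usage:
print(static_lifetime(200.0, 1.0))             # 200.0 years
print(exponential_lifetime(200.0, 1.0, 0.02))  # about 80 years at 2% growth
```

Even modest growth in consumption cuts the 200-year figure by more than half, which is why the phrase “at current rates of consumption” carries so much weight.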
Resource economics was established during a time in human history when environmental resources were thought to be limitless and without value until they were harvested and brought to market. From this viewpoint, the world is big enough that when one resource is exhausted another can be found to take its place. Land is valuable according to what can be taken from it in order to make a profit. This kind of management leads to enormous short-term gains and is responsible for the speed and efficiency of economic growth throughout the world. On the other hand, this view overlooks longer-term profits and the reality that the world is an increasingly small, interconnected, and fragile system. People can no longer assume that they can find fresh new supplies when they use up what they have. Very few places on earth remain untouched and unexploited.

The world’s remaining forests, if managed with care, could supply all of society’s needs for timber and still remain relatively healthy and intact. Forest resources can be renewable, since forests grow quickly enough to replace themselves if used in moderation. Unfortunately, in many places forests are being destroyed at an alarming rate. In Costa Rica, Central America, 25% of the remaining forest land has disappeared since 1970. These forests have been cleared to harvest tropical hardwoods, to create farmland and pasture for animals, and to gather wood for cooking and heating. In a country struggling for economic growth, these are all important needs, but they do not always make long-term economic sense. Farmers who graze cattle in tropical rain forests or who clear trees off steep hillsides destroy their land in a matter of years with the idea that this is the fastest way to make money. In the same way, loggers harvest trees for immediate sale, even though many of these trees take hundreds of years to replenish themselves. In fact, the price of tropical hardwoods has gone up four-fold since 1970; the trees cut and sold in 1970 represent a huge economic loss to the Costa Rican economy, since they were sold for a fraction of their present value. Often, the soil on this land quickly erodes downhill into streams and rivers, clogging the rivers with sediment and killing fish and other wildlife. This has the added drawback of damaging hydroelectric and irrigation dams and hurting the fishing industry.

Despite these tragic losses, Costa Rica is a model in Central America and in the world for finding alternative uses for its natural resources. Costa Rica has set aside one fifth of its total land area for nature preserves and national park lands. These beautiful and varied parks are valuable for several reasons. First, they help to protect and preserve a huge diversity of tropical species, many undiscovered and unstudied. Second, they protect a great deal of vegetation that is important in producing oxygen, stabilizing atmospheric chemistry, and preventing global climate change. Third, the natural beauty of these parks attracts many international tourists. Tourism is one of Costa Rica’s major industries, providing much needed economic development. People from around the world appreciate the beauty and the wonder—the intangible values—of these resources.
Local people who would have been hired one time to cut down a forest can now be hired for a lifetime to work as park rangers and guides. Some would also argue that these nature preserves have value in themselves, without reference to human needs, simply because they are filled with beautiful living birds, insects, plants, and animals.

Much of the dialogue in environmental resource management is about balancing the needs of economic growth and prosperity with the need for sustainable resource use. In a limited, finite world, there is a need to close the gap between rates of consumption and rates of supply. The debate over how to assign value to different environmental resources is a lively one, because the way that people think about their environment directly affects how they interact with the world.

[John Cunningham]
RESOURCES
BOOKS
Ahmad, Y., et al. Environmental Accounting and Sustainable Development: A UNEP World Bank Symposium. Washington, DC: World Bank, 1989.
PERIODICALS
Repetto, R. “Accounting for Environmental Assets.” Scientific American 266 (June 1992): 94–8+.
Environmental restoration see Restoration ecology
Environmental risk analysis see Risk analysis
Environmental science

Environmental science is often confused with other fields of related interest, especially ecology, environmental studies, environmental education, and environmental engineering. Renewed interest in environmental issues in the late 1960s and early 1970s gave rise to numerous programs at many universities in the United States and other countries, most under two rubrics: environmental science or environmental studies. The former focused, as might be expected, on scientific questions and issues of environmental interest; the latter were often courses emphasizing questions of environmental ethics, aesthetics, literature, and the like. These new academic units marked the first formal appearance of environmental science on most campuses, at least by that label. But environmental science is essentially the application of scientific methods and principles to the study of environmental questions, so it has probably been around in some form as long as science itself.
Air and water quality research, for example, has been carried on in many universities for many decades: that research is environmental science. By whatever label and in whatever unit, environmental science is not constrained within any one discipline; it is a comprehensive field.

A considerable amount of environmental research is accomplished in specific departments such as chemistry, physics, civil engineering, or the various biology disciplines. Much of this work is confined to a single field, with no interdisciplinary perspective. These programs graduate scientists who build on their specific training to continue work on environmental problems, sometimes in a specific department, sometimes in an interdisciplinary environmental science program. Many newer academic units are interdisciplinary, their members and graduates specifically designated as environmental scientists. Most have been trained in a specific discipline, but they may have degrees from almost any scientific background. In these units, the degrees granted—from B.S. to Ph.D.—are in Environmental Science, not in a specific discipline.

Environmental science is not ecology, though that discipline may be included. Ecologists are interested in the interactions between some kind of organism and its surroundings. Most ecological research and training does not focus on environmental problems except as those problems impact the organism of interest. Environmental scientists may or may not include organisms in their field of view: they mostly focus on the environmental problem itself, which may be purely physical in nature. For example, acid deposition can be studied as a problem of emissions and characteristics of the atmosphere without necessarily examining its impact on organisms. An alternate focus might be on the acidification of lakes and the resulting implications for resident fish. Both studies require expertise from more than one traditional discipline; they are studies in environmental science.

See also Air quality; Environment; Environmental ethics; Nature; Water quality

[Gerald L. Young Ph.D.]
RESOURCES
BOOKS
Cunningham, W. P. Environmental Science: A Global Concern. Dubuque, IA: William C. Brown, 1992.
Henry, J. G., and G. W. Heinke. Environmental Science and Engineering. Englewood Cliffs, NJ: Prentice-Hall, 1989.
Jorgensen, S. E., and I. Johnson. Principles of Environmental Science and Technology. 2nd ed. Amsterdam; New York: Elsevier, 1989.
Environmental stress

In the ecological context, environmental stress can be considered any environmental influence that causes a discernible ecological change, especially in terms of a constraint on ecosystem development. Stressing agents (or stressors) can be exogenous to the ecosystem, as in the cases of long-range transported acidifying substances, toxic gases, or pesticides. Stress can also cause change through the accentuation of some pre-existing site factor beyond a threshold for biological tolerance, for example thermal loading, nutrient availability, wind, or temperature extremes. Often implicit within the notion of environmental stress, particularly from the perspective of ecosystem managers, is a judgment about the quality of the ecological change, that is, from the human perspective, whether the effect is “good” or “bad.”

Environmental stressors can be divided into several, not necessarily exclusive, classes of causal agencies:
• Physical stress refers to episodic events (or disturbance) associated with intense but usually brief loadings of kinetic energy, perhaps caused by a windstorm, volcanic eruption, tidal wave, or an explosion.
• Wildfire is another episodic stress, usually causing a mass mortality of ecosystem dominants such as trees or shrubs and a rapid combustion of much of the biomass of the ecosystem.
• Pollution occurs when certain chemicals are bio-available in a sufficiently large amount to cause toxicity. Toxic stressors include gaseous air pollutants such as sulfur dioxide and ozone, metals such as lead and mercury, residues of pesticides, and even nutrients that may be beneficial at small rates of supply but damaging at higher rates of loading.
• Nutrient impoverishment implies an inadequate availability of physiologically essential chemicals, which imposes an oligotrophic constraint upon ecosystem development.
• Thermal stress occurs when heat energy is released into an ecosystem, perhaps by aquatic discharges of low-grade heat from power plants and other industrial sources.
• Exploitative stress refers to the selective removal of particular species or size classes. Exploitation by humans includes the harvesting of forests or wild animals, but it can also involve natural herbivory and predation, as with infestations of defoliating insects such as locusts, spruce budworm, or gypsy moth, or irruptions of predators such as crown-of-thorns starfish.
• Climatic stress is associated with an insufficient or excessive regime of moisture, solar radiation, or temperature. These can act over the shorter term as weather, or over the longer term as climate.
Within most of these contexts, stress can be exerted either chronically or episodically. For example, the toxic gas sulfur dioxide can be present in a chronically elevated concentration in an urbanized region with a large number of point sources of emission. Alternatively, where the emission of sulfur dioxide is dominated by a single, large point source such as a smelter or power plant, the toxic stress associated with this gas occurs as relatively short-term events of fumigation.

Environmental stress can be caused by natural agencies as well as resulting directly or indirectly from the activities of humans. For example, sulfur dioxide can be emitted from smelters, power plants, and homes, but it can also be emitted in large quantities by volcanoes. Similarly, climate change has always occurred naturally, but it may also be forced by human activities that result in emissions of carbon dioxide, methane, and nitrous oxide into the atmosphere. Over most of Earth’s history, natural stressors have been the dominant constraints on ecological development. Increasingly, however, the direct and indirect consequences of human activities are becoming the dominant environmental stressors. This is caused both by the growing human population and by the intensifying per-capita effect of humans on the environment.

[Bill Freedman Ph.D.]
RESOURCES
BOOKS
Freedman, B. “Environmental Stress and the Management of Ecological Reserves.” In Science and the Management of Protected Areas, edited by J. H. M. Willison, et al. Amsterdam: Elsevier, 1992.
Grime, J. P. Plant Strategies and Vegetation Processes. New York: Wiley, 1979.
Environmental Stress Index

The Environmental Stress Index is a survey to determine the quality of life in American cities. Zero Population Growth, Inc. (ZPG), based in Washington, D.C., conducted this “Urban Stress Test” in the late 1980s. One hundred ninety-two cities were selected throughout the United States. The population-linked survey was based on 11 criteria:
• Population change
• Population density
• Education
• Violent crime
• Community economics
• Individual economics (percent below federal poverty level and per capita income)
• Births (percent of teenage births and infant mortality)
• Air quality (meeting Environmental Protection Agency (EPA) standards)
• Hazardous waste (number of EPA-designated hazardous waste sites)
• Water (quality and supply)
• Sewage (model cities provide better than secondary treatment of their wastewater)
Cities were ranked from one to six, with one being best. The cities with the lower scores were called model cities; the cities with the higher scores were called stressed cities. Among the model cities were Abilene, Texas, with an index of 1.6; Roanoke, Virginia, 1.6; Berkeley, California, 2.0; Colorado Springs, Colorado, 2.0; and Peoria, Illinois, 2.0. Among the most stressed cities were Phoenix, Arizona, 5.0; Houston, Texas, 4.5; Los Angeles, California, 4.3; Honolulu, Hawaii, 4.3; and Baltimore, Maryland, 4.3.
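This entry does not specify how ZPG combined the 11 criterion scores into a single index. The sketch below assumes the simplest scheme consistent with the published values, an unweighted mean of scores on the 1 (best) to 6 (worst) scale; the actual survey may have weighted criteria differently.

```python
CRITERIA = [
    "population_change", "population_density", "education",
    "violent_crime", "community_economics", "individual_economics",
    "births", "air_quality", "hazardous_waste", "water", "sewage",
]

def stress_index(scores):
    """Unweighted mean of the eleven criterion scores; lower is better."""
    if set(scores) != set(CRITERIA):
        raise ValueError("a 1-6 score is required for every criterion")
    return round(sum(scores.values()) / len(CRITERIA), 1)

# A hypothetical city scoring 2 on every criterion gets an index of 2.0:
print(stress_index({name: 2 for name in CRITERIA}))  # 2.0
```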
Environmental Working Group

The Environmental Working Group (EWG) is a public interest research group that monitors public agencies and public policies on topics relating to environmental and social justice. EWG publicizes its findings in research reports that emphasize both the national and the local implications of federal laws and activities. These research reports are based on analysis of public databases, often obtained through the Freedom of Information Act. In operation since 1993, EWG is a nonprofit organization funded by grants from private foundations. EWG is based in Washington, D.C., and Oakland, California, and is associated with the Tides Center of San Francisco. The organization performs its research both independently and in collaboration with other public interest research groups such as the Sierra Club and the Surface Transportation Policy Project (a nonprofit coalition focusing on social and environmental quality in transportation policy).

EWG specializes in analyzing large computer databases maintained by government agencies, such as the Toxic Release Inventory database, maintained by the Environmental Protection Agency to record spills of toxic chemicals into the air or water, or the Regulatory Analysis Management System, maintained by the Army Corps of Engineers for internal tracking of permits granted for filling and draining wetlands. Because many of these data sources are capable of exposing actions or policies embarrassing to public agencies, EWG has often obtained data by using the Freedom of Information Act, which legally enforces public release of data belonging to the public domain. Many of the databases researched by EWG have never been thoroughly analyzed before, even by the agency collecting the data. EWG is unusual in working with primary data—going directly to the original database—rather than basing its research on secondary sources, anecdotal information, or interviews.

Research findings are published both in print and electronically on the Internet. Electronic publishing allows immediate and inexpensive distribution of reports that concern issues of current interest.
EWG is a prolific source of information, producing extensive, detailed reports, often at a rate of more than one a month. Among the environmental topics on which EWG has reported are drinking water quality, wetland protection and destruction, and the impacts of agricultural pesticides on both farm workers and consumers. Social justice and policy issues that EWG has researched include campaign finance reform, inequalities and inefficiency in farm subsidy programs, and threats to public health and the environment from medical waste. For each general topic, EWG usually publishes a series of articles ranging from the nature and impacts of a federal law to what individuals can do about the current problem. Also included with research reports are state-by-state and county summaries of statistics, which provide details of the local implications of the general policy issues.
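The reports described above are, at bottom, aggregations of raw public records. A minimal sketch of that kind of primary-data analysis follows; the file name and column names are hypothetical stand-ins, since the real Toxic Release Inventory export uses its own schema.

```python
import csv
from collections import defaultdict

def releases_by_state(path):
    """Sum reported release amounts per state from a raw CSV export."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["state"]] += float(row["pounds_released"])
    return dict(totals)

# The shape of a typical state-by-state EWG summary: rank states by
# total reported releases.
totals = releases_by_state("tri_export.csv")
for state, pounds in sorted(totals.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{state}: {pounds:,.0f} lb")
```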
[Mary Ann Cunningham Ph.D.]

RESOURCES
BOOKS
Cook, K. A., and A. Art. City Slickers: Farm Subsidy Recipients in America’s Big Cities. Washington, DC: Environmental Working Group, 1995.
Environmental Working Group. Dishonorable Discharge: Toxic Pollution of America’s Waters. Washington, DC: Environmental Working Group, 1996.
ORGANIZATIONS
Environmental Working Group, 1718 Connecticut Ave., NW, Suite 600, Washington, D.C. USA 20009, (202) 667-6982, Fax: (202) 232-2592, Email: [email protected]
Environmentalism

Environmentalism is the ethical and political perspective that places the health, harmony, and integrity of the natural environment at the center of human attention and concern. From this perspective, human beings are viewed as part of nature rather than as its overseers. To care for the environment is therefore to care for human beings, since we cannot live without the natural habitats that sustain us.

Although there are many different views within the very broad and inclusive environmentalist perspective, several common features can be discerned. The first is environmentalism’s emphasis on the interdependence of life and the conditions that make life possible. Human beings, like other animals, need clean air to breathe, clean water to drink, and nutritious food to eat. Without these necessities, life would be impossible. Environmentalism views these conditions as being both basic and interconnected. For example, fish contaminated with polychlorinated biphenyl (PCB), mercury, and other toxic substances are hazardous not only to humans but to bears, eagles, gulls, and other predators. Likewise, mighty whales depend on tiny plankton, cows on corn, koala bears on eucalyptus leaves, bees on flowers, and flowers on bees and birds, and so on through all species and ecosystems.
All animals, human and nonhuman alike, are interdependent participants in the cycle of birth, life, death, decay, and rebirth. A second emphasis of environmentalism is on the sanctity of life—not only human life but all life, from the tiniest microorganism to the largest whale. Since the fate of our species is inextricably tied to theirs, and since life requires certain conditions to sustain it, environmentalists contend that we have an obligation to respect and care for the conditions that nurture and sustain life in its many forms.

While environmentalists agree on some issues, there are also a number of disagreements about the purposes of environmentalism and about how best to achieve those ends. Some environmentalists emphasize the desirability of conserving natural resources for recreation, sightseeing, hunting, and other human activities, both for present and future generations. Such a utilitarian view has been sharply criticized by Arne Naess and other proponents of deep ecology, who claim that the natural environment has its own intrinsic value apart from any aesthetic, recreational, or other value assigned to it by human beings. Bears, for example, have their own intrinsic value or worth, quite apart from that assigned to their existence via shadow pricing or other mechanisms by bear-watchers, hunters, or other human beings.

Environmentalists also differ on how best to conserve, preserve, and protect the natural environment. Some groups, such as the Sierra Club and the Nature Conservancy, favor gradual, low-key legislative and educational efforts to inform and influence policy makers and the general public about environmental issues. Other, more radical environmental groups, such as the Sea Shepherd Conservation Society and Earth First!, favor direct action, employing the tactics of ecotage (ecological sabotage), or monkey-wrenching, to stop strip mining, logging, drift net fishing, and other activities that they deem dangerous to animals and ecosystems. Within this environmental spectrum are many other groups, including the World Wildlife Fund, Greenpeace, Earth Island Institute, Clean Water Action, and other organizations which use various techniques to inform, educate, and influence public opinion regarding environmental issues and to lobby policy makers.

Despite these and other differences over means and ends, environmentalists agree that the natural environment, whether valued instrumentally or intrinsically, is valuable and worth preserving for present and future generations.

[Terence Ball]
RESOURCES
BOOKS
Chase, S., ed. Defending the Earth: A Dialogue Between Murray Bookchin and Dave Foreman. Boston: South End Press, 1991.
Devall, B., and G. Sessions. Deep Ecology: Living as if Nature Mattered. Layton, UT: Gibbs M. Smith, 1985.
Eckersley, R. Environmentalism and Political Theory. Albany, NY: State University of New York Press, 1992.
Naess, A. Ecology, Community and Lifestyle. New York: Cambridge University Press, 1989.
Worster, D. Nature’s Economy: A History of Ecological Ideas. New York: Cambridge University Press, 1977.
Environmentally preferable purchasing

Environmentally preferable purchasing (EPP) refers to the practice of buying products with environmentally sound qualities: reduced packaging, reusability, energy efficiency, recycled content, and rebuilt or remanufactured components. It was first addressed officially by Executive Order (EO) 12873 of October 1993, “Federal Acquisition, Recycling and Waste Prevention,” and was strengthened on September 14, 1998, by EO 13101, also signed by President Clinton. Entitled “Greening the Government through Waste Prevention, Recycling and Federal Acquisition,” the later order superseded EO 12873 but retained similar directives for purchasing. The “Final Guidance” of directives was issued through the Environmental Protection Agency in 1995. What the federal government adopted as a guideline for its purchases also marked the beginning of environmentally preferable purchasing for the private sector, and created an entirely new direction for individuals and businesses as well as governments.

At the federal level, the EPA’s “Final Guidance” was issued to apply to all acquisitions, from supplies and services to buildings and systems. It developed five “guiding principles” for incorporating the plan into the federal government setting:
• Environment + Price + Performance = Environmentally Preferable Purchasing
• Pollution prevention
• Life cycle perspective/multiple attributes
• Comparison of environmental impacts
• Environmental performance information

Through a section of its web site devoted to environmentally preferable purchasing, the EPA provides product and service information covering alternative fuels, buildings, cleaners, conferences, electronics, food serviceware, carpets, and copiers.
In the private world of business, environmentally preferable purchasing has promised to save money, in addition to meeting EPA regulations and improving employee safety and health. In an age of environmental liability, EPP can make the difference when questions of environmental ethics or damage arise.

For the private consumer, purchasing “green” in the late 1960s and 1970s tended to mean something as simple as recycled paper used in Christmas cards, or unbleached natural fibers for clothing. By 2002, the average American home was affected in countless additional ways: energy-efficient kitchen appliances and personal computers, environmentally sound household cleaning products, and neighborhood recycling centers. To certify a product as “green,” attributes such as recyclability, biodegradability, organic ingredients, and the absence of ozone-depleting chemicals are tested.

Of those everyday uses, the concern over cleaning products for home, industrial, and commercial use has been the focus of particular attention. Massachusetts has been one of the states taking the lead on the issue of toxic chemicals with its Toxics Use Reduction Act. With a focus on products that contain known carcinogens, ozone-depleting substances, excessive phosphate concentrations, or volatile organic compounds, testing has continued to identify alternative products that are more environmentally acceptable and safer for humans and other forms of life. By 2002, the state had awarded contracts to six firms selling environmentally preferred cleaning agents. The products approved for purchasing must meet the following mandated criteria:
• contain no ingredients from the Massachusetts Toxics Use Reduction Act list of chemicals
• contain no carcinogens appearing on lists established by the International Agency for Research on Cancer, the National Toxicology Program, or the Occupational Safety and Health Administration, and contain no chemicals defined as Class A, B, or C carcinogens by the EPA
• contain no ozone-depleting ingredients
• comply with the phosphate content levels stipulated in Massachusetts law
• comply with the volatile organic compound (VOC) content levels stipulated in Massachusetts law

The National Association of Counties offers an extensive list of EPP resource links through its web site. In addition to offices and agencies of the federal government, the states of Minnesota and Massachusetts, and the local governments of King County, Washington, and Santa Monica, California, the list includes such organizations as Buy Green, Earth Systems’ Virtual Shopping Center for the Environment
(an online database of recycling industry products and services), the Environmental Health Coalition, Green Seal, the National Institute of Government Purchasing, Inc., and the National Pollution Prevention Roundtable. Businesses and business-related companies mentioned include the Chlorine Free Products Association, the Pesticide Action Network of North America, the Chlorine-Free Paper Consortium, and the Smart Office Resource Center.

[Jane E. Spear]
RESOURCES
OTHER
Argonne National Laboratory (U.S. Department of Energy). Green Purchasing Links. [cited June 2002].
Commonwealth of Massachusetts. Environmentally Preferable Products Procurement Program. [cited June 2002].
Environmental Protection Agency. Environmentally Preferable Purchasing. 1998 [cited April 2002].
National Association of Counties. Environmentally Preferred Purchasing Resources. [cited July 2002].
National Safety Council/Environmental Health Center. Environmentally Preferable Purchasing. May 8, 2001 [cited June 2002].
NYCWasteLe$$ Government. Environmentally Preferable Purchasing. October 2001 [cited June 2002].
Ohio Environmental Protection Agency. Environmentally Preferable Purchasing. [cited June 2002].
ORGANIZATIONS
Earth Systems, 508 Dale Avenue, Charlottesville, Virginia USA 22903, 434-293-2022
National Association of Counties, 440 First Street, NW, Washington, D.C. USA 20001, 202-393-6226, Fax: 202-393-2630
U.S. Environmental Protection Agency, 1200 Pennsylvania Avenue, NW, Washington, D.C. USA 20460, 202-260-2090
Environmentally responsible investing

Environmentally responsible investing is one component of a larger phenomenon known as socially responsible investing. The idea is that investors should use their money to support industries whose operations accord with the investors’ personal ethics. This concept is not a new one. In the early part of the twentieth century, Methodists, Presbyterians, and Baptists shunned companies that promoted sinful activities such as smoking, drinking, and gambling. More recently, many investors chose to protest apartheid by divesting from companies with operations in South Africa. Investors today might arrange their investment portfolios to reflect companies’ commitment to affirmative action, human rights, animal rights, the environment, or any other issues the investors believe to be important.
Environmental Encyclopedia 3 The purpose of environmentally responsible investing is to encourage companies to improve their environmental records. The recent emergence and growth of mutual funds identifying themselves as environmentally oriented funds indicates that environmentally responsible investing is a popular investment area for the 1990s. In 1990, around $1 billion were invested in environmentally oriented mutual funds. The naming of these funds can be misleading, however. Some funds have been developed for the purpose of being environmentally responsible; others have been developed for the purpose of reaping the profits anticipated to occur in the environmental services sector as environmentalists in the marketplace and environmental regulations encourage the purchasing of green products and technology. These funds are not necessarily environmentally responsible; some companies in the environmental clean-up industry, for example, have less than perfect environmental records. As the idea of environmentally responsible investing is still new, a generally accepted set of criteria for identifying environmentally responsible companies has not yet emerged. The fact is that everyone pollutes to some extent. The question is where to draw the line between acceptable and unacceptable behavior toward the environment. When grading a company in terms of its behavior toward the environment, one could use an absolute standard. For example, one could exclude all companies that have violated any Environmental Protection Agency (EPA) standards. The problem with such a standard is that some companies that have very good overall environmental records have sometimes failed to meet certain EPA standards. Alternatively, a company could be graded on its efforts to solve environmental problems. Some investors prefer to divest of all companies in heavily polluting industries, such as oil and chemical companies; others might prefer to use a relative approach and examine the environmental records of companies within industry groups. By directly comparing oil companies with other oil companies, for example, one can identify the particular companies committed to improving the environment. For consistency, some investors might choose to divest from all companies that supply or buy from an environmentally irresponsible company. It then becomes an arbitrary decision as to where this process stops. If taken to an extreme, the approach rejects holding United States treasury securities, since public funds are used to support the military, one of the world’s largest polluters and a heavy user of nonrenewable energy. A potential new indicator for identifying environmentally responsible companies has been developed by the Coalition for Environmentally Responsible Economies (CERES); it is a code called the Valdez Principles. The principles are the environmental equivalent of the Sullivan Principles, a
The Valdez Principles commit companies to strive for the sustainable use of natural resources and the reduction and safe disposal of waste. By signing the principles, companies commit themselves to continually improving their behavior toward the environment over time. So far, however, few companies have signed the code, possibly because it requires companies to appoint environmentalists to their boards of directors.

As there is no generally accepted set of criteria for identifying environmentally responsible companies, investors interested in such an investment strategy must be careful about accepting “environmentally responsible” labels. Investors must determine their own set of screening criteria based on their own personal beliefs about what is appropriate behavior with respect to the environment.
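The absolute and relative grading approaches described above translate directly into portfolio screens. The sketch below is illustrative only; the data fields and the emissions measure are hypothetical, and a real screen would draw on actual EPA enforcement records.

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    industry: str
    epa_violations: int
    emissions_per_revenue: float  # e.g., tons released per $1M revenue

def absolute_screen(companies):
    """Exclude any company with a recorded EPA violation."""
    return [c for c in companies if c.epa_violations == 0]

def relative_screen(companies, industry):
    """Keep companies cleaner than their own industry's average."""
    peers = [c for c in companies if c.industry == industry]
    average = sum(c.emissions_per_revenue for c in peers) / len(peers)
    return [c for c in peers if c.emissions_per_revenue < average]
```

The absolute screen can reject companies with otherwise strong records over a single violation, while the relative screen compares oil companies only with other oil companies, which is exactly the trade-off discussed above.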
[Barbara J. Kanninen]

RESOURCES
BOOKS
Brill, J. A., and A. Reder. Investing From the Heart. New York: Crown, 1992.
Harrington, J. C. Investing With Your Conscience. New York: Wiley, 1992.
PERIODICALS
McMurdy, D. “Green Is the Color of Money [Environmental Investing in Canada].” Maclean’s, December 1991, 49–50.
Rauber, P. “The Stockbroker’s Smile [Environmental Sector Funds].” Sierra 75 (July-August 1990): 18–21.

Enzyme

Enzymes are catalysts: compounds (proteins) that speed up the rate at which chemical reactions occur within living organisms without undergoing any permanent change themselves. They are crucial to life since, without them, the vast majority of biochemical reactions would occur too slowly for organisms to survive.

In general, enzymes catalyze two quite different kinds of reactions. The first type includes those by which simple compounds are combined with each other to make the new tissue from which plants and animals are built. For example, the most common enzyme in nature is probably carboxydismutase, the enzyme in green plants that couples carbon dioxide with an acceptor molecule in one step of the photosynthesis process by which carbohydrates are produced. Enzymes also catalyze reactions by which more complex compounds are broken down to provide the energy needed by organisms. The principal digestive enzyme in the human mouth, for example, is ptyalin (also known as α-amylase), which begins the digestion of starch.

Enzymes have both beneficial and harmful effects in the environment. On the one hand, environmental hazards such as heavy metals, pesticides, and radiation often exert their effects on an organism by disabling one or more of its critical enzymes. As an example, arsenic is poisonous to animals in part because it forms a compound with glutathione, which is thereby prevented from carrying out its normal function, the maintenance of healthy red blood cells. On the other hand, uses are now being found for enzymes in cleaning up the environment. For example, the Novo Nordisk company has discovered that adding an enzyme known as Pulpzyme can vastly reduce the amount of chlorine needed to bleach wood pulp in the manufacture of paper. Since chlorine is a serious environmental contaminant, this technique may represent a significant improvement on present pulp and paper manufacturing techniques.

[David E. Newton]
EPA see Environmental Protection Agency
Ephemeral species

Ephemeral species are plants and animals whose lifespans last only a few weeks or months. The most common ephemeral species are desert annuals, plants whose seeds remain dormant for months or years but which quickly germinate, grow, and flower when rain does fall. In such cases the amount and frequency of rainfall entirely determine how often ephemerals appear and how long they last. Tiny, usually microscopic, insects and other invertebrate animals often appear with these desert annuals, feeding on briefly available plants, quickly reproducing, and dying in a few weeks or less.

Ephemeral ponds, short-duration desert rain pools, are especially noted for supporting ephemeral species. Here small insects and even amphibians have ephemeral lives. The spadefoot toad (Scaphiopus multiplicatus), for example, matures and breeds in as little as eight days after a rain, feeding on short-lived brine shrimp, which in turn consume algae and plants that live as long as water or soil moisture lasts. Eggs, or sometimes the larvae of these animals, then remain in the soil until the next moisture event.

Ephemerals play an important role in many plant communities. In some very dry deserts, as in North Africa, ephemeral annuals comprise the majority of living species, although this rich flora can remain hidden for years at a time. Often widespread and abundant after a rain, these plants provide an essential food source for desert animals, including domestic livestock.
Because water is usually unavailable in such environments, many desert perennials also behave like ephemeral plants, lying dormant and looking dead for months or years but suddenly growing and setting seed after a rare rainfall.

The frequency of desert ephemeral recurrence depends upon moisture availability. In the Sonoran Desert of California and Arizona, annual precipitation allows ephemeral plants to reappear almost every year. In the drier deserts of Egypt, where rain may not fall for a decade or more, dormant seeds must survive for a much longer time before germination. In addition, seeds have highly sensitive germination triggers. Some annuals that require at least one inch (2.5 cm) of precipitation in order to complete their life cycle will not germinate when only one centimeter has fallen. In such a case seed coatings may be sensitive to soil salinity, which decreases as more rainfall seeps into the ground. Annually recurring ephemerals often respond to temperature as well. In the Sonoran Desert some rain falls in both summer and winter, and completely different summer and winter floral communities appear in response. Such adaptation to different temporal niches probably helps decrease competition for space and moisture and increase each species’ odds of success.

Although they are less conspicuous, ephemeral species also occur outside of desert environments. Short-lived food supplies or habitable conditions in some marine environments support ephemeral species. Ephemerals successfully exploit such unstable environments as volcanoes and steep slopes prone to slippage. More common are spring ephemerals in temperate deciduous forests. For a few weeks between snowmelt and closure of the overstory canopy, quick-growing ground plants, including small lilies and violets, sprout and take advantage of available sunshine. Flowering and setting seed before they are shaded out by larger vegetation, these ephemerals disappear by midsummer. Some persist in the form of underground root systems, but others are true ephemerals, with only seeds remaining until the next spring.

See also Adaptation; Food chain/web; Opportunistic organism

[Mary Ann Cunningham Ph.D.]
RESOURCES
BOOKS
Whitford, W. G. Pattern and Process in Desert Ecosystems. Albuquerque: University of New Mexico Press, 1986.
Zahran, M. A., and A. J. Willis. The Vegetation of Egypt. London: Chapman and Hall, 1992.
PERIODICALS
Hughes, J. “Effects of Removal of Co-Occurring Species on Distribution and Abundance of Erythronium americanum (Liliaceae), a Spring Ephemeral.” American Journal of Botany 79 (1990): 1329–39.
Went, F. W. “The Ecology of Desert Plants.” Scientific American 192 (1955): 68–75.
Epidemiology

Epidemiology, the study of epidemics, is sometimes called the medical aspect of ecology because it is the study of diseases in animal populations, including humans. The epidemiologist is concerned with the interactions of organisms and their environments as related to the presence of disease. Environmental factors of disease include geographical features, climate, and concentration of pathogens in soil and water. Epidemiology determines the numbers of individuals affected by a disease, the environmental circumstances under which the disease may occur, the causative agents, and the transmission of disease.

Epidemiology is commonly thought to be limited to the study of infectious diseases, but that is only one aspect of the medical specialty. The epidemiology of the environment and lifestyles has been studied since Hippocrates’s time. More recently, scientists have broadened the worldwide scope of epidemiology to studies of violence, of heart disease due to lifestyle choices, and of the spread of disease because of environmental degradation.

Epidemiologists at the Epidemic Intelligence Service (EIS) of the Centers for Disease Control and Prevention have played important roles in landmark epidemiologic investigations. Those include the identification in 1955 of a lot of poliovirus vaccine, supposedly inactivated, that was contaminated with live polio virus; an investigation of the definitive epidemic of Legionnaires’ disease in 1976; identification of tampons as a risk factor for toxic-shock syndrome; and investigation of the first cluster of cases that came to be called acquired immunodeficiency syndrome (AIDS). EIS officers are increasingly involved in the investigation of noninfectious disease problems, including the risk of injury associated with all-terrain vehicles and clusters of deaths related to flour contaminated with parathion.

The epidemiological classification of disease deals with the incidence, distribution, and control of disorders of a population. Using the example of typhoid, a disease spread through contaminated food and water, scientists first must establish that the disease observed is truly caused by Salmonella typhosa, the typhoid organism. Investigators then must know the number of cases, whether the cases were scattered over the course of a year or occurred within a short period, and the geographic distribution. It is critical that the precise locations of the diseased patients be established. In a hypothetical case, two widely separated locations within a city might be found to have clusters of cases of typhoid arising simultaneously. It might be found that each of these clusters revolved around a family unit, suggesting that personal relationships might be important.
Further investigation might disclose that all of the infected persons had dined at one time or at short intervals in a specific home, and that the person who had prepared the meal had visited a rural area, suffered a mild attack of the disease, and now was spreading it to family and friends by unknowingly contaminating food. One very real epidemic of cholera in the West African nation of Guinea-Bissau was tracked by CDC researchers using maps, interviews, and old-fashioned footwork door-to-door through the country. An investigator eventually tracked the source of the cholera outbreak to contaminated shellfish.
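Outbreak investigations like the typhoid and cholera examples above typically come down to comparing attack rates between people who were and were not exposed to a suspected source. The counts below are invented for illustration; real investigations use line lists compiled from patient interviews.

```python
def attack_rate(ill, total):
    """Fraction of a group that became ill."""
    return ill / total

def relative_risk(ill_exposed, total_exposed, ill_unexposed, total_unexposed):
    """How many times more likely the exposed group was to fall ill."""
    return (attack_rate(ill_exposed, total_exposed)
            / attack_rate(ill_unexposed, total_unexposed))

# Hypothetical dinner: 30 of 40 guests who ate a suspect dish fell ill,
# versus 2 of 35 guests who did not.
print(f"relative risk = {relative_risk(30, 40, 2, 35):.1f}")  # 13.1
```

A relative risk this far above 1 would point strongly at the dish, which is the logic behind tracing the hypothetical typhoid clusters to a single shared meal.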
risk, such as high blood pressure, elevated cholesterol levels, smoking, and lack of exercise. CDC epidemiologists are now tracking the pattern of violence, traditionally a matter for police. If patterns are found, then young people who are at risk can be taught to stop arguments before they escalate into violence, and public health workers can learn to recognize behaviors that lead to spouse abuse or the warning signs of teenage suicide. In the 1980s, classic epidemiology discovered that a puzzling array of illnesses was linked, and the syndrome came to be known as AIDS. Epidemiologists traced the disease to sexual contact, then to contaminated blood supplies, then proved the AIDS virus could cross the placental barrier, infecting babies born to HIV-infected mothers. The AIDS virus, called human immunodeficiency virus, may have existed for centuries in African monkeys and apes. Perhaps 40 years ago, this virus crossed from monkey to man, although researchers do not know how or why. African chimpanzees can be infected with HIV, but they do not develop the disease, suggesting that chimps have developed protective immunity. Over centuries, AIDS probably will evolve into a less deadly disease in humans. But before then, researchers fear that new, more deadly diseases will evolve. As human communities change and create new ways for diseases to spread, viruses and bacteria constantly evolve as well. Rapidly increasing human populations provide a fertile breeding ground for microbes, and as the planet becomes more crowded, the distances that separate communities become smaller. Epidemiology has become one of the important sciences in the study of nutritional and biotic diseases around the world. The United Nations supports, in part, a World Health Organization investigation of nutritional diseases. Epidemiologists have also been called upon in times of natural emergencies. When Mount St. Helens erupted on May 18, 1980, CDC epidemiologists were asked to assist in an epidemiologic evaluation. The agency funded and assisted in a series of studies on the health effects of dust exposure, occupational exposure, and mental health effects of the volcanic eruption. In 1990, CDC epidemiologists began research for the Department of Energy to study people who have been exposed to radiation. A major task of the study is to quantify exposures based on historical reconstructions of emissions from nuclear plant operations. Epidemiologists have undertaken a major thyroid disease study for those people exposed to radioactive iodine as a result of living near the Hanford Nuclear Reservation in Richland, Washington, during the 1940s and 1950s. [Linda Rehkopf]
RESOURCES BOOKS Friedman, G. D. Primer of Epidemiology. 3rd ed. New York: McGraw-Hill, 1987. Goldsmith, J. R., ed. Environmental Epidemiology: Epidemiological Investigations of Community Environmental Problems. St. Louis: CRC Press, 1986. Kopfler, F. C., and G. Craun, eds. Environmental Epidemiology. Chelsea, MI: Lewis, 1986.
Erodible Susceptible to erosion, or the movement of soil or earth particles, due to the primary forces of wind, moving water, ice, and gravity. Tillage implements may also move soil particles, but this transport is usually not considered erosion.
Erosion Erosion is the wearing away of the land surface by running water, wind, ice, or other geologic agents, including such processes as gravitational creep. The term geologic erosion refers to the normal, natural erosion caused by geological processes acting over long periods of time, undisturbed by humans. Accelerated erosion is a more rapid erosion process influenced by human, or sometimes animal, activities. Accelerated erosion in North America has only been recorded for the past few centuries, and in research studies, postsettlement erosion rates were found to be eight to 350 times higher than presettlement erosion rates. Soil erosion has been both accelerated and controlled by humans since recorded history. In Asia, the Pacific, Africa, and South America, complex terracing and other erosion control systems on arable land go back thousands of years. Soil erosion and the resultant decreased food supply have been linked to the decline of historic, particularly Mediterranean, civilizations, though the exact relationship with the decline of governments such as the Roman Empire is not clear. A number of terms have been used to describe different types of erosion, including gully erosion, rill erosion, interrill erosion, sheet erosion, splash erosion, saltation, surface creep, suspension, and siltation. In gully erosion, water accumulates in narrow channels and, over short periods, removes the soil from this narrow area to considerable depths, ranging from 1.5 ft (0.5 m) to as much as 82–98 ft (25–30 m). Rill erosion refers to a process in which numerous small channels of only a few inches in depth are formed, usually occurring on recently cultivated soils. Interrill erosion is the removal of a fairly uniform layer of soil on a multitude of relatively small areas by rainfall splash and film flow.
Usually interpreted to include rill and interrill erosion, sheet erosion is the removal of soil from the land surface by rainfall and surface runoff. Splash erosion, the detachment and airborne movement of small soil particles, is caused by the impact of raindrops on the soil. Saltation is the bouncing or jumping action of soil and mineral particles caused by wind, water, or gravity. Saltation occurs when soil particles 0.1–0.5 mm in diameter are blown to a height of less than 6 in (15 cm) above the soil surface for relatively short distances. The process includes gravel or stones moved by the energy of flowing water, as well as any soil or mineral particle movement downslope due to gravity. Surface creep, which usually requires extended observation to be perceptible, is the rolling of dislodged particles 0.5–1.0 mm in diameter by wind along the soil surface. Suspension occurs when soil particles less than 0.1 mm in diameter are blown through the air for relatively long distances, usually at a height of less than 6 in (15 cm) above the soil surface. In siltation, decreased water speed causes water-borne sediments, or silt, to be deposited in stream channels, lakes, reservoirs, or flood plains. In the water erosion process, the eroded sediment is often higher (enriched) in organic matter, nitrogen, phosphorus, and potassium than the bulk soil from which it came. The amount of enrichment may be related to the soil, the amount of erosion, the time of sampling within a storm, and other factors. Likewise, during a wind erosion event, the eroded particles are often higher in clay, organic matter, and plant nutrients. Frequently, in the Great Plains, the surface soil becomes increasingly more sandy over time as wind erosion continues.
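The particle-size ranges just given amount to a simple classification rule for wind transport. The sketch below restates them; the function name and the fallback for particles larger than 1.0 mm are illustrative assumptions, not taken from this entry:

```python
def wind_transport_mode(diameter_mm):
    """Classify the dominant wind-erosion transport mode of a soil particle,
    using the approximate diameter ranges described above (illustrative only)."""
    if diameter_mm < 0.1:
        return "suspension"     # blown through the air for relatively long distances
    elif diameter_mm <= 0.5:
        return "saltation"      # bounces within about 6 in (15 cm) of the surface
    elif diameter_mm <= 1.0:
        return "surface creep"  # rolled along the soil surface by wind
    else:
        return "unclassified"   # larger particles are rarely moved by wind alone

# Example: a 0.3-mm sand grain moves mainly by saltation.
print(wind_transport_mode(0.3))
```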
Erosion estimates using the Universal Soil Loss Equation (USLE) and the Wind Erosion Equation (WEE) estimate erosion on a point basis, expressed in mass per unit area. If aggregated for a large area (e.g., a state or nation), very large numbers are generated and have been used to draw misleading conclusions. The estimates of the USLE and WEE indicate only the soil moved from a point. They do not indicate how far the sediment moved or where it was deposited. In cultivated fields, the sediment may be deposited in other parts of the field with different crop cover or in areas where the land slope is less. It may also be deposited in riparian land along stream channels or in flood plains. Only a small fraction of the water-eroded sediment leaves the immediate area. For example, in a study of five river watersheds in Minnesota, it was estimated that from less than 1% to 27% of the eroded material entered stream channels, depending on the soil and topographic conditions. The deposition of wind-eroded sediment is not well quantified, but much of the sediment is probably deposited in nearby areas more protected from the wind by vegetative cover, stream valleys, road ditches, woodlands, or farmsteads. While a number of national and regional erosion estimates for the United States have been made since the 1920s, the methodologies of estimation and interpretations have been different, making accurate time comparisons impossible. The most extensive surveys have been made since the Soil, Water and Related Resources Act was passed in 1977. In these surveys a large number of points were randomly selected, data assembled for the points, and the USLE or the WEE used to estimate erosion amounts. While these equations were the best available at the time, their results are only estimations, and subject to interpretation. Considerable research on improved methods of estimation is underway by the U.S. Department of Agriculture.
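The entry does not reproduce the USLE itself; in its commonly published form it is a simple product of empirical factors, which is why it yields point estimates rather than sediment-routing information:

\[ A = R \cdot K \cdot LS \cdot C \cdot P \]

where \(A\) is the predicted average annual soil loss per unit area, \(R\) the rainfall-runoff erosivity factor, \(K\) the soil erodibility factor, \(LS\) the combined slope length and steepness factor, \(C\) the cover and management factor, and \(P\) the support practice factor. Evaluating these factors describes detachment at a single point, so summing such estimates over a state or nation says nothing about where the sediment ends up.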
In the Corn Belt of the United States, water erosion may cause a 1.7–7.8% drop in soil productivity over the next one hundred years, as compared to current levels, depending on the topography and soils of the area. The U.S.D.A. results, based on estimated erosion amounts for 1977, only included sheet erosion, not losses of plant nutrients. Though the figures may be low for this reason, other surveys have produced similar estimates. In addition to depleting farmlands, eroded sediment causes off-site damages that, according to one study, may exceed on-site losses. The sediment may end up in a domestic water supply, clog stream channels, and even degrade wetlands, wildlife habitats, and entire ecosystems. See also Environmental degradation; Gullied land; Soil eluviation; Soil organic matter; Soil texture [William E. Larson]
RESOURCES BOOKS Paddock, J. N., and C. Bly. Soil and Survival: Land Stewardship and the Future of American Agriculture. San Francisco: Sierra Club Books, 1987. Resource Conservation Glossary. 3rd ed. Ankeny, IA: Soil Conservation Society of America, 1982.
PERIODICALS Steinhart, P. “The Edge Gets Thinner.” Audubon 85 (November 1983): 94–106+.
OTHER Brown, L. R., and E. Wolf. “Soil Erosion: Quiet Crisis in the World Economy.” Worldwatch Paper #60. Washington, DC: Worldwatch Institute, 1984.
Soil erosion on a trail in the Adirondack Mountains. (Photograph by Yoav Levy. Phototake. Reproduced by permission.)
Escherichia coli Escherichia coli, or E. coli, is a bacterium in the family Enterobacteriaceae that is found in the intestines of warm-blooded animals, including humans. E. coli represents about 0.1% of the total bacteria in an adult’s intestines (on a Western diet). As part of the normal flora of the human intestinal tract, E. coli aids in food digestion by producing vitamin K and B-complex vitamins from undigested materials in the large intestine, and it suppresses the growth of harmful bacterial species. However, E. coli has also been linked to diseases in almost every part of the body. Pathogenic strains of E. coli have been shown to cause pneumonia, urinary tract infections, wound and blood infections, and meningitis. Toxin-producing strains of E. coli can cause severe gastroenteritis (hemorrhagic colitis), which can include abdominal pain, vomiting, and bloody diarrhea. In most people, the vomiting and diarrhea stop within two to three days. However, about 5–10% of those affected will develop hemolytic-uremic syndrome (HUS), a rare condition that affects mostly children under the age of 10 but may also affect the elderly as well as persons with other illnesses. About 75% of HUS cases in the United States are caused by an enterohemorrhagic (intestinally related, hemorrhage-causing) strain of E. coli referred to as E. coli O157:H7, while the remaining cases are caused by non-O157 strains. E. coli O157:H7 is found in the intestinal tract of cattle.
In the United States, the Centers for Disease Control and Prevention estimates that there are about 10,000–20,000 infections and 500 deaths annually caused by E. coli O157:H7. E. coli O157:H7, first identified in 1982 and isolated with increasing frequency since then, is found in contaminated foods such as meat, dairy products, and juices. Symptoms of an E. coli O157:H7 infection start about seven days after infection with the bacteria. The first symptom is sudden onset of severe abdominal cramps. After a few hours, watery diarrhea begins, causing loss of fluids and electrolytes (dehydration), which causes the person to feel tired and ill. The watery diarrhea lasts for about a day and then changes to bright red bloody stools, as the infection causes sores to form in the intestines. The bloody diarrhea lasts for two to five days, with as many as 10 bowel movements a day. Additional symptoms may include nausea and vomiting, without a fever or with only a mild fever. After about five to 10 days, HUS can develop. HUS is characterized by destruction of red blood cells, damage to the lining of blood vessel walls, reduced urine production, and, in severe cases, kidney failure. Toxins produced by the bacteria enter the blood stream, where they destroy red blood cells and platelets, which contribute to the clotting of blood. The damaged red blood cells and platelets clog tiny blood vessels in the kidneys, or cause lesions to form in the kidneys, making it difficult for the kidneys to remove wastes and extra fluid from the body, resulting in hypertension, fluid accumulation, and reduced production of urine. The diagnosis of an E. coli infection is made through a stool culture. Treatment of HUS is supportive, with particular attention to management of fluids and electrolytes. Some studies have shown that the use of antibiotics and antimotility agents during an E. coli infection may worsen the course of the infection, and they should be avoided. Ninety percent of children with HUS who receive careful supportive care survive the initial acute stages of the condition, with most having no long-term effects. In about 50% of cases, short-term replacement of kidney function is required in the form of dialysis. However, between 10 and 30% of the survivors will have kidney damage that will lead to kidney failure immediately or within several years. These children with kidney failure require ongoing dialysis to remove wastes and extra fluids from their bodies, or may require a kidney transplant. The most common way an E. coli O157:H7 infection is contracted is through the consumption of undercooked ground beef (e.g., eating hamburgers that are still pink inside). Healthy cattle carry E. coli within their intestines. During the slaughtering process, the meat can become contaminated with the E. coli from the intestines. When contaminated beef is ground up, the E. coli bacteria are spread throughout the meat.
Additional ways to contract an E. coli infection include drinking contaminated water and unpasteurized milk and juices, eating contaminated fruits and vegetables, and working with cattle. The infection is also easily transmitted from an infected person to others in settings such as day care centers and nursing homes when improper sanitary practices are used. Prevention of HUS caused by ingestion of foods contaminated with E. coli O157:H7 and other toxin-producing bacteria is accomplished through practicing hygienic food preparation techniques, including adequate hand washing, cooking meat thoroughly, defrosting meats safely, vigorous washing of fruits and vegetables, and handling leftovers properly. Irradiation of meat has been approved by the United States Food and Drug Administration and the United States Department of Agriculture in order to decrease bacterial contamination of consumer meat supplies. The presence of E. coli in surface waters indicates that there has been fecal contamination of the water body from agricultural and/or urban and residential areas. However, the contribution from human versus agricultural sources is difficult to determine. Since the concentration of E. coli in a surface water body is dependent on runoff from various sources of contamination, it is related to the land use and hydrology of the surrounding watershed. E. coli concentrations at a specific location in a water body will vary depending on the bacteria levels already in the water, inputs from various sources, dilution with precipitation and runoff, and die-off or multiplication of the organism within the water body. Sediments can act as a reservoir for E. coli, as the sediments protect the organisms from bacteriophages and microbial toxicants. The E. coli can persist in the sediments and contribute to concentrations in the overlying waters for months after the initial contamination. Routine monitoring for enteropathogens, which cause gastrointestinal diseases and are a result of fecal contamination, is necessary to maintain water that is safe for drinking and swimming. Many of these organisms are hard to detect, so monitoring of an indicator organism is used to determine fecal contamination. To provide safe drinking water, the water is treated with chlorine, ultraviolet light, and/or ozone. Traditionally fecal coliform bacteria have been used as the indicator organisms for monitoring, but the test for these bacteria also detects thermotolerant non-fecal coliform bacteria. Therefore, the U.S. Environmental Protection Agency (EPA) is recommending that E. coli as well as enterococci be used as indicators of fecal contamination of a water body instead of fecal coliform bacteria. The test for E. coli does not include non-fecal thermotolerant coliforms. An epidemiological study has shown that even though the strains of E. coli present in a water body may not be pathogenic, these organisms are the best predictor of swimming-associated gastrointestinal illness.
The U.S. EPA recreational water quality standard is based on a threshold concentration of E. coli above which the health risk from waterborne disease is unacceptably high. The recommended standard corresponds to approximately 8 gastrointestinal illnesses per 1,000 swimmers. The standard is based on two criteria: 1) a geometric mean of 126 organisms per 100 ml, based on several samples collected during dry weather conditions, or 2) 235 organisms per 100 ml for any single water sample. During 2002, the U.S. EPA finalized guidance on the use of E. coli as the basis for bacterial water quality criteria to protect recreational freshwater bodies. [Judith L. Sims]
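As a minimal sketch of how the two criteria combine in practice (the function name and the sample counts are illustrative, not drawn from EPA guidance):

```python
import math

GEOMETRIC_MEAN_LIMIT = 126.0  # E. coli per 100 ml, geometric mean of several samples
SINGLE_SAMPLE_LIMIT = 235.0   # E. coli per 100 ml, any single sample

def meets_recreational_criteria(counts):
    """Return True if E. coli counts (organisms per 100 ml) satisfy both criteria.
    Counts must be positive; zero readings need a substitution convention first."""
    geometric_mean = math.exp(sum(math.log(c) for c in counts) / len(counts))
    return geometric_mean <= GEOMETRIC_MEAN_LIMIT and max(counts) <= SINGLE_SAMPLE_LIMIT

# Five hypothetical dry-weather samples: geometric mean is about 109, maximum is 150.
print(meets_recreational_criteria([90, 110, 150, 80, 130]))  # True
```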
RESOURCES BOOKS Bell, Chris, and Alec Kyriakides. E. coli. Boca Raton: Chapman & Hall, 1998. Burke, Brenda Lee. Don’t Drink the Water: The Walkerton Tragedy. Victoria, BC: Trafford Publishing, 2001. Parry, Sharon, and S. Palmer. E. coli: Environmental Health Issues of VTEC O157. London, UK: Spon Press, 2001. Sussman, Max, ed. Escherichia coli: Mechanisms of Virulence. Cambridge, UK: Cambridge University Press, 1997. U.S. Environmental Protection Agency. Implementation Guidance for Ambient Water Quality Criteria for Bacteria. Draft, EPA-823-B-02-003. Washington, DC: U.S. Environmental Protection Agency, 2002.
PERIODICALS Koutkia, Polyxeni, Eleftherios Mylonakis, and Timothy Flanigan. “Enterohemorrhagic Escherichia coli: An Emerging Pathogen.” American Family Physician, 56, no. 3 (September 1, 1997): 853–858.
OTHER “Escherichia coli and Recreational Water Quality in Vermont.” Bacterial Water Quality. February 7, 2000 [cited June 2002]. U.S. Food and Drug Administration. “Escherichia coli.” Bad Bug Book. February 13, 2002 [cited May 25, 2002].
Essential fish habitat Essential Fish Habitat (EFH) is a federal provision to conserve and sustain the habitats that fish need to go through their life cycles. The United States Congress in 1996 added the EFH provision to the Magnuson Fishery Conservation and Management Act of 1976. Renamed the Magnuson-Stevens Fishery Conservation and Management Act in 1996, the act is the federal law that governs marine (sea) fishery management in the United States. The amended act required that fishery management plans include designations and descriptions of essential fish habitats. The plan is a document describing the strategy to reach management goals in a fishery, an area where fish
breed and people catch them. The Magnuson-Stevens Act covers plans for waters located within the United States’ exclusive economic zone. The zone extends offshore from the coastline for three to 200 miles. The designation of EFH was necessary because the continuing loss of aquatic habitat posed a major long-term threat to the viability of commercial and recreational fisheries, Congress said in 1996. Lawmakers defined EFH as “those waters and substrate necessary to the fish for spawning, breeding, feeding, or growth to maturity.” Substrate consists of sediment and structures below the water. The Magnuson-Stevens Act called for identification of EFH by eight regional fishery management councils and the Highly Migratory Species Division of the National Marine Fisheries Service (NMFS), an agency of the Commerce Department’s National Oceanic and Atmospheric Administration (NOAA). Under the Magnuson-Stevens Act, NOAA manages more than 700 species. These species range from tiny reef fish to large tuna. NOAA Fisheries and the councils are required by the act to minimize “to the extent practicable” the adverse effects of fishing on EFH. The act also directed the councils and NOAA to devise plans to conserve and enhance EFH. Those plans are included in the management plans. Also in the plans are “habitat areas of particular concern.” These areas within an EFH include rare habitat or habitat that is ecologically important. Furthermore, the act required federal agencies to work with NMFS when the agencies plan to authorize, finance, or carry out activities that could adversely affect EFH. This process, called an EFH consultation, is required if the agency plans an activity, such as dredging, near an essential fish habitat. While NMFS does not have veto power over the project, NOAA Fisheries will provide conservation recommendations. The eight regional fishery management councils were established by the 1976 Magnuson Fishery Conservation and Management Act. That legislation also established the exclusive economic zone and staked the United States’ claim to it. The 1976 act also addressed issues such as foreign fishing and how to connect the fishing community to the management process, according to an NOAA report. The councils manage living marine resources in their regions and address issues such as EFH. The New England Fishery Management Council manages fisheries in federal waters off the coasts of Maine, New Hampshire, Massachusetts, Rhode Island, and Connecticut. New England fish species include Atlantic cod, Atlantic halibut, and white hake. The Mid-Atlantic Fishery Management Council manages fisheries in federal waters off the mid-Atlantic coast. Council members represent the states of New York,
New Jersey, Pennsylvania, Delaware, Maryland, and Virginia. North Carolina is represented on this council and the South Atlantic Council. Fish species found within this region include ocean quahog, Atlantic mackerel, and butterfish. The South Atlantic Fishery Management Council is responsible for the management of fisheries in the federal waters within a 200-mile area off the coasts of North Carolina, South Carolina, Georgia, and east Florida to Key West. Marine species in this area include cobia, golden crab, and Spanish mackerel. The Gulf of Mexico Fishery Management Council draws its membership from the Gulf Coast states of Florida, Alabama, Mississippi, Louisiana, and Texas. Marine species in this area include shrimp, red drum, and stone crab. The Caribbean Fishery Management Council manages fisheries in federal waters off the Commonwealth of Puerto Rico and the U.S. Virgin Islands. The management plan covers coral reefs and species including queen triggerfish and spiny lobster. The North Pacific Fishery Management Council includes representatives from Alaska and Washington state. Species within this area include salmon, scallops, and king crab. The Pacific Fishery Management Council draws its members from Washington, Oregon, and California. Species in this region include salmon, northern anchovy, and Pacific bonito. The Western Pacific Fishery Management Council is concerned with the United States exclusive economic zone that surrounds Hawaii, American Samoa, Guam, the Northern Mariana Islands, and other U.S. possessions in the Pacific. Fishery management encompasses coral and species such as swordfish and striped marlin. [Liz Swain]
RESOURCES BOOKS Dobbs, David. The Great Gulf: Fishermen, Scientists, and the Struggle to Revive the World’s Greatest Fishery. Washington, DC: Island Press, 2000. Hanna, Susan. Fishing Grounds: Defining a New Era for American Fishing Management. Washington, DC: Island Press, 2000.
ORGANIZATIONS
National Marine Fisheries Service, Office of Habitat Conservation, 1315 E. West Highway, 15th Floor, Silver Spring, MD 20910. (301) 713-2325, Fax: (301) 713-1043, Email: [email protected], http://www.nmfs.noaa.gov
National Oceanic and Atmospheric Administration, 14th Street & Constitution Avenue, NW, Room 6013, Washington, D.C. 20230. (202) 482-6090, Fax: (202) 482-3154, Email: [email protected], http://www.noaa.gov
Estuary Estuaries represent one of the most biologically productive aquatic ecosystems on Earth. An estuary is a coastal body of water where chemical and physical conditions are intermediate between those of the freshwater rivers that feed into it and the salt water of the ocean beyond it. It is the point of mixing for these two very different aquatic ecosystems. The freshwater of the rivers mixes with the salt water pushed by the incoming tides to provide a brackish water habitat ideally suited to a tremendous diversity of coastal marine life. Estuaries are nursery grounds for the developing young of commercially important fish and shellfish. The young of any species are less tolerant of physical extremes in their environment than adults. Many species of marine life cannot tolerate the concentrations of salt in ocean water as they develop from egg to subadult, and by providing a mixture of fresh and salt water, estuaries give these larval life forms a more moderate environment in which to grow. Because of this, the adults often move directly into estuaries to spawn. Estuaries are extremely rich in nutrients, and this is another reason for the great diversity of organisms in these ecosystems. The flow of freshwater and the periodic flooding of surrounding marshlands provide an influx of nutrients, as do the daily surges of tidal fluctuations. Constant physical movement in this environment keeps valuable nutrient resources available to all levels within the food chain/web. Coastal estuaries also provide a major filtration system for waterborne pollutants. This natural water treatment facility helps maintain and protect water quality, and studies have shown that one acre of tidal estuary can be the equivalent of a $75,000 waste treatment plant. When its value for recreation and seafood production is included in this estimate, a single acre of estuary has been valued at $83,000. An acre of farmland in America’s Corn Belt, for comparison, has a top value of $1,200 and an annual income through crop production of $600. Throughout the ages man has settled near bodies of water and utilized the bounty they provide, and the economic value of estuaries, as well as the fact that they are a coastal habitat, has made them vulnerable to exploitation. Chesapeake Bay on the Atlantic coast is the largest estuary in the United States, draining six states and the District of Columbia. It is the largest producer of oysters in the country; it is the single largest producer of blue crabs in the world; and 90 percent of the striped bass found on the East Coast hatch there. It is one of the most productive estuaries in the world, yet its productivity has declined in recent decades due to a huge increase in the number of people in the region. Between the 1940s and the 1990s, the population jumped
from about three and a half million to over 15 million, bringing with it an increase in pollution and overfishing of the bay. Sewage treatment plants contribute large amounts of phosphates, while agricultural, urban, and suburban discharges deposit nitrates, which in turn contribute to algal blooms and oxygen depletion. Pesticides and industrial toxics also contribute to the bay’s problems. Since the early 1980s, concerted efforts to clean up the Chesapeake Bay and restore its seafood productivity have been undertaken by state and federal agencies. Progress has been made, but there is still much to be done, both to restore this vital ecosystem and to ensure prolonged cooperation between government agencies, industry, and the general populace. See also Agricultural pollution; Aquatic chemistry; Commercial fishing; Dissolved oxygen; Nitrates and nitrites; Nitrogen cycle; Restoration ecology [Eugene C. Beckham]
RESOURCES BOOKS Baretta, J. W., and P. Ruardij, eds. Tidal Flat Estuaries. New York: Springer-Verlag, 1988. McLusky, D. The Estuarine Ecosystem. New York: Chapman & Hall, 1989.
PERIODICALS Horton, T. “Chesapeake Bay: Hanging in the Balance.” National Geographic 183 (1993): 2–35.
Ethanol Ethanol is an organic compound with the chemical formula C2H5OH. Its common names include ethyl alcohol and grain alcohol. The latter term reflects one method by which the compound can be produced: the fermentation and distillation of corn, wheat, and other grains (sugar cane is another common feedstock). Ethanol is the primary component in many alcoholic drinks such as beer, wine, vodka, gin, and whiskey. Many scientists believe that ethanol can and should be more widely used in automotive fuels. When mixed with gasoline in a one-to-nine ratio, it is sold as gasohol. The reduced costs of producing gasohol have only recently made it a viable economic alternative to other automotive fuels. See also Alternative fuels
Ethnobotany The field of ethnobotany is concerned with the relationship between indigenous cultures and plants. Plants play a major and complex role in the lives of indigenous peoples, providing nourishment, shelter, and medicine. Some plants have had such a major effect on traditional cultures that religious
ceremonies and cultural beliefs were developed around their use. Ethnobotanists study and document these relationships. The discovery of many plant-derived foods and medicines first used by indigenous cultures has changed the modern world. On the economic side, the field of ethnobotany determines the traditional uses of plants in order to find other potential applications for food, medicine, and industry. As an academic discipline, ethnobotany studies the interaction between peoples and plant life to learn more about human culture, history, and development. Ethnobotany draws upon many academic areas, including anthropology, archeology, biology, ecology, chemistry, geography, history, medicine, religious studies, and sociology, to help understand the complex interaction between traditional human cultures and the plants around them. Early explorers who observed how native peoples used plants, then carried those useful plants back to their own cultures, might be considered the first ethnobotanists, although that is not a word they would have used to describe themselves. The plant discoveries these explorers made caused the expansion of trade between many parts of the globe. For example, civilizations were changed by the discovery and subsequent trade of sugar, tea, coffee, and spices including cinnamon and black pepper. During his 1492 voyage to the Caribbean, Christopher Columbus discovered tobacco in Cuba and took it back with him to Europe, along with later discoveries of corn, cotton, and allspice. Other Europeans traveling to the Americas discovered tomatoes, potatoes, cocoa, bananas, pineapples, and other useful plants and medicines. Latex rubber was discovered in South America when European explorers observed native peoples dipping their feet in the rubber compound before walking across hot coals. The study of plants and their place in culture has existed for centuries. In India the Vedas, which are long religious poems that have preserved ancient beliefs for thousands of years, contain descriptions of the use and value of certain plants. Descriptions of how certain plants can be used have been found in ancient Egyptian scrolls. More than 500 years ago the Chinese made records of medicinal plants. In ancient Greece, Aristotle wrote about the uses of plants in early Greek culture and theorized that each plant had a unique spirit. The Greek surgeon Dioscorides in A.D. 77 recorded the medicinal and folk use of nearly 600 plants in the Mediterranean region. During the Middle Ages, there existed many accounts of the folk and medicinal use of plants in Europe in books called herbals. As the study of plants became more scientific, ethnobotany evolved as a field. One of the most important contributors to the field was Carl Linnaeus, a Swedish botanist who developed the system of naming organisms that is still used today. This system of binomial classification gives each
organism a Latin genus and species name. It was the first system that enabled scientists speaking different languages to organize and accurately record new plant discoveries. In addition to naming the 5,900 plants known to European botanists, Linnaeus sent students around the world looking for plants that could be useful to European civilization. This was an early form of economic botany, an offshoot of ethnobotany that is interested primarily in practical developments of new uses for plants. Pure ethnobotany is more sociology-based, and is primarily interested in the cultural relationship of plants to the indigenous peoples who use them. In the nineteenth century, bioprospecting (the active search for valuable plants in other cultures) multiplied as exploration expanded across the globe. The most famous ethnobotanist of the 1800s was Richard Spruce, an Englishman who spent 17 years in the Amazon living among the native people and observing and recording their use of plants. Observing a native treatment for fever, Spruce collected cinchona bark, from which the drug quinine was developed. Quinine has saved the lives of millions of people infected with malaria. Spruce also documented the native use of hallucinogenic plants in religious rituals and the belief systems associated with these plants. Many native cultures on all continents believed that certain psychoactive plants that caused visions or hallucinations gave them access to spiritual and healing powers. Spruce discovered that for some cultures, plants were central to the way human life and the world were perceived. Thus, the field of ethnobotany began expanding from the study of the uses of plants to how plants and cultures interacted on more sociological or philosophical/religious grounds. An American botanist named John Harshberger first coined the term “ethnobotany” in 1895, and the field evolved into an academic discipline in the twentieth century. The University of New Mexico offered the first master’s degree in ethnobotany. Its program concentrated on the ethnobotany of the Native Americans of the southwestern United States. In the twentieth century one of the most famous ethnobotanists and plant explorers was Harvard professor Richard Evans Schultes. Inspired by the story of Richard Spruce, Schultes lived for 12 years with several indigenous tribes in South America while he observed and documented their use of plants as medicine, poison, and for religious purposes. He discovered many plants that have been investigated for pharmaceutical use. His work also provided further insight into the use of hallucinogenic plants by native peoples and the cultural significance of that practice. Interest in ethnobotany increased substantially in the 1990s. One study showed that the number of papers in the
field doubled in the first half of the 1990s, when compared to the last half of the 1980s. By the early 2000s, several universities around the world offered graduate programs in ethnobotany, and undergraduate programs were becoming more common. By the start of the twenty-first century, ethnobotany was an accepted academic field. In addition, the pharmaceutical industry continued to research plants used by indigenous cultures in a quest for new or more effective drugs. Many countries conduct ethnobotanical studies of their native cultures. A research survey showed that nearly 40% of ethnobotanical studies are being conducted in North America, but other countries, including Brazil, Colombia, Germany, France, England, India, China, and Kenya, are also active in the field. For instance, ethnobotanists in China and India are recording traditional medicine systems that use native plants. Researchers are also studying traditional sustainable agriculture methods in Africa in the hope of applying them to new drought-management techniques. Interest in the field of ethnobotany has increased as scientists have learned more about indigenous cultures. “Primitive” people were once viewed as backward and lacking in knowledge compared to the “civilized” world, but ethnobotanists and other scientists have shown that native peoples often have developed sophisticated knowledge about preparing and using plants for food and medicine. The ways in which people and plants interact are broad and complex, and the modern field of ethnobotany, in its many areas of inquiry, reflects this complexity. Areas of interest to ethnobotanists include traditional peoples’ knowledge of and interaction with plant life, traditional agricultural practices, indigenous peoples’ conception of the world around them and the role of plants in their religious belief systems and rituals, how plants are used to make products and art objects, the knowledge and use of plants for medicine (a sub-field called ethnopharmacology), and the historical relationship between plants and past peoples (a sub-field called paleoethnobotany). At the beginning of the twenty-first century, ethnobotanists are fighting a war against time, because the world’s native cultures and stores of plant life are disappearing at a rapid pace. Language researchers have estimated that the world lost over half of its spoken languages in the twentieth century, and of the languages that remain, only 20% were being taught to young children in the early 2000s. The loss of traditional languages indicates that indigenous cultures are disappearing quickly as well. Historically, up to 75% of all useful plant-derived compounds have been discovered through the observation of folk and traditional plant use. Ethnobotanists are concerned that modern people will lose a storehouse of valuable plant knowledge, as well as rare
cultural and historical information, as these indigenous cultures disappear. Ethnobotanists, like many scientists, are concerned about environmental devastation from development pressure and slash-and-burn agriculture in less developed regions that are rich and diverse in plant life, such as the Amazon rainforest. Harvard biologist E. O. Wilson has estimated that the world is losing as many as 74 species per day. To put ethnobotanists’ concerns about forest loss in perspective, a single square kilometer of rainforest may contain more plant species than a North American area the size of Vermont. And of all the plant species in the rainforest, fewer than 1% have been scientifically analyzed for potential uses. Considering that 25% of all prescription drugs in the United States contain active ingredients derived from plants, the continuing loss of the world’s plant communities could be an incalculable loss. In the face of disappearing indigenous peoples and the loss of botanically rich habitat, ethnobotanists must work quickly. Active areas in the field of ethnobotany in the early 2000s include the identification of new plant-derived foods, fibers, and pharmaceuticals; investigation of traditional agricultural methods and vegetation management; the use of traditional technologies for sustainable development; and the deeper understanding of ecology and ways to preserve the biodiversity of ecosystems. [Douglas Dupler]
RESOURCES BOOKS Cotton, C. M. Ethnobotany: Principles and Applications. New York: John Wiley & Sons, 1996. Davis, Wade. Light at the Edge of the World: A Journey Through the Realm of Vanishing Cultures. Washington, DC: National Geographic Society, 2001. Moerman, Daniel E. Native American Ethnobotany. Portland, OR: Timber Press, 1998. Plotkin, Mark J. Tales of a Shaman’s Apprentice: An Ethnobotanist Searches for New Medicines in the Amazon Rainforest. New York: Viking, 1993. Schultes, Richard Evans, and Albert Hofmann. Plants of the Gods. Rochester, VT: Healing Arts Press, 1992. Schultes, Richard Evans, and Siri von Reis, eds. Ethnobotany: Evolution of a Discipline. Portland, OR: Dioscorides Press, 1995. Wilson, E. O. Biophilia. Cambridge: Harvard University Press, 1984.
PERIODICALS Cox, Paul Alan. “Carl Linnaeus: The Roots of Ethnobotany.” Odyssey (February 2001): 6. Magee, Bernice E. “Hunting for Poisons.” Appleseeds, April 2001, 20.
OTHER Society for Economic Botany Home Page. [cited July 2002].
EU see European Union
Eurasian milfoil Eurasian milfoil (Myriophyllum spicatum) is a feathery-looking underwater plant that has become a major nuisance in waterways in the United States. Introduced into the country over 70 years ago as an ornamental tank plant by the aquarium industry, it has since spread along the East Coast from Vermont to Florida and grows as far west as Wisconsin and Texas. It is also found in California. It is particularly abundant in the Chesapeake Bay, the Potomac River, and in some Tennessee Valley reservoirs. Eurasian milfoil is able to tolerate a wide range of salinity in water and grows well in inland fresh waters as well as in brackish coastal waters. The branched stems grow from 1–10 ft (0.3–3 m) in length and are upright when young, becoming horizontal as they get older. Although most of the plant is underwater, the tips of the stems often project out at the surface. Where the water is hard, the stems may become stiffened by calcification. The leaves are arranged in whorls of four around the branches, with 12–16 pairs of leaflets per leaf. As with other milfoils, the red flowers of Eurasian milfoil are small and inconspicuous. As fragments of the plant break off, they disperse quickly through an area, with each fragment capable of putting out roots and developing into a new plant. Due to its extremely dense growth, Eurasian milfoil has become a nuisance that often impedes the passage of boats and vessels along navigable waterways. It can cause considerable damage to underwater equipment and lead to losses of time and money in the maintenance and operation of shipping lanes. It also interferes with recreational water uses such as fishing, swimming, and diving. In the prairie lakes of North Dakota, the woody seeds of this plant are eaten by some waterfowl such as ducks. However, this plant appears to have little wildlife value, although it may provide some cover for fish. Aquatic weed control experts recommend that vessels that have passed through growths of Eurasian milfoil be examined and plant fragments removed. They also advocate the physical removal of plant growths from shorelines and other areas with a high potential for dispersal. Additionally, chemical agents are often used to control the weed. Some of the commonly used herbicides include 2,4-D and Diquat. These are usually diluted with water and applied at the rate of one to two gallons per acre. The application of herbicides to control aquatic weed growth is regulated by environmental agencies due to the potential for harm to nontarget organisms. Application times are restricted by season and duration of exposure and depend on the kind of
herbicide used, its mode of action, and its impacts on the other biota in the aquatic system. See also Aquatic chemistry; Commercial fishing
[Usha Vedagiri]
RESOURCES BOOKS Gunner, H. B. Microbiological Control of Eurasian Watermilfoil. Washington, DC: U.S. Army Engineer Waterways Experiment Station, 1983. Hotchkiss, N. Common Marsh Underwater and Floating-Leaved Plants of the United States and Canada. Mineola, NY: Dover Publications, 1972. Schmidt, J. C. How to Identify and Control Water Weeds and Algae. Milwaukee, WI: Applied Biochemists, Inc., 1987.
European Economic Community (EEC) see European Union
European Greens see Green politics
European Union The European Union (EU) is a political and monetary organization of European nations. Its members as of 2002 were Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal, Spain, Sweden, and the United Kingdom. While member states retain their national governments, the EU promulgates transnational laws and treaties, has a unified agricultural policy, and has removed trade barriers between members. The EU uses a common currency, the euro. The EU represents about a quarter of the global economy, roughly equal to that of the United States. The European Union developed gradually in the years after World War II. It began in 1957 as the European Economic Community, or EEC, which originally included France, West Germany, Italy, Belgium, the Netherlands, and Luxembourg. The EEC was superseded by the European Community, or EC, in 1967. The principal aim of the EC was to foster free trade among member countries, eliminating tariffs and customs barriers. The EC expanded its role in the 1970s and 1980s. As more states joined, the EC began to strengthen political cooperation among members, holding meetings to coordinate foreign policy and international law. By the mid-1980s, plans were underway to transform the EC into a single European market. This single market would have a central banking system, a transnational currency, and a unified foreign policy. These goals were articulated in the Single European Act of 1987, which
also for the first time set out environmental protection as an important goal of European economic development. A treaty written in 1991, called the Maastricht Treaty or the Treaty on European Union, spelled out the future shape of the European Union. All the member states had ratified the treaty by November 1993, when the European Union officially superseded the EC. The EU went on to launch a new currency, the euro, in 1999. In January 2002 the euro replaced existing EU member currencies. The EU had incorporated environmental policy into its economic plans beginning in 1987. New members were required to conform to certain environmental standards. And the EU was able to make regional environmental policy, which often made more sense than national laws for problems like air and water pollution that extended across national borders. Environmental regulation in the EU was generally stricter than in the United States, though member states did not always comply with the law. The EU was often quicker than the United States in implementing environmental strategy, such as rules for computer and cell phone disposal. The EU approved the 1997 Kyoto Protocol, an international agreement to reduce the production of so-called greenhouse gases in order to arrest global warming. As details of the implementation of the plan were being worked out over the next five years, EU negotiators remained committed to curtailing European gas emissions even after it became clear that the United States would not sign the Kyoto treaty. Sweden took over the rotating presidency of the EU in 2001 and planned to make environmental policy a high priority. “The environment is more global than anything else except foreign policy,” Sweden’s environmental minister and EU spokesman told Harper’s Bazaar (March 2001). With the EU representing 25% of the global economy, the organization’s stance on environmental issues was always significant. [Angela Woodward]
RESOURCES PERIODICALS Andrews, Edmund L. “Bush Angers Europe by Eroding Pact on Warming.” New York Times, April 1, 2001, 3. Milmo, Sean. “EU Lacks Industry Support for EU-Wide Emissions Trading System.” Chemical Market Reporter 260, no. 5 (July 30, 2001): 6. Sains, Ariane. “EU News: Sweden’s EU Agenda.” Harper’s Bazaar 134, no. 3472 (March 2001): 287. Scott, Alex. “EU Mulls Criminal Sanctions for Eco-Crimes.” Chemical Week 164, no. 16 (April 17, 2002): 15. “What Next, Then?” Economist 360, no. 8232 (July 28, 2001): 69.
Eutectic Refers to a type of solar heating system that makes use of phase changes in a chemical storage medium. At its melting point, any chemical compound must absorb a quantity of heat in order to change phase from solid to liquid, and at its boiling point, it must absorb an additional quantity of heat to change to a gas. Conversely, a compound releases a quantity of heat when condensing or freezing. Therefore, a chemical warmed by solar energy through a phase change releases far more energy when it cools than, for example, water heated from a cool liquid to a warm liquid state. For this reason, solar heating systems that employ phase-changing chemicals can store more energy in a compact space than a water-based system.
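A rough comparison shows the scale of the advantage. The figures below are representative handbook values, not from this entry: water’s specific heat is about 4.19 kJ/(kg·K), and Glauber’s salt, a salt hydrate often used as a phase-change storage medium, melts near 32°C (90°F) with a latent heat of fusion of roughly 250 kJ/kg. Cooling one kilogram of liquid water through a 20 K swing releases

\[ Q_{\text{sensible}} = m\,c\,\Delta T \approx (1\ \mathrm{kg})(4.19\ \mathrm{kJ/kg{\cdot}K})(20\ \mathrm{K}) \approx 84\ \mathrm{kJ}, \]

while freezing one kilogram of the phase-change medium releases

\[ Q_{\text{latent}} = m\,L_f \approx (1\ \mathrm{kg})(250\ \mathrm{kJ/kg}) = 250\ \mathrm{kJ}, \]

about three times as much, before counting any sensible heat the medium also gives up as it cools.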
Eutrophication see Cultural eutrophication
Evapotranspiration Evapotranspiration is a key part of the hydrologic cycle. Some water evaporates directly from soils and water bodies, but much is returned to the atmosphere by transpiration from plants via openings in the leaves called stomata; the term evapotranspiration combines these two pathways, evaporation and transpiration. Within the same climates, forests and lakes yield about the same amount of water vapor. The amount of evapotranspiration is dependent on energy inputs of heat, wind, humidity, and the amount of stored soil water. In climate studies this term is used to indicate levels of surplus or deficit in water budgets. Aridity may be defined as an excess of potential evapotranspiration over actual precipitation, while in humid regions the amount of runoff correlates well with the surplus of precipitation over evapotranspiration.
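In water-budget form, the surplus or deficit referred to here is simply the difference between precipitation P and potential evapotranspiration PET, a bookkeeping that ignores short-term changes in soil-moisture storage:

\[ \text{balance} = P - PET \]

A persistently negative balance (PET exceeding P) defines aridity, while in humid regions the positive balance approximates annual runoff.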
Everglades The Everglades, a swampy region in southern Florida described as a vast, shallow sawgrass (Cladium effusum) marsh with tree islands, wet prairies, and aquatic sloughs, historically covered most of southeastern Florida prior to massive drainage and reclamation projects launched at the turn of the century. The glades constitute the southern end of the Kissimmee-Lake Okeechobee-Everglades system, which encompasses most of south and central Florida below Orlando. Originally, the Everglades covered an area approximately 40 mi (64 km) wide and 100 mi (161 km) long, or 2.5 million acres, but large segments have been isolated by canals and levees. Today, intensive agriculture
in the north and rapid urban development in the east are among the Everglades’ various land uses. Two general habitat regions can be demarcated in the Everglades. The first includes three water conservation areas, basins created to preserve portions of the glades and provide multiple uses, such as water supply. This region is located in the northern Everglades and contains most of the intact natural marsh. The second is the southern habitat, which includes the Everglades National Park and the southern third of the three water conservation areas. The park has been designated a World Heritage Site of international ecological significance, and the Everglades as a whole are one of the outstanding freshwater ecosystems in the United States. Topographically flat, elevations in the Everglades are generally less than 20 ft (6.1 m). The ground slopes north to south at an average gradient of 0.15 ft per mile, with the highest elevations in the north and the lowest in the south. The climate is generally subtropical, with long, hot, humid, and wet summers from May to October followed by mild, dry winters from November to April. During the wet season severe storms can result in lengthy periods of flooding, while during the dry season cool, sometimes freezing temperatures can be accompanied by thunderstorms, tornadoes, and heavy rainfall. Before the Everglades were drained, large areas of the system were inundated each year as Lake Okeechobee overflowed its southern rim. The “River of Grass” flowed south and was in constant flux through evapotranspiration, rainfall, and water movement into and out of the Everglades’ aquifer. The water discharged into the tidewaters of south Biscayne Bay, Florida Bay, and Ten Thousand Islands. In the early 1880s, Philadelphia industrialist Hamilton Disston began draining the Everglades under a contract with Florida trustees. Disston, whose work ceased in 1889, built a substantial number of canals, mainly in the upper waters of the Kissimmee River, and constructed a canal between Lake Okeechobee and the Caloosahatchee River to provide an outlet to the Gulf of Mexico. The Miami River was channelized beginning in 1903, and other canals—the Snapper Creek Canal, the Cutler Canal, and the Coral Gables Waterway—were opened to help drain the Everglades. Water tables in south Florida fell 5–6 ft (1.5–1.8 m) below 1900 levels, causing stress to wetlands systems and losses of peat up to 6 ft (1.8 m) in depth. Full-scale drainage and reclamation occurred under Governor W. S. Jennings (1901–1905) and Governor Napoleon Bonaparte Broward (1905–1909). In 1907, the Everglades Drainage District was created; it built six major canals totaling over 400 miles in length before it suffered financial collapse in 1928. These canals enabled agriculture to flourish within
the region. In the late 1920s, when settlers realized better water control and flood protection were needed, low muck levees were built along Lake Okeechobee’s southwest shore, eliminating the lake’s overflow south to the Everglades. But hurricanes in 1926 and 1928 breached the levees, destroying property and killing 2,100 people. As a result, the Lake Okeechobee Flood Control District was established in 1929, and over the following fifteen years the United States Army Corps of Engineers constructed and enlarged flood control canals. It was only in the mid-1950s, with the development and implementation of the Central and Southern Florida Project for Flood Control & Other Purposes (C&SF Project), that water control took priority over uncontrolled drainage of the Everglades. The project, completed by 1962, was to provide flood protection, water supply, and environmental benefits over a 16,000 square-mile (41,440 sq km) area. It consists of 1,500 miles (2,415 km) of canals and levees, 125 major water control structures, 18 major pumping stations, 13 boat locks, and several hundred smaller structures. Interspersed throughout the Everglades is a series of habitats, each dominated by a few or in some cases a single plant species. Seasonal wetlands and upland pine forests, which once dominated the historic border of the system, have come under the heaviest pressure from urban and agricultural development. In the system’s southern part, freshwater wetlands are superseded by muhly grass (Muhlenbergia filipes) prairies, upland pine and tropical hardwood forests, and mangrove forests that are influenced by the tides. Attached algae, also known as periphyton, are an important component of the Everglades food web, providing both organic food matter and habitat for various grazing invertebrates and forage fish that are eaten by wading birds, reptiles, and sport fish. These algae include calcareous and filamentous algae (Scytonema hoffmani, Schizothrix calcicola) and diatoms (Mastogloia smithii v. lacustris). Sawgrass (Cladium jamaicense) constitutes one of the main plants occurring throughout the Everglades, being found in 65–70% of the remaining freshwater marsh. In the north, the sawgrass grows in deep peat soils and is both dense and tall, reaching up to 10 ft (3 m) in height. In the south, it grows in low-nutrient marl soils and is less dense and shorter, averaging 2.5–5 ft (0.75–1.5 m). Sawgrass is adapted to survive both flooding and burning. Stands of pure sawgrass as well as mixed communities are found in the Everglades. The mixed communities can include maidencane (Panicum hemitomon), arrowhead (Sagittaria lancifolia), water hyssop (Bacopa caroliniana), and spikerush (Eleocharis cellulosa). Wet prairies, which together with aquatic sloughs provide habitat during the rainy season for a wide variety of aquatic invertebrates and forage fish, are another important habitat of the Everglades system. They are seasonally
inundated wetland communities that require standing water for six to ten months. Once common, these prairies have lost more than 1,500 square miles (3,885 sq km) to drainage and destruction. The lowest elevations of the Everglades are ponds and sloughs, which have deeper water and longer inundation periods. They occur throughout the system, and in some cases can be formed by alligators in peat soils. Among the types of emergent vegetation commonly found in these areas are white water lily (Nymphaea odorata), floating heart (Nymphoides aquatica), and spatterdock (Nuphar luteum). Common submerged species include bladderwort (Utricularia) and the periphyton mat community. Ponds and sloughs serve as important feeding areas and habitat for Everglades wildlife. At the highest elevations are found communities of isolated trees surrounded by marsh called tree islands. These provide nesting and roosting sites for colonial birds and habitat for deer and other terrestrial animals during high-water periods. Typical dominant species constituting tree islands are red bay (Persea borbonia), swamp bay (Magnolia virginiana), dahoon holly (Ilex cassine), pond apple (Annona glabra), and wax myrtle (Myrica cerifera). Beneath the canopy grows a dense shrub layer of cocoplum (Chrysobalanus icaco), buttonbush (Cephalanthus occidentalis), leather leaf fern (Acrostichum danaeifolium), royal fern (Osmunda regalis), cinnamon fern (O. cinnamomea), chain fern (Anchistea virginica), bracken fern (Pteridium aquilinum), and lizard's tail (Saururus cernuus). In addition to the indigenous plants of the Everglades, numerous exotic and nuisance species have been brought into Florida and have now spread in the wild. Some threaten to invade and displace indigenous species. Brazilian pepper (Schinus terebinthifolius), Australian pine (Casuarina equisetifolia), and melaleuca (Melaleuca quinquenervia) are three of the most serious exotic species that have gained a foothold and are displacing native plants. The Florida Game and Fresh Water Fish Commission has identified 25 threatened or endangered species within the Everglades. Mammals include the Florida panther (Felis concolor coryi), mangrove fox squirrel (Sciurus niger avicennia), and black bear (Ursus americanus floridanus). Birds include the wood stork (Mycteria americana), snail kite (Rostrhamus sociabilis), and the red-cockaded woodpecker (Picoides borealis). Endangered or threatened reptiles and amphibians include the gopher tortoise (Gopherus polyphemus), the eastern indigo snake (Drymarchon corais couperi), and the loggerhead sea turtle (Caretta caretta). The alligator (Alligator mississippiensis) was once endangered due to excessive hide hunting. In 1972, the state made alligator product sales illegal. Protection allowed the species to recover, and it is now widely distributed in wetlands throughout the state. It is still listed as threatened
by the federal government, but in 1988 Florida instituted an annual alligator harvest. Faced with pressures on Everglades habitats and the species within them, as well as the need for water management within the rapidly developing state, the Florida legislature passed the Surface Water Improvement and Management Act in 1987. The law requires the state's five water management districts to identify areas needing preservation or restoration. The Everglades Protection Area was identified as a priority for preservation and improvement planning. Within the state's protection plan, excess nutrients, in large part from agriculture, have been targeted as a major problem that causes natural periphyton to be replaced by species more tolerant of pollution. In turn, sawgrass and wet prairie communities are overrun by other species, impairing the Everglades' ability to serve as habitat and forage for higher trophic level species. A federal lawsuit was filed against the South Florida Water Management District in 1988 for phosphorus pollution, and in 1989, President George Bush authorized the addition of more than 100,000 acres to the Everglades National Park. The law that authorized this addition was Public Law 101-229, the Everglades National Park Protection and Expansion Act of 1989. Included in this legislation was the stipulation that the Army Corps of Engineers improve water flow to the Park. In 1994, the Everglades Forever Act was passed by the Florida State Legislature. The Act called for construction of experimental marshes called Stormwater Treatment Areas that were designed to remove phosphorus from water entering the Everglades. In 1997, six more Stormwater Treatment Areas were constructed, and phosphorus removal was estimated to be as much as 50%, due in part to better management practices that were mandated by the Everglades Forever Act. In 2000, President Clinton authorized the spending of billions of federal dollars to restore the Everglades, while Florida Governor Jeb Bush agreed to a state commitment of 50% of the cost, in a bill called the Florida Investment Act. In 2001 and 2002, the state of Florida, under Governor Jeb Bush, and the federal government, under President George W. Bush, committed $7.8 billion to implement the Comprehensive Everglades Restoration Plan (CERP). [David Clarke and Marie H. Bundy]
A swampy area in the Everglades. (Photograph by Gerald Davis. Phototake. Reproduced by permission.)
RESOURCES
BOOKS
Douglas, M. S. The Everglades: River of Grass. New York: H. Wolff, 1947.
PERIODICALS
Johnson, R. "New Life for the 'River of Grass.'" American Forests 98 (July–August 1992): 38–43.
Stover, D. "Engineering the Everglades." Popular Science 241 (July 1992): 46–51.
Evolution
Evolution is an all-pervading concept in modern science applied to physical as well as biological processes. The phrase theory of evolution, however, is most commonly associated with organic evolution, the origin and evolution of the enormous diversity of life on this planet. The idea of organic evolution arose from attempts to explain the immense diversity of living organisms and dates from the dawn of culture. Naturalistic explanations of the phenomena of evolution had been proposed in one form or another, and by the end of the eighteenth century a number of naturalists from Carolus Linnaeus to Jean-Baptiste Lamarck had questioned the prevailing doctrine of fixity of species. Lamarck came the closest to proposing a complete theory of evolution, but for several reasons his theory fell short and was not widely accepted. Two English naturalists conceived the first truly complete theory of evolution. In a most remarkable coincidence, working quite independently, Charles Darwin and Alfred Russel Wallace arrived at essentially the same thesis, and their ideas were initially publicized jointly in 1858. However,
it is Charles Darwin who is recognized as the founder of modern evolutionary theory. His continued efforts toward a detailed development of the evidence and the explication of a complete, convincing theory of organic evolution resulted in the publication in 1859 of his book On the Origin of Species by Means of Natural Selection. With the publication of this volume, widely regarded as one of the most influential books ever written, regardless of subject matter, the science of biology would never be the same. Darwin was the first evolutionist whose theories carried conviction to a majority of his contemporaries. He set forth a formidable mass of supporting evidence in the form of direct observations from nature, coupled with a comprehensive and convincing synthesis of current knowledge. Most significantly, he proposed a rational, plausible instrumentality for evolutionary change: natural selection. The theory Darwin presented is remarkably simple and rational. In brief, the theoretical framework of evolution presented by Darwin contained three basic elements. First is the existence of variation in natural populations. In nature there are no individuals who are exactly alike; natural populations therefore always consist of members who all differ from one another to some degree. Many of these differences are heritable and are passed on from generation to generation. Second, some of these varieties are better adapted to their environment than others. Third, the reproductive potential of populations is unlimited. All populations have the capacity to overproduce, but populations in nature are limited by high mortality. Thus, if all offspring cannot survive, the better-adapted members have a greater probability of surviving each generation than those less adapted. Thomas Henry Huxley, when first presented with Darwin's views, is supposed to have exclaimed: "How extremely stupid not to have thought of that." This apocryphal statement expresses succinctly the immediate intelligibility of this momentous discovery. Darwin's theory as presented was incomplete, as he was keenly aware, but the essential parts were adequate at the time to demonstrate that the material causes of evolution are possible and can be investigated scientifically. One of the most critical difficulties with Darwin's theory was the inability to explain the source of hereditary variations observed in populations and, in particular, the source of hereditary innovations which, through the agency of natural selection, could eventually bring about the transformation of a species. The lack of existing knowledge of the mechanism of heredity in the mid-1800s precluded a scientifically sound solution to this problem. The re-discovery in 1900 of Gregor Mendel's experiments demonstrating the underlying process of inheritance did not at first provide a solution to the problem facing
Darwinian evolutionary theory. In fact, in one of the most extraordinary paradoxes in the history of science, during the first decades of Mendelian genetics, a temporary decline occurred in the support of Darwinian selectionism. This was primarily because of the direction and level of research as much as the lack of knowledge in further details of the underlying mechanisms of inheritance. It was not until the publication of Theodosius Dobzhansky's Genetics and the Origin of Species (1937) that the modern synthetic theory of evolution began to take form, renewing confidence in selectionism and the Darwinian theory. In his book, Dobzhansky, for the first time, integrated the newly developed mathematical models of population genetics with the facts of chromosomal theory and mutation together with observations of variation in natural populations. Dobzhansky also stimulated a series of books by several major evolutionists which provided the impetus and direction of research over the next decade, resulting in a reformulation of Darwin's theory. This became known as the synthetic theory of evolution. Synthetic is used here to indicate that there has been a convergence and synthesis of knowledge from many disciplines of biology and chemistry. The birth of molecular genetics in 1950 and the enormous expansion of knowledge in this area have markedly affected the further development and refinement of the synthetic theory. But they have not altered the fundamental nature of the theory. Darwin's initial concept in this expanded, modernized form has taken on such stature as to be generally considered the most encompassing theory in biology. To the majority of biologists, the synthetic theory of evolution is a grand theory that forms the core of modern biology. It also provides the theoretical foundation upon which almost all other theories find support and comprehension. Or, as the distinguished evolutionary biologist Dobzhansky so succinctly put it: "Nothing in biology makes sense except in the light of evolution." If the three main elements of the Darwinian theory were to be restated in terms of the modern synthetic theory, they would be as follows. First, genetic systems governing heredity in each organism are composed of genes and chromosomes which are discrete but tightly interacting units of different levels of complexity. The genes, their organized associations in chromosomes, and whole sets of chromosomes have a large degree of stability as units. But these units are shuffled and combined in various ways by sexual processes of reproduction in most organisms. The result of this process, called recombination, maintains a considerable amount of variation in any population in nature without introducing any new hereditary factors. Genetic experimentation is, in fact, occurring all the time in natural populations. Each generation, as a result of recombination, is made
up of members who carry different hereditary combinations drawn from the common pool of the population. Each member generation in turn interacts within the same environmental matrix which, in effect, "tests" the fitness of these hereditary combinations. The sole source of hereditary innovation from within the population, other than recombination, would come from mutation affecting individual genes, chromosomes, or sets of chromosomes, which are then fed into the process of recombination. Second, populations of similar animals—members of the same species—usually interbreed among themselves, making up a common pool of genes. The gene pools of established populations in nature tend to remain at equilibrium from generation to generation. An abundance of evidence has established the fact, as Darwin and others before him had observed, that populations have greater reproductive capacity than they can sustain in nature. Third, change over time in the makeup of the gene pool may occur by one of four known processes: mutation; fluctuation in gene frequencies—known as "sampling error" or "genetic drift," which is generally effective only in small, isolated populations; introduction of genes from other populations; and differential reproduction. The first three of these processes do not produce adaptive trends. They are essentially random in their effects, usually nonadaptive, and only rarely and coincidentally produce adaptation. Of the four processes, only differential reproduction results in adaptive trends. Differential reproduction describes the consistent production of more offspring, on average, by individuals with certain genetic characteristics than by those without those characteristics. This is the modern understanding of natural selection, with emphasis upon success in reproduction. The Darwinian and Neo-Darwinian concepts, on the other hand, placed greater emphasis upon "survival of the fittest" in terms of mortality and survival. Natural selection, in the Darwinian sense, was non-random and produced trends that were adaptive. Differential reproduction produces the same result. In what may appear to be a paradox, differential selection also acts to stabilize the gene pool by reducing the reproductive success of those members whose hereditary variations result in non-adaptive traits. As Darwin first recognized, the link between variability or constancy of the environment and evolutionary change or stability is provided by natural selection. The interaction between organisms and their environment will inevitably affect the genetic composition of each new generation. As a result of differential selection, the most fit individuals, in terms of reproductive capacity, contribute the largest proportion of genes or alleles to the next generation. The direction in which natural selection guides the population depends
upon both the nature of environmental change or stability and the content of the gene pool of the population. From the very beginning, evolutionists have recognized the significance of the environment/evolution linkage. This linkage always involves the interactions of a great many individuals, both within populations of a single species (intraspecific) and among members of different species (interspecific). These interactions are inextricably linked with changes in the physical environment. Dobzhansky estimated that the world may contain four to five million different kinds of organisms, each exploiting in various ways a large number of habitats. The complexity of ecosystems is based upon thousands of different ways of exploiting the same habitats, in which each species is adapted to its specifically defined niche within the total habitat. The diversity of these population-environment interactions is responsible for the fact that natural selection can be conservative (normalizing selection), promoting constancy in the gene pool; can direct continuous change (directional selection); or can promote diversification into new species (diversifying selection). Recent trends in research have been of special significance in understanding the dynamics of these processes. They have greatly increased the body of information related to the ecological dimension of population biology. These studies have developed methods of analysis and sophisticated models which have made major advances in the study and understanding of the nature of the selective forces that direct evolutionary change. It is increasingly clear that a reciprocal imperative exists between ecological and environmental studies to further our understanding of the role of evolutionary and counterevolutionary processes affecting stability in biological systems over time. The urgency of the need for an expanded synthesis of ecological and evolutionary studies is driven by the accelerating and expanding counterevolutionary changes in the world environment resulting from human actions. Evolutionary processes are tremendously more complex in detail than this brief outline suggests. This spare outline, however, should be sufficient to show that here is a mechanism, involving materials and processes known beyond doubt to occur in nature, capable of shaping and changing species in response to the continuously dynamic universe in which they exist. [Donald A. Villeneuve]
RESOURCES
BOOKS
Dobzhansky, T., et al. Evolution. San Francisco: W. H. Freeman, 1977.
Simpson, G. G. The Meaning of Evolution. New Haven, CT: Yale University Press, 1967.
Exclusive economic zone
The Exclusive Economic Zone (EEZ) is a zone extending 200 nautical miles off a nation's coast within which that nation holds the exclusive right to fish, mine, and otherwise utilize the natural resources located there. After World War II, many nations became concerned about the intrusion of foreign fishing vessels into the rich and productive waters off their coasts. The 1958 Convention on Fishing and Conservation of Living Resources of the High Seas established rules that allowed a coastal nation to impose terms on the uses of natural resources of its coastal oceans. This effort at regulation was unsuccessful, and the rules were never put in place. The policy that ultimately established exclusive economic zones around sovereign nations resulted from provisions of the Third United Nations Conference on the Law of the Sea, convened in 1973. As a result of these provisions, virtually all of the world's fisheries are controlled by one nation or another. However, the controlling nation must provide access to the resources that it cannot harvest, and is responsible for preventing pollution and depletion of the resources in the waters that it controls. In the United States, the Fishery Conservation and Management Act of 1976 established the fishery conservation zone out to 200 miles; however, it was not until 1983 that the EEZ was created by presidential proclamation. [Marie H. Bundy]
Existence value see Debt for nature swap
Exotic species
Exotic species are organisms that are introduced to a region or ecosystem, often unintentionally, through human migration or trade. Some exotic species are useful to man, such as horses, goats, pigs, and edible plants including wheat and oats. These are examples of species that were brought to the Americas intentionally by European colonists. Other exotic species that were introduced accidentally, such as the Mediterranean fruit fly (Ceratitis capitata), Japanese beetle (Popillia japonica), Africanized bees (Apis mellifera scutellata) (sometimes called killer bees), and Norway rat (Rattus norvegicus), have become pests. Many exotic species, including most tropical fish, birds, and houseplants brought to colder climates, can survive only under continuous care. A few prove extremely adaptable and thrive in their new environment, sometimes becoming invasive and outcompeting native species.
The federal government's Office of Technology Assessment has estimated that more than 2,000 plant species introduced from around the world currently live and thrive in the United States, and that 15 of these have caused more than $500 million worth of damage. Economic costs associated with exotic species include agricultural losses, damage to infrastructure, as when aquatic plants clog water intakes, and the costs of attempts to restore native species whose survival is endangered by introduced species. Exotic species are most destructive when they adapt readily to their new environment and compete successfully with native species. Unfortunately, there is no sure way to know which introduced species will become invasive. Often these plants and animals turn out to be better competitors because, unlike native species, they have no natural pests, parasites, diseases, or predators in their new home. Purple loosestrife (Lythrum salicaria) is an example of a plant that is kept in check in its native habitat, but is invasive in North America. This showy wetland plant with tall purple flowers may have arrived accidentally or been intentionally imported to North America as a garden ornamental. In its native northern Europe, resident beetles feed on its roots and leaves, keeping the loosestrife in check, so that it only appears occasionally and temporarily in disturbed sites. When loosestrife arrived in North America, the beetles, along with other pests and diseases, were left behind. Loosestrife has proven to be extremely adaptable and has become an aggressive weed across much of the American East and Midwest, often taking over an entire wetland and choking out other plants, eliminating much of the wetland biodiversity. In addition to the competitive advantage of having few predators, invasive exotic plants and animals may have ecological characteristics that make them especially competitive. They can be hardy, adaptable to diverse habitat conditions, and able to thrive on a variety of food sources. They may reproduce rapidly, generating large numbers of seeds or young that spread quickly. If they are aggressive colonizers adapted to living in marginal habitat, introduced species can drive resident natives from their established sites and food sources, especially around the disturbed environments of human settlement. This competitiveness has been a problem, for example, with house sparrows (Passer domesticus). These birds were intentionally introduced from Europe to North America in 1850 to control insect pests. Their aggressive foraging and breeding habits often drive native sparrows, martins, and bluebirds from their nests, and today they are one of the most common birds in North America. Exotic plants can also become nuisance species when they crowd, shade, or out-propagate their native competitors. They can be
extraordinarily effective colonists, spreading quickly and eliminating competition as they become established. The list of species introduced to the Americas from Europe, Asia, and Africa is immense, as is the list of species that have made the reverse trip from the Americas to Europe, Asia, and Africa. Some notable examples are kudzu (Pueraria lobata), the zebra mussel (Dreissena polymorpha), Africanized bees (Apis mellifera scutellata), and Eurasian milfoil (Myriophyllum spicatum). Kudzu is a cultivated legume in Japan. It was intentionally brought to the southern United States for ground cover and erosion control. Fast growing and tenacious, kudzu quickly overwhelms houses, tangles in electric lines, and chokes out native vegetation. Africanized "killer" bees were accidentally released in Brazil by a beekeeper in 1957. These aggressive insects have no more venom than standard honey bees (also an Old World import), but they attack more quickly and in great numbers. Breeding with resident bees and sometimes traveling with cargo shipments, Africanized bees have spread north from Brazil at a rate of up to 200 miles (322 km) each year and now threaten to invade commercially valuable fruit orchards and domestic bee hives in Texas and California. The zebra mussel, accidentally introduced to the Great Lakes around 1985, presumably in ballast water dumped by ships arriving from Europe, colonizes any hard surface, including docks, industrial water intake pipes, and the shells of native bivalves. Each female zebra mussel can produce 50,000 eggs a year. Growing in masses with up to 70,000 individuals per square foot, these mussels clog pipes, suffocate native clams, and destroy breeding grounds for other aquatic animals. They are also voracious feeders, competing with fish and native mollusks for plankton and microscopic plants. The economy and environment of the Great Lakes now pay the price of zebra mussel infestations. Area industries spend hundreds of millions of dollars annually unclogging pipes and equipment, and commercial fishermen complain of decreased catches. Eurasian milfoil is a common aquarium plant that can propagate from seeds or cuttings. A tiny section of stem and leaves accidentally introduced into a lake by a boat or boat trailer can grow into a huge mat covering an entire lake. When these mats have consumed all available nutrients in the lake, they die and rot. The rotting process robs fish and other aquatic animals of oxygen, causing them to die. Exotic species have brought ecological disasters to every continent, but some of the most extreme cases have occurred on isolated islands where resident species have lost their defensive strategies. For example, rats, cats, dogs, and mongooses introduced by eighteenth century sailors have devastated populations of ground-breeding birds on Pacific islands. Rare flowers in Hawaii suffer from grazing goats
and rooting pigs, both of which were brought to the islands for food but have escaped and established wild populations. Grazing sheep threaten delicate plants on ecologically fragile North Atlantic islands, while rats, cats, and dogs endanger northern seabird breeding colonies. Rabbits introduced into Australia overran parts of the continent and wiped out hundreds of acres of grassland. Humans have always carried plants and animals as they migrated from one region to another, with little regard to the effects these introductions might have on their new habitat. Many introduced species seem benign, useful, or pleasing to have around, making it difficult to predict which imports will become nuisance species. When an exotic plant or animal threatens human livelihoods or economic activity, as do kudzu, zebra mussels, and "killer" bees, people begin to seek ways to control these invaders. Control efforts include using pesticides and herbicides, and introducing natural predators and parasites from the home range of the exotic plant or animal. For example, beetles that naturally prey on purple loosestrife have been experimentally introduced in American loosestrife populations. This deliberate introduction requires a great deal of care, research, and monitoring, however, to ensure that an even worse problem does not result, as happened with the house sparrow. Such solutions, and the time and money to develop them, are usually elusive and politically controversial, so in many cases effective control methods remain unavailable. In 1999, President Bill Clinton signed the Executive Order on Invasive Species. This order established the Invasive Species Council to coordinate the activities of federal agencies, such as the Aquatic Nuisance Species Task Force, the Federal Interagency Committee for the Management of Noxious and Exotic Weeds, and the Committee on Environment and Natural Resources. The Invasive Species Council is responsible for the development of a National Invasive Species Management Plan. This plan is intended to be updated every two years to provide guidance and recommendations about the identification of pathways by which invasive species are introduced, and measures that can be taken for their control. Non-profit environmental organizations across the globe are leading the effort for control of exotic species. For example, The Nature Conservancy has established Landscape Conservation Networks to address issues of land conservation that include invasive species management. These networks bring in outside experts and land conservation partners to develop innovative and cost-effective means of controlling exotic species. The Great Lakes Information Network, managed by the Great Lakes Commission based in Ann Arbor, Michigan, provides online information about environmental issues, including exotic species, in the Great
Lakes region. The Florida Exotic Pest Plant Council, founded in 1984, provides funding to organizations that educate the public about the impacts of exotic invasive plants in the State of Florida. [Mary Ann Cunningham and Marie H. Bundy]
RESOURCES
BOOKS
Cunningham, W. Understanding Our Environment: An Introduction. Dubuque, IA: William C. Brown, 1993.
PERIODICALS
Barrett, S. C. H. "Waterweed Invasions." Scientific American 261 (October 1989): 90–97.
Rendall, J. "Invasive Species." Imprint 7, no. 4 (1990): 1–8.
Walker, T. "Dreissena Disaster—Scientists Battle an Invasion of Zebra Mussels." Science News 139 (May 4, 1991): 282–84.
Experimental Lakes Area
The Experimental Lakes Area (ELA) in northwestern Ontario is in a remote landscape characterized by Precambrian bedrock, northern mixed-species forests, and oligotrophic lakes, bodies of water deficient in plant nutrients. The Canadian Department of Fisheries and Oceans began developing a field-research facility at ELA in the 1960s, and the area has become the focus of a large number of investigations by D. W. Schindler and others into chemical and biological conditions in these lakes. Of the limnological investigations conducted at ELA, the best known is a series of whole-lake experiments designed to investigate the ecological effects of perturbation by a variety of environmental stress factors, including eutrophication, acidification, metals, radionuclides, and flooding during the development of reservoirs. The integrated, whole-lake projects at ELA were initially designed to study the causes and ecological consequences of eutrophication. In one long-term experiment, Lake 227 was fertilized with phosphate and nitrate. This experiment was designed to test whether carbon could limit algal growth during eutrophication, so none was added. Lake 227 responded with a large increase in primary productivity by drawing on the atmosphere for carbon, but it was not possible to determine which of the two added nutrients, phosphate or nitrate, had acted as the primary limiting factor. Observations from experiments at other lakes in ELA, however, clearly indicated that phosphate is the primary limiting nutrient in these oligotrophic water bodies. Lake 304 was fertilized for two years with phosphorus, nitrogen, and carbon, and it became eutrophic. It recovered its oligotrophic condition again when the phosphorus fertilization was stopped, even though nitrogen and carbon fertilization
were continued. Lake 226, an hourglass-shaped lake, was partitioned with a vinyl curtain into two basins, one of which was fertilized with carbon and nitrogen, and the other with phosphorus, carbon, and nitrogen. Only the latter treatment caused an algal bloom. Lake 302 received an injection of all three nutrients directly into its hypolimnion during the summer. Because the lake was thermally stratified at that time, the hypolimnetic nutrients were not available to fertilize plant growth in the epilimnetic euphotic zone, and no algal bloom resulted. Nitrogen additions to Lake 227 were reduced in 1975 and eliminated in 1990. The lake continued with high levels of productivity by fixing nitrogen from the atmosphere. Research of this sort was instrumental in conclusively identifying phosphorus as the nutrient that most generally limits eutrophication of freshwaters. This knowledge allowed the development of waste management systems which reduced eutrophication as an environmental problem by reducing the phosphorus concentration in detergents, removing phosphorus from sewage, and diverting sewage from lakes. Another well-known ELA project was important in gaining a deeper understanding of the ecological consequences of the acidification of lakes. Sulfuric acid was added to Lake 223, and its acidity was increased progressively, from an initial pH near 6.5 to pH 5.0–5.1 after six years. Sulfate and hydrogen ions were also added to the lake in increasing concentrations during this time. Other chemical changes were caused indirectly by acidification: manganese increased by 980%, zinc by 550%, and aluminum by 155%. As the acidity of Lake 223 increased, the phytoplankton shifted from a community dominated by golden-brown algae to one dominated by chlorophytes and dinoflagellates. Species diversity declined somewhat, but productivity was not adversely affected. A mat of the green alga Mougeotia sp. developed near the shore after the pH dropped below 5.6. Because of reduced predation, the density of cladoceran zooplankton was larger by 66% at pH 5.4 than at pH 6.6, and copepods were 93% more abundant. The nocturnal zooplankton predator Mysis relicta, however, was lost entirely, an ecologically important extinction. The crayfish Orconectes virilis declined because of reproductive failure, inhibition of carapace hardening, and effects of a parasite. The most acid-sensitive fish was the fathead minnow (Pimephales promelas), which declined precipitously when the lake pH reached 5.6. The first of many year-class failures of lake trout (Salvelinus namaycush) occurred at pH 5.4, and failure of white sucker (Catastomus commersoni) occurred at pH 5.1. One minnow, the pearl dace (Semotilus margarita), increased markedly in abundance but then declined when pH reached 5.1. Adult lake trout and white sucker were still abundant, though emaciated, at pH 5.0–5.5, but in the absence of
successful reproduction they would have become extinct. Overall, the Lake 223 experiment indicated a general sensitivity of many organisms to the acidification of lake water. However, within the limits of physiological tolerance, the tests showed that there can be a replacement of acid-sensitive species by relatively tolerant ones. In a similarly designed experiment in Lake 302, nitric acid was shown to be nearly as effective as sulfuric acid in acidifying lakes, thereby alerting the international community to the need to control atmospheric emissions of gaseous nitrogen compounds. See also Acid rain; Algicide; Aquatic chemistry; C:N ratio; Cultural eutrophication; Water pollution [Bill Freedman Ph.D.]
RESOURCES
BOOKS
Freedman, B. Environmental Ecology. San Diego: Academic Press, 1995.
Schindler, D. W. "The Coupling of Elemental Cycles by Organisms: Evidence from Whole-Lake Chemical Perturbations." In Chemical Processes in Lakes, edited by W. Stumm. New York: Wiley, 1985.
PERIODICALS
Schindler, D. W., et al. "Long-Term Ecosystem Stress: The Effects of Years of Experimental Acidification of a Small Lake." Science 228 (June 21, 1985): 1395–1401.
Exponential growth
The distinction between arithmetic and exponential growth is crucial to an understanding of the nature of growth. Arithmetic growth takes place when a constant amount is being added, as when a child puts a dollar a week in a piggy-bank. Although the total amount increases, the amount being added remains the same. Exponential growth, on the other hand, is characterized by a constant or even accelerating rate of growth. At a constant rate of increase, measured in percentages, the amounts added themselves grow. Growth is then usually measured in doubling times because these remain constant while the amounts added increase. When the annual rate of increase is 1%, the doubling time will be 70 years. From this fact, a simple formula to calculate doubling times given a rate of increase can be derived: dividing 70 by the percentage rate will yield the number of years it takes to double the original amount. A savings account with, say, a fixed annual interest rate of 5% furnishes a convenient example. If the original deposit is $1,000, then the growth over the first year is $50. Over the second year, growth will be $52.50. In 14 years, there will be $2,000 in the account (70 divided by 5 equals 14). In the first period of 14 years, then, total growth will
be $1,000, but in the second period of 14 years total growth will be $2,000, and so on. During the tenth 14-year period, $512,000 is added, and at the end of that period the total amount in the account will be $1,024,000. As this example illustrates, growth will be relatively slow initially, but it will start speeding up dramatically over time. When growth takes place at an accelerating rate of increase, doubling times of course will become shorter and shorter. The notion of exponential growth is of particular interest in population biology because all populations of organisms have the capacity to undergo exponential growth. The biotic potential or maximum rate of reproduction for all living organisms is very high; that is to say, all species theoretically have the capacity to reproduce themselves many, many times over during their lifetimes. In actuality, only a few of the offspring of most species survive, due to reproductive failure, limited availability of space and food, diseases, predation, and other mishaps. A few species, such as the lemming, go through cycles of exponential population growth resulting in severe overpopulation. A catastrophic dieback follows, during which the population is reduced enormously, readying it for the next cycle of growth and dieback. Interacting species will experience related fluctuations in population levels. By and large, however, populations are held stable by environmental resistance, unless an environmental disturbance takes place. Climatological changes and other natural phenomena may cause such habitat disturbances, but more usually they result from human activity. Pollution, predator control, and the introduction of foreign species into habitats that lack competitor or predator species are a few examples among many of human activities that may cause declines in some populations and exponential growth in others. An altogether different case of exponential population growth is that of humans themselves. The human population has grown at an accelerating rate, starting at a low average rate of 0.002% per year early in its history and reaching a record level of 2.06% in 1970. Since then the rate of increase has dropped below 2%, but human population growth is still alarming and many scientists predict that humans are headed for a catastrophic dieback. [Marijke Rijsberman]
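The rule of 70 used above follows from the compound-growth relation (1 + r)^t = 2: solving gives t = ln 2 / ln(1 + r), and since ln 2 is about 0.693 and ln(1 + r) is close to r for small rates, t is approximately 70 divided by the percentage rate. The following short Python sketch, added here purely as an illustration of the arithmetic in the article, works through doubling times and the savings-account example; its final loop also shows that exact compounding at 5% yields slightly less than the rounded doubled amounts quoted above, because the rule of 70 is only an approximation.

import math

def doubling_time_exact(rate_percent):
    # Years to double at a constant annual rate, compounded yearly:
    # solve (1 + r)^t = 2 for t.
    r = rate_percent / 100.0
    return math.log(2) / math.log(1 + r)

def doubling_time_rule_of_70(rate_percent):
    # The shortcut described in the text: 70 divided by the percentage rate.
    return 70.0 / rate_percent

for rate in (1, 2, 5, 7):
    print(f"{rate}%: exact {doubling_time_exact(rate):.1f} yr, "
          f"rule of 70 {doubling_time_rule_of_70(rate):.1f} yr")

# The savings-account example: $1,000 at a fixed 5% annual interest rate,
# printed at each 14-year interval (the rule-of-70 doubling time).
balance = 1000.0
for year in range(1, 141):
    balance *= 1.05
    if year % 14 == 0:
        print(f"After {year:3d} years: ${balance:,.2f}")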
RESOURCES
BOOKS
Cunningham, W., and B. W. Saigo. "Dynamics of Population Growth." In Environmental Science: A Global Concern. Dubuque, IA: Wm. C. Brown Publishers, 1990.
Miller, G. T. "Human Population Growth." In Living in the Environment. Belmont, CA: Wadsworth Publishing, 1990.
External costs see Internalizing costs
Externality
Most economists argue that markets ordinarily are the superior means for fulfilling human wants. In a market, deals are ideally struck between consenting adults only when the parties feel they are likely to benefit. Society as a whole is thought to gain from the aggregation of individual deals that take place. The wealth of a society grows by means of what is called the invisible hand of free market mechanisms, which offers spontaneous coordination with a minimum of coercion and explicit central direction. However, the market system is complicated by so-called externalities, which are effects of private market activity not captured in the price system. Economics distinguishes between positive and negative externalities. A positive externality exists when producers cannot appropriate all the benefits of their activities. An example would be research and development, which yields benefits to society that the producer cannot capture, such as employment in subsidiary industries. Environmental degradation, on the other hand, is a negative externality, or an imposition on society as a whole of costs arising from specific market activities. Historically, the United States has encouraged individuals and corporate entities to make use of natural resources on public lands, such as water, timber, and even the land itself, in order to speed development of the country. Many undesirable by-products of the manufacturing process, in the form of exhaust gases or toxic waste, for instance, were simply released into the environment at no cost to the manufacturer. The agricultural revolution brought new farming techniques that relied heavily on fertilizers, pesticides, and irrigation, all of which affect the environment. Automobile owners did not pay for the air pollution caused by their cars. Virtually all human activity has associated externalities in the environmental arena, which do not necessarily present themselves as costs to participants in these activities. Over time, however, the consequences have become unmistakable in the form of a serious depletion of renewable resources and in pollution of the air, water, and soil. All citizens suffer from such environmental degradation, though not all have benefited to the same degree from the activities that caused it. In economic analysis, externalities are closely associated with common property and the notion of free riders. Many natural resources have no discrete owner and are therefore particularly vulnerable to abuse. The phenomenon of degradation of common property is known as the Tragedy of the Commons. The costs to society are understood as
costs to nonconsenting third parties, whose interests in the environment have been violated by a particular market activity. The consenting parties inflict damage without compensating third parties because without clear property rights there is no entity that stands up for the rights of a violated environment and its collective owners. Nature's owners are a collectivity which is hard to organize. They are a large and diverse group that cannot easily pursue remedies in the legal system. In attempting to gain compensation for damage and force polluters to pay for their actions in the future, the collectivity suffers from the free rider problem. Although everyone has a stake in ensuring, for example, good air quality, individuals will tend to leave it to others to incur the cost of pursuing legal redress. It is not sufficiently in the interest of most members of the group to sue because each has only a small amount to gain. Thus, government intervention is called for to protect the interests of the collectivity, which otherwise would be harmed. The government has several options in dealing with externalities such as pollution. It may opt for regulation and set standards of what are considered acceptable levels of pollution. It may require reduced lead levels in gasoline and require automakers to manufacture cars with greater fuel economy and reduced emissions, for instance. If manufacturers or social entities such as cities exceed the standards set for them, they will be penalized. With this approach, many polluters have a direct incentive to limit their most harmful activities and develop less environmentally costly technologies. So far, this system has not proved to be very effective. In practice, it has been difficult (or not politically expedient) to enforce the standards and to collect the fines. Supreme Court decisions since the early 1980s have reinterpreted some of the laws to make standards much less stringent. Many companies have found it cheaper to pay the fines than to invest in reducing pollution. Or they evade fines by declaring bankruptcy and reorganizing as a new company. Economists tend to favor pollution taxes and discharge fees. Since external costs do not enter the calculations a producer makes, the producer manufactures more of the good than is socially beneficial. When polluters have to absorb the costs themselves, to internalize them, they have an incentive to reduce production to acceptable levels or to develop alternative technologies. A relatively new idea has been to give out marketable pollution permits. Under this system, the government sets the maximum levels of pollution it will tolerate and leaves it to the market system to decide who will use the permits. The costs of past pollution (in the form of permanent environmental damage or costly cleanups) will still be borne disproportionately by society as a whole. The government generally tries to make responsible
parties pay for clean-ups, but in many cases it is impossible to determine who the culprit was, and in others the parties responsible for the pollution no longer exist. A special case is posed by externalities that make themselves felt across national boundaries, as is the case with acid rain, ozone layer depletion, and the pollution of rivers that run through more than one country. Countries that suffer from environmental degradation caused in other countries receive none of the benefits and often do not have the leverage to modify the polluting behavior. International conservation efforts must rely on agreements specific countries may or may not follow and on the mediation of the United Nations. See also Internalizing cost; Trade in pollution permits [Alfred A. Marcus and Marijke Rijsberman]
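The overproduction argument above can be stated compactly in the standard textbook notation of welfare economics; the symbols below are illustrative and do not come from the original article. Let MPC(q) denote the producer's marginal private cost at output q, let MEC(q) > 0 denote the marginal external cost imposed on third parties (for example, pollution damage), and define the marginal social cost

MSC(q) = MPC(q) + MEC(q).

A producer facing market price P chooses the private optimum q_p where P = MPC(q_p), whereas the socially efficient output q* satisfies P = MSC(q*). Because MEC(q) > 0 and marginal costs rise with output, q_p > q*: the unregulated producer makes more of the good than is socially beneficial, exactly as described in the article. A pollution tax set equal to MEC changes the producer's decision rule to P = MPC(q) + MEC(q), which reproduces the social optimum; this is the internalization of external costs that economists favor.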
RESOURCES
BOOKS
Mann, D., and H. Ingram. "Policy Issues in the Natural Environment." In Public Policy and the Natural Environment, edited by H. Ingram and R. K. Goodwin. Greenwich, CT: JAI Press, 1985.
Marcus, A. A. Business and Society: Ethics, Government, and the World Economy. Homewood, IL: Irwin Press, 1993.
Extinction
Extinction is the complete disappearance of a species, when all of its members have died or been killed. As a part of natural selection, the extinction of species has been ongoing throughout the earth's history. However, with modern human strains on the environment, plants and animals, vertebrates and invertebrates alike, are becoming extinct at an unprecedented rate of thousands of species per year, especially in tropical rain forests. Many thousands more are threatened and endangered. Scientists have determined that mass extinctions have occurred periodically in prehistory, coming about every 50 million years or so. The greatest of these came at the end of the Permian period, some 250 million years ago, when up to 96% of all species on the earth may have died off. Dinosaurs and many ocean species disappeared during a well-documented mass extinction at the end of the Cretaceous period (about 65 million years ago). It is estimated that of the billions of species that have lived on the earth during the last 3.5 billion years, 99.9% are now extinct. It is thought that most prehistoric extinctions occurred because of climatological changes, loss of food sources, destruction of habitat, massive volcanic eruptions, or asteroids or meteors striking the earth. Extinctions, however, have never been as rapid and massive as they have been in the modern era. During the last two centuries, more than 75
species of mammals and over 50 species of birds have been lost, along with countless other species that had not yet been identified. James Fisher has estimated that since 1600, including species and subspecies, the world has lost at least 100 types of mammals and 258 kinds of birds. The first extinction in recorded history was the European lion, which disappeared around A.D. 80. In 1534, seamen first began slaughtering the great auk, a large, flightless bird once found on rocky North Atlantic islands, for food and oil. The last two known auks were killed in 1844 by an Icelandic fisherman motivated by rewards offered by scientists and museum collectors for specimens. Humans have also caused the extinction of many species of marine mammals. Steller's sea cow, once found on the Aleutian Islands off Alaska, disappeared by 1768. The sea mink, once abundant along the coast and islands of Maine, was hunted for its fur until about 1880, when none could be found. The Caribbean monk seal, hunted by sailors and fishermen, has not been found since 1962. The early European settlers of America succeeded in wiping out several species, including the Carolina parakeet and the passenger pigeon. The pigeon was one of the most plentiful birds in the world's history, and accounts from the early 1800s describe flocks of the birds blackening the sky for days at a time as they passed overhead. By the 1860s and 1870s tens of millions of them were being killed every year. As a result of this overhunting, the last passenger pigeon, Martha, died in the Cincinnati Zoo in 1914. The pioneers who settled the West were equally destructive, causing the disappearance of 16 separate types of grizzly bear, six of wolves, one type of fox, and one cougar. Since the Pilgrims arrived in North America in 1620, over 500 types of native American animals and plants have disappeared. In the last decade of the twentieth century, the rate of species loss was unprecedented and accelerating. Up to 50 million species could be extinct by 2050, at a rate of three per day. Most of these species extinctions will occur—and are occurring—in tropical rain forests, the richest biological areas on the earth. Rain forests are being cut down at a rate of one to two acres per second. In 1988, Harvard professor and biologist Edward O. Wilson estimated the current annual rate of extinction at up to 17,500 species, including many unknown rain forest plants and animals that have never been studied, or even seen, by humans. Botanist Peter Raven, director of the Missouri Botanical Garden, calculated that one-quarter of the world's species could be gone by 2010. A study by the World Resources Institute pointed out that humans have accelerated the extinction rate to 100 to 1,000 times its natural level. While it is impossible to predict the magnitude of these losses or the impact they will have on the earth and
its future generations, it is clear that the results will be profound, possibly catastrophic. In his book Disappearing Species: The Social Challenge, Erik Eckholm of the Worldwatch Institute observed that humans, in their ignorance, have changed the natural course of evolution with current mass-extinction rates. "Should this biological massacre take place, evolution will no doubt continue, but in a grossly distorted manner. Such a multitude of species losses would constitute a basic and irreversible alteration in the nature of the biosphere even before we understand its workings..." Eckholm further notes that when a plant species is wiped out, some 10 to 30 dependent species, such as insects and even other plants, can also be jeopardized. An example of the complex relationships that have evolved among many tropical species is the 40 different kinds of fig trees native to Central America, each of which has a specific insect pollinator. Other insects, including pollinators of other plants, depend on these trees for food. Thus, the extinction of one species can set off a chain reaction, the ultimate effects of which cannot be foreseen. Although scientists know that human life will be harmed by these losses, the weight of the impact is unclear. As the Council on Environmental Quality states in its book The Global Environment and Basic Human Needs, over the next decade or two, "unique ecosystems populated by thousands of unrecorded plant and animal species face rapid destruction—irreversible genetic losses that will profoundly alter the course of evolution." This report also cautions that species extinction entails the loss of many useful products. Perhaps the greatest industrial, agricultural, and medical costs of species reduction will stem from future opportunities unknowingly lost. Only about 5% of the world's plant species have yet been screened for pharmacologically active ingredients. Ninety percent of the food that humans eat comes from just 12 crops, but scores of thousands of plants are edible, and some will undoubtedly prove useful in meeting human food needs. See also Biodiversity; Climate; Dodo; Endangered species [Lewis G. Regenstein]
RESOURCES
BOOKS
Eldredge, N. The Miner's Canary: A Paleontologist Unravels the Mysteries of Extinction. Englewood Cliffs, NJ: Prentice-Hall, 1991.
Raup, D. M. Extinction: Bad Genes or Bad Luck? New York: Norton, 1991.
Tudge, C. Last Animals at the Zoo: How Mass Extinction Can Be Stopped. Washington, DC: Island Press, 1992.
Exxon Valdez
On March 24, 1989, the 987-foot supertanker Exxon Valdez, outbound from Port Valdez, Alaska, with a full load of oil from Alaska's Prudhoe Bay, passed on the wrong side of a lighted channel marker guarding a shallow stretch of Prince William Sound. The momentum of the large ship carried it onto Bligh Reef and opened a 6 x 20 ft hole in the ship's hull. Through this hole poured 257,000 barrels (11 million gallons) of crude oil, approximately 21% of the ship's 1.26 million barrel (53 million gallon) cargo, making it the largest oil spill in the history of the United States. The oil spill resulting from the Exxon Valdez accident spread 38,000 metric tonnes of oil along 1,500 miles of pristine shoreline on Prince William Sound and the Kenai Peninsula, covering an area of 460 miles. Oil would eventually reach shores southwest of the spill up to 600 miles away. The Exxon Valdez Oil Spill Trustee Council estimates that 250,000 seabirds, 2,800 sea otters, 300 harbor seals, 250 bald eagles, and 22 killer whales were killed. These figures may be an underestimate of the animals killed by the oil because many of the carcasses likely sank or washed out to sea before they could be collected. Most of the birds died from hypothermia due to the loss of insulation caused by oil-soaked feathers. Many predatory birds, such as bald eagles, died as a result of ingesting contaminated fish and birds. Hypothermia affected sea otters as well, and many of the dead mammals suffered lung damage due to oil fumes. Billions of salmon eggs were also lost to the spill. While a record 43 million pink salmon were caught in Prince William Sound in 1990, by 1993 the harvest had declined to a record low of three million. Response to the oil spill was slow and generally ineffective. The Alyeska Oil Spill Team responsible for cleaning up oil spills in the region took more than 24 hours to respond, despite previous assurances that it could mount a response in three hours. Much of the oil containment equipment was missing, broken, or barely operable. By the time oil containment and recovery equipment were in place, 42 million liters of oil had already spread over a large area. Ultimately, less than 10% of this oil was recovered, the remainder dispersing into the air, water, and sediment of Prince William Sound and adjacent sounds and fjords. Exxon reports spending a total of $2.2 billion to clean up the oil. Much of this money employed 10,000 people to clean up oil-fouled beaches; yet after the first year, only 3% of the soiled beaches had been cleaned. In response to public concern about the poor response time and uncoordinated initial cleanup efforts following the Valdez spill, the Oil Pollution Act (OPA; part of the Clean Water Act) was signed into law in August 1990. The Act
After hitting Bligh Reef on March 24, 1989, the Exxon Valdez sits near Naked Island, Alaska, swirls of crude oil surrounding the ship. Containment booms can be seen attached to the ship. (AP/Wide World Photos. Reproduced by permission.)
established a Federal trust fund to finance clean-up efforts for up to $1 billion per spill incident. Ultimately, nature was the most effective surface cleaner of beaches; winter storms removed the majority of the oil, and by the winter of 1990, less than 6 miles (10 km) of shoreline was considered seriously fouled. Cleanup efforts were declared complete by the U.S. Coast Guard and the State of Alaska in 1992. On October 9, 1991, a settlement between Exxon and the State of Alaska and the United States government was approved by the U.S. District Court. Under the terms of the agreement, Exxon agreed to pay $900 million in civil penalties over a 10-year period. The civil settlement also provides a window of time for new claims to be made should unforeseen environmental issues arise; that window runs from September 1, 2002 to September 1, 2006. Exxon was also fined $150 million in a criminal plea agreement, of which $125 million was forgiven in return for the company's cooperation in cleanup and various private settlements. Exxon also paid $100 million in criminal restitution for the environmental damage caused by the spill. A flood of private suits against Exxon has also deluged the courts in the years since the spill. In 1994, a district court ordered Exxon to pay $287 million in compensatory damages to a group of commercial fishermen and other Alaska Natives who were negatively impacted by the spill. The jury who heard the case also awarded the plaintiffs $5 billion in punitive damages. However, in November 2001 a federal appeals judge overturned the $5 billion punitive award, deeming it excessive and ordering the district court to reevaluate the settlement. As of May 2002, the final punitive settlement had not been determined. The captain of the Exxon Valdez, Joseph Hazelwood, had admitted to drinking alcohol the night the accident occurred, and had a known history of alcohol abuse. Nevertheless, he was found not guilty of charges that he operated a shipping vessel under the influence of alcohol. He was found guilty of negligent discharge of oil, fined $50,000, and sentenced to 1,000 hours of community service work. Despite official cleanup efforts by Exxon having ended, the environmental legacy of the Valdez spill lives on. The
Auke Bay Laboratory of the Alaska Fisheries Science Center conducted a beach study of Prince William Sound in the summer of 2001. Researchers found that approximately 20 acres of Sound shoreline are still contaminated with oil, most of which has collected below the surface of the beaches, where it continues to pose a danger to wildlife. Of the 30 species of wildlife affected by the spill, only two, the American bald eagle and the river otter, were considered recovered in 1999. Preliminary 2002 reports from the Exxon Valdez Oil Spill Trustee Council reflect continued progress in wildlife recovery: black oystercatchers, common murres, killer whales, subtidal communities, sockeye salmon, and pink salmon are all classified as recovered in the April draft of the organization's "Update on Injured Resources and Services," bringing the total of recovered species to eight. [William G. Ambrose, Paul E. Renaud, and Paula A. Ford-Martin]
RESOURCES BOOKS Keeble, J. Out of the Channel: The Exxon Valdez Oil Spill in Prince William Sound, 10th Anniversary Edition. Spokane, WA: Eastern Washington University Press, 1999. Picou, J. S., et al. The Exxon Valdez Disaster: Readings on a Modern Social Problem, 2nd ed. Dubuque, IA: Kendall/Hunt Publishing, 1999.
PERIODICALS Berg, Catherine. "The Exxon Valdez Spill: 10 Years Later." Endangered Species Bulletin 26, no. 2 (March/April 1999): 18-19.
OTHER Alaska Fisheries Science Center, National Marine Fisheries Service, NOAA. "The Exxon Valdez Oil Spill: How Much Oil Remains?" AFSC Quarterly (July-September 2001). http://www.afsc.noaa.gov/Quarterly/jas2001/feature_jas01.htm. Accessed May 28, 2002. U.S. Environmental Protection Agency Oil Spill Program. [cited May 28, 2002].
ORGANIZATIONS The Exxon Valdez Oil Spill Trustee Council, 441 West Fifth Avenue, Suite 500, Anchorage, AK, USA 99501, (907) 278-8012, Fax: (907) 276-7178, Toll Free: (800) 478-7745 (within Alaska), Toll Free: (800) 283-7745 (outside Alaska), Email: [email protected], http://www.oilspill.state.ak.us/
F
Falco peregrinus see Peregrine falcon
Falcon see Peregrine falcon
Fallout see Radioactive fallout
Family planning

The exact population of the world is unknown but is believed to be about 6.2 billion; it continues to grow relentlessly, especially in developing countries. Worldwide famine has been postponed thanks to modern agricultural procedures, known as the Green Revolution, which have greatly increased grain production. Nevertheless, with limited land and resources that can be devoted to food production—and increasing numbers of humans who need both space and food—there appears to be a significant risk of catastrophe by overpopulation. Because of this, there is increased interest in family planning. Family planning in this context means birth control to limit family size.

The subject of family planning is not limited to birth control but includes procedures designed to overcome difficulties in becoming pregnant. About 15% of couples are unable to conceive children after a year of sexual activity without using birth control. Many couples feel an intense desire and need to conceive children, and aid to these couples is thus a reasonable part of family planning. However, for most discussions of family planning, the emphasis is on limitation of family size, not augmentation.

Birth control procedures have evolved rapidly over the past century. Further, utilization of existing procedures is changing with different age groups and different populations.
Thus any account of birth control is likely to become rapidly obsolete. An example of the changing technology is oral contraception with pills containing hormones. Birth control pills have been marketed in the United States since the 1960s, and since that time there have been many formulations, with significant reductions in dosage. Acceptance of pills has been much greater among American women under the age of 30. The intrauterine device (IUD) was much more popular in Sweden than in the United States, whereas sterilization was more common in the United States than in Sweden.

A very common form of birth control is the condom, a thin rubber sheath worn by men during sexual intercourse. Condoms are generally readily accessible, cheap, and convenient for individuals who may not have sexual relations regularly. Sperm cannot penetrate the thin (0.03–0.08 mm) latex. Neither the human immunodeficiency virus (HIV) associated with AIDS nor the other pathogenic agents of sexually transmitted diseases (STDs) are able to penetrate the latex barrier. Some individuals are opposed to treating healthy bodies with drugs (hormones) for birth control, and for these individuals condoms have a special appeal. Natural "skin" (lamb's intestine) condoms are still available for individuals who may be allergic to latex, but this product provides less protection against HIV and other STDs. The reported failure rate of condoms is high and is most likely due to improper use. Yet during the Great Depression in the 1930s—when pills and other contemporary birth control procedures were not available—it is thought that the proper use of condoms caused the birth rate in the United States to plummet.

Spermicides—surface-active agents which inactivate sperm and STD pathogens—can be placed in the vagina in jellies, foams, and suppositories. Condoms used in conjunction with spermicides have a failure rate lower than either method used alone and may provide added protection against some infectious agents.
Types of Contraceptives: Predicted and Actual Effectiveness

Method                  Predicted (%)   Actual (%)
Birth control pills          99.9           97
Condoms                      98             88
Depo Provera                 99.7           99.7
Diaphragm                    94             82
IUDs                         99.2           97
Norplant                     99.7           99.7
Tubal sterilization          99.8           99.6
Spermicides                  97             79
Vasectomy                    99.9           99.9
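The effectiveness figures above are typically quoted per year of use, so a method's annual failure rate compounds over continued use. A minimal sketch of that compounding, assuming (a simplification not made in this entry) that each year's risk is independent:

```python
# Illustrative only: compounds an annual failure rate over several years,
# assuming each year's risk is independent of the others.

def cumulative_failure_pct(annual_failure_pct: float, years: int) -> float:
    """Percent chance of at least one failure over `years` of use."""
    p_no_failure = (1.0 - annual_failure_pct / 100.0) ** years
    return 100.0 * (1.0 - p_no_failure)

# Condoms: 88% actual effectiveness in the table implies a 12% annual
# failure rate; over five years of use that compounds to roughly 47%.
print(f"{cumulative_failure_pct(12, 5):.0f}%")
```

This is one reason the gap between "predicted" and "actual" effectiveness matters so much in practice: small annual differences grow large over years of use.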
The vaginal diaphragm, like the condom, is a barrier method. The diaphragm was in use in World War I and was still used by about one-third of couples by the time of World War II. However, because of the efficacy and ease of use of oral contraceptives, and perhaps because of the protection against disease afforded by condoms, use of the vaginal diaphragm has declined. The diaphragm, which must be fitted by a physician, is designed to prevent sperm access to the cervix and upper reproductive tract. It is used in conjunction with spermicides. Other similar barriers include the cervical cap and the contraceptive sponge. The cervical cap is smaller than the diaphragm and fits only around that portion of the uterus that protrudes into the vagina. The contraceptive sponge, which contains a spermicide, is inserted into the vagina prior to sexual intercourse and retained for several hours afterwards to ensure that no living sperm remain.

Intrauterine devices (IUDs) were popular during the 1960s and 1970s in the United States, but their use today has dwindled. However, in China, a nation which is rapidly attending to its population problems, about 60 million women use IUDs. The failure rate of IUDs in less developed countries is reported to be lower than that of the pill. The devices may be plastic, copper, or stainless steel. The plastic versions may be impregnated with barium sulfate to permit visualization by x ray and also may slowly release hormones such as progesterone. Ovulation continues with IUD use; efficacy probably results from a changed uterine environment which kills sperm.

Oral contraception is by means of the "pill." Pills contain an estrogen and a progestational agent, and current dosages are very low compared with several decades ago. The combination of these two agents is taken daily for three weeks, followed by one week with neither hormone.
Frequently a drug-free pill is taken for the last week to maintain the pill-taking habit and thus enhance the efficacy of the regimen. The estrogenic component prevents follicle maturation, and the progestational component prevents ovulation. Pill-taking women who have multiple sexual partners may wish to consider the addition of a barrier method to minimize the risk of STDs. The reliability of the pill reduces the need for abortion or surgical sterilization. There may be other salutary health effects, including less endometrial and ovarian cancer as well as fewer uterine fibroids. Use of oral contraceptives by women over the age of 35 who also smoke is thought to increase the risk of heart and vascular disease.

Contraceptive hormones can be administered by routes other than oral. Subdermal implants of progestin-containing tubules have been available since 1990 in the United States. In this device, familiarly known as Norplant, six tubules are surgically placed on the inside of the upper arm, and the hormone diffuses through the wall of the tubules to provide long-term contraceptive activity. Another form of progestin-only contraception is by intramuscular injection, which must be repeated every three months.

Fears engendered by IUD litigation are thought to have increased the reliance of many American women on surgical sterilization (tubal occlusion). Whatever the reason, more American women rely on the procedure than do their European counterparts. Tubal occlusion involves the mechanical disruption of the oviduct, the tube that leads from the ovary to the uterus, and prevents sperm from reaching the egg. Inasmuch as the fatality rate for the procedure is lower than that of childbirth, surgical sterilization is now the safest method of birth control. Tubal occlusion is far more common now than it was in the 1960s because of the lower cost and reduced surgical stress. Use of the laparoscope and very small incisions into the abdomen have allowed the procedure to be completed during an office visit.

Male sterilization, another method, involves severing the vas deferens, the tube that carries sperm from the testes to the penis. Sperm comprise only a small portion of the ejaculate volume, and thus ejaculation is little changed after vasectomy. The male hormone is produced by the testes, and production of that hormone continues, as do erection and orgasm.

Most abortions would be unnecessary if proper birth control measures were followed. That, of course, is not always the case. Legal abortion has become one of the leading surgical procedures in the United States. Morbidity and mortality associated with pregnancy have been reduced more by legal abortion than by any other event since the introduction of antibiotics to fight puerperal fever.

Other methods of birth control are used by individuals who do not wish to use mechanical barriers, devices, or drugs (hormones). One of the oldest of these methods is
withdrawal (coitus interruptus), in which the penis is removed from the vagina just before ejaculation. Withdrawal must be exquisitely timed, is probably frustrating to both partners, is not thought to be reliable, and provides no protection against HIV and other STD infections. Another barrier- and drug-free procedure is natural family planning (also known as the rhythm method), in which abstinence from sexual intercourse is scheduled for a period of time before and after ovulation. Ovulation is estimated from temperature change, careful record keeping of menstruation (the calendar method), or inspection of the vaginal mucus. Natural family planning has appeal for individuals who wish to limit their exposure to drugs, but it provides no protection against HIV and other STDs.

The population of the world increases by about 140 million every year, while the world is unable to sustain its new residents adequately. That increase signals the need for family planning education and the continued development of ever more efficient birth control methods. See also Population Council; Population growth; Population Institute [Robert G. McKinnell]
RESOURCES BOOKS Sitruk-Ware, R., and C. W. Bardin. Contraception. New York: Marcel Dekker, 1992. Speroff, L., and P. D. Darney. A Clinical Guide for Contraception. Baltimore: Williams & Wilkins, 1992.
Famine

Famine is widespread hunger and starvation. A region struck with famine experiences acute shortages of food, massive loss of life, social disruption, and economic chaos. Images of starving mothers and children with emaciated eyes and swollen bellies during recent crises in Ethiopia and Somalia have brought international attention to the problem of famine. Other well-known famines include the great Irish potato famine of the late 1840s, which drove millions of immigrants to America, and the Russian famine of the 1930s, during Stalin's agricultural revolution, which killed 20 million people. The worst recorded famine in recent history occurred in China between 1958 and 1961, when 23–30 million people died as a result of the failed agricultural program known as the Great Leap Forward.

Even though we may think of these tragedies as single, isolated events, famine and chronic hunger continue to be serious problems. Between 18 and 20 million people, three-quarters of them children, die each year of starvation or diseases caused by malnourishment. How can this be? Environmental problems like overpopulation, scarce resources,
and natural disasters affect people's ability to produce food. Political and economic problems like unequal distribution of wealth, delayed or insufficient action by local governments, and imbalanced trade relationships between countries affect people's ability to buy food when they cannot produce it.

Perhaps the most common explanation for famine is overpopulation. The world's population, now more than 6.2 billion people, grows by 250,000 people every day. It seems impossible that the natural world could support such rapid growth. Indeed, the pressures of rapid growth have had a devastating impact on the environment in many places. Land that once fed one family must now feed ten families, and the resulting over-use harms the quality of the land. The world's deserts are rapidly expanding as people destroy fragile topsoil through poor farming techniques, clearing of vegetation, and overgrazing.

Although the demands of population growth and industrialization are straining our environment, we have yet to exceed the limits of growth. Since the 1800s some have predicted that humans, like rabbits living without predators, would foolishly reproduce far beyond the carrying capacity of their environment and then die in masses from lack of food. This argument assumes that the supply of food will remain the same as populations grow, but as populations have grown, people have learned to grow more food. World food production increased two-and-a-half times between 1950 and 1980. After World War II, agriculture specialists launched a "green revolution," developing new crops and farming techniques that radically increased food production per acre. Farmers began to use special hybrid crop strains, chemical fertilizers, pesticides, and advanced irrigation systems. Today, there is more than enough food available to feed everyone. In fact, the United States government spends billions of dollars every year to store excess grain and to keep farmers from farming portions of their land.

Many famines occur in the aftermath of natural disasters like floods and droughts. In times of drought, crops cannot grow because they do not have enough water. In times of flood, excess water washes out fields, destroying crops and disrupting farm activity. These disasters have several effects. First, damaged crops cause food shortages, making nutrients difficult to find and making any available food too expensive for many people. Second, reduced food production means less work for those who rely on temporary farm work for their income. Famines usually affect only the poorest five to ten percent of a country's population. They are most vulnerable because during a crisis wages for the poorest workers go down as food prices go up.

Famine is a problem of distribution as well as production. Environmental, economic, and political factors together determine the supply and distribution of food in a
country. Starvation occurs when people lose their ability to obtain food by growing it or by buying it. Often, poor decisions and poor organization aggravate environmental factors to cause human suffering. In Bangladesh, floods during the summer of 1974 interfered with rice transplantation, the planting of small rice seedlings in the rice paddies. Although the crop was only partly damaged, speculators hoarded rice, and fears of a shortage drove prices beyond the reach of the poorest in Bangladesh. At the same time, disruption of the planting meant lost work for the same people. Even though there was plenty of rice from the previous year's harvest, deaths from starvation rose as the price of rice went up. In December of 1974, when the damaged rice crop was harvested, the country found that its crop had been only partly ruined. Starvation resulted not from a shortage of rice, but from price speculation. The famine could have been avoided completely if the government had responded more quickly, acting to stabilize the rice market and to provide relief for famine victims.

In other cases, governments have acted to avoid famine. The Indian state of Maharashtra offset the effects of a severe drought in 1972 by hiring the poorest people to work on public projects like roads and wells. This provided a service for the country and at the same time averted a catastrophe by providing an income that allowed the most vulnerable citizens to compete with the rest of the population for a limited food supply. At the same time, the countries of Mali, Niger, Chad, and Senegal experienced severe famine, even though the average amount of food per person in these countries was the same as in Maharashtra. The difference, it would seem, lies in the actions and intentions of the governments. The Indian government provides a powerful example. Although India lags behind many countries in economic development, education, and health care, the Indians have managed to avert serious famine since 1943, four years before they gained independence from the British.

Responsibility for hunger and famine rests also with the international community. Countries and peoples of the world are increasingly interconnected, continuously exchanging goods and services. We are more and more dependent on one another for success and for survival. The world economic and political order dramatically favors the wealthiest industrialized countries in Europe and North America. Following patterns established during colonial expansion, Third World nations often produce raw materials, unfinished goods, and basic commodities like bananas and coffee that they sell to the First World at low prices. The First World nations then manufacture and refine these products and sell back information and technology, like machinery and computers, at a very high price. As a result, the wealthiest nations amass capital and resources and enjoy very high standards of living, while the poorest nations retain huge
national debts and struggle to remain stable economically and politically. The world's poorest countries, then, are left vulnerable to all of the conditions which cause famine: economic hardship, political instability, overpopulation, and over-taxed resources. Furthermore, large colonial powers often left behind unjust political and social hierarchies that are very good at extracting resources and sending them north, but not as good at promoting social justice and human welfare. Many Third World countries are dominated by a small ruling class who own most of the land, control industry, and run the government. Since the poorest people, who suffer most in famines, have little power to influence government policies and the management of the country's economy, their needs are often unheard and unmet. A government that rules without the democratic support of its people has less incentive to protect those who would suffer in times of famine. In addition, the poorest often do not benefit from the industry and agriculture that does exist in a developing country. Large corporate farms often force small subsistence farmers off of their land. These farmers must then work for day wages, producing food for export, while local people go without adequate nutrition.

Economic and social arrangements, as well as environmental conditions, are central to the problems of hunger and starvation. Famine is much less likely to occur in countries that are concerned with issues of social justice. In the same way, famine is much less likely to occur in a world that is concerned with issues of social justice. Environmental pressures of population growth and human use of natural resources will continue to be issues of great concern. Natural disasters like droughts and floods will continue to occur. The best response to the problem of famine lies in working to better manage environmental resources and crisis situations and to change the political and economic structures that cause people to go without food. [John Cunningham]
RESOURCES BOOKS Lappe, F. M. World Hunger: Twelve Myths. New York: Grove Press, 1986.
PERIODICALS Sen, A. “Economics of Life and Death.” Scientific American 268 (May 1993): 40–47.
Farming see Agricultural revolution; Conservation tillage; Dryland farming; Feedlots; Organic gardening and farming; Shifting cultivation; Slash and burn agriculture; Strip-farming; Sustainable agriculture
Fauna All animal life that lives in a particular geographic area during a particular time in history. The type of fauna to be found in any particular region is determined by factors such as plant life, physical environment, topographic barriers, and evolutionary history. Zoologists sometimes divide the earth into six regions inhabited by distinct faunas: Ethiopian (Africa south of the Sahara, Madagascar, Arabia), Neotropical (South and Central America, part of Mexico, the West Indies), Australian (Australia, New Zealand, New Guinea), Oriental (Asia south of the Himalaya Mountains, India, Sri Lanka, Malay Peninsula, southern China, Borneo, Sumatra, Java, the Philippines), Palearctic (Europe, Asia north of the Himalaya Mountains, Afghanistan, Iran, North Africa), and Nearctic (North America as far south as southern Mexico).
Fecundity

Fecundity comes from the Latin word fecundus, meaning fruitful, rich, or abundant. It is the rate at which individual organisms in a population produce offspring. Although the term can apply to plants, it is typically restricted to animals. There are two aspects of reproduction: 1) fertility, referring to the physiological ability to breed, and 2) fecundity, referring to the ecological ability to produce offspring. Thus, higher fecundity is dependent on advantageous conditions in the environment that favor reproduction (e.g., abundant food, space, water, and mates; limited predation, parasitism, and competition).

The intrinsic rate of increase (denoted as "r") equals the birth rate minus the death rate. It is a population characteristic that takes into account that not all individuals have equal birth rates and death rates; it therefore refers to the reproductive capacity of the population made up of individual organisms. Fecundity, on the other hand, is an individual characteristic. It can be further subdivided into potential and realized fecundity. For example, deer can potentially produce four or more fawns per year, but they typically give birth to only one or two per year; in good years with ample food, they often have two fawns. Animals in nature are limited by environmental conditions that control their life history characteristics such as birth, survivorship, and death.

A graph of the number of offspring per female per age class (e.g., year) is a fecundity curve. It can be used to identify the age classes that contribute more to population growth than others; in other words, certain age classes have a greater reproductive output than others. Wildlife managers often use this type of information in deciding which individuals in a population can be hunted versus those that should be protected so they can reproduce.

As the number of animals increases, competition for food may become more intense, and therefore growth and reproduction may decrease; the result is an example of density-dependent fecundity. Fecundity in predators typically increases with an increase in the prey population. Conversely, fecundity in prey species typically increases when predation pressure is low. Some scientists have found that fecundity is inversely related to the amount of parental care given to the young. In other words, small organisms such as insects and fish, which typically invest less time and energy in caring for the young, usually have higher fecundity. Larger organisms such as birds and mammals, which expend a lot of energy on caring for the young through building of nests, feeding, and protecting, have lower fecundity rates. [John Korstad]
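The definitions in this entry can be made concrete with a small numerical sketch. All figures below are hypothetical, chosen only to illustrate the two ideas defined above (r as birth rate minus death rate, and a fecundity curve as offspring per female per age class); none of them come from the entry itself:

```python
import math

# Intrinsic rate of increase: per capita birth rate minus death rate
# (both rates hypothetical).
birth_rate = 0.35
death_rate = 0.30
r = birth_rate - death_rate

# With r fixed, a population grows exponentially: N(t) = N0 * e^(r*t).
n0 = 1000
print(f"r = {r:.2f}; N after 10 years = {n0 * math.exp(r * 10):.0f}")  # ~1649

# A fecundity curve: offspring per female for each age class
# (a hypothetical deer-like schedule with a mid-life peak).
fecundity = {1: 0.0, 2: 1.0, 3: 2.0, 4: 2.0, 5: 1.0, 6: 0.5}
peak = max(fecundity, key=fecundity.get)
print(f"Age class with greatest reproductive output: {peak}")
```

A wildlife manager reading such a curve would, as the entry notes, focus protection on the age classes with the greatest reproductive output (here, ages 3 and 4) when setting hunting rules.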
RESOURCES BOOKS Colinvaux, P. A. Ecology. New York: Wiley, 1986. Smith, R. E. Ecology and Field Biology. 4th ed. New York: Harper and Row, 1990. Ricklefs, R. E. Ecology. 3rd ed. New York: W. H. Freeman, 1990. Krebs, C. J. Ecology: The Experimental Analysis of Distribution and Abundance. 3rd ed. New York: Harper and Row, 1985.
Federal Energy Regulatory Commission

The Federal Energy Regulatory Commission (FERC) is an independent, five-member commission within the U.S. Department of Energy. The Commission was created in October 1977 as a part of the Federal government's mammoth effort to restructure and reorganize its energy program. The Commission was assigned many of the functions earlier assigned to the Federal Power Commission (FPC).

The Federal Power Commission had existed since 1920, when it was created to license and regulate hydroelectric power plants situated on interstate streams and rivers. Over the next half century, the Power Commission was assigned more and more responsibility for the management of United States energy reserves. In the Public Utility Holding Company Act of 1935, for example, Congress gave the Commission responsibility for setting rates for the wholesale pricing of electricity shipped across state lines. The FPC's mission was expanded even further by the Natural Gas Act of 1938, which gave the Commission the task of regulating the nation's natural gas pipelines and setting rates for the sale of natural gas.
Regulating energy prices in the pre-1970s era was a very different problem than it became later. That era was one of abundant, inexpensive energy: producers, consumers, and regulators consistently dealt with a surplus of energy. There was more energy of almost any kind than could be consumed. That situation began to change in the early 1970s, especially after the oil embargo instituted by the Organization of Petroleum Exporting Countries in 1973. The resulting energy crisis caused the United States government to rethink carefully its policies and practices regarding energy production and use.

One of the studies that came out of that reanalysis was the 1974 Federal Energy Regulation Study Team report. The team found a number of problems in the way energy was managed in the United States. They reported that large gaps existed in some areas of regulation, with no agency responsible, while other areas were characterized by overlaps, with two, three, or more agencies all having some responsibility for a single area. The team also found that regulatory agencies were more oriented to the past than to current problems or future prospects, worked with incomplete or inaccurate data, and employed procedures that were too lengthy and drawn out.

As one part of the Department of Energy Organization Act of 1977, then, the FPC was abolished and replaced by the Federal Energy Regulatory Commission. The Commission is now responsible for setting rates and charges for the transportation and sale of natural gas and for the transmission and sale of electricity. It continues the FPC's old responsibility for the licensing of hydroelectric plants. In addition, the Commission has also been assigned responsibility for establishing the rates for the transportation of oil by pipelines, as well as determining the value of the pipelines themselves. Overall, the Commission now controls the pricing of 60% of all the natural gas and 30% of all the electricity in the United States.

Ordinary citizens sometimes do not realize the power and influence of independent commissions like FERC, but they can have significant impact on federal policy and practice. As one writer has said, "While Energy Secretaries come and go, and Congress can do little more than hold hearings, the five-member FERC is making national energy policy by itself." [David E. Newton]
RESOURCES ORGANIZATIONS Federal Energy Regulatory Commission, 888 First Street, NE, Washington, D.C., USA 20009
Federal Insecticide, Fungicide and Rodenticide Act (1972) The Environmental Protection Agency (EPA) is the primary regulatory agency of pesticides. The EPA’s authority on pesticides is given in the Congressionally-enacted Federal Insecticide, Fungicide and Rodenticide Act (FIFRA)—a comprehensive regulatory program for pesticides and herbicides enacted in 1972 and amended nearly 50 times over the years. The goal of FIFRA is to regulate the use of pesticides through registration. Section 3 of FIFRA mandates that the EPA must first determine that the product “will perform its intended function without unreasonable adverse effects on the environment” before it is registered. The Act defines adverse effects as “any unreasonable effects on man or the environment, taking into account the economic, social and environmental costs and benefits of using any pesticide.” To further this objective, Congress placed a number of regulatory tools at the disposal of the EPA. Congress also made clear that the public was not to bear the risk of uncertainty concerning the safety of a pesticide. To grant registration, the EPA must conclude that the food production benefits of a pesticide outweigh any risks. To make the cost-benefit determination required to register a pesticide as usable, manufacturers must submit to EPA detailed tests on the chemical’s health and environmental effects. The burden rests on the manufacturer to provide the data needed to support registration for use on a particular crop. The pesticide manufacturer is required to submit certain health and safety data to establish that the use of the pesticide will not generally cause unreasonable adverse effects. Data required include disclosure of the substance’s chemical and toxicological properties, likely distribution in the environment, and possible effects on wildlife, plants, and other elements in the environment. FIFRA is a licensing law. Pesticides may enter commerce only after they are approved or “registered following an evaluation against statutory risk/benefit standards.” The Administrator may take action to terminate any approval whenever it appears, on the basis of new information or a re-evaluation of information, that the pesticide no longer meets the statutory standard. These decisions are made on a use-by-use basis since the risks and benefits of a pesticide vary from one use to another. FIFRA is also a control law. Special precautions and instructions may be imposed. For example, pesticide applicators may be required to wear protective clothing, or the use of certain pesticides may be restricted to trained and certified applicators. Instructions, warnings, and prohibitions are incorporated into product labels, and these labels may not be altered or removed.
FIFRA embodies the philosophy that those who would benefit by government approval of a pesticide product should bear the burden of proof that their product will not pose unreasonable risks. This burden of proof applies both when initial marketing approval is sought and in any proceeding initiated by the Administrator to interrupt or terminate registration through suspension or cancellation. Of course, while putting the burden on industry, the assumption is that industry will be honest in its research and reports. Licensing decisions are usually based on tests furnished by an applicant for registration. The tests are performed by the petitioning company in accordance with testing guidelines prescribed by the EPA. Requirements for the testing of pesticides for major use can be met only through the expenditure of several millions of dollars and up to four years of laboratory and field testing. However, major changes in test standards, advances in testing methodology, and the heightened awareness of the potential chronic health effects of long-term, low-level exposure to chemicals which have come into the marketplace within the past several decades have brought the need to update EPA mandates. Thus, Congress directed that the EPA reevaluate its licensing decisions through a process of re-registration. That means if the government once approved a certain product for domestic use, it does not mean the EPA can be confident today that its use can continue. The EPA has the power to suspend or cancel products. Cancellation means the product in question is no longer considered safe. It must be taken off the commercial market and is no longer available for use. Suspension means the product in question is not to be sold or used under certain conditions or in certain places. This may be a temporary decision, usually dependent on further studies. There may be certain products that, in the opinion of the administrator, may be harmful. But no action is taken if and when the action is incompatible with administration priorities. That is because the EPA administrator is responsible to the directives and priorities of the President and the executive branch. Thus, some regulatory decisions are political. There is a statutory way to avoid the re-registration process. It is called Section 18. Section 18 under FIFRA allows for the use of unregistered pesticides in certain emergency situations. It provides that “the Administrator may, at his discretion, exempt any Federal or State agency from any provisions of this subchapter if he determines that emergency conditions exist which requires such exemption. The Administrator, in determining whether or not such emergency conditions exist, shall consult with the Secretary of Agriculture and the Governor of any state concerned if they request such determination.”
From 1978 to 1983, the General Accounting Office (GAO) and the House Subcommittee on Department Operations, Research and Foreign Agriculture of the Committee on Agriculture (DORFA Subcommittee) thoroughly examined the EPA's implementation of Section 18. Under the auspices of Chair George E. Brown, Jr. (D-CA), the DORFA Subcommittee held a series of hearings which revealed numerous abuses in EPA's administration of Section 18. A report was issued in 1982 and reprinted as part of the Committee's 1983 hearings. The Subcommittee found that "the rapid increase in the number and volume of pesticides applied under Section 18 was clearly the most pronounced trend in the EPA's pesticide regulatory program." According to the Subcommittee report, "a primary cause of the increase in the number of Section 18 exemptions derived from the difficulty the Agency had in registering chemicals under Section 3 of FIFRA in a timely manner." The DORFA Subcommittee stated: "Regulatory actions involving suspect human carcinogens which meet or exceed the statute's 'unreasonable adverse effects' criterion for chronic toxicity often become stalled in the Section 3 review process for several years. The risk assessment procedures required by States requesting Section 18 actions, and by EPA in approving them, are generally less strict. For example, a relatively new insecticide, first widely used in 1977, was granted some 140 Section 18 emergency exemptions and over 300 (Section 24c) Special Local Needs registrations in the next four years while the Agency debated the significance to man of positive evidence of oncogenicity in laboratory animals."

The EPA's practices changed little over the next eight years. In the spring of 1990, the Subcommittee on Environment of the House Science, Space and Technology Committee (Chair: James H. Scheuer, D-NY) investigated the EPA's procedures under Section 18 of FIFRA and found that, if anything, the problem had gotten worse. The report states: "Since 1973, more than 4,000 emergency exemptions have been granted for the use of pesticides on crops for which there is no registration." A large number of these emergency exemptions were repeatedly granted for the same uses, in some cases for anywhere from five to fourteen years. The House Subcommittee also found that the EPA required less stringent testing procedures for pesticides under the exemption, which put companies that follow the normal procedure at a disadvantage. The Subcommittee concluded that the large numbers of emergency exemptions arose from "the EPA's failure to implement its own regulations." The Subcommittee identified "emergencies" as "routine predicted outbreaks and foreign competition" and "a company's need to gain market access for use of a pesticide on a new crop, although the company often never intends to submit adequate data to register the chemical for use."
As for the re-registration requirement, the Subcommittee observed: “The EPA’s reliance on Section 18 may be related to the Agency’s difficulty in re-registering older chemical substances. Often, Section 18 requests are made for the use of older chemicals on crops for which they are not registered. These older chemicals receive repetitive exemptions for use despite the fact that many of these substances may have difficulty obtaining re-registration since they have been identified as potentially carcinogenic. Thus, by liberally and repetitively granting exemptions to potentially carcinogenic substances, little incentive is provided to encourage companies to invest in the development of newer, safer pesticides or alternative agricultural practices.” The report concluded, “...Allowing these exemptions year after year in predictable situations provides ’back-door’ pre-registration market access to potentially dangerous chemicals.” See also Environmental law; Integrated pest management; National Coalition Against the Misuse of Pesticides; Pesticide Action Network; Risk analysis [Liane Clorfene Casten]
RESOURCES BOOKS Rodgers Jr., W. H. Environmental Law: Pesticides and Toxic Substances. 3 vols. St. Paul, MN: West, 1988. U.S. Environmental Protection Agency. Federal Insecticide, Fungicide, and Rodenticide Act: Compliance-Enforcement Guidance Manual. Rockville, MD: Government Institutes, 1984.
PERIODICALS “EPA Data Is Flawed, Says GAO.” Chemical Marketing Reporter 243 (January 11, 1993): 7+. “Controlling the Risk in Biotech.” Technology Review 92 (July 1989): 62–9.
Federal Land Policy and Management Act (1976)

The Federal Land Policy and Management Act (FLPMA), passed in 1976, is the statutory grounding for the Bureau of Land Management (BLM), giving the agency authority and direction for the management of its lands. The initiative leading to the passage of FLPMA can be traced to the BLM itself. The agency was concerned about its insecure status—it was formed by executive reorganization rather than by a congressional act, it lacked a clear mandate for land management, and it was uncertain of the federal government's plans to retain the lands it managed. This final point can be traced to the Taylor Grazing Act, which included a clause that these public lands would be managed for grazing "pending final disposal." The BLM wanted a law that would address each of these issues, so that the agency could undertake
long-range, multiple-use planning like their colleagues in the Forest Service. Agency officials drafted the first "organic act" in 1961, but two laws passed in 1964 served to really get the legislative process moving. The Public Land Law Review Commission (PLLRC) Act established a commission to examine the body of public land laws and make recommendations as to how to proceed in this policy area. The Classification and Multiple Use Act instructed the BLM to inventory its lands and classify them for disposal or retention. This would be the first inventory of these lands and resources, and it suggested that at least some of these lands would be retained in federal ownership.

The PLLRC issued its report in 1970. In the following years, Congress began to consider three general types of bills in response to the PLLRC report. The administration and the BLM supported a BLM organic act without additional major reforms of other public land laws. The second approach provided the BLM with an organic act but also made significant revisions to the Mining Law of 1872 and included environmental safeguards for BLM activities; this variety of bill was supported by environmentalists. The final type of bill provided a general framework for more detailed legislation in the future. This general framework tended to support commodity production and was favored by livestock, mining, and timber interests.

In 1973, a bill of the second variety, introduced by Henry Jackson of Washington, passed the Senate. A similar bill died in the House, though, when it was denied a rule, and hence a trip to the floor, by the Rules Committee. Jackson re-introduced a bill that was nearly identical to the bill previously passed, and the Senate passed this bill in February 1976. In the House, things did not move as quickly. The main House bill, drafted by four western members of the Interior and Insular Affairs Committee, included significant provisions dealing with grazing—most importantly, a provision to adopt a statutory grazing fee formula based upon beef prices and private forage cost. This bill had the support of commodity interests but was opposed by the administration and environmental groups. The bill passed the full House by fourteen votes in July 1976.

The major differences that needed to be addressed in the conference committee included law enforcement, the grazing provisions, mining law provisions, wild horses and burros, unintentional trespass, and the California Desert Conservation Area. By late September, four main differences remained, three involving grazing and one dealing with mining. For a period it appeared that the bill might die in committee, but final compromises on the grazing and mining issues were made, and a bill emerged out of conference. The bill was signed into law in October 1976 by President Gerald Ford.
As passed, FLPMA dealt with four general issue areas: 1) the organic act sections, giving the BLM authority and direction for managing the lands under its control; 2) grazing policy; 3) preservation policy; and 4) mining policy. The act begins by stating that these lands will remain in public ownership: "The Congress declares that it is the policy of the United States that...the public lands be retained in public ownership." This represented the true, final closing of the public domain; the federal government would retain the vast majority of these lands. To underscore this point, FLPMA repealed hundreds of laws dealing with the public lands that were no longer relevant. The BLM, under the authority of the Secretary of the Interior, was authorized to manage these lands for multiple use and sustained yield and was required to develop land use plans and resource inventories for the lands based on long-range planning. A director of the BLM was to be appointed by the President, subject to confirmation by the Senate. FLPMA limited the withdrawal authority of the Secretary, often used to close lands to mineral development or to protect them for other environmental reasons, by repealing many of the sources of this authority and limiting its uses in other cases. The act allowed for the sale of public lands under a set of guidelines. In a section of the law that received much attention, the BLM was authorized to enforce the law on the lands it managed. The agency was directed to cooperate with local law enforcement agencies as much as possible in this task; it was these agencies, and citizens who lived near BLM lands, who were skeptical of this new BLM enforcement power. Other important provisions of the law allowed for the capture, removal, and relocation of wild horses and burros from BLM lands and authorized the Secretary of the Interior to grant rights-of-way across these lands for most pipelines and electrical transmission lines.

The controversial grazing fee formula in the House bill, favored by the livestock industry, was dropped in the conference committee. In its place, FLPMA froze grazing fees at the 1976 level for one year and directed the Secretaries of Agriculture and the Interior to undertake a comprehensive study of the grazing fee issue so that an equitable fee could be determined. This report was completed in 1977, and Congress established a statutory fee formula in 1978. That formula was only binding until 1985, though, and since that time Congress has debated the grazing fee issue numerous times; the issue remains unsettled. FLPMA also provided that grazing permits be for ten-year periods and that at least two years' notice be given before permits were cancelled (except in an emergency). At the end of the ten-year lease, if the lands are to remain in grazing, the current permittee has the first priority on renewing the lease to those lands. This virtually guarantees a rancher the use of certain public lands as long as they are to be used for
grazing. The permittee is also to receive compensation for private improvements on public lands if the permit is cancelled. These provisions, advocated by livestock interests, further demonstrated their belief, and the belief of their supporters in Congress, that these grazing permits were a type of property right. Grazing advisory boards, originally started after the Taylor Grazing Act but terminated in the early 1970s, were resurrected. These boards consist of local grazing permittees in the area and advise the BLM on the use of range improvement funds and on allotment management plans.

Important provisions regarding the preservation of BLM lands were also included in FLPMA. BLM lands were not covered in the Wilderness Act of 1964, and FLPMA dealt with this omission by directing that these lands be reviewed for potential wilderness designation and that the agency recommend which lands should be designated as wilderness. These designations would then be acted upon by Congress. This process is well underway. As has been the case with additions to the National Wilderness Preservation System on national forest lands since RARE II, BLM wilderness designation is being considered on a state-by-state basis. Thus far, a comprehensive wilderness designation law has been passed only for Arizona and California. Recent controversy has centered on the designation of wilderness in Utah.

FLPMA established a special California Desert Conservation Area and directed the BLM to study this area and develop a long-range plan for its management. In 1994, after eight years of consideration, Congress passed the California Desert Protection Act. Senator Dianne Feinstein of California played the major role in guiding the legislation to passage, including overcoming an opposition-led filibuster against the act in October. The act, which included a number of compromises with desert users, established two new national parks and a new national preserve, as well as designating approximately 7.5 million acres (3 million ha) of California desert as wilderness (in the two parks, the preserve, and nearly 70 new wilderness areas). The new national parks were created by enlarging and upgrading the existing Death Valley and Joshua Tree National Monuments. The Mojave National Preserve was originally to be a third national park in the desert, but its status was reduced to a national preserve to allow continued hunting, a compromise that helped gain further support for the bill. This law protected more wilderness than any law since the 1980 Alaska Lands Act. The following year, however, there was a move to alter these provisions. As part of the 1996 fiscal year Interior Appropriations bill, Congress directed that the BLM—not the National Park Service—manage the new Mojave National Preserve. According to Republican supporters, the BLM would allow for more use of the land. President Clinton vetoed
this appropriations bill in December 1995, in part due to this change in California Desert management. When the final Interior Appropriations Act was passed in April 1996, it included a provision requiring the Park Service to manage the Mojave under the less restrictive BLM standards, but it also allowed the President to waive this provision. Clinton signed the bill and then immediately waived the provision, so the Mojave is being managed by the National Park Service under its own standards.

FLPMA required that all mining claims, based on the 1872 Mining Law, be recorded with the BLM within three years; claims not recorded were presumed abandoned. In the past, such claims only had to be recorded at the county courthouse in the county in which the claim was located. The new requirement allowed for increased knowledge about the number and location of such claims. The law also included amendments to the Mineral Leasing Act of 1920, increasing the share of the revenues from such leases that went to the states, allowing the states to spend these funds on any public facilities needed (rather than just roads and schools), and reducing the amount of revenues going to the fund to reclaim these mineral lands.

The implementation of FLPMA has been problematic. One consequence of the act, and the planning and management that it has required, was the stimulation of western hostility to the BLM and the existence of so much federal land. According to a number of analysts, FLPMA was largely responsible for starting the Sagebrush Rebellion, the movement to have federal lands transferred to the states. The foremost implementation problems have been due to the poor bureaucratic capacity of the BLM: the lack of adequate funding, the lack of an adequate number of employees, poor standing within the U.S. Department of the Interior and presidential administrations, and its history of subservience to grazing and mining interests. [Christopher McGrory Klyza]
RESOURCES BOOKS Dana, S. T., and S. K. Fairfax. Forest and Range Policy. 2nd ed. New York: McGraw-Hill, 1980.
PERIODICALS “Fragile California Desert Bill Blooms Late in Session.” Congressional Quarterly Almanac 50 (1994): 227–231. “Public Land Management.” Congressional Quarterly Almanac 32 (1976): 182–188. Senzel, I. “Genesis of a Law, Part 1.” American Forests (January 1978): 30–32+. ———. “Genesis of a Law, Part 2.” American Forests (February 1978): 32–39.
Federal Power Administration see U.S. Department of Energy
Federal Power Commission

The Federal Power Commission was created by the Federal Water Power Act of 1920 and reorganized as an independent, five-member commission on June 23, 1930. The commission was terminated on August 4, 1977, and its functions were transferred to the Federal Energy Regulatory Commission under the umbrella of the U.S. Department of Energy. The most important function of the commission during its 57-year existence was the licensing of water-power projects. It also reviewed plans for water-development programs submitted by major federal construction agencies for conformance with the interests of the public good. In addition, the commission retained responsibility for interstate regulation of electric utilities and the siting of hydroelectric power plants as well as their operation. It also set rates and charges for the transportation and sale of natural gas and electricity. After the 1930 reorganization, the five members of the commission were appointed by the president with the approval of the Senate; before that, the commission consisted of the Secretaries of the Interior, Agriculture, and War (later designated the U.S. Department of the Army). The commission retained its status as an independent regulatory agency for decision making, a status considered necessary for national security purposes.
Feedlot runoff

Feedlots are containment areas used to raise large numbers of animals to an optimum weight within the shortest time span possible. Most feedlots are open air and are thereby subject to variable weather conditions. A substantial portion of the feed is not converted into meat but is excreted, degrading air, ground, and surface water quality. The issues of odor and water pollution from such facilities center on the traditional attitudes of producers: that farming has always produced odors, and that manure is a fertilizer, not a waste from a commercial undertaking.

Animal excrement is indeed rich in nutrients, particularly nitrogen, phosphorus, and potassium. A single 1,300-lb (590 kg) steer will excrete about 150 lb (68 kg) of nitrogen, 50 lb (23 kg) of phosphorus, and 100 lb (45 kg) of potassium in the course of a year. That is almost as much nutrient as would be required to grow one acre of corn, which needs 185 lb (84 kg) of nitrogen, 80 lb (36 kg) of phosphorus, and 215 lb (98 kg) of potassium. Unfortunately, manure is costly to transport, difficult to apply, and its nutrient quality is inconsistent. Artificial fertilizers, on the other hand, offer ease of application and storage and proven quality and plant growth.
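The steer-versus-corn comparison above can be checked directly. A minimal sketch using the figures from this entry (the dictionary structure is just for illustration):

```python
# Nutrient figures (lb per steer-year and lb per acre of corn) taken
# from the entry above.
steer_excretion_lb = {"nitrogen": 150, "phosphorus": 50, "potassium": 100}
corn_acre_needs_lb = {"nitrogen": 185, "phosphorus": 80, "potassium": 215}

for nutrient, excreted in steer_excretion_lb.items():
    needed = corn_acre_needs_lb[nutrient]
    print(f"{nutrient}: one steer supplies {100 * excreted / needed:.0f}% "
          f"of one acre of corn's requirement")
# nitrogen: 81%, phosphorus: 62%, potassium: 47%
```

The comparison is closest for nitrogen, the nutrient most often implicated in water-quality problems from feedlot runoff.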
ease of application and storage and proven quality and plant growth. Legislative and regulatory action have increased with encroachment of urban population and centers of high sensitivity, such as shopping malls and recreation facilities. Since odor is difficult to measure, control of these facilities is being achieved on the grounds that they must not pose a “nuisance,” a principle that is being sustained by the courts. Odor is influenced by feed, number and species of animal, lot surface and manure removal frequency, wind, humidity, and moisture. These factors, individually and collectively, influence the type of decomposition that will occur. Typically, it is an anaerobic process which produces a sharp pungent odor of ammonia, the nauseating odor of rotten eggs from hydrogen sulfide, and the smell of decaying cabbage or onions from methyl mercaptan. Odorous compounds seldom reach concentrations that are dangerous to the public. However, levels can become dangerously elevated with reduced ventilation in winter months or during pit cleaning. It is this latter activity, in conjunction with disposal onto the surface of the land, that is most frequently the cause of complaints. Members of the public respond to feedlot odors depending on their individual sensitivity, previous experience, and disposition. It can curtail outdoor activities and require windows to be closed, which means the additional use of air purifiers or air-conditioning systems. Surface water contamination is the problem most frequently attributed to open feedlot and manure spreading activities. It is due to the dissolving, eroding action of rain striking the manured-covered surface. Duration and intensity of rainfall dictates the concentration of contaminants that will flow into surface waters. Their dilution or retention in ponds, rivers, and streams depends on area hydrology (dry or wet conditions) and topography (rolling or steeply graded landscape). Such factors also influence conditions in those parts of the continent where precipitation is mainly in the form of snow. Large snow drifts form around wind breaks, and in the early spring, substantial volumes of snowmelt are generated. Odor and water pollution control techniques include simple operational changes, such as increasing the frequency of removing manure, scarifying the surface to promote aerobic conditions, and applying disinfectants and feed-digestion supplements. Other control measures require construction of additional structures or the installation of equipment at feedlots. These measures include installing water spargelines, adding impervious surfaces, drains, pits and roofs, and installing extraction fans. See also Animal waste; Odor control [George M. Fell]
RESOURCES BOOKS Larson, R. E. Feedlot and Ranch Equipment for Beef Cattle. Washington, DC: U.S. Government Printing Office, 1976. Peters, J. A. Source Assessment: Beef Cattle Feedlots. Research Triangle Park, NC: U.S. Environmental Protection Agency, 1977.
Feedlots

A feedlot is an open space where animals are fattened before slaughter. Beef cattle usually arrive at the feedlot directly from the ranch or farm where they were raised, while poultry and pigs often remain in an automated feedlot from birth until death. Feed (often grains, alfalfa, and molasses) is provided to the animals so they do not have to forage for their food. This feeding regimen promotes the production of higher quality meat more rapidly. There are no standard parameters for the number of animals per acre in a feedlot, but the density of animals is usually very high. Some feedlots can contain 100,000 cows and steers. Animal rights groups actively campaign against confining animals in feedlots, a practice they consider inhumane, wasteful, and highly polluting.

Feedlots were first introduced in California in the 1940s, but many are now found in the Midwest, closer to grain supplies. Feedlot operations are highly mechanized, and large numbers of animals can be handled with relatively low labor input. About half of the beef produced in the United States is feedlot-raised.

Feedlots are a significant nonpoint source of the pollution flowing into surface waters and groundwater in the United States. At least half a billion tons of animal waste are produced in feedlots each year. Since this waste is concentrated in the feedlot rather than scattered over grazing lands, it overwhelms the soil’s ability to absorb and buffer it and creates nitrate-rich, bacteria-laden runoff that pollutes streams, rivers, and lakes. Dissolved pollutants can also migrate down through the soil into aquifers, leading to groundwater pollution over wide areas. To protect surface waters, most states require that feedlot runoff be collected. However, protection of groundwater has proved to be a more difficult problem, and successful regulatory and technological controls have not yet been developed.

[Christine B. Jeryan]
RESOURCES
BOOKS
Kerr, R. S. Livestock Feedlot Runoff Control By Vegetative Filters. Ada, OK: U.S. Environmental Protection Agency, 1979.
Larson, R. E. Feedlot and Ranch Equipment for Beef Cattle. Washington, DC: U.S. Government Printing Office, 1976.
Peters, J. A. Source Assessment: Beef Cattle Feedlots. Research Triangle Park, NC: U.S. Environmental Protection Agency, 1977.
Felis concolor coryi see Florida panther
Fens see Wetlands
Ferret see Black-footed ferret
Fertility see Biological fertility
Fertilizer

Any substance that is applied to land to encourage plant growth and produce higher crop yield. Fertilizers may be made from organic material—such as recycled waste, animal manure, compost, etc.—or chemically manufactured. Most fertilizers contain varying amounts of nitrogen, phosphorus, and potassium, inorganic nutrients that plants need to grow. Since the 1950s crop production worldwide has increased dramatically because of the use of fertilizers. In combination with the use of pesticides and insecticides, fertilizers have vastly improved the quality and yield of such crops as corn, rice, wheat, and cotton.

However, overuse and improper use of fertilizers have also damaged the environment and affected the health of humans, animals, and plants. In the United States, it is estimated that as much as 25% of fertilizer is carried away as runoff. Fertilizer runoff has contaminated groundwater and polluted bodies of water near and around farmlands. High and unsafe nitrate concentrations in drinking water have been reported in countries that practice intensive farming, including the United States. Accumulation of nitrogen and phosphorus in waterways from chemical fertilizers has also contributed to the eutrophication of lakes and ponds. Ammonia, released from the decay of fertilizers, causes minor irritation to the respiratory system.

While very few advocate the complete eradication of chemical fertilizers, many environmentalists and scientists urge more efficient ways of using them. For example, some farmers use up to 40% more fertilizer than they need. Frugal applications—in small doses and on an as-needed basis on specific crops—help reduce fertilizer waste and runoff. The
use of organic fertilizers, including animal waste, crop residues, or grass clippings, is also encouraged as an alternative to chemical fertilizers. See also Cultural eutrophication; Recycling; Sustainable agriculture; Trace element/micronutrient
Fibrosis

A medical term that refers to the excessive growth of fibrous tissue in some part of the body. Many types of fibroses are known, including a number that affect the respiratory system. Several of these respiratory fibroses, including such conditions as black lung disease, silicosis, asbestosis, berylliosis, and byssinosis, are caused by environmental factors. A fibrosis develops when a person inhales very tiny solid particles or liquid droplets over many years or decades. Part of the body’s reaction to these foreign particles is to enmesh them in fibrous tissue. The disease name usually suggests the agent that causes the disease. Silicosis, for example, is caused by the inhalation of silica, tiny sand-like particles. Occupational sources of silicosis include rock mining, quarrying, stone cutting, and sandblasting. Berylliosis is caused by the inhalation of beryllium particles over a period of time, and byssinosis (from byssos, the Greek word for flax) is found among textile workers who inhale flax, cotton, or hemp fibers.
Field capacity

Field capacity refers to the amount of water that can be held in the soil after all the gravitational water has drained away. Sandy soils hold less water at field capacity than clay soils. The more water a soil can hold at field capacity, the more water is available to plants.
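Field capacity is usually reported as a water content, measured gravimetrically. A minimal worked illustration (the formula is standard soil-science practice; the sample numbers are hypothetical, chosen only to show the arithmetic):

gravimetric water content = (wet mass − oven-dry mass) / oven-dry mass

For example, if a soil sample drained to field capacity weighs 120 g wet and 100 g after oven drying, its water content at field capacity is (120 − 100) / 100 = 0.20, or 20% by dry weight. Repeating the measurement on a sand and a clay would typically give a much lower value for the sand, which is the contrast described above.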
Filters

Primarily devices for removing particles from aerosols. Filters utilize a variety of microscopic forms and a variety of mechanisms to accomplish this. Most common are fibrous filters, in which the fibers are of cellulose (paper filters), although almost any fibrous material, including glass fiber, wool, asbestos, and finely spun polymers, has been used. Microscopically, these fibers collect fine particles because fine particles vibrate around their average position due to collisions with air molecules (Brownian motion). These vibrations are likely to cause them to collide with the fibers as they pass through the filter. Larger particles are removed because, as the air stream carrying them passes through the filter, some of the particles are intercepted as they pass close to the fibers and touch them (interception). Other particles are in air streams that would cause them to miss the fibers, but when the air stream bends to go around the fibers, the momentum of the particles is too great to let them remain with the stream, so that they are “centrifuged out” onto the fibers (impaction). By electrophoresis, still other particles may be attracted to the fibers by electric charges of opposite sign on the particles and on the fibers. Finally, particles may simply be larger than the space between fibers and be sifted out of the air in a process called sieving.

Filters are also formed by a process in which polymers such as cellulose esters are made into a film out of a solution in an organic solvent containing water. As the solvent evaporates, a point is reached at which the water separates out as microscopic droplets, in which the polymer is not soluble. The final result is a film of polymer full of microscopic holes where the water droplets once were. Such filters can have pore sizes from a small fraction of a micrometer to a few micrometers. (One micrometer equals 0.00004 in.) These are called membrane filters.

Another form of membrane filter is formed from the polymer called polycarbonate. A thin film of this material is fastened to a surface of uranium metal and placed in a nuclear reactor for a time. In the reactor, the uranium undergoes nuclear fission and gives off particles called fission fragments, atoms of the elements formed when the uranium atoms split. Every place that an atom from the fissioning uranium passes through the film is disturbed on a molecular scale. After removal from the reactor, if the polymer sheet is soaked in alkali, the disturbed material is dissolved. The amount of material dissolved is controlled by the temperature of the solution and the amount of time the film is treated. Since the fission fragments are very energetic, they travel in straight lines, and so the holes left after the alkali treatment are very straight and round. Again, pore sizes can be from a small fraction of a micrometer to a few micrometers. These filters are known by their trade name, Nuclepore. In both types of membrane filters, the small pore size increases the role of sieving in particle removal. Because of their very simple structure, Nuclepore filters have been much studied to understand filtration mechanisms, since they are far easier to represent mathematically than a random arrangement of fibers.

It was mentioned above that small particles are collected because of their Brownian motion, while larger particles are removed by interception, impaction, and sieving. Under many conditions, a particle of intermediate size may pass through, too large for Brownian diffusion and too small for impaction, interception, or sieving. Hence many filters may show a penetration maximum for particles of a few tenths of a micrometer. For this reason, standard methods of filter testing specify that the aerosol test for determining the efficiency of filters should contain particles in that size
range. This phenomenon has also been used to select relatively uniform particles of that size out of mixtures of many sizes. In circumstances where filter strength is of paramount importance, such as in industrial filters where a large air flow must pass through a relatively small filter area, filters of woven cloth are used, made of materials ranging from cotton to glass fiber and asbestos, these last for use when very hot gases must be filtered. The woven fabric itself is not a particularly good filter, but it retains enough particles to form a particle cake on the surface, and that soon becomes the filter. When the cake becomes thick enough to slow airflow to an unacceptable degree, the air flow is interrupted briefly, and the filters are shaken to dislodge the filter cake, which falls into bins at the bottom of the filters. Then filtration is resumed, allowing the cloth filters to be used for months before being replaced. A familiar domestic example is the bag of a home vacuum cleaner. Cement plants and some electric power plants use dozens of cloth bags up to several feet in diameter and more than ten feet (three meters) in length to remove particles from their waste gases. Otherwise poor filters can be made efficient by making them thick. A glass tube can be partially plugged with a wad of cotton or glass fiber, then nearly filled with crystals of sugar or naphthalene and used as a filter; this is advantageous since sugar can be dissolved in water, or naphthalene will sublime away if gently heated, leaving behind the collected particles. See also Baghouse; Electrostatic precipitation; Odor control; Particulate [James P. Lodge Jr.]
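The penetration maximum described above follows directly from how the individual collection mechanisms combine. As a rough, textbook-style summary (a simplified model, not taken from the original entry): if each mechanism acts approximately independently, the fraction of particles penetrating is the product of the penetrations for the separate mechanisms, so the overall collection efficiency is

E_total = 1 − (1 − E_diffusion)(1 − E_interception)(1 − E_impaction)

Because E_diffusion falls as particle size increases while E_interception and E_impaction rise, E_total passes through a minimum, and penetration through a maximum, at intermediate sizes of a few tenths of a micrometer. This is precisely the particle size range at which the standard filter tests mentioned above are run.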
Filtration

A common technique for separating substances in two physical states. For example, a mixture of solid and liquid can be separated into its components by passing the mixture through a filter paper. Filtration has many environmental applications. In water purification systems, impure water is often passed through a charcoal filter to remove the solid and gaseous contaminants that give water a disagreeable odor, color, or taste. Trickling filters are used to remove wastes at sewage treatment plants. Solid and liquid contaminants in waste industrial gases can be removed by passing the gases through a filter prior to discharge through a smokestack.
Fire see Prescribed burning; Wildfire
Fire ants

Two distinct species of fire ants (genus Solenopsis) from South America were introduced into the United States in the twentieth century. The South American black fire ant (S. richteri) was first introduced into the United States in 1918. Its close relative, the red fire ant (S. wagneri), was introduced in 1940, probably escaping from a South American freighter docked in Mobile, Alabama. Both species became established in the southeastern United States, spreading into nine states from Texas across to Florida and up into the Carolinas. It is estimated that they have now infested over 320 million acres (130 million ha) covering 13 states as well as Puerto Rico.

Successful introduced species are often more aggressive than their native counterparts, and this is definitely true of fire ants. They are very small, averaging 0.2 in (5 mm) in length, but their aggressive, swarming behavior makes them a threat to livestock and pets as well as humans. These industrious, social insects build their nests in the ground; the location is easily detected by the elevated earthen mounds created from their excavations. The mounds are 18–36 in (46–91 cm) in diameter and may be up to 36 in (91 cm) high, although they are generally 6–10 in (15–25 cm) high. Each nest contains as many as 25,000 workers, and there may be over 100 nests on an acre of land.

If the nest is disturbed, fire ants swarm out of the mound by the thousands and attack with swift ferocity. As with other aspects of ant behavior, a chemical alarm pheromone is released that triggers the sudden onslaught. Each ant in the swarm uses its powerful jaws to bite and latch onto whatever disturbed the nest, while using the stinger on the tip of its abdomen to sting the victim repeatedly. The intruder may receive thousands of stings within a few seconds. The toxin produced by the fire ant is extremely potent, and it immediately causes an intense burning pain that may continue for several minutes. After the pain subsides, the site of each sting develops a small bump which expands and becomes a tiny, fluid-filled blister. Each blister flattens out several hours later and fills with pus. These swollen pustules may persist for several days before they are absorbed and replaced by scar tissue.

Fire ants obviously pose a problem for humans. Some people may become sensitized to fire ant venom, have a generalized systemic reaction, and go into anaphylactic shock. Fire-ant-induced deaths have been reported. Because these species prefer open, grassy yards or fields, pets and livestock may fall prey to fire ant attacks as well.

Attempts to eradicate this pest have involved the use of several different generalized pesticides, as well as the widespread use of gasoline either to burn the nest and its inhabitants or to kill the ants with strong toxic vapors. Another
approach involved the use of specialized crystalline pesticides, which were spread on or around the nest mound. The workers collected them and took them deep into the nest, where they were fed to the queen and other members of the colony, killing the inhabitants from within. A more recent method involves the release of a natural predator of the fire ant, the phorid fly. The fly lays an egg inside the fire ant; the larva then eats the ant’s brain while releasing an enzyme that eventually causes the ant’s head to fall off. The flies had been released in 11 states as of 2001 and seem to be slowly inhibiting the growth of the fire ant population. As effective as some of these methods are, fire ants are probably too numerous and well established to be completely eradicated in North America.

[Eugene C. Beckham]
RESOURCES
BOOKS
Holldobler, B., and E. O. Wilson. The Ants. Cambridge: Harvard University Press, 1990.
Taber, Stephen Welton. Fire Ants. College Station: Texas A&M University Press, 2000.
PERIODICALS
Vergano, Dan. “Decapitator Flies Will Fight Fire Ants.” USA Today, November 20, 2000.
OTHER
“Imported Fire Ant.” NAPIS Page for BioControl Host. April 2001 [cited May 2002].
“Invasive Species and Pest Management: Imported Fire Ant.” Animal and Plant Health Inspection Service. May 2002 [cited May 2002].
First World

The world’s more wealthy, politically powerful, and industrially developed countries are unofficially, but commonly, designated as the First World. In world systems theory, the term differentiates the powerful, capitalist states of Western Europe, North America, and Japan from the (formerly) communist states (Second World) and from the nonaligned, developing countries (Third World). In common usage, First World refers mainly to a level of economic strength. The level of industrial development of the First World, characterized by an extensive infrastructure, mechanized production, efficient and fast transport networks, and pervasive use of high technology, consumes huge amounts of natural resources and requires an educated and skilled work force. However, such a system is usually highly profitable. Often depending upon raw materials imported from poorer countries (wood, metal ores, petroleum, food, and so on), First World countries efficiently produce goods that less developed countries desire but cannot produce themselves, including computers, airplanes, optical equipment,
and military hardware. Generally, high domestic and international demand for such specialized goods keeps First World countries wealthy, allowing them to maintain a high standard of material consumption, education, and health care for their citizens.
Fish and Wildlife Service

The United States Fish & Wildlife Service, based in Washington, D.C., is charged with conserving, protecting, and enhancing fish, wildlife, and their habitats for the benefit of the American people. As a division of the U.S. Department of the Interior, the Service’s primary responsibilities are the protection of migratory birds, endangered species, freshwater and anadromous fisheries (anadromous species are saltwater species that spawn in freshwater rivers and streams), and certain marine mammals.

In addition to its Washington, D.C., headquarters, the Service maintains seven regional offices and a number of field units. These include national wildlife refuges, national fish hatcheries, research laboratories, and a nationwide network of law enforcement agents. The Service manages 530 refuges that provide habitats for migratory birds, endangered species, and other wildlife. It sets migratory bird hunting regulations and leads an effort to protect and restore endangered and threatened animals and plants in the United States and other countries. Service scientists assess the effects of contaminants on wildlife and habitats. Its geographers and cartographers work with other scientists to map wetlands and carry out programs to slow wetland loss or to preserve and enhance these habitats.

Restoring fisheries that have been depleted by overfishing, pollution, or other habitat damage is a major program of the Service. Efforts are underway to help four important groups: lake trout in the upper Great Lakes; striped bass in both the Chesapeake Bay and the Gulf Coast; Atlantic salmon in New England; and salmonid species of the Pacific Northwest. Fish and Wildlife biologists, working with scientists from other federal and state agencies, universities, and private organizations, develop recovery plans for endangered and threatened species. Among their successes are the American alligator, no longer considered endangered in some areas, and a steadily increasing bald eagle population.

Internationally, the Service cooperates with 40 wildlife research and wildlife management programs and provides technical assistance to many other countries. Its 200 special agents and inspectors help enforce wildlife laws and treaty obligations. They investigate cases ranging from individual migratory bird hunting violations to large-scale poaching and commercial trade in protected wildlife.
In its “Vision for the Future” statement, the Fish and Wildlife Service states its mission to “provide leadership to achieving a national net gain of fish and wildlife and the natural systems which support them.” Looking into the twenty-first century, this vision statement calls for new conservation compacts with all citizens to increase the value of the United States wildlife holdings in number and biodiversity, and to provide increased opportunities for the public to use, associate with, learn about, and enjoy America’s wildlife wealth.

[Linda Rehkopf]
RESOURCES
OTHER
U.S. Fish and Wildlife Service. Vision for the Future. Washington, DC: U.S. Government Printing Office, 1991.
ORGANIZATIONS
U.S. Fish and Wildlife Service, Email: [email protected]
Fish farming see Blue revolution (fish farming)
Fish kills

Fishing has long been a major provider of food and livelihood to people throughout the world. In the United States, 50 million people enjoy fishing as an outdoor recreation—38 million in fresh water and 12 million in salt water. Combined, they spend over $315 million annually on this sport. It is no surprise, then, that public attitudes toward factors that influence fishing are strong. The Environmental Protection Agency (EPA) is charged with overseeing the quality of the nation’s waterways. In 1977 the agency received information on 503 separate incidents in which 16.5 million fish were killed. In 1974, a record 47 million fish were killed in the Black River near Essex, Maryland, by a discharge from a sewage plant.

Fish kills can result from natural as well as human causes. Natural causes include sudden changes in temperature, oxygen depletion, toxic gases, epidemics of viruses and bacteria, infestations of parasites, toxic algal blooms, lightning, fungi, and other similar factors. Human influences that lead to fish kills include acid rain, sewage effluent, and toxic spills. In a 10-year study of the causes of 409 documented fish kills totaling 3.6 million fish in the state of Missouri,
J. M. Czarnezki determined the percentage contributions as: 26% municipal-related (sewage effluent), 17% from agricultural activities, 11% from industrial operations, 8% from transportation accidents, 7% each from oxygen depletions, nonindustrial operations, and mining, 4% from disease, 3% from “other” factors, and 10% undetermined.

Fish kills may occur quite rapidly, even within minutes of a major toxic spill. Usually, however, the process takes days or even months, especially with natural causes. Experienced fishery biologists usually need a wide variety of physical, chemical, and biological tests of the habitat and the fish to determine the exact causative agent or agents. The investigative procedure is often complex and may require a great deal of time. Species of fish vary in their susceptibility to the different factors that contribute to die-offs. Some species are sensitive to almost any disturbance, while others are tolerant of changes. As discussed below, predatory fish at the top of the food chain/web are typically the first fish affected by toxic substances that accumulate slowly in the water.

The most common contributor to fish kills by natural causes is oxygen depletion, which occurs when the amount of oxygen utilized by respiration, decomposition, and other processes exceeds oxygen input from the atmosphere and photosynthesis. Oxygen is more soluble in cold than in warm water. Summer fish kills occur when lakes are thermally stratified. If the lake is eutrophic (highly productive), dead plant and animal matter that settles to the bottom undergoes decomposition, utilizing oxygen. Under windless conditions, more oxygen will be used than is gained, and animals like fish and zooplankton often die from suffocation. Winter fish kills can also occur. Algae can photosynthesize even when the lake is covered with ice, because sunlight can penetrate through the ice. However, if heavy snowfall accumulates on top of the ice, light may not reach the underlying water, and the phytoplankton die and sink to the bottom. Decomposers and respiring organisms again use up the remaining oxygen, and the animals eventually die. When the ice melts in the spring, dead fish are found floating on the surface. This is a fairly common occurrence in many lakes in Michigan, Wisconsin, Minnesota, and surrounding states. For example, dead alewives (Alosa pseudoharengus) often wash up on the southwestern shore of Lake Michigan near Chicago during spring thaws following harsh winters.

In summer and winter, artificial aeration can help prevent fish kills. The addition of oxygen through aeration and mixing is one of the easiest and cheapest methods of dealing with low oxygen levels. In intensive aquaculture ponds, massive fish deaths from oxygen depletion are a constant threat. Oxygen sensors are often installed to detect low oxygen levels and trigger the release of pure oxygen gas from nearby cylinders.
Natural fish kills can also result from the release of toxic gases. In 1986, 1,700 villagers living on the shore of Lake Nyos, Cameroon, mysteriously died. A group of scientists sent to investigate determined that they had died of asphyxiation. Evidently a landslide caused the trapped carbon dioxide-rich bottom waters to rise rapidly to the surface, much like the contents of a popped champagne bottle. The poisonous gas killed everyone in its downwind path. Fish in the upper oxygenated waters of the lake were also killed as the carbon dioxide passed through.

Hydrogen sulfide (H2S), a foul-smelling gas naturally produced in the oxygen-deficient sediments of eutrophic lakes, can also cause fish deaths. Even in oxygenated waters, high H2S levels can cause a condition in fish called “brown blood.” The brown color of the blood is caused by the formation of sulfhemoglobin, which inhibits the blood’s oxygen-carrying capacity. Some fish survive, but sensitive fish such as trout usually die.

Fish kills can also result from toxic algal blooms. Some bluegreen algae in lakes and dinoflagellates in the ocean release toxins that can kill fish and other vertebrates, including humans. For example, dense blooms of bluegreen algae such as Anabaena, Aphanizomenon, and Microcystis have caused fish kills in many farm ponds during the summer. Fish die not only from the toxins but also from asphyxiation resulting from decomposition of the mass of algae that die, in turn, from lack of sunlight in the densely populated lake water. In marine waters, toxic dinoflagellate blooms called red tides are notorious for causing massive fish kills. For example, blooms of Gymnodinium or Gonyaulax periodically kill fish along the East and Gulf Coasts of the United States. Die-offs of salmon in aquaculture pens along the southwestern shoreline of Norway have been blamed on these organisms. Millions of dollars can be lost if the fish are not moved to clear waters. Saxitoxin, the toxic chemical produced by Gonyaulax, is 50 times more lethal than strychnine or curare.

Pathogens and parasites can also contribute to fish kills. Usually the effect is more secondary than direct. Fish weakened by parasites or by infections of bacteria or viruses usually are unable to adapt to and survive changes in water temperature and chemistry. Under stressful conditions of overcrowding and malnourishment, gizzard shad often die from minor infestations of the normally harmless bacterium Aeromonas hydrophila. In the same way, fungal infections such as Ichthyophonus hoferi can contribute to fish kills. Most freshwater aquarium keepers are familiar with the threat of “ick” to their fish. The telltale white spots under the epithelium of the fins, body, and gills are caused by the protozoan parasite Ichthyophthirius multifiliis.
Changes in the pH of lakes resulting from acid rain are a modern example of how humans can cause fish kills. Atmospheric pollutants such as nitrogen dioxide and sulfur dioxide released from automobiles and industries mix with water vapor and make the rainwater more acidic than normal (below about pH 5.6). Unprotected lakes downwind that receive this rainfall increase in acidity, and sensitive fish eventually die. Most of the once-productive trout streams and lakes in the southern half of Norway are now devoid of these prized fish. Sweden has combatted this problem by adding enormous quantities of lime to its affected lakes in the hope of neutralizing the acid’s effects.

Sewage treatment plants add varying amounts of treated effluent to streams and lakes. Sometimes, during heavy rainfall, raw sewage escapes the treatment process and pollutes the aquatic environment. The greater the organic matter in the effluent, the more decomposition occurs, and the more oxygen is used. Scientists call this the biological or biochemical oxygen demand (BOD): the quantity of oxygen required by bacteria to oxidize the organic waste aerobically to carbon dioxide and water. It is measured by placing a sample of the wastewater in a glass-stoppered bottle for five days at 68 degrees Fahrenheit (20 degrees Celsius) and determining the amount of oxygen consumed during this time (see the worked example at the end of this entry). Domestic sewage typically has a BOD of about 200 milligrams per liter, or 200 parts per million (ppm); rates for industrial waste may reach several thousand milligrams per liter. Reports of fish kills in industrialized countries have greatly increased in recent years. Sewage effluent not only kills fish; because of the low oxygen levels it creates, it can also form a barrier to fish migrating upstream. For example, coho salmon will not pass through water with oxygen levels below 5 ppm. Oxygen depletion is often more detrimental to fish than thermal shock.

Toxic chemical spills, whether via sewage treatment plants or other sources, are the major cause of fish kills. Sudden discharges of large quantities of highly toxic substances usually cause massive death of most aquatic life. If toxins enter the ecosystem at sublethal levels over a long time, the effects are more subtle. Large predatory or omnivorous fish are typically the first ones affected, because toxic chemicals like methyl mercury, DDT, PCBs, and other organic pollutants have an affinity for fatty tissue and progressively accumulate in organisms up the food chain. This is called biomagnification. Unfortunately for human consumers, these fish do not usually die right away, so people who eat a lot of tainted fish become sick and may possibly die. Such was the case with Minamata disease, named for the Japanese fishing town where the connection between human deaths and methyl mercury-contaminated fish was first documented.

[John Korstad]
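The BOD test described above can be made concrete with a worked example (the dilution formula is conventional water-quality practice; the sample numbers here are hypothetical, chosen only to illustrate the arithmetic):

five-day BOD (BOD5) = (initial DO − final DO) / P

where DO is dissolved oxygen in milligrams per liter and P is the decimal fraction of wastewater in the diluted test sample. If a sewage sample is diluted 1:50 (P = 0.02) and the dissolved oxygen in the bottle falls from 8.0 mg/L to 4.0 mg/L over the five-day incubation, then BOD5 = (8.0 − 4.0) / 0.02 = 200 mg/L, the typical domestic-sewage value quoted above.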
RESOURCES
BOOKS
Czarnezki, J. M. A Summary of Fish Kill Investigations in Missouri, 1970–1979. Columbia, MO: Missouri Dept. of Conservation, 1983.
Ehrlich, P. R., A. H. Ehrlich, and J. P. Holdren. Ecoscience: Population, Resources, Environment. San Francisco: W. H. Freeman, 1977.
Goldman, C. R., and A. J. Horne. Limnology. New York: McGraw-Hill, 1983.
Hill, D. M. “Fish Kill Investigation Procedures.” In Fisheries Techniques, edited by L. A. Nielson and D. L. Johnson. Bethesda, MD: American Fisheries Society, 1983.
Meyer, F. P., and L. A. Barclay, eds. Field Manual for the Investigation of Fish Kills. Washington, DC: U.S. Fish and Wildlife Service, 1990.
Moyle, P. B., and J. J. Cech Jr. Fishes: An Introduction to Ichthyology. 2nd ed. New York: Prentice-Hall, 1988.
PERIODICALS
Keup, L. E. “How to ‘Read’ a Fish Kill.” Water and Sewage Works 12 (1974): 48–51.
Fish nets see Drift nets; Gill nets
Fisheries and Oceans Canada

The Department of Fisheries and Oceans (DFO) in Canada was created by the Department of Fisheries and Oceans Act on April 2, 1979. This act formed a separate government department from the Fisheries and Marine Service of the former Department of Fisheries and the Environment. The new department was needed, in part, because of increased interest in the management of Canada’s oceanic resources, and also because of the mandate resulting from the unilateral declaration of 200-nautical-mile Exclusive Economic Zones in 1977.

At its inception, the DFO assumed responsibility for seacoast and inland fisheries, fishing and recreational vessel harbors, hydrography and ocean science, and the coordination of policy and programs for Canada’s oceans. Four main organizational units were created: Atlantic Fisheries, Pacific and Freshwater Fisheries, Economic Development and Marketing, and Ocean and Aquatic Science. Among the activities included in the department’s original mandate were comprehensive husbandry of fish stocks and protection of habitat; “best use” of fish stocks for optimal socioeconomic benefits; adequate hydrographic surveys; the acquisition of sufficient knowledge for defense, transportation, energy development, and fisheries, with provision of such information to users; and continued development and maintenance of a national system of harbors.

Since its inception, the department’s mandate has changed in minor ways, to include new terminology such as “sustainability” and to include Canada’s “ecological interests.”
Recently, attention has been given to supporting those who make their living from or otherwise benefit from the sea. This constituency includes the public first, but the DFO also directs its efforts toward commercial fishers, fish plant workers, importers, aquaculturists, recreational fishers, native fishers, and the ocean manufacturing and service sectors. There are now six DFO divisions: Science, Atlantic Fisheries, Pacific Fisheries, Inspection Services, International, and Corporate Policy and Support. These are administered through six regional offices.

A primary focus of the DFO’s current work is the failing cod and groundfish stocks in the Atlantic; the department has commissioned two major inquiries in recent years to investigate those problems. In addition, the DFO has increased regulation of foreign fleets; works to manage straddling stocks in the Atlantic Exclusive Economic Zone through the North Atlantic Fisheries Organization; and is involved with the Pacific drift-net fisheries, recreational fishing, and aquaculture development. In 1992, management problems in the major fisheries arose on both the Pacific and Atlantic coasts: American fisheries managers reneged on quotas established through the Pacific Salmon Treaty, the northern cod stocks off Newfoundland virtually failed, and the Aboriginal Fishing Strategy was adopted as part of a land claim settlement on the Pacific coast.

There are several major problems associated with ocean resource and environment management in Canada, problems that the DFO has neither the resources, the legislative infrastructure, nor the political will to address. One result has been the steady decline of commercial fish stocks, highlighted by the virtual collapse of the Atlantic cod (Gadus callarias), Canada’s, and perhaps the Atlantic’s, most historically significant fishery. A second result has been an increased need to secure international agreements with Canada’s ocean neighbors. A third result, of social significance, is the perception that fisheries have been used politically in cases of regional economic incentives and land claims settlements. See also Commercial fishing; Department of Fisheries and Oceans (DFO), Canada

[David A. Duffus]
Fishing see Commercial fishing; Drift nets; Gill nets

Fission see Nuclear fission

Floatable debris

Floatable debris is buoyant solid waste that pollutes waterways. Sources include boats and shipping vessels, storm water discharge, sewer systems, industrial activities, offshore drilling, recreational beaches, and landfills. Even waste dumped far from a water source can end up as floatable debris when flooding, high winds, or other weather conditions transport it into rivers and streams.

According to the U.S. Environmental Protection Agency (EPA), floatable debris is responsible for the deaths of over 100,000 marine mammals and one million seabirds annually. Seals, sea lions, manatees, sea turtles, and other marine creatures often mistake debris for food, eating objects that block their intestinal tracts or cause internal injury. They can also become entangled in lost fishing nets and line, six-pack rings, or other objects. Fishing nets lost at sea catch tons of fish that simply decompose, a phenomenon known as “ghost fishing.” Often, seabirds are ensnared in these nets when they try to eat the fish. Lost nets and other entrapping debris are also a danger for humans who swim, snorkel, or scuba dive. And biomedical waste and sewage can spread disease in recreational waters.

Floatable debris takes a significant financial toll as well. It damages boats, deters tourism, and negatively affects the fishing industry.

[Paula Anne Ford-Martin]
RESOURCES
BOOKS
Coe, James, and Donald Rogers, eds. Marine Debris: Sources, Impacts, and Solutions. New York: Springer-Verlag, 1996.
PERIODICALS
Miller, John. “Solving the Mysteries of Ocean-borne Trash.” U.S. News & World Report 126, no. 14 (April 1999): 48.
OTHER
U.S. Environmental Protection Agency, Office of Water, Oceans and Coastal Protection Division. Assessing and Monitoring Floatable Debris—Draft. [cited May 11, 2002].
U.S. Environmental Protection Agency, Office of Water, Oceans and Coastal Protection Division. Turning the Tide on Trash: A Marine Debris Curriculum. [cited May 2002].
ORGANIZATIONS
The Center for Marine Conservation, 1725 DeSales Street, N.W., Suite 600, Washington, DC USA 20036, (202) 429-5609, Fax: (202) 872-0619, Email: [email protected], http://www.cmc-ocean.org
Flooding in Texas caused by Hurricane Beulah in 1967. (National Oceanic and Atmospheric Administration.)
Flooding

Technically, flooding occurs when the water level in any stream, river, bay, or lake rises above bank full. Bays may flood as the result of a tsunami or tidal wave induced by an earthquake or volcanic eruption, or as a result of a tidal storm surge caused by a hurricane or tropical storm moving inland. Streams, rivers, and lakes may be flooded by high amounts of surface runoff resulting from widespread precipitation or rapid snow melt. On a smaller scale, flash floods due to extremely heavy precipitation over a short period of time can flood streams, creeks, and low-lying areas in a matter of a few hours. Thus, there are various temporal and spatial scales of flooding. Historical evidence suggests that flooding causes greater loss of life and property than any other natural disaster. The magnitude, seasonality, frequency, velocity, and load are all properties of flooding studied by meteorologists, climatologists, and hydrologists.

Spring and winter floods occur with some frequency primarily in the mid-latitude regions of the earth, and particularly where a continental climate is the norm. Five climatic
features contribute to the spring and winter flooding potential of any individual year or region: 1) heavy winter snow cover; 2) saturated soils, or soils at least near their field capacity for storing water; 3) rapid melting of the winter’s snow pack; 4) frozen soil conditions, which limit infiltration; and 5) somewhat heavy rains, usually from large-scale cyclonic storms. Any combination of three of these five climatic features usually leads to some type of flooding. This type of flooding can cause hundreds of millions of dollars in property damage, but it can usually be predicted well in advance, allowing for evacuation and other protective action to be taken (sandbagging, for instance). In some situations flood control measures such as stream or channel diversions, dams, and levees can greatly reduce the risk of flooding. This is more often done in floodplain areas with histories of very damaging floods. In addition, land use regulations, encroachment statutes, and building codes are often intended to protect the public from the risk of flooding.

Flash flooding is generally caused by violent weather, such as severe thunderstorms and hurricanes. It occurs most often during the warm season, when convective thunderstorms develop more frequently.
Rainfall intensity is so great that the carrying capacity of streams and channels is rapidly exceeded, usually within hours, resulting in sometimes life-threatening flooding. It is estimated that the average death toll in the United States exceeds 200 per year as a result of flash flooding. Many government weather services provide the public with flash flood watches and warnings to prevent loss of life. Many flash floods occur as the result of afternoon and evening thundershowers which produce rainfall intensities ranging from a few tenths of an inch per hour to several inches per hour. In some highly developed urban areas, the risk of flash flooding has increased over time as the native vegetation and soils have been replaced by buildings and pavement which produce much higher amounts of surface runoff. In addition, the increased usage of parks and recreational facilities which lie along stream and river channels has exposed the public to greater risk. See also Urban runoff [Mark W. Seeley]
RESOURCES
BOOKS
Battan, L. J. Weather in Your Life. San Francisco: W. H. Freeman, 1983.
Critchfield, H. J. General Climatology. 4th ed. Englewood Cliffs, NJ: Prentice-Hall, 1983.
Floodplain

An area that has been built up by stream deposition, generally represented by the main drainage channel of a watershed, is called a floodplain. This area, usually relatively flat with respect to the surrounding landscape, is subject to periodic flooding, with return periods ranging from one year to 100 years (see the worked example below). Floodplains vary widely in size, depending on the area of the drainage basin with which they are associated. The soils in floodplains are often dark and fertile, representing material lost to the erosive forces of heavy precipitation and runoff. These soils are often farmed, though subject to the risk of periodic crop losses due to flooding. In some areas, floodplains are protected by flood control measures such as reservoirs and levees and are used for farming or residential development. In other areas, land-use regulations, encroachment statutes, and local building codes often prevent development on floodplains.
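A return period is a statistical statement, not a schedule: a “100-year” flood has a 1-in-100 chance of being equaled or exceeded in any given year. A short worked calculation (standard hydrology arithmetic, added here only for illustration) shows why such floods are more common over a human timescale than the name suggests:

P(at least one exceedance in N years) = 1 − (1 − 1/T)^N

For a 100-year flood (T = 100) over a 30-year period (N = 30), P = 1 − 0.99^30, or approximately 0.26. In other words, a house on the edge of the 100-year floodplain faces roughly a one-in-four chance of being flooded at least once over 30 years.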
Flora

All forms of plant life that live in a particular geographic region at a particular time in history. A number of factors determine the flora in any particular area, including temperature, sunlight, soil, water, and evolutionary history. The
flora in any given area is a major factor in determining the type of fauna found in the area. Scientists have divided the earth’s surface into a number of regions inhabited by distinct flora. Among these regions are the African-Indian desert, western African rain forest, Pacific North American region, Arctic and Sub-arctic region, and the Amazon.
Florida panther

The Florida panther (Felis concolor coryi), a subspecies of the mountain lion, is a member of the cat family, Felidae, and is severely threatened with extinction. Listed as endangered, the Florida panther population currently numbers between 30 and 50 individuals. Its former range probably extended from western Louisiana and Arkansas eastward through Mississippi, Alabama, Georgia, and southwestern South Carolina to the southern tip of Florida. Today the Florida panther’s range consists of the Everglades-Big Cypress Swamp area. The preferred habitat for this large cat is subtropical forest composed of dense stands of trees, vines, and shrubs, typically in low, swampy areas.

Several factors have contributed to the decline of the Florida panther. Historically the most significant have been habitat loss and persecution by humans. Land use patterns have altered the environment throughout the former range of the Florida panther. With shifts to cattle ranching and agriculture, lands were drained and developed, and with the altered vegetation patterns came a change in the prey base for this top carnivore. The main prey item of the Florida panther is the white-tailed deer (Odocoileus virginianus). Formerly, the spring and summer rains kept the area wet, and then, as it dried out, fires would renew the grassy meadows at the forest edges, creating an ideal habitat for the deer. With development and increased deer hunting by humans, the panther’s prey base declined, and so did the number of panthers.

Prior to the 1950s, Florida had a bounty on Florida panthers because the animal was considered a “threat” to humans and livestock. During the 1950s, state law protected the dwindling population of panthers. In 1967 the Florida panther was listed by the U.S. Fish and Wildlife Service as an endangered species.

Land development is still moving southward in Florida. With the annual influx of new residents, fruit orchards being moved south due to recent freezes, and continued draining and clearing of land, panther habitat continues to be destroyed. The Florida panther is forced into areas that are not good habitat for white-tailed deer, and the panthers are catching armadillos and raccoons for food. The panthers then become underweight and anemic due to poor nutrition.
A Florida panther (Felis concolor coryi). (Photograph by Tom and Pat Leeson. Photo Researchers Inc. Reproduced by permission.)

Development contributes to the Florida panther’s decline in other ways, too. Its range is currently split in half by the east-west highway known as Alligator Alley. During peak seasons, over 30,000 vehicles traverse this stretch of highway daily, and, since 1972, 44 panthers have been killed by cars, the largest single cause of death for these cats in recent decades.

Biology is also working against the Florida panther. Because of the extremely small population size, inbreeding of panthers has yielded increased reproductive failures due to deformed or infertile sperm. The spread of feline distemper virus is also a concern to wildlife biologists. All these factors have led officials to develop a recovery plan that includes a captive breeding program using a small number of injured animals, as well as a mark-and-recapture program, using radio collars, to inoculate against disease and track young panthers, with hopes of saving this valuable part of the biota of south Florida’s Everglades ecosystem.

[Eugene C. Beckham]

RESOURCES
BOOKS
Belden, R. “The Florida Panther.” Audubon Wildlife Report 1988/1989. San Diego: Academic Press, 1988.
Fergus, Charles. Swamp Screamer: At Large with the Florida Panther. New York: North Point Press, 1996.
Miller, S. D., and D. D. Everett, eds. Cats of the World: Biology, Conservation, and Management. Washington, DC: National Wildlife Federation, 1986.
OTHER
Florida Panther Net. [cited May 2002].
Florida Panther Society. [cited May 2002].

Flotation
An operation in which submerged materials are floated, by means of air bubbles, to the surface of the water and removed. Bubbles are generated through a system called dissolved air flotation (DAF), which is capable of producing clouds of very fine, very small bubbles. A large number of small bubbles is generally most efficient for removing material from water. This process is commonly used in wastewater treatment and by industries, but not in drinking water treatment. For example, the mining industry uses flotation to concentrate fine ore particles, and flotation has been used to concentrate uranium from sea water. It is commonly used to thicken sludges and to remove grease and oil at wastewater treatment plants. The textile industry often uses flotation to treat process waters resulting from dyeing operations. Flotation might also be used to remove surfactants. Materials that are denser than water or that dissolve well in water are poor candidates for flotation. Flotation should not be confused with foam separation, a process in which surfactants are added to create a foam that effects the removal or concentration of some other material.
Flu pandemic

The influenza outbreak of 1918–1919 carried off between 20 and 40 million people worldwide. The Spanish flu outbreak differed significantly from other influenza (flu) epidemics: it was much more lethal, and it killed a high proportion of otherwise healthy adults. Most flu outbreaks kill only the very young, the elderly, and people with weakened immune systems. Scientists and public health officials have been trying to learn more about Spanish flu in the hope of preventing a similar outbreak.

The Spanish flu virus caused one of the worst pandemics of infectious disease ever recorded. And while the threats of many infectious diseases, including tuberculosis and smallpox, have been contained by antibiotics and vaccination programs, influenza remains a difficult disease. There are worldwide outbreaks of influenza every year, and the flu typically reaches pandemic proportions (lethally afflicting an
unusually high portion of the population) every 10–40 years. The last influenza pandemic was the Hong Kong flu of 1968–69, which caused 700,000 deaths worldwide, and killed 33,000 Americans. The influenza virus is highly mutable, so each year’s flu outbreak presents the human body with a slightly different virus. Because of this, people do not build an immunity to influenza. Vaccines are successful in protecting people against influenza, but vaccine manufacturers must prepare a new batch each year, based on their best supposition of which particular virus will spread. Most influenza viruses originate in China, and doctors, scientists, and public health officials closely monitor flu cases there in order to make the appropriate vaccine. The two main organizations tracking influenza are the Centers for Disease Control (CDC) and the World Health Organization (WHO). The CDC and other government agencies have been preparing for a flu pandemic on the level of Spanish flu since the early 1990s. Spanish flu did not originate in Spain, but presumably in Kansas, where the first case was recorded in March, 1918, at the army base Camp Funston. It quickly spread across the United States, and then to Europe with American soldiers who were fighting in the last months of World War I. Infected ships brought the outbreak to India, New Zealand, and Alaska. Spanish flu killed quickly. People often died within 48 hours of first feeling symptoms. The disease afflicted the lungs, and caused the tiny air sacs, called alveoli, to fill with fluid. Victims were soon starved of oxygen, and sometimes effectively drowned on the fluid clogging their lungs. Children and old people recovered from the Spanish flu at a much higher rate than young adults. In the United States, the death rate from Spanish flu was several times higher for men aged 25–29 than for men in their seventies. Social conditions at the time probably contributed to the remarkable power of the disease. The flu struck just at the end of World War I, when thousands of soldiers were moving from America to Europe and across that continent. In a peaceful time, sick people may have gone home to bed, and thus passed the disease only to their immediate family. But in 1918, men with the virus were packed in already crowded hospitals and troop ships. The unrest and devastation left by the war probably hastened the spread of Spanish flu. So it is possible that if a similarly virulent virus were to arise again soon, it would not be quite as destructive. Researchers are concerned about a return of Spanish flu because little is known about what made it so virulent. The flu virus was not isolated until 1933, and since then, there have been several efforts to collect and study the 1918 virus by exhuming graves in Alaska and Norway, where bodies were preserved in permanently frozen ground. In 1997, a Canadian researcher, Kirsty Duncan, was able to extract tissue samples from the corpses of seven miners who 566
had died of Spanish flu in October 1918 and were buried in frozen ground on a tiny island off Norway. Duncan’s work allowed scientists at several laboratories around the world to do genetic work on the Spanish flu virus. But by 2002, there was still no conclusive agreement on what was so different about the 1918 virus. The influenza virus is believed to originate in migratory water fowl, particularly ducks. Ducks carry influenza viruses without becoming ill. They excrete the virus in their feces. When their feces collect in water, other animals can become infected. Domestic turkeys and chickens can easily become infected with influenza virus borne by wild ducks. But most avian (bird-borne) influenza does not pass to humans, or if it does, is not particularly virulent. But other mammals too can pick up influenza from either wild birds or domestic fowl. Whales, seals, ferrets, horses, and pigs are all susceptible to bird-borne viruses. When the virus moves between species, it may mutate. Human influenza viruses most likely pass from ducks to pigs to humans. The 1918 virus may have been a particularly unusual combination of avian and swine virus, to which humans were unusually vulnerable. Enacting controls on pig and poultry farms may be an important way to prevent the rise of a new influenza pandemic. Some influenza researchers recommend that pigs and domestic ducks and chickens not be raised together. Separating pigs and fowl at live markets may also be a sensible precaution. With the concentration of poultry and pigs at huge “factory” farms, it is important for farmers, veterinarians, and public health officials to monitor for influenza. A flu outbreak among chickens in Hong Kong in 1997 eventually killed six people, but the epidemic was stopped by the quick slaughter of millions of chickens in the area. Any action to control flu of course must be an international effort, since the virus moves rapidly without respect to national borders. [Angela Woodward]
RESOURCES
PERIODICALS
Gladwell, Malcolm. “The Dead Zone.” New Yorker (September 29, 1997): 52–65.
Henderson, C. W. “Spanish Flu Victims Hold Clues to Fight Virus.” Vaccine Weekly (November 29, 1999/December 6, 1999): 10.
Koehler, Christopher S. W. “Zeroing in on Zoonoses.” Modern Drug Discovery 8, no. 4 (August 2001): 44–50.
Lauteret, Ronald L. “A Short History of a Tragedy.” Alaska (November 1999): 21–23.
Pickrell, John. “Killer Flu with a Human-Pig Pedigree?” Science 292 (May 11, 2001): 1041.
Shalala, Donna E. “Collaboration in the Fight Against Infectious Diseases.” Emerging Infectious Diseases 4, no. 3 (July/September 1998): 354.
Webster, Robert G. “Influenza: An Emerging Disease.” Emerging Infectious Diseases 4, no. 3 (July–September 1998).
Westrup, Hugh. “Debugging a Killer Virus.” Current Science 84, no. 9 (January 8, 1999): 4.
Flue gas

The exhaust gas vented from combustion, a chemical reaction, or other physical process, which passes through a duct into the atmosphere. Exhaust air is usually captured by an enclosure and brought into the exhaust duct through induced or forced ventilation. Induced ventilation is created by lowering the pressure in the duct using fans at the end of the duct. Forced ventilation occurs when exhaust air is forced into the duct using high-pressure inlet air. Flues are valuable because they not only direct polluted air to a pollution control device, but also keep the air pollutant concentrations high. High concentrations can be important if the air pollutant removal process is concentration dependent. See also Air pollution control
Flue-gas scrubbing

Flue-gas scrubbing is a process for removing oxides of sulfur and nitrogen from the waste gases emitted by various industrial processes. Since the oxides of sulfur and nitrogen have been implicated in a number of health and environmental problems, controlling them is an important issue. The basic principle of scrubbing is that flue gases are forced through a system of baffles within a smokestack. The baffles contain some chemical or chemicals that remove pollutants from these gases.

A number of scrubbing processes are available, all of which depend on the reaction between the oxide and some other chemical to produce a harmless compound that can then be removed from the smokestack. For example, the most common scrubbing process currently involves the reaction between sulfur dioxide and lime. In the first step of this process, limestone is heated to produce lime. The lime then reacts with sulfur dioxide in the flue gases to form calcium sulfite, which can be removed by electrostatic precipitation.

Many other scrubbing reactions have been investigated. For example, magnesium oxide can be used in place of calcium oxide in the scrubber. The advantage of this reaction is that the magnesium sulfite formed decomposes readily when heated. The magnesium oxide that is regenerated can then be reused in the scrubber, while the sulfur dioxide can be used to make sulfuric acid. In yet another process, a mixture of sodium citrate and citric acid is used in the scrubber. When sulfur dioxide is absorbed by the mixture, a reaction occurs in which elemental sulfur is precipitated out.
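The chemistry of the two lime-based steps just described can be summarized in two conventional equations (standard stoichiometry, added here as a sketch; the entry itself gives no formulas):

CaCO3 + heat → CaO + CO2 (limestone calcined to lime)
CaO + SO2 → CaSO3 (lime captures sulfur dioxide as calcium sulfite)

The regenerative magnesium oxide variant runs the analogous capture step in reverse on heating, which is what allows the sorbent to be recycled:

MgO + SO2 → MgSO3, then MgSO3 + heat → MgO + SO2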
Although the limestone/lime process is by far the most popular scrubbing reaction, it has one serious disadvantage: the end product, calcium sulfite, is a solid that must be disposed of in some way. Solid waste disposal is already a serious problem in many areas, so adding to that problem is not desirable. For that reason, reactions such as those involving magnesium oxide or sodium citrate and citric acid have been carefully studied. The products of these reactions, sulfuric acid and elemental sulfur, are valuable raw materials that can be sold and used. In spite of that fact, the limestone/lime scrubbing process, or some variation of it, remains the most popular method of extracting sulfur dioxide from flue gases today.

Scrubbing to remove nitrogen oxides is much less effective. In principle, reactions like those used with sulfur dioxide are possible. For example, experiments have been conducted in which ammonia or ozone is used in the scrubber to react with and remove oxides of nitrogen. But such methods have had relatively little success and are rarely used by industry.

Flue-gas scrubbing has long met with resistance from utilities and industries. For one thing, they are not convinced that oxides of sulfur and nitrogen are as dangerous as environmentalists sometimes claim. In addition, they argue that the cost of installing scrubbers is often too great to justify their use. See also Air pollution control; Dry alkali injection; Stack emissions

[David E. Newton]
RESOURCES

BOOKS
American Chemical Society. Cleaning Our Environment: A Chemical Perspective. 2nd ed. Washington, DC: American Chemical Society, 1978.
PERIODICALS
Bretz, E. A. "Efficient Scrubbing Begins With Proper Lime Prep, Handling." Electrical World 205 (March 1991): 21–22.
"New Choices in FGD Systems Offer More Than Technology." Electrical World 204 (November 1990): 46–47.
"Scrubbers, Low-Sulfur Coal, or Plant Retirements?" Electrical World 204 (June 1990): 18.
Fluidized bed combustion
Coal is the most abundant of all fossil fuels; indeed, the world's coal resources appear to be sufficient to meet our energy needs for many hundreds of years. One aim of energy technology, therefore, is to find more efficient ways to make use of these coal reserves. One approach that has been under investigation for at least two decades is known as fluidized bed combustion.
Fluidized bed combustion. Fuel is lifted by a stream of air from underneath the bed. Fuel efficiency is good and sulfur dioxide and nitrogen oxide emissions are lower than with conventional boilers. (McGraw-Hill Inc. Reproduced by permission.)
In a fluidized bed boiler, granulated coal and limestone are fed simultaneously onto a moving grate. A stream of air from below the grate lifts coal particles so that they are actually suspended in air when they begin to burn. The flow of air, the small size of the coal particles, and the exposure of the particles on all sides contribute to an increased rate of combustion. Heat produced by burning the coal is then used to boil water, run a turbine, and drive a generator, as in a conventional power plant.

The fluidized bed process has much to recommend it from an environmental standpoint. Sulfur and nitrogen oxides react with limestone added to the boiler along with the coal. The products of this reaction, primarily calcium sulfite and calcium sulfate, can be removed from the bottom of the boiler. However, disposing of large quantities of this waste product represents one of the drawbacks of the fluidized bed system. Fly ash is also reduced in the fluidized bed process. As coal burns in the boiler, particles of fly ash tend to adhere to each other, forming larger particles that eventually settle out at the bottom of the boiler. Conventional methods of removal in the stack, such as electrostatic precipitation, can further increase the efficiency with which particles are removed.

Incomplete combustion of coal is common in the fluidized bed process. However, carbon monoxide and hydrogen sulfide formed in this way are further oxidized in the space above the moving grate. The products of this further oxidation are then removed by the limestone (which reacts with sulfur dioxide) or allowed to escape harmlessly into the air (in the case of the carbon dioxide). After being used as a scavenger in the process, the limestone, calcium sulfite, and calcium sulfate can be treated to release sulfur dioxide and regenerate the original limestone. The limestone can then be re-used and the sulfur dioxide employed to make sulfuric acid.

A further advantage of the fluidized bed process is that it operates at a lower temperature than does a conventional power plant. Thus, the temperature of cooling water ejected from the plant is lower, and the amount of thermal pollution of nearby waterways is correspondingly lessened.

Writers in the 1970s expressed high hopes for the future of fluidized bed combustion systems, but the cost of such systems is still at least double that of a conventional plant. They are also only marginally more efficient than a conventional plant. Still, their environmental assets are obvious: they should reduce the amount of sulfur dioxide emitted by up to 90% and the amount of nitrogen oxides by more than 60%. See also Air pollution control; Stack emissions

[David E. Newton]
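As a rough illustration of the sorbent demand this process implies, the sketch below estimates the limestone feed needed per metric ton of coal from the coal's sulfur content. The 3% sulfur figure and the 2:1 Ca:S feed ratio are illustrative assumptions, not values from the entry or design data:

```python
# Illustrative limestone requirement for in-bed sulfur capture.
# Assumes S + O2 -> SO2 and CaCO3 + SO2 + 1/2 O2 -> CaSO4 + CO2,
# i.e., one mole of CaCO3 per mole of sulfur at a Ca:S ratio of 1.
M_S = 32.06       # molar mass of sulfur, g/mol
M_CACO3 = 100.09  # molar mass of calcium carbonate, g/mol

def limestone_per_ton_coal(sulfur_fraction, ca_to_s_ratio=2.0):
    """Return kg of limestone per metric ton of coal.

    sulfur_fraction: sulfur content of the coal by mass (e.g., 0.03 for 3%).
    ca_to_s_ratio: molar excess of calcium fed; real beds run above 1
    because capture is incomplete (an assumed value here).
    """
    sulfur_kg = 1000.0 * sulfur_fraction
    moles_s = sulfur_kg * 1000.0 / M_S                # grams to moles
    grams_caco3 = moles_s * ca_to_s_ratio * M_CACO3
    return grams_caco3 / 1000.0                       # back to kg

# A 3% sulfur coal at a 2:1 Ca:S feed ratio needs roughly 187 kg
# of limestone per ton of coal.
print(round(limestone_per_ton_coal(0.03), 1))
```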
RESOURCES

BOOKS
Electric Power Research Institute. Atmospheric Fluidized Bed Combustion Development. Palo Alto, CA: EPRI, 1982.
Government Institutes, Inc. Staff, eds. Evaluating the Fluidized Bed Combustion Option. Rockville, MD: Government Institutes, 1988.
Marshall, A. R. Reduced NOx Emissions and Other Phenomena in Fluidized Bed Combustion. Lanham, MD: UNIPUB, 1992.
PERIODICALS
Balzhiser, R. E., and K. E. Yeager. "Fluidized Bed Combustion." Scientific American 257 (September 1987): 100–107.
Fluoridation
Fluoridation is the precise adjustment of the concentration of the essential trace element fluoride in the public water supply to protect teeth and bones. Advocates of fluoridation such as the American Dental Association (ADA) and the National Center for Chronic Disease Prevention and Health Promotion (CDC) state that fluoridation is a safe and effective method of preventing tooth decay. Opponents of fluoridation, however, such as Citizens for Health and the Fluoride Action Network, maintain that the role of fluoridation in the decline of tooth decay is in serious doubt, and that more research is required before placing in water reservoirs a compound that could cause cancer, brittle bones, and neurological problems.

Fluoride is any compound that contains fluorine, a corrosive, greenish-yellow element. Tooth enamel contains small amounts of fluoride. In addition, fluoride is found in varying amounts in water and in all food and beverages, according to the ADA.

A Colorado dentist discovered the effects of fluoride on teeth in the early 1900s. When Frederick McKay began practicing in Colorado Springs, he established a connection between a substance in the water and the condition of residents' teeth: people did not have cavities, but their teeth were stained brown. During the 1930s the substance was identified as fluoride, and dental research on it began. Researchers concluded that fluoride in drinking water at a concentration of 1 part per million (ppm) prevented tooth decay without staining teeth. In 2000, the ADA stated that a fluoride concentration ranging from 0.7 ppm to 1.2 ppm was sufficient to fight tooth decay.

The first community to try fluoridation was Grand Rapids, Michigan, which fluoridated its water supply in 1945. Ten years later, Grand Rapids reported that incidents of tooth decay had declined by 60% in the children raised on fluoridated water. During the 1950s, Chicago, Philadelphia, and San Francisco also started to fluoridate their water supplies. Cities including New York and Detroit opted for fluoridation during the 1960s.

However, not all Americans advocated fluoridation. During the 1950s and 1960s, members of the John Birch Society maintained that fluoridation was a form of mass medication by the government; some members charged that fluoridation was part of a Communist plot to take over the country. In the decades that followed, fluoridation was no longer associated with conspiracy theories, but opinion about it remained divided at the close of the twentieth century.

By 2000, public water systems served 246.1 million Americans, according to the federal CDC, and 65.8% of those Americans used fluoridated water. In Washington, D.C., 100% of the water is fluoridated, according to a CDC report on the percentage of state populations with fluoridated public water systems in 2000. The top 10 on the list were: Minnesota (98.2%), Kentucky (96.1%), North Dakota (95.4%), Indiana (95.3%), Tennessee (94.5%), Illinois (93.4%), Virginia (93.4%), Georgia (92.9%), Iowa (91.3%), and South Carolina (91.2%). At the other end of the spectrum in terms of fluoridated public water usage were: Louisiana (53.2%), Mississippi (46%), Idaho (45.4%), New Hampshire (43%), Wyoming (30.3%), California (28.7%),
Oregon (22.7%), Montana (22.2%), New Jersey (15.5%), Hawaii (9%), and Utah (2%).

The CDC estimated the cost of fluoridation at 50 cents per person per year in communities of more than 20,000 residents. The annual cost was estimated at $1 per person in communities of 10,000 to 20,000 residents, and at $3 per person in communities of fewer than 5,000 people.

The CDC reported in 2000 that extensive research during the previous 50 years proved that fluoridation was safe. Fluoridation was also endorsed by groups including the American Medical Association, the American Academy of Pediatrics, the National PTA, and the American Cancer Society. Advocates of fluoridation state that it especially benefits people who may not be able to afford dental care.

Opponents counter that toothpaste with fluoride is available for people who believe that fluoride fights tooth decay. Furthermore, opponents point out that toothpaste with fluoride carries a warning label advising users to "seek professional assistance or contact a poison control center" if they accidentally swallow more than the amount used for brushing teeth. Lastly, they question the research methodology used to conclude that fluoridation is responsible for decreased tooth decay. Fluoridation critics include consumer advocates Ralph Nader and Jim Turner. Turner chairs the board of Citizens for Health, a grassroots organization that is asking Congress to hold hearings and review fluoridation policy. Citizens for Health is among the groups that believe more research is required to determine the risks and benefits of fluoridation.

[Liz Swain]
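To put the recommended concentrations in perspective, 1 ppm in water is equivalent to 1 mg of fluoride per liter, so intake from drinking water scales directly with consumption. The sketch below is a simple unit-conversion illustration, not a dosing guideline; the two-liter daily intake is an assumed figure:

```python
# Fluoride intake from drinking water: 1 ppm == 1 mg per liter.
def daily_fluoride_mg(concentration_ppm, liters_per_day=2.0):
    """Estimated fluoride ingested per day from water alone, in mg."""
    return concentration_ppm * liters_per_day

# At the ADA's recommended range of 0.7-1.2 ppm, a person drinking
# two liters a day ingests roughly 1.4-2.4 mg of fluoride.
print(daily_fluoride_mg(0.7), daily_fluoride_mg(1.2))
```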
RESOURCES

BOOKS
American Water Works Association. Water Fluoridation Principles and Practices. Denver: AWWA, 1996.
Health Research Staff. Facts You Should Know About Fluoridation. Pomeroy, WA: Health Research Books, 1996.
Martin, Brian. Scientific Knowledge in Controversy: The Social Dynamics of the Fluoridation Debate (Science, Technology and Society). Albany, NY: SUNY Press, 1991.
ORGANIZATIONS
American Dental Association, 211 E. Chicago Avenue, Chicago, IL USA 60611, (312) 440-2500, Fax: (312) 440-2800, Email: [email protected]
Centers for Disease Control and Prevention, 1600 Clifton Road, Atlanta, GA USA 30333, (404) 639-3534, Toll Free: (800) 311-3435
Citizens for Health, 5 Thomas Circle, NW, Suite 500, Washington, D.C. USA 20005, (202) 483-1652, Fax: (202) 483-7369, Email: [email protected]
National Center for Fluoridation Policy and Research (NCFPR) at the University at Buffalo, 315 Squire Hall, Buffalo, NY USA 14214, (716) 829-2056, Fax: (716) 833-3517, Email: [email protected]
Fly ash
The fine ash from combustion processes that becomes dispersed in the air. To the casual observer it might appear as smoke, and indeed it is often found mixed with smoke. Fly ash arises because fuels contain a small fraction of incombustible matter. In a fuel like coal, the ash has a rock-like siliceous composition, but the high temperatures at which it forms often mean that metals such as iron are incorporated into the ash particles, which take on the appearance of small, colored, glassy spheres. Petroleum produces less ash, but its ash is often associated with a range of metal oxides, such as those of vanadium (in the case of fuel oils), and, more noticeably, with hollow spheres of carbon. In traditional furnaces, much ash remained on the grate, but modern furnaces produce such fine ash that it is carried away in the hot exhaust gas.

Early power stations dispersed so much fine fly ash throughout the nearby environment that they were soon forced to adopt early pollution abatement techniques. They adopted "cyclones," in which centrifugal force removes the particles by causing the waste gas stream to flow on a curved or vortical course. This technique is effective down to particle sizes of about 10 µm; smaller particles are not removed well by cyclone collectors, and here the electrostatic precipitation process often proves more successful, coping with a size range of 30–0.1 µm. In electrostatic precipitators the particles are given a negative charge and then attracted to a positive electrode, where they are collected and removed. Cloth or paper filters, and water sprayed through the exhaust gases, can also be useful in removing fly ash.

Fly ash is a nuisance at high concentrations because it accumulates as grit on the surfaces of buildings, clothes, cars, and outdoor furnishings. It is a highly visible and very annoying aspect of industrial air pollution. The deposition of fly ash increases cleaning costs incurred by people who live near poorly controlled combustion sources. Fly ash also has health impacts, because the finer particles can penetrate into the human lung. If the deposits are especially heavy, fly ash can also inhibit plant growth. Each year millions of tons of fly ash are produced from coal-powered furnaces, most of which is dumped in waste tips. Care needs to be taken that toxic metals and alkalis are not leached from these disposal sites into watercourses. Because it contains a large amount of calcium oxide, fly ash may be used as a low-grade cement in road building, but the demand is generally rather low.
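The particle-size ranges quoted above suggest a simple rule of thumb for matching a control device to an ash stream. The sketch below encodes just those ranges; the cut-off values come from the text and are approximate, and the fallback suggestion for sub-0.1 µm particles is an illustrative assumption:

```python
# Rule-of-thumb choice of fly-ash control device by particle diameter,
# using the approximate cut-offs given in the entry: cyclones work down
# to about 10 micrometers; electrostatic precipitators cover 30-0.1.
def suggest_device(diameter_um: float) -> str:
    if diameter_um >= 10.0:
        return "cyclone (centrifugal collector)"
    if diameter_um >= 0.1:
        return "electrostatic precipitator"
    return "fabric filter or wet scrubbing (below the ESP range quoted)"

for d in (50.0, 5.0, 0.05):
    print(d, "->", suggest_device(d))
```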
[Peter Brimblecombe Ph.D.]

RESOURCES

BOOKS
Sellers, B. H. Pollution of Our Atmosphere. Bristol: Adam Hilger, 1984.

Flyway
The route taken by migratory birds and waterfowl when they travel between their breeding grounds and their winter homes. Flyways often follow geographic features such as mountain ranges, rivers, or other bodies of water. Protecting flyways is one of the many responsibilities of wildlife managers. Draining of wetlands, residential and commercial development, and overhunting are some of the factors that threaten flyway sites visited by birds for food and rest during migration. In most cases, international agreements are needed to guarantee protection along the entire length of a flyway. In the United States, flyway protection is financed to a large extent by funds produced through the Migratory Bird Hunting Stamp Act passed by the U.S. Congress in 1934.
Food additives
Food additives are substances added to food as flavorants, nutrients, preservatives, emulsifiers, or colorants. In addition, foods may contain residues of chemicals used during the production of plant or animal crops, including pesticides, antibiotics, and growth hormones. The use of most food additives is clearly beneficial because it results in improved public health and prevention of spoilage, which enhances the food supply. Nevertheless, there is controversy about the use of many common additives and over the presence of contaminants in food. This is partly because some people are hypersensitive and suffer allergic reactions if they are exposed to certain of these chemicals. In addition, some people believe that low levels of chronic toxicity and disease may be caused in the larger population by exposure to some of these substances. Although there is no compelling scientific evidence that this is indeed the case, the possibility of chronic damage caused by food additives and chemical residues is an important social and scientific issue.

The use of food additives in the United States is closely regulated by the government agencies responsible for health, consumer safety, and agriculture. This is also the case in other developed countries, in Europe, Canada, and elsewhere. Chemicals cannot be used as additives in those countries unless regulators are convinced that they have been demonstrated to be toxicologically safe, with a wide margin of safety. In addition, chemicals added to commercially prepared foods must be listed on the packaging so that consumers can know what is present in the foodstuffs that they choose to eat. Because of the intrinsic nature of low-level toxicological risks, especially those associated with diseases that may take a long time to develop, scientists are never able to demonstrate that trace exposures to any chemical are absolutely safe; there is always a level of risk, however small.
UNHEALTHY FOOD ADDITIVES

Aspartame: An artificial sweetener associated with rashes, headaches, dizziness, depression, etc. Found in diet sodas, sugar substitutes, etc.
Brominated vegetable oil (BVO): Used as an emulsifier and clouding agent. Its main ingredient, bromate, is a poison. Found in sodas, etc.
Butylated hydroxyanisole (BHA)/butylated hydroxytoluene (BHT): Prevents rancidity in foods and is added to food packaging. It slows the transfer of nerve impulses and affects sleep, aggressiveness, and weight in test animals. Found in cereal and cheese packaging.
Citrus red dye #2: Used to color oranges; a probable carcinogen. The FDA has recommended it be banned. Found on oranges.
Monosodium glutamate (MSG): A flavor enhancer that can cause headaches, heart palpitations, and nausea. Found in fast food and processed and packaged food.
Nitrites: Used as preservatives, nitrites form cancer-causing compounds in the gastrointestinal tract and have been associated with cancer and birth defects. Found in cured meats and wine.
Saccharin: An artificial sweetener that may be carcinogenic. Found in diet sodas and sugar substitutes.
Sulfites: Used as food preservatives, sulfites have been linked to at least four deaths reported to the FDA in the United States. Found in dried fruits, shrimp, and frozen potatoes.
Tertiary butylhydroquinone (TBHQ): Extremely toxic in low doses and linked to childhood behavioral problems. Found in candy bars, baking sprays, and fast foods.
Yellow dye #6: Increases the number of kidney and adrenal gland tumors in lab rats. It has been banned in Norway and Sweden. Found in candy and sodas.
Because some people object to these potential, low-level, often involuntary risks, a certain degree of controversy will always be associated with the use of food additives. This is also true of the closely related topic of residues of pesticides, antibiotics, and growth hormones in foods.

Flavorants
Certain chemicals are added to foods to enhance their flavor. This is particularly true of commercially processed or prepared foods, such as canned vegetables and frozen foods and meals. One of the most commonly added flavorants is table salt (sodium chloride), a critical nutrient for humans and other animals. In large amounts, however, sodium chloride can predispose people to developing high blood pressure, a factor that is important in strokes and other circulatory and heart diseases.

Table sugar (sucrose), manufactured from sugar cane or sugar beets, and fructose, or fruit sugar, are commonly used to sweeten prepared foods. Such foods include sugar candies, chocolate products, artificial drinks, sweetened fruit juices, peanut butter, jams, ketchup, and most commercial breads. Sugars are easily assimilated from foods and are a useful form of metabolic energy. In large amounts, however, sugars can lead to weight gain and tooth decay, and to hypoglycemia and diabetes in genetically predisposed people. Artificial sweeteners such as saccharin, aspartame, and sodium cyclamate avoid the nutritional problems associated with eating too much sugar. These nonsugar sweeteners may have their own problems, however, and some people consider them to be a low-level health hazard.

Monosodium glutamate (MSG) is commonly used as a flavor enhancer, particularly in processed meats, prepared soups, and oriental foods. Some people are relatively sensitive to this chemical, developing headaches and other symptoms that are sometimes referred to as "Chinese food syndrome." Other flavorants used in processed foods include many kinds of spices, herbs, vanilla, mustard, nuts, peanuts, and wine. Some people are extremely allergic to even minute exposures to peanuts or nuts in food and can rapidly develop a condition known as anaphylactic shock, which is life-threatening unless quickly treated with medicine. This is one of the reasons why any foods containing peanuts or nuts as a flavoring ingredient must be clearly labeled as such.

Many flavorants are natural in origin. Increasingly, however, synthetic flavorants are being discovered and used. For example, vanilla used to be extracted from a particular species of tropical orchid and was therefore a rather expensive flavorant. However, a synthetic vanilla flavorant can now be manufactured from wood-pulp lignins, and this has made this pleasant flavor much more readily available than it used to be.
Nutrients
Many foods are fortified with minerals, vitamins, and other micronutrients. One such example is table salt, which has iodine added (as potassium iodide) to help prevent goiter in the general population. Goiter used to be relatively common but is now rare, in part because of the widespread use of iodized salt. Other foods that are commonly fortified with minerals and vitamins include milk and margarine (with vitamins A and D), flour (with thiamine, riboflavin, niacin, and iron), and some commercial breads and breakfast cereals (with various vitamins and minerals, particularly in some commercial cereal preparations). Micronutrient additives in these and other commercial foods are carefully formulated to help contribute to a balanced diet in their consumers. Nevertheless, some people believe that it is somehow unnatural and unhealthy to consume foods that have been adulterated in this manner, and they prefer to eat "natural" foods that do not have any vitamins or minerals added to them.

Preservatives
Preservatives are substances added to foods to prevent spoilage caused by bacteria, fungi, yeasts, insects, or other biological agents. Spoilage can lead to a decrease in the nutritional quality of foods, to the growth of food-poisoning microorganisms such as the botulism bacterium, or to the production of deadly chemicals, such as aflatoxin, that can be produced in stored grains and seeds (such as peanuts) by species of fungi. Salt has long been used to preserve meat and fish, either added directly to the surface or by immersing the food in a briny solution. Nitrates and nitrites (such as sodium nitrate) are also used to preserve meats, especially cured foods such as sausages, salamis, and hams. These chemicals are especially useful in inhibiting the growth of Clostridium botulinum, the bacterium that causes deadly botulism. Vinegar and wood smoke are used for similar purposes. Sulfur dioxide, sodium sulfite, and benzoic acid are often used as preservatives in fruit products, in such beverages as wine and beer, and in ketchup, pickles, and spice preparations.

Antioxidants are chemicals added to certain foods to prevent a deterioration in their quality or flavor that occurs when fats and oils are exposed to atmospheric oxygen. Examples of commonly used antioxidants are ascorbic acid (vitamin C), butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), gallates, and ethoxyquin.

Stabilizers and emulsifiers
Stabilizers and emulsifiers are added to prepared foods to maintain suspensions of fats or oils in water matrices (or vice versa), or to prevent the caking of ingredients during storage or preparation. One example of an emulsifying additive is glyceryl monostearate, often added to stored starch
products to maintain their texture. Alginates are compounds added to commercial ice cream, salad dressing, and other foods to stabilize emulsions of oil or fat in water during storage.

Colorants
Some prepared foods have colors added to "improve" their aesthetic qualities and thereby make them more attractive to consumers. This practice is especially common in the preparation of confectionaries such as candies, chocolate bars, ice creams, and similar products, and in fancy cakes and pastries. Similarly, products such as ketchup and strawberry preserves have red dyes added to enhance their color, relishes and tinned peas have green colors added, and dark breads may contain brown colorants. Most margarines have yellow colors added to make them appear more similar to butter. Artificial drinks and drink mixes contain food colorants appropriate to their flavor: cherry and raspberry drinks contain red dyes, and so forth. Various chemicals are used as food colorants, some of them extracted from plants (for example, yellow and orange carotenes), while many others are synthetic chemicals derived from coal tars and other organic substances. The acute toxicity (i.e., short-term poisoning) and chronic toxicity (i.e., longer-term damage associated with diseases, cancers, and developmental abnormalities) of these colorants are stringently tested on animals in the laboratory, and the substances must be demonstrated to be safe before they are allowed to be used as food additives. Still, some people object to having these chemicals in their food, and choose to consume products that are not adulterated with colorants.

Residues of pesticides, antibiotics, and growth hormones
Insecticides, fungicides, herbicides, and other pesticides are routinely used in modern, industrial agriculture. Some of these chemicals are persistent: they do not quickly break down in the environment to simpler substances, and/or they do not readily wash off produce. The chemicals in such cases are called residues, and it is not unusual for them to be present on or in foodstuffs in low concentrations. The permissible residue levels allowed in foodstuffs intended for human consumption are closely regulated by government. However, not all foods can be properly inspected, so it is common for people to be routinely exposed to small concentrations of these chemicals in their diet.

In addition, most animals cultivated in intensive agricultural systems, such as feedlots and factory farms, are routinely treated with antibiotics in their feed. This is done to prevent outbreaks of communicable diseases under densely crowded conditions. Antibiotic use is especially common during the raising of chickens, turkeys, pigs, and cows. Small residues of these chemicals remain in the meat, eggs, milk, or other products of these animals, and are ingested
by human consumers. Also, growth hormones are given to beef and dairy cows to increase their productivity, and small residues of these chemicals also occur in products eaten by consumers.

Strictly speaking, residues of pesticides, antibiotics, and growth hormones are not additives, because they are not added directly to foodstuffs. Nevertheless, these chemicals are present in foods eaten by people, and many consumers find this objectionable. So-called organic foods are cultivated without the use of synthetic pesticides, antibiotics, or growth hormones, and many people prefer to eat these foods instead of the much more abundantly available foodstuffs that are typically sold in commercial outlets. (Note that the term "organic foods" is somewhat of a misnomer, because all foods are organic in nature. The word "organic" in this sense is used to distinguish foods produced without synthetic chemicals from those that may contain additives and/or residues.)

Irradiation of food
Irradiation is a relatively new technology that can be used to prevent spoilage of foods by sterilizing most or all of the microorganisms and insects that they may contain. This process typically utilizes gamma radiation, and it is not known to cause any chemical or physical changes in foodstuffs other than the intended benefit of killing organisms that can cause spoilage. Although this process displaces some of the uses of preservative chemicals as food additives, irradiation itself is somewhat controversial. Even though there is no scientific evidence that food irradiation poses tangible risks to consumers, some people object to the use of this technology and prefer not to consume foodstuffs processed in this manner.

[Bill Freedman Ph.D.]
RESOURCES

BOOKS
British Nutrition Foundation. Why Additives? The Safety of Foods. London: Forbes, 1977.
Freed, D. L. J. Health Hazards of Milk. London: Bailliere Tindall, 1984.
Marcus, A. I. Cancer from Beef: DES, Federal Food Regulation, and Consumer Confidence. Baltimore: Johns Hopkins University Press, 1994.
Miller, M. Danger! Additives at Work: A Report on Food Additives, Their Use and Control. London: London Food Commission, 1985.
Safety and Nutritional Adequacy of Irradiated Food. Geneva, Switzerland: World Health Organization, 1994.
PERIODICALS
Etherton, T. D. "The Impact of Biotechnology on Animal Agriculture and the Consumer." Nutrition Today 29, no. 4 (1994): 12–18.
Food and Drug Administration
Founded in 1927, the Food and Drug Administration (FDA) is an agency of the United States Public Health
Service. One of the nation’s oldest consumer protection
agencies, the FDA is charged with enforcing the Federal Food, Drug, and Cosmetics Act and other related public health laws. The agency assesses risks to the public posed by foods, drugs, and cosmetics, as well as medical devices, blood, and medications, such as insulin, that are made from living organisms. It also tests food samples for contaminants, sets labeling standards, and monitors the public health effects of drugs given to animals raised for food.

To carry out its mandate of consumer protection, the FDA employs over 9,000 investigators, inspectors, and scientists who collect domestic and imported product samples for examination by FDA scientists. The FDA has the power to remove from the market those foods, drugs, chemicals, or medical devices it finds unsafe. The FDA often seeks voluntary recall of a product by its manufacturer, but the agency can also stop sales and destroy products through court action. About 3,000 products a year are found to be unfit for consumers and are withdrawn from the marketplace based on FDA action. Also, about 30,000 import shipments each year are detained at the port of entry on FDA orders.

FDA scientists analyze samples of products to detect contamination, or review test results submitted by companies seeking agency approval for drugs, vaccines, food additives, dyes, and medical devices. The FDA also operates the National Center for Toxicological Research at Jefferson, Arkansas, which conducts research to investigate the biological effects of widely used chemicals. The agency's Engineering and Analytical Center at Winchester, Massachusetts, tests medical devices, radiation-emitting products, and radioactive drugs. The Bureau of Radiological Health was formed in 1971 to protect against unnecessary human exposure to radiation from electronic products such as microwave ovens. The FDA is one of several federal organizations that oversee the safety of biotechnology, such as the industrial use of microorganisms to process waste and water products.

In 1996, when the FDA declared that cigarettes and smokeless tobacco are nicotine-delivery devices, it took responsibility for regulating those products under the authority of the Federal Food, Drug, and Cosmetics Act. With regard to these products, the FDA has issued federal mandates concerning sales to minors, sales from vending machines, and advertising campaigns.

[Linda Rehkopf]
RESOURCES

PERIODICALS
Gibbons, A. "Can David Kessler Revive the FDA?" Science 252 (April 12, 1991): 200–3.
Iglehart, J. K. “The Food and Drug Administration and Its Problems.” New England Journal of Medicine 325 (July 18, 1991): 217–20.
ORGANIZATIONS
U.S. Food and Drug Administration, 5600 Fishers Lane, Rockville, MD USA 20857-0001, Toll Free: (888) INFO-FDA
Food chain/web
Food chains and food webs are methods of describing an ecosystem in terms of how energy flows from one species to another. First proposed by the English zoologist Charles Elton in 1927, food chains and food webs describe the successive transfer of energy from plants to the animals that eat them, to the animals that eat those animals, and so on. A food chain is a model of this process which assumes that the transfer of energy within the community is relatively simple. A food chain in a grassland ecosystem, for example, might be: insects eat grass, mice eat insects, and foxes eat mice. But such an outline is not exactly accurate, and many more species of plants and animals are actually involved in the transfer of energy. Rodents often feed on both plants and insects, and some animals, such as predatory birds, feed on several kinds of rodents. This more complex description of the way energy flows through an ecosystem is called a food web. Food webs can be thought of as interconnected or intersecting food chains.

The components of food chains and food webs are producers, consumers, and decomposers. Plants and chemosynthetic bacteria are producers. They are also called primary producers or autotrophs ("self-nourishing") because they produce organic compounds from inorganic chemicals and outside sources of energy. The groups that eat these plants are called primary consumers or herbivores. They have adaptations that allow them to live on a purely vegetative diet which is high in cellulose: they usually have teeth modified for chewing and grinding; ruminants such as deer and cattle have well-developed stomachs; and lagomorphs such as rabbits have caeca which aid their digestion. Animals that eat herbivores are called secondary consumers or primary carnivores, and predators that eat these animals are called tertiary consumers. Decomposers are the final link in the energy flow. They feed on dead organic matter, releasing nutrients back into the ecosystem. Animals that eat dead plant and animal matter are called scavengers, and plants that do the same are known as saprophytes.

The components of food chains and food webs exist at different stages in the transfer of energy through an ecosystem. The position of every group of organisms obtaining their food in the same manner is known as a trophic level. The term comes from a Greek word meaning "nursing," and the implication is that each stage nourishes the next.
In an ecosystem, food chains become interconnected to form food webs. (Illustration by Hans & Cassidy.)
The first trophic level consists of autotrophs, the second of herbivores, and the third of primary carnivores. At the final trophic level exists what is often called the "top predator." Organisms in the same trophic level are not necessarily connected taxonomically; they are connected ecologically, by the fact that they obtain their energy in the same way. Their trophic level is determined by how many steps it is above the primary-producer level. Most organisms occupy only one trophic level; however, some may occupy two. Insectivorous plants like the Venus flytrap are both primary producers and carnivores. Horseflies are another example: the females bite and draw blood, while the males are strictly herbivores.

In 1942, Raymond Lindeman published a paper entitled "The Trophic-Dynamic Aspect of Ecology." Although a young man only recently graduated from Yale University, he revolutionized ecological thinking by describing ecosystems in the terminology of energy transformation. He used data from his studies of Cedar Bog Lake in Minnesota to construct the first energy budget for an entire ecosystem. He measured harvestable net production at three trophic levels: primary producer, herbivore, and carnivore. He did this by measuring gross production minus growth, reproduction, respiration, and excretion.
He was able to calculate the assimilation efficiency at each trophic level and the efficiency of energy transfers between levels. Lindeman's calculations are still widely regarded today, and his conclusions are usually generalized by saying that the ecological efficiency of energy transfers between trophic levels averages about 10%.

Lindeman's calculations and some basic laws of physics reveal important truths about food chains, food webs, and ecosystems in general. The First Law of Thermodynamics states that energy cannot be created or destroyed; energy input must equal energy output. The Second Law of Thermodynamics states that all physical processes proceed in such a way that the availability of the energy involved decreases; in other words, no transfer of energy is completely efficient. Using the generalized 10% figure from Lindeman's study, a hypothetical ecosystem with 1,000 kcal of energy available (net production) at the primary-producer level would mean that only 100 kcal would be available to the herbivores at the second trophic level, 10 kcal to the primary carnivores at the third level, and 1 kcal to the secondary carnivores at the fourth level. Thus, no matter how much energy is
assimilated by the autotrophs at the first level of an ecosystem, the eventual number of trophic levels is limited by the laws which govern the transfer of energy. The number of links in most natural food chains is four.

The relationship between trophic levels has sometimes been compared to a pyramid, with a broad base that narrows to an apex; trophic levels represent successively narrowing sections of the pyramid. These pyramids can be described in terms of the number of organisms at each trophic level. This was first proposed by Charles Elton, who observed that the number of plants usually exceeded the number of herbivores, which in turn exceeded the number of primary carnivores, and so on. Pyramids of numbers can be inverted, particularly at the base; an example of this would be the thousands of insects which might feed on a single tree. The pyramid-like relationship between trophic levels can also be expressed in terms of the accumulated weight of all living matter, known as biomass. Although upper-level consumers tend to be large, the populations of organisms at lower trophic levels are usually much larger, resulting in a larger combined biomass. Pyramids of biomass are not normally inverted, though they can be under certain conditions. In aquatic ecosystems, the biomass of the primary producers may be less than that of the primary consumers because of the rate at which they are being consumed; phytoplankton can be eaten so rapidly that the biomass of zooplankton and other herbivores is greater at any particular time. The relationship between trophic levels can also be described in terms of energy, but pyramids of energy cannot be inverted: there will always be more energy at the bottom than at the top.

Humans are the top consumer in many ecosystems, and they exert strong and sometimes damaging pressures on food chains. For example, overfishing or overhunting can cause a large drop in the number of animals, resulting in changes in the food-web interrelationships. On the other hand, overprotection of some animals like deer or moose can be just as damaging. Another harmful influence is that of biomagnification: toxic chemicals such as mercury and DDT released into the environment tend to become more concentrated as they travel up the food chain. Some ecologists have proposed that the stability of ecosystems is associated with the complexity of the internal structure of the food web and that ecosystems with a greater number of interconnections are more stable. Although more studies must be done to test this hypothesis, we do know that food chains in constant environments tend to have a greater number of species and more trophic links, whereas food chains in unstable environments have fewer species and trophic links.

[John Korstad and Douglas Smith]
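The 10% generalization above lends itself to a one-line calculation. The sketch below reproduces the hypothetical 1,000-kcal example from the text; it is an illustration only, since real ecological efficiencies vary widely around that average:

```python
# Energy available at successive trophic levels under Lindeman's
# generalized 10% ecological efficiency (an average, not a constant).
def trophic_energies(net_primary_kcal, efficiency=0.10, levels=4):
    """Return a list of the energy (kcal) available at each trophic level."""
    energies = [float(net_primary_kcal)]
    for _ in range(levels - 1):
        energies.append(energies[-1] * efficiency)
    return energies

# Matches the example in the text: [1000.0, 100.0, 10.0, 1.0]
print(trophic_energies(1000))
```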
RESOURCES

BOOKS
Krebs, C. J. Ecology: The Experimental Analysis of Distribution and Abundance. 3rd ed. New York: Harper & Row, 1985.
PERIODICALS
Hairston, N. G., F. E. Smith, and L. B. Slobodkin. "Community Structure, Population Control, and Competition." American Naturalist 94 (1960): 421–25.
Lindeman, R. L. "The Trophic-Dynamic Aspect of Ecology." Ecology 23 (1942): 399–418.
Food irradiation
The treatment of food with ionizing radiation has been in practice for nearly a century, since the first irradiation process patents were filed in 1905. Regular use of the technology in food processing started in 1963, when the U.S. Food and Drug Administration (FDA) approved the sale of irradiated wheat and wheat flour. Today irradiation treatment is used on a wide variety of food products and is regulated in the United States by the FDA under a Department of Health and Human Services regulation.

Irradiation of food has three main applications: extension of shelf life, elimination of insects, and destruction of bacteria and other pathogens that cause foodborne illness. This final goal may have the most far-reaching implications for Americans; the U.S. Centers for Disease Control (CDC) estimate that 76 million Americans get sick, and 5,000 die, each year from illnesses caused by foodborne microorganisms such as E. coli, Salmonella, the botulism toxin, and other pathogens responsible for food poisoning.

Irradiation technology involves exposing food to ionizing radiation. The radiation is generated from gamma rays emitted by cobalt-60 or cesium-137, or from x rays or electron beams. The amount of radiation absorbed during irradiation processing is measured in units called rads (radiation absorbed dose); one hundred rads are equivalent to one gray (Gy). Depending on the food product being irradiated, treatment can range from 0.05 to 30 kGy. A dosimeter, or film badge, verifies the kGy dose. The ionizing radiation displaces electrons in the food, which slows cell division and kills bacteria and pests.

The irradiation process itself is relatively simple. Food is packed in totes or containers, which are typically placed on a conveyor belt. Beef and other foods that require refrigeration are loaded into insulated containers prior to treatment. The belt transports the food bins through a lead-lined irradiation cell or chamber, where they are exposed to the ionizing radiation that kills the microorganisms. Several trips through the chamber may be required for full irradiation. The length of the treatment depends upon the food being processed
and the technology used, but each rotation takes only a few minutes.

The FDA has approved the use of irradiation for wheat and wheat powder, spices, enzyme preparations, vegetables, pork, fruits, poultry, beef, lamb, and goat meat. In 2000, the FDA also approved the use of irradiation to control salmonella in fresh eggs. Labeling guidelines introduced by the Codex Alimentarius Commission, an international food standards organization sponsored jointly by the United Nations Food and Agricultural Organization (FAO) and the World Health Organization (WHO), require that all irradiated food products and ingredients be clearly labeled as such for consumers. Codex also created the radura, a voluntary international symbol that represents irradiation.

In the United States, the food irradiation process is regulated jointly by the FDA and the U.S. Department of Agriculture (USDA). Facilities using radioactive sources such as cobalt-60 are also regulated by the Nuclear Regulatory Commission (NRC). The FDA regulates irradiation sources, levels, food types, and packaging, as well as required recordkeeping and labeling. Records must be maintained and made available to the FDA for one year beyond the shelf life of the irradiated food, to a maximum of three years. They must describe all aspects of the treatment, and foods that have been irradiated must be denoted with the radura symbol and by the statement "Treated with radiation" or "Treated by irradiation."

As of 2002, food irradiation is allowed in some 50 countries and is endorsed by the World Health Organization (WHO) and many other organizations. New legislation, the Farm Security and Rural Investment Act of 2002 (the Farm Bill), passed in May 2002, may soften the food irradiation standards. The Farm Bill calls for the Secretary of Health and Human Services and the FDA to implement a new regulatory program for irradiated foods. The program will allow the food industry instead to label irradiated food as "pasteurized" as long as it meets appropriate food safety standards. As of the writing of this entry, these new guidelines had not been implemented.

Food that has been treated with ionizing energy typically looks and tastes the same as non-irradiated food. Just like a suitcase going through an airport x-ray machine, irradiated food does not come into direct contact with a radiation source and is not radioactive. However, depending on the strength and duration of the irradiation process, some slight changes in appearance and taste have been reported in some foods after treatment.

Some of the flavor changes may be attributed to the generation of substances known as radiolytic products in irradiated foods. When food products are irradiated, the energy displaces electrons in the food and forms compounds called free radicals. The free radicals react with other molecules to form new stable compounds, termed radiolytic products. Benzene,
formaldehyde, and hydrogen peroxide are just a few of the radiolytic products that may form during the irradiation process. These substances are only present in minute amounts, however, and the FDA reports that 90% of all radiolytic products from irradiation are also found naturally in food. The chemical change that creates radiolytic products also occurs in other food processing methods, such as canning or cooking. However, about 10% of the radiolytic products found in irradiated food are unique to the irradiation process, and little is known about the effects that they may have on human health. It should be noted, however, that the World Health Organization, the American Medical Association, the American Dietetic Association, and a host of other professional healthcare organizations endorse the use of irradiation as a food safety measure.

Treating fruit and vegetables with irradiation can also eliminate the need for chemical fumigation after harvesting. Produce shelf life is extended by the reduction and elimination of organisms that cause spoilage. Irradiation also slows cell division, thus delaying the ripening process, and in some types of produce it extends the shelf life for up to a week. Advocates of irradiation claim that it is a safe alternative to the use of fumigants, several of which have been banned in the United States.

Nevertheless, irradiation removes some of the nutrients from foods, particularly vitamins A, C, E, and the B-complex vitamins. Whether the extent of this nutrient loss is significant enough to be harmful is debatable. Advocates of irradiation say the loss is insignificant, and standard methods of cooking can destroy these same vitamins. However, research suggests that cooking an irradiated food may further increase the loss of nutrients. Critics of irradiation also question the long-term safety of consuming irradiated food and its associated radiolytic products. They charge that the technology does nothing to address the unsanitary food processing practices and inadequate inspection programs that breed foodborne pathogens.

Even if irradiation is completely safe and beneficial, there are numerous environmental concerns. Many opponents of irradiation cite the proliferation of radioactive material and its environmental hazards. The mining and on-site processing of radioactive materials are devastating to regional ecosystems, and there are also safety hazards associated with the transportation of radioactive material, the production of isotopes, and disposal.

[Paula Ford-Martin and Debra Glidden]
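Because the entry quotes doses in both rad-based and gray-based units, a quick conversion is useful when comparing figures. The relationship (1 Gy = 100 rad) is stated above; the function name is simply illustrative:

```python
# Convert between the dose units used in this entry: 1 gray (Gy) == 100 rad.
def kgy_to_rad(kilograys: float) -> float:
    return kilograys * 1000.0 * 100.0  # kGy -> Gy -> rad

# The treatment range quoted above, 0.05-30 kGy, spans 5,000 to
# 3,000,000 rad.
print(kgy_to_rad(0.05), kgy_to_rad(30))
```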
RESOURCES

BOOKS
Molins, Ricardo A. Food Irradiation: Principles and Applications. New York: John Wiley and Sons, 2001.
Thorne, Stuart, ed. Food Irradiation (Elsevier Applied Food Science Series). New York: Elsevier, 1992.
PERIODICALS
Ennen, Steve. "Irradiation Plant Gets USDA Approval." Food Processing (March 2001): 90–2.
Louria, Donald. "Food Irradiation: Unresolved Issues." Clinical Infectious Diseases 33 (August 2001): 378–80.
Steele, J. H. "Food Irradiation: A Public Health Challenge for the 21st Century." Clinical Infectious Diseases 33 (August 2001): 376–7.
U.S. General Accounting Office. "Food Irradiation: Available Research Indicates That Benefits Outweigh Risks." GAO Report to Congressional Requesters. GAO/RCED-00-217 (August 2000).
OTHER
U.S. Department of Agriculture, Food Safety and Inspection Service. Irradiation of Meat and Poultry Products. [cited July 2002].
U.S. Department of Agriculture, Food Safety and Inspection Service. "Irradiation of Meat Food Products; Final Rule." Federal Register 64, no. 246 (December 23, 1999): 72150–66.
ORGANIZATIONS
The Food Irradiation Website, National Food Processors Association, 1350 I Street, NW Suite 300, Washington, DC USA 20005, (202) 639-5900, Fax: (202) 639-5932, Email: [email protected]
Public Citizen, Critical Mass Energy & Environmental Program, 1600 20th St. NW, Washington, DC USA 20009, (202) 588-1000, Email: [email protected]
Food policy
Through a variety of agricultural, economic, and regulatory programs which support or direct policies related to the production and distribution of food, the United States government has a large influence on how the business of agriculture is conducted. The government's major impact on agriculture is its setting of prices and its mandates regarding how land can be used by farmers who participate in government programs. These policies can also have a large impact on the adoption or use of alternative practices and technologies that may be more efficient and sustainable in the world marketplace.

Farm commodity programs have had a large influence in the past on the kinds and amounts of crops grown, as well as on the choice of management practices used to grow them. Prices under government commodity programs have often been above world market prices, which meant that many farmers felt compelled to preserve or build their farm commodity program base acres, since acreage determines program eligibility and future income. These programs strongly influenced land-use decisions on about two-thirds of the harvested cropland in the United States.

Price and income support programs for major commodities also influence growers not in the programs. For example, pork producers are not part of a government program, and in the past they have paid higher feed prices
Environmental Encyclopedia 3 because of high price supports on production of feed grains. At other times, particularly after the Food Security Act of 1985, they benefited from policies resulting in lower food costs. So as the government changes policy in one area, there can be widespread indirect impacts in other areas. For example, the federal dairy termination program, which ran from 1985–1987 was designed to reduce overproduction of milk. Those farmers who sold their milk cows and decided to produce hay for local cash markets caused a steep decline in the prices received by other established hay producers. Federal policy evolved as a patchwork of individual programs, each created to address individual problems. There was not a coherent strategy to direct programs toward a common set of goals. Many programs such as soil conservation and export programs have had conflicting objectives, but attempts have now been made in the most current farm legislation to address some of these problems. Government food policy has produced a wide variety of results. The policy has not only affected commodity prices and the level of output, but it has also shaped technological change, encouraged uneconomical capital investments in machinery and facilities, inflated the value of land, subsidized crop production practices that have led to resource degradation (such as soil erosion and surface and groundwater pollution), expanded the interstate highway system, financed irrigation projects, and promoted farm commodity exports. Together with other economic forces, government policy has had a far-reaching structural influence on agriculture, much of it unintended and unanticipated. Federal commodity programs were put into place beginning in the 1930s, primarily with the Agriculture Adjustment Act of 1938. The purpose of these programs born out of the Depression was primarily to protect crop prices and farmer income which has been done by a number of programs over the years. A variety of methods have been used including setting prices, making direct payments to farmers, and subsidizing exports. However, by the mid-1980s and early 1990s, an increasing number of people felt that these programs impeded movement toward alternative types of agriculture, to the detriment of family farms and society in general. Two components in particular were highlighted as being problems: base acre requirements and cross-compliance. All crop price and income support programs relied on the concept of an acreage base planted with a given commodity that would produce a predictable yield. Most of this acreage was planted to maximize benefits and was based on a five-year average. A farmer knew that if he/she reduced their acres for a given crop, they would not only lose the current year’s benefits, but would also lose future benefits. Cross-compliance was instituted in the Food Security Act of 1985. It was designed to control government payments and production by attaching financial penalties to the
Environmental Encyclopedia 3 expansion of program crop base acres. It served as an effective financial barrier to diversification of crops by stipulating that to receive any benefits from an established crop acreage base, the farmer must not exceed his or her acreage base for any other program crop. This had a profound impact on farmers and crop growers since about 70% of the United States’ cropland acres were enrolled in the programs. In addition to citing these problem areas, critics of food policy programs argued that many farmers faced economic penalties for adopting beneficial practices such as crop rotation or strip cropping, practices that in general reduce soil erosion and improve environmental quality. The economic incentives built into commodity programs, for example, encouraged heavier use of fertilizer, pesticides, and irrigation. These programs also encouraged surplus production, subsidized inefficient use of inputs, and they resulted in increased government expenditures. Critics argued that the rules associated with these programs discouraged farmers from pursuing alternative practices or crops, or technologies that might have proved more effective in the long term or that were more environmentally friendly. Critics also contend that much of the research conducted over the past 40 years has responded to the needs of farmers operating under a set of economic and policy incentives that encouraged high yields without regard to the longterm environmental impacts. During the late 1980s and early 1990s, several U.S. Department of Agriculture research and education programs were instituted to determine whether current levels of production can be maintained with reduced levels of fertilizers and pesticides, to examine more intensive management practices, to increase understanding of biological principles, and to improve profitability per unit of production with less government support. As the impacts of the alternative production systems on the environment are evaluated, it will be important to have policies in place that will allow the farmer to easily adopt those practices that increase efficiency and reduce impacts. In a farm bill passed in 1996 there are provisions that change these commodity programs. Labeled the “right to farm provisions,” these allow farmers to make decisions on what they grow and establish a phased seven-year reduction in price supports. Food quality and safety are major concerns addressed as a part of federal policy. Programs addressing these concerns are primarily designed to prevent health risks and acute illnesses from chemical and microbial contaminants in food. Supporters say that this has provided the country with the safest food supply in the world. However, critics contend that a number of regulations do not enhance quality or safety and put farmers that use or would adopt alternative agricultural practices at a disadvantage. Several examples can be cited. Critics of government food policy point out that until recently, meat grading standards awarded producers of
fatty beef, which has been linked to an increased likelihood of heart disease. The use of pesticides provides another example. The Environmental Protection Agency establishes pesticide residue tolerance levels in food, which are monitored for compliance. For some types of risk, cancer in particular, there is a great deal of uncertainty, and critics point out that cosmetic standards, which increase prices for fruits and vegetables, may encourage higher risks of disease among consumers. Also, certain poultry slaughter practices can result in microbiological contamination. Of particular concern is salmonella food poisoning, which has become widespread.
Government food policy heavily influences on-farm decision making. In some cases one part of the policy has negative or unintended consequences for another policy or segment of farmers. As we look to the future, the struggle will be to provide a coherent, coordinated policy. The recent changes in policy will need to be evaluated from the standpoint of sustainability and environmental impacts. [James L. Anderson]
RESOURCES
BOOKS
Agriculture and the Environment: The 1991 Yearbook of Agriculture. Washington, DC: U.S. Government Printing Office.
Alternative Agriculture. Board on Agriculture, National Research Council. Washington, DC: National Academy Press, 1989.
Food waste
Waste food from residences, grocery stores, and food services accounts for nearly seven percent of the municipal solid waste stream. The per capita amount of food waste in municipal solid waste has been declining since 1960 due to increased use of garbage disposals and increased consumption of processed foods. Food waste ground in garbage disposals goes into sewer systems and thus ends up in wastewater. Waste generated by the food processing industry is considered industrial waste and is not included in municipal solid waste estimates.
Waste from the food processing industry includes vegetables and fruits unsuitable for canning or freezing; vegetable, fruit, and meat trimmings; and pomace from juice manufacturing. Vegetable and fruit processing waste is sometimes used as animal feed, and waste from meat and seafood processing can be composted. Liquid waste from juice manufacturing can be applied to cropland as a soil amendment. Much of the waste generated by all types of food processing is wastewater from such processes as washing, peeling, blanching, and cooling. Some food industries recycle
wastewaters back into their processes, but there is potential for more of this wastewater to be reused.
Grocery stores generate food waste in the form of lettuce trimmings, excess foliage, unmarketable produce, and meat trimmings. Waste from grocery stores located in rural areas is often used as hog or cattle feed, whereas grocery waste in urban areas is usually ground in garbage disposals. There is potential for more urban grocery store waste to be either used on farms or composted, but lack of storage space, odor, and pest problems prevent most of this waste from being recycled.
Restaurants and institutional cafeterias are the major sources of food service industry waste. In addition to food preparation wastes, they also generate large amounts of cooking oil and grease waste, post-consumer waste (uneaten food), and surplus waste. In some areas small amounts of surplus waste are utilized by feeding programs, but most waste generated by food services goes to landfills or into garbage disposals.
Most food waste is generated by sources other than households. However, a greater percentage of household food waste ends up disposed of, because industrial and commercial food waste is recycled at a higher rate. Only a very small segment of households compost or otherwise recycle their food wastes. [Teresa C. Donkin]
RESOURCES
BOOKS
U.S. Environmental Protection Agency, Solid Waste and Emergency Response. Characterization of Municipal Solid Waste in the United States: 1990 Update. Washington, DC: U.S. Government Printing Office, June 1990.
U.S. Environmental Protection Agency, Solid Waste and Emergency Response. Characterization of Municipal Solid Waste in the United States: 1992 Update (Executive Summary). Washington, DC: U.S. Government Printing Office, July 1992.
PERIODICALS
Youde, J., and B. Prenguber. “Classifying the Food Waste Stream.” BioCycle 32 (October 1991): 70–71.
Food-borne diseases
Food-borne diseases are illnesses caused when people consume contaminated food or beverages. Contamination is frequently caused by disease-causing microbes called pathogens. Other causes of food-borne diseases are poisonous chemicals or harmful substances in food and beverages. There are so many different pathogens that more than 250 food-borne illnesses have been described, according to the United States Centers for Disease Control and Prevention (CDC). CDC estimated that food-borne pathogens
cause approximately 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths in the United States each year.
Most food-borne illnesses are infections caused by bacteria, viruses, and parasites such as Cryptosporidium. Harmful toxins cause food poisonings. Since there are so many different types of food-borne illnesses, symptoms vary. However, some early symptoms are similar because the microbe or toxin travels through the gastrointestinal tract. The initial symptoms of food-borne diseases are nausea, vomiting, abdominal cramps, and diarrhea.
According to CDC, the most common food-borne illnesses are caused by three bacteria and a group of viruses. Campylobacter is a bacterium that lives in the intestines of healthy birds. It is also found in raw poultry meat. An infection is caused by eating undercooked chicken or food contaminated by juices from raw chicken. The bacterial pathogen causes fever, diarrhea, and abdominal cramps. Campylobacter is the primary cause of bacterial diarrheal illness throughout the world.
Salmonella is a bacterium that is prevalent in the intestines of birds, mammals, and reptiles. The bacterium spreads to humans through various foods. Salmonella causes the illness salmonellosis. Symptoms include fever, diarrhea, and abdominal cramps. This illness can result in a life-threatening infection for a person who is in poor health or has a weakened immune system.
E. coli O157:H7 is a bacterial pathogen that has a reservoir in cattle and similar animals. E. coli causes a serious illness. People become ill after eating food or drinking water that was contaminated with microscopic amounts of cow feces, according to CDC. A person often experiences severe and bloody diarrhea and painful abdominal cramps. Hemolytic uremic syndrome (HUS) occurs in 3–5% of E. coli cases. This complication may occur several weeks after the first symptoms. HUS symptoms include temporary anemia, profuse bleeding, and kidney failure.
Food-borne illnesses are also caused by caliciviruses, also known as Norwalk-like viruses. This group of viruses is believed to spread from one person to another. An infected food service worker preparing a salad or sandwich could contaminate the food. According to CDC, infected fishermen have contaminated oysters that they handled. Infection with these viruses is characterized by severe gastrointestinal illness; there is more vomiting than diarrhea, and the person usually recovers in two days.
The types and causes of food-borne illnesses have changed through the years. The pasteurization of milk, improved water quality, and safer canning techniques led to a reduction in the number of cases of common food-borne illnesses like typhoid fever, tuberculosis, and cholera. Causes of contemporary food-borne illnesses range from parasites living in imported food to food-processing techniques. An
outbreak of a diarrheal illness in 1996 and 1997 was attributed to Cyclospora, a parasite that contaminated raspberries grown in Guatemala. That outbreak led to 2,500 confirmed cases of infection in 21 states. Food may also be contaminated during processing. For example, the meat contained in one hamburger may come from hundreds of animals, according to CDC. E. coli outbreaks during the 1990s were linked to hamburgers purchased at fast food restaurants.
Technology in the form of food irradiation may eliminate the pathogens that cause food-borne disease. Advocates say that irradiating food with gamma rays is effective and can be done when the food is packaged. Opponents say the process is dangerous and could produce free radicals that cause cancer.
CDC ranks “raw foods of animal origin” as the foods most likely to be contaminated. This category includes meat, poultry, raw eggs, raw shellfish, and unpasteurized milk. Raw fruits and vegetables can also pose a health risk. Vegetables fertilized with manure can be contaminated by the fertilizer. CDC said that some outbreaks of food-borne illness were traced to unsanitary processing procedures. Water quality is crucial when vegetables are washed, as is chilling the produce after it is harvested. Of particular concern are alfalfa sprouts and other raw sprouts. These vegetables sprout in conditions that are also favorable to microbes, and because sprouts are eaten raw, small numbers of microbes can grow into large numbers of pathogens.
CDC advises consumers to thoroughly cook meat, poultry, and eggs. Produce should be washed. Leftovers should be chilled promptly. CDC is part of the United States Public Health Service; it researches and monitors health issues. Federal regulation of food safety is the responsibility of agencies such as the Food and Drug Administration, the United States Department of Agriculture, and the National Marine Fisheries Service. [Liz Swain]
RESOURCES
BOOKS
Fox, Nichols. It Was Probably Something You Ate. New York: Penguin Books, 1999.
Hubbert, William, et al. Food Safety and Quality Assurance. Ames, IA: Iowa State University Press, 1996.
Satin, Morton. Food Alert! The Ultimate Sourcebook for Food Safety. New York: Checkmark Books, 1999.
ORGANIZATIONS
Centers for Disease Control and Prevention, 1600 Clifton Road, Atlanta, GA USA 30333, (404) 639-3311, Toll Free: (800) 311-3435
Foot and mouth disease
Foot-and-mouth disease (FMD), also called hoof-and-mouth disease, is a highly contagious and economically devastating viral disease of cattle, swine, and other cloven-hoofed (split-toed) ruminants, including sheep, goats, and deer. Nearly 100% of exposed animals become infected, and the disease spreads rapidly through susceptible populations. There is no cure for FMD; the disease is seldom fatal, but it can kill very young animals.
The initial symptoms of the disease include fever and blister-like lesions (vesicles). The vesicles rupture into erosions on the tongue, in the mouth, on the teats, and between the hooves. Vesicles that rupture discharge clear or cloudy fluid and leave raw, eroded areas with ragged fragments of loose tissue. Erosions in the mouth result in excessive production of sticky, foamy, stringy saliva, which is characteristic of FMD. Another characteristic symptom is lameness with reluctance to move. Other possible symptoms and effects of FMD include elevated temperatures for two to three days in the early stages of the disease, abortion, low conception rates, rapid weight loss, and a drop in milk production.
FMD lasts for two to three weeks, with most animals recovering within six months. However, it can leave some animals debilitated, causing severe losses in the production of meat and milk. Even cows that have recovered seldom produce milk at their original rates. Animals grown for meat do not usually regain lost weight for many months. FMD can also lead to myocarditis, an inflammation of the muscular walls of the heart, and death, especially in newborn animals. Infected animals can spread the disease throughout their lives, so the only way to stop an outbreak is to destroy the animals.
The virus that causes the disease survives in lymph nodes and bone marrow at neutral pH. There are at least seven types and many subtypes of the FMD virus. The virus persists in contaminated fodder and in the environment for up to one month, depending on the temperature and pH. It thrives in dark, damp places, such as barns, and can be destroyed with heat, sunlight, and disinfectants. The disease is not likely to affect humans, either directly or indirectly through eating meat from an infected animal, but humans can spread the virus to animals. FMD can remain in human nasal passages for up to 28 hours.
FMD viruses can be spread to susceptible animals by other animals and by contaminated materials. The viruses can also be carried for several miles on the wind if environmental conditions are appropriate for virus survival. Specifically, an outbreak can occur when:
• people wearing contaminated clothes or footwear, or using contaminated equipment, pass the virus to susceptible animals
• animals carrying the virus are introduced into susceptible herds
• contaminated facilities are used to hold susceptible animals
• contaminated vehicles are used to move susceptible animals
• raw or improperly cooked garbage containing infected meat or animal products is fed to susceptible animals
• susceptible animals are exposed to contaminated hay, feedstuffs, or hides
• susceptible animals drink contaminated water
• a susceptible cow is inseminated with semen from an infected bull
Widespread throughout the world, FMD has been identified in Africa, South America, the Middle East, Asia, and parts of Europe. North America, Central America, Australia, New Zealand, Chile, and some European countries are considered to be free of FMD. The United States has been free of FMD since 1929, when the last of nine outbreaks that occurred during the nineteenth and early twentieth centuries was eradicated.
In 2001, an FMD outbreak was confirmed in the United Kingdom, France, the Netherlands, the Republic of Ireland, Argentina, and Uruguay. Officials in the United Kingdom detected 2,030 cases of FMD, slaughtered almost four million animals, and took seven months to control the outbreak. The economic losses were estimated to be in the billions of pounds, and tourism in the affected countries suffered. The outbreak was detected on February 20, 2001; no new cases were reported after September 30, 2001; and on January 15, 2002, the British government declared the FMD outbreak to be over. The outbreak appeared to have started in a pig finishing unit in Northumberland that was licensed to feed processed waste food. The disease appeared to have spread through two routes: through infected pigs that were sent to a slaughterhouse, and through windborne spread to sheep on a nearby farm. These sheep entered the marketing chain and were sold in markets and through dealers, where they infected other sheep and contaminated people and vehicles, spreading the FMD virus throughout England, Wales, and southern Scotland. As the outbreak continued, cases were detected in other European countries.
The Animal and Plant Health Inspection Service (APHIS) of the United States Department of Agriculture (USDA) has developed an ongoing, comprehensive prevention program to protect American agriculture from FMD. APHIS continuously monitors for FMD cases worldwide. When FMD outbreaks are identified, APHIS initiates regulations that prohibit importation of live ruminants and swine and many animal products from the affected countries. During the 2001 outbreak in some European Union member countries, APHIS temporarily restricted importation of live ruminants and swine and their products from all European Union member states. APHIS officials are on duty at all United States land and maritime ports-of-entry to ensure that passengers, luggage, cargo, and mail are checked for prohibited agricultural products or other materials that could carry FMD. The USDA Beagle Brigade, dogs trained to sniff out prohibited meat products and other contraband items, is also on duty at airports to check incoming flights and passengers.
The cooperation of private citizens is a crucial component of the protection program. APHIS prohibits travelers entering the United States from bringing in any agricultural products that could spread FMD and other harmful agricultural pests and diseases. Therefore, passengers must declare all food items and other materials of plant or animal origin that they are carrying. Prohibited agricultural products that are found are confiscated and destroyed. Passengers must also report any visits to farms or livestock facilities. Failure to declare any items may result in delays and fines of up to $1,000. Individuals traveling from countries that have been designated as FMD-affected must have their shoes disinfected if they have visited farms, ranches, or other high-risk areas, such as zoos, circuses, fairs, and other facilities and events where livestock and animals are exhibited. APHIS recommends that travelers shower and shampoo before and again after returning to the United States from an FMD-affected country. They should also launder or dry-clean their clothes before returning to the United States. Full-strength vinegar can be used by passengers to disinfect glasses, jewelry, watches, belts, hats, cell phones, hearing aids, camera bags, backpacks, and purses. Travelers who visited a farm or had any contact with livestock on their trip should avoid contact with livestock, zoo animals, or wildlife for five days after their return. Although dogs and cats cannot become infected with FMD, their feet, fur, and bedding should be cleaned of excessive dirt or mud. Pet bedding should not contain straw, hay, or other plant materials. The pet should be bathed as soon as it reaches its final destination and be kept away from all livestock for at least five days after entering the United States.
In the United States, animal producers and private veterinarians also monitor domestic livestock for symptoms of FMD. Their surveillance activities are supplemented by the work of 450 specially trained animal disease diagnosticians from federal, state, and military agencies. These diagnosticians are responsible for collecting and analyzing samples from animals suspected of FMD infection. If an outbreak were confirmed, APHIS would quickly try to identify infected and exposed animals, establish and maintain quarantines, destroy all infected and exposed animals using humane euthanization procedures as quickly as possible, and dispose of the carcasses by an approved method such as incineration or burial. After cleaning and disinfection of the facilities where the infected animals were housed, the facility
would be left vacant for several weeks. After this period, a few susceptible animals would be placed in the facility and observed for signs of FMD. A large area around the facility would be quarantined, with animal and human movement restricted or prohibited. In some cases, all susceptible animals within a two-mile radius would also be euthanized and disposed of properly by incineration or burial. In addition, APHIS has developed plans to pay affected producers the fair market value of their animals.
APHIS would consider vaccinating animals against FMD to enhance other eradication activities as well as to prevent spread to disease-free areas. However, vaccinated animals may still become infected and serve as a reservoir for the disease, even though they do not develop the disease symptoms themselves. Also, for continued protective immunity, the vaccines must be given every few months. APHIS is working with the U.S. Armed Forces to ensure that military vehicles and equipment are cleaned and disinfected before returning to the United States.
Preventing FMD from entering and becoming established in an FMD-free area requires constant vigilance and a well-developed, thorough plan to control and eradicate any cases that might occur. [Judith L. Sims]
RESOURCES
BOOKS
Haynes, N. Bruce, and Robert F. Kahrs. Keeping Livestock Healthy. North Adams, MA: Storey Communications, Inc., 2001.
OTHER
Animal and Plant Health Inspection Service, U.S. Department of Agriculture. “Foot-and-Mouth Disease.” [cited June 2002].
Department for Environment, Food and Rural Affairs, United Kingdom. “Origin of the UK Foot and Mouth Disease Epidemic in 2001.” June 2002 [cited June 2002].
“History of the World’s Worst Ever Foot and Mouth Epidemic: United Kingdom 2001.” [cited June 2002].
Stephen Alfred Forbes (1844 – 1930)
American entomologist and naturalist
Stephen Alfred Forbes, an entomologist and naturalist, was the son of Isaac Sawyer and Agnes (Van Hoesen) Forbes. Forbes was born at Silver Creek, Ill. His father, a farmer, died when Stephen was 10 years old. An older brother, Henry, then 21 years old, had been independent since he was 14, working his way toward a college education; but on his father’s death he abandoned his career, took the burden of his father’s family on his shoulders, and supported and
educated the children. He taught Stephen to read French, sent him to Beloit to prepare for college; and when the Civil War came he sold the farm and gave the proceeds (after the mortgage was paid) to his mother and sister for their support. Both brothers then joined the 7th Illinois Cavalry, Henry having retained enough money to buy horses for both. Stephen, enlisting at 17, was rapidly promoted, and at 20 became a captain in the regiment of which his brother ultimately became colonel. In 1862, while carrying dispatches, he was captured and held in a Confederate prison for four months. After liberation and three months in the hospital recuperating, he rejoined his regiment and served until the end of the war. He had learned to read Italian and Spanish, in addition to French, before the war, and studied Greek while in prison.
He was a born naturalist. His farm life as a boy and his open-air life in the army intensified his interest in nature. After the close of the war, he began at once the study of medicine, entering the Rush Medical College, where he nearly completed the course. His biographers have not as yet given the reason for the radical change in his plans which caused him to abandon medicine at this late stage in his education; but the writer has been told by his son that it was “because of a series of incidents having to do mainly with operations without the use of anesthetics which convinced him that he was not temperamentally adapted to medical practice.” His scientific interests, however, had been thoroughly aroused, and for several years, while he taught school in southern Illinois, he carried on studies in natural history.
In 1872, through the interest and influence of Dr. George Vasey, the well-known botanist, he was made curator of the Museum of State Natural History at Normal, Ill., and three years later was made instructor in zoology at the normal school. In 1877 the Illinois State Museum was established at Springfield; and the museum at Normal, becoming the property of the state, was made the Illinois State Laboratory of Natural History. Forbes was made its director. During these years he had been publishing the results of his researches rather extensively, and had gone into a most interesting and important line of investigation, namely the food of birds and fishes. He studied intensively the food of the different species of fish inhabiting Illinois waters and the food of the different birds. This study, of course, kept him close to entomology, and in 1884 he was appointed professor of zoology and entomology in the University of Illinois. The State Laboratory of Natural History was transferred to the university and in 1917 was renamed the Illinois Natural History Survey. He retained his position as chief, and held it up to the time of his death. He was appointed state entomologist in 1882 and served until 1917, when the position was merged into the survey. He retired from his teaching
position as an emeritus professor in 1921. He served as dean of the College of Science of the university from 1888 to 1905.
All through his career Forbes published his writings actively. As early as 1895, Samuel Henshaw, in his Bibliography of the More Important Contributions to American Economic Entomology, listed 101 titles. It is said that his bibliography runs to more than 500 titles, and the range of these titles is extraordinary; they include papers on entomology, ornithology, limnology, ichthyology, ecology, and other phases of biology. All of his work was characterized by remarkable originality and depth of thought. Forbes was the first writer and teacher in America to stress the study of ecology, and thus began a movement which has gained great headway. He published 18 annual entomological reports, all of which have been models. He was the first and leading worker in America on hydrobiology. He studied the fresh-water organisms of the inland waters and was the first scientist to write on the fauna of the Great Lakes. His work on the food of fishes was pioneering and has been of very great practical value.
Forbes was a charter member of the American Association of Economic Entomologists and served twice as its president. He was also a charter member of the Illinois Academy of Science; a member of the National Academy of Sciences and of the American Philosophical Society; and in 1928 was made an honorary member of the Fourth International Congress of Entomology. Indiana University gave him the degree of Ph.D. in 1884, on examination and presentation of a thesis. He married, on December 25, 1873, Clara Shaw Gaston, whose death preceded his by only six months. A son, Dr. Ernest B. Forbes of State College, Pa., and three daughters survived him. [Leland Ossian Howard]
RESOURCES
BOOKS
Croker, Robert A. Stephen Forbes and the Rise of American Ecology. Washington, DC: Smithsonian Institution Press, 2001.
PERIODICALS
Dictionary of American Biography. Base set. American Council of Learned Societies, 1928–1936. Reproduced in Biography Resource Center. Farmington Hills, MI: The Gale Group, 2002.
Schneider, Daniel W. “Local Knowledge, Environmental Politics, and the Founding of Ecology in the United States: Stephen Forbes and ‘The Lake as a Microcosm,’ 1887.” Isis 91, no. 4 (December 2000): 681–705.
Francois-Alphonse Forel (1841 – 1912)
Swiss professor of medicine
Francois-Alphonse Forel created a legacy by spending a lifetime studying the lake near which he lived.
As the recognized founder of limnology, the science of the study of lakes, Forel meticulously observed Lake Geneva. His observations contributed directly to modern ecological and environmental science, and he helped to uncover the mysteries of the seiche, an oscillating movement of lake water caused by wind or changes in air pressure.
Forel was born in Morges, Switzerland, on February 2, 1841. As a professor of anatomy and physiology at the University of Lausanne, he spent most of his time investigating the life and movement of the lake near his home, world-famous Lake Geneva (also known as Lac Léman). To that purpose he created the word “limnology,” referring to the study of lakes. On April 2, 1869, he found a nematode worm 131 ft (40 m) down in the lake. That discovery was the beginning of his lifelong work. Before publishing his first major work, Forel recruited help in describing the types of algae and invertebrate animals he had found. A preliminary study he published in 1874, La Faune Profonde des Lacs Suisses (The Bottom Fauna of Swiss Lakes), provided the basis for his later work. When his book on the lake, Le Léman, was published, Forel himself noted there that, “This book is called a limnological monograph. I have to explain this new word and apologize, if it should be necessary. The aim of my description is a part of the Earth; that is geography. But the geography of the ocean is called oceanography. And a lake, as big as it is, is by no means an ocean; its limited area gives it a special character, very different from that of the endless ocean. I had to find a more modest word to describe my work, like the word limnography. But, because a limnograph is a tool to measure the water level of lakes, I had to fabricate the new word limnology. Limnology is in fact the oceanography of lakes.”
A second book, Handbuch der Seenkunde (Handbook of Lake Studies), also became a standard work in the study of lakes. According to Ayer Company Publishers, the publisher that reprinted the work in 1978, Forel realized that his original work was an encyclopedia which might not hold the interest of students. He published his handbook to be used as the first textbook for the study of limnology.
Forel died on August 7, 1912. In 1970 the F.-A. Forel Institute was founded in his honor by Professor Jean-Pierre Vernet, who directed the laboratory until 1995. The institute became a part of the Earth Sciences department of the University of Geneva in 1980, and plays a vital role in the International Commission for the Protection of Lake Geneva’s Water (CIPEL). The active research that occurs through the institute includes limnology, environmental geochemistry, ecotoxicology, and quaternary geology. It is also the home of the secretariat of the Center of Earth and Environmental Sciences Studies
(CESNE) and the Diplôme d’études en sciences naturelles de l’environnement (DESNE).
One of the other ways in which Forel has been honored is the use of his name in a device known as the Forel-Ule Color Scale. According to the Marine Education and Research Program at Occidental College, the scale is used in marine studies and consists of a series of numbered tubes (from 1 to 22) that contain various shades of colored fluids, ranging from blue (lowest numbers) through green, yellow, brown, and red (highest numbers). [Jane Spear]
RESOURCES
BOOKS
Forel, Francois-Alphonse. Le Léman. (Reprint) Geneva: Slatkine Reprints, 1969.
Forel, Francois-Alphonse. La Faune Profonde des Lacs Suisses. (Reprint) Manchester, NH: Ayer Company Publishers.
Forel, Francois-Alphonse. Handbuch der Seenkunde. (Reprint) Manchester, NH: Ayer Company Publishers, 1978.
PERIODICALS
Korgen, Ben. “Bonanza for Lake Superior.” The Seiche Newsletter, February 2000.
OTHER F/-A. Forel Institute. “Dr. F.-A. Forel.” History. [cited July 2002]. . Marine Education and Research Program, Occidental College. “Water Cooler.” Equipment and Methodology. 2002 [cited July 2002]. .
ORGANIZATIONS F.-A. Forel Institute, 10 route de Suise, Geneva, Switzerland +4122950-92-10, Fax: +4122-755-13-82, Email:
[email protected], www.unige.ch/forel
Dave Foreman
(1946 – )
American radical environmental activist
Dave Foreman is a self-described radical environmentalist, co-founder of Earth First!, and a leading defender of “monkeywrenching” as a direct-action tactic to slow or stop strip mining, clear-cut logging of old-growth forests, the damming of wild rivers, and other environmentally destructive practices.
The son of a United States Air Force employee, Foreman traveled widely while growing up. In college he chaired the conservative Young Americans for Freedom and worked in the 1964 presidential election campaign of Senator Barry Goldwater. In the 1970s Foreman was a conservative Republican and moderate environmentalist who worked for the Wilderness Society in Washington, D.C. He came to believe that the petrochemical, logging, and mining interests
were “extremists” in their pursuit of profit, and that government agencies—the Forest Service, the Bureau of Land Management, the U.S. Department of Agriculture, and others—were “gutless” and unwilling or unable to stand up to wealthy and powerful interests intent upon profiting from the destruction of American wilderness. Well-meaning moderate organizations like the Sierra Club, Friends of the Earth, and the Wilderness Society were, with few exceptions, powerless to prevent the continuing destruction. What was needed, Foreman reasoned, was an immoderate and unrespectable band of radical environmentalists like those depicted in Edward Abbey’s novel The Monkey Wrench Gang (1975) to take direct action against anyone who would destroy the wilderness in the name of “development.”
With several like-minded friends, Foreman founded Earth First!, whose motto is “No compromise in defense of Mother Earth.” From the beginning, Earth First! was unlike any other radical group. It did not issue manifestoes or publish position papers; it had “no officers, no bylaws or constitution, no incorporation, no tax status; just a collection of women and men committed to the Earth.” Earth First!, Foreman wrote, “would be big enough to contain street poets and cowboy bar bouncers, agnostics and pagans, vegetarians and raw steak eaters, pacifists and those who think that turning the other cheek is a good way to get a sore face.” Its weapons would include “monkeywrenching,” civil disobedience, music, “media stunts [to hold] the villains up to ridicule,” and self-deprecating humor: “Radicals frequently verge toward a righteous seriousness. But we felt that if we couldn’t laugh at ourselves we would be merely another bunch of dangerous fanatics who should be locked up (like the oil companies). Not only does humor preserve individual and group sanity, it retards hubris, a major cause of environmental rape, and it is also an effective weapon.” But besides humor, Foreman called for “fire, passion, courage, and emotionalism...We [environmentalists] have been too reasonable, too calm, too understanding. It’s time to get angry, to cry, to let rage flow at what the human cancer is doing to Mother Earth.”
In 1987 Foreman published Ecodefense: A Field Guide to Monkeywrenching, in which he described in detail the tools and techniques of environmental sabotage or monkeywrenching. These techniques included “spiking” old-growth redwoods and Douglas firs to prevent loggers from felling them; “munching” logging roads with nails; sabotaging bulldozers and other earth-moving equipment; pulling up surveyors’ stakes; and toppling high-voltage power lines. These tactics, Foreman said, were aimed at property, not at people. But critics quickly charged that loggers’ lives and jobs were endangered by tree-spiking and other techniques
that could turn deadly. Moderate or mainstream environmental organizations joined in condemning the confrontational tactics favored by Foreman and Earth First!
In his autobiography, Confessions of an Eco-Warrior (1991), Foreman defends monkeywrenching as an unfortunate tactical necessity that has achieved its primary purpose of attracting the attention of the American people and the media to the destruction of the nation’s remaining wilderness. It also attracted the attention of the FBI, whose agents arrested Foreman at his home in 1989 for allegedly financing and encouraging ecoteurs (ecological saboteurs) to topple high-voltage power poles. Foreman was put on trial to face felony charges, which he denied. The charges were questioned when it was disclosed that an FBI informant had infiltrated Earth First! with the intention of framing Foreman and discrediting the organization. In a plea bargain, Foreman pleaded guilty to a lesser charge and received a suspended sentence.
Foreman left Earth First! to found and direct The Wildlands Project in Tucson, Arizona. He continues to lecture and write about the protection of the wilderness. [Terence Ball]
RESOURCES
BOOKS
Foreman, D. Confessions of an Eco-Warrior. New York: Harmony Books, 1991.
———. “Earth First!” In Ideals and Ideologies: A Reader, edited by T. Ball and R. Dagger. New York: HarperCollins, 1991.
———, and B. Haywood. Ecodefense: A Field Guide to Monkeywrenching. Tucson: Ned Ludd Books, 1985.
List, P. C., ed. Radical Environmentalism: Philosophy and Tactics. Belmont, CA: Wadsworth, 1993.
Manes, C. Green Rage: Radical Environmentalism and the Unmaking of Civilization. Boston: Little, Brown, 1990.
Scarce, R. Eco-Warriors: Understanding the Radical Environmental Movement. Chicago: Noble Press, 1990.
Forest and Rangeland Renewable Resources Planning Act (1974)
The Forest and Rangeland Renewable Resources Planning Act (RPA) was passed in response to the growing tension between the timber industry and environmentalists in the late 1960s and the early 1970s. These tensions can be traced to increased controversy over, and restrictions on, timber harvesting on the national forests, due especially to clear-cutting and to wilderness designations and study areas. These environmental restrictions, coupled with a dramatic increase in the price of timber in 1969, made Congress receptive to timber industry demands for a steadier supply of timber. Numerous bills addressing timber supply were introduced
and debated in Congress, but none passed due to strong environmental pressure. A task force appointed by President Richard Nixon, the President’s Panel on Timber and the Environment, delivered its recommendations in 1973, but these were geared toward dramatically increased harvests from the national forests, and hence were also unacceptable to environmentalists. One aspect of the various proposals that proved to be acceptable to all interested parties—the timber industry, environmentalists, and the Forest Service—was increased long-range resource planning. Senator Hubert Humphrey of Minnesota drafted a bill creating such a program and helped guide it to passage in Congress.
RPA planning is based on a two-stage process, with a document accompanying each stage. The first stage is called the Assessment, an inventory of the nation’s forest and range resources (public and private). The second stage, which is based on the Assessment, is referred to as the Program. Based on the completed inventory, the Forest Service provides a plan for the use and development of the available resources. The Assessment is to be done every ten years, and a Program based on the Assessment is to be completed every five years. This planning was to be done by interdisciplinary teams and to incorporate widespread public involvement.
The RPA was quite popular with the Forest Service, since the plans generated through the process gave the agency a solid foundation on which to base its budget requests, increasing the likelihood of increased funding. This proved successful: the Forest Service budget increased dramatically in 1977, and the agency fared much better than other resource agencies in the late twentieth century. The RPA was amended by the National Forest Management Act of 1976. Based on this law, in addition to the broad national planning mandated in the 1974 law, an Assessment and Program was required for each unit of the national forest system. This has allowed the Forest Service to use the plans to help shield itself from criticism. Since these plans address all uses of the forests and make budget recommendations for these uses, if Congress does not fund these recommendations, the Forest Service can point to Congress as the culprit. However, the plans have also been more visible targets for interest group criticism.
Overall, the RPA has met with mixed results. The Forest Service has received increased funds and the planning process has been expanded to each national forest unit, but planning at such a scale is a difficult task. The act has also led to increased controversy and to increased bureaucracy. Perhaps most importantly, planning cannot solve a problem based on conflicting values, commodity use versus forest preservation, which is at the heart of forest management policy. See also Old-growth forest [Christopher McGrory Klyza]
RESOURCES
BOOKS
Clary, D. A. Timber and the Forest Service. Lawrence: University Press of Kansas, 1986.
Dana, S. T., and S. K. Fairfax. Forest and Range Policy. 2nd ed. New York: McGraw-Hill, 1980.
Stairs, G. R., and T. E. Hamilton, eds. The RPA Process: Moving Along the Learning Curve. Durham, NC: Center for Resource and Environmental Policy Research, Duke University.
Forest decline
In recent decades there have been observations of widespread declines in vigor and dieback of mature forests in many parts of the world. In many cases, pollution may be a factor contributing to forest decline, for example in regions where air quality is poor because of acidic deposition or contamination with ozone, sulfur dioxide, nitrogen compounds, or metals. However, forest decline also occurs in some places where the air is not polluted, and in these cases it has been suggested that the phenomenon is natural.
Forest decline is characterized by a progressive, often rapid deterioration in the vigor of trees of one or several species, sometimes resulting in mass mortality (or dieback) within stands over a large area. Decline often selectively affects mature individuals, and is thought to be triggered by a particular stress or a combination of stressors, such as severe weather, nutrient deficiency, toxic substances in soil, and air pollution. According to this scenario, excessively stressed trees suffer a large decline in vigor. In this weakened condition, trees are relatively vulnerable to lethal attack by insects and microbial pathogens. Such secondary agents may not be so harmful to vigorous individuals, but they can cause the death of severely stressed trees.
The preceding is only a hypothetical etiology of forest dieback. It is important to realize that although the occurrence and characteristics of forest decline can be well documented, the primary environmental variable(s) that trigger the decline disease are not usually known. As a result, the etiology of the decline syndrome is often attributed to a vague but unsubstantiated combination of biotic and abiotic factors.
The symptoms of decline differ among tree species. Frequently observed effects include: (1) decreased productivity; (2) chlorosis, abnormal size or shape, and premature abscission of foliage; (3) a progressive death of branches that begins at the extremities and often causes a “stag-headed” appearance; (4) root dieback; (5) an increased frequency of secondary attack by fungal pathogens and defoliating or wood-boring insects; and (6) ultimately mortality, often as a stand-level dieback.
One of the best-known cases of an apparently natural forest decline, unrelated to human activities, is the widespread dieback of birches that occurred throughout the northeastern United States and eastern Canada from the 1930s to the 1950s. The most susceptible species were yellow birch (Betula alleghaniensis) and paper birch (B. papyrifera), which were affected over a vast area, often with extensive mortality. For example, in 1951, at the time of peak dieback in Maine, an estimated 67% of the birch trees had been killed. In spite of considerable research effort, a single primary cause has not been determined for birch dieback. It is known that a heavy mortality of fine roots usually preceded deterioration of the above-ground tree, but the environmental cause(s) of this effect are unknown, although deeply frozen soils caused by a sparse winter snow cover are suspected of being important. No biological agent was identified as a primary predisposing factor, although fungal pathogens and insects were observed to secondarily attack weakened trees and cause their death.
Another apparently natural forest decline is that of ohia (Metrosideros polymorpha), an endemic species of tree, usually occurring in monospecific stands, that dominates the native forests of the Hawaiian Islands. There are anecdotal accounts of events of widespread mortality of ohia extending back at least a century, but the phenomenon is probably more ancient than this. The most recent widespread decline began in the late 1960s; a 1982 survey of 308 mi² (798 km²) of forest found about 200 mi² (518 km²) with symptoms of ohia decline. In most declining stands only the canopy individuals were affected. Understory saplings and seedlings were not in decline, and in fact were released from competitive stresses by dieback of the overstory.
A hypothesis to explain the cause of ohia decline has been advanced by D. Mueller-Dombois and co-workers, who believe that the stand-level dieback is caused by the phenomenon of “cohort senescence.” This is a stage of the life history of ohia characterized by simultaneously decreasing vigor in many individuals, occurring in old-growth stands. The development of senescence in individuals is governed by genetic factors, but the timing of its onset can be influenced by environmental stresses. The decline-susceptible, overmature life history stage follows a more vigorous, younger, mature stage in an even-aged stand of individuals of the same generation (i.e., a cohort) that had initially established following a severe disturbance. In Hawaii, lava flows, deposition of volcanic ash, and hurricanes are natural disturbances that initiate succession. Sites disturbed in this way are colonized by a cohort of ohia individuals, which produce an even-aged stand. If there is no intervening catastrophic disturbance, the stand matures, then becomes senescent and enters a decline and dieback phase. The original stand is then replaced by another ohia forest
composed of advance regeneration of individuals released from the understory. Therefore, according to the cohort senescence theory, the ohia dieback should be considered a characteristic of the natural population dynamics of the species.
Other forest declines are occurring in areas where the air is contaminated by various potentially toxic chemicals, and these cases might be triggered by air pollution. In North America, prominent declines have occurred in ponderosa pine (Pinus ponderosa), red spruce (Picea rubens), and sugar maple (Acer saccharum). In western Europe, Norway spruce (Picea abies) and beech (Fagus sylvatica) have been severely affected. The primary cause of the decline of ponderosa pine in stands along the western slopes of the mountains of southern California is believed to be the toxic effects of ozone. Ponderosa pine is susceptible to the effects of this gas at the concentrations that are commonly encountered in the declining stands, and the symptomatology of damage is fairly clear. In the other cases of decline noted above that are putatively related to air pollution, the evidence so far is less convincing.
The recent forest damage in Europe has been described as a “new” decline syndrome that may in some way be triggered by stresses associated with air pollution. Although the symptoms appear to be similar, the “new” decline is believed to be different from diebacks that are known to have occurred historically and are believed to have been natural. The modern decline syndrome was first noted in fir (Abies alba) in Germany in the early 1970s. In the early 1980s a larger-scale decline was apparent in Norway spruce, the most commercially important species of tree in the region, and in the mid-1980s decline became apparent in beech and oak (Quercus spp.). Decline of this type has been observed in countries throughout Europe, extending at least to western Russia.
The decline has been most intensively studied in Germany, which has many severely damaged stands, although a widespread dieback has not yet occurred. Decline symptoms are variable in the German stands, but in general: (1) mature stands older than about 60 years tend to be more severely affected; (2) dominant individuals are relatively vulnerable; and (3) individuals located at or near the edge of the stand are more severely affected, suggesting that a shielding effect may protect trees in the interior. Interestingly, epiphytic lichens often flourish in badly damaged stands, probably because of a greater availability of light and other resources caused by the diminished cover of tree foliage. In some respects this is a paradoxical observation, since lichens are usually hypersensitive to air pollution, especially toxic gases.
From the information that is available, it appears that the “new” forest decline in Europe is triggered by a variable combination of environmental stresses. The weakened trees
then decline rapidly, and may die as a result of attack by secondary agents such as fungal disease or insect attack. Suggestions of the primary inducing factor include gaseous air pollutants, acidification, toxic metals in soil, nutrient imbalance, and a natural climatic effect, in particular drought. However, there is not yet a consensus as to which of these interacting factors is the primary trigger that induces forest decline in Europe, and it is possible that no single stress will prove to be the primary cause. In fact, there may be several “different” declines occurring simultaneously in different areas.
The declines of red spruce and sugar maple in eastern North America involve species that are long-lived and shade-tolerant, but shallow-rooted and susceptible to drought. The modern epidemic of decline in sugar maple began in the late 1970s and early 1980s, and has been most prominent in Quebec, Ontario, New York, and parts of New England. During the late 1980s and early 1990s, the decline appeared to reverse, and most stands became healthier. The symptoms are similar to those described for an earlier dieback, and include abnormal coloration, size, shape, and premature abscission of foliage, death of branches from the top of the tree downward, reduced productivity, and death of trees. There is a frequent association with the pathogenic fungus Armillaria mellea, but this is believed to be a secondary agent that only attacks weakened trees. Many declining stands had recently been severely defoliated by the forest tent caterpillar (Malacosoma disstria), and many stands were tapped each spring for sap to produce maple sugar. Because the declining maple stands are located in a region subject to a high rate of atmospheric deposition of acidifying substances, this has been suggested as a possible predisposing factor, along with soil acidification and mobilization of available aluminum. Natural causes associated with climate, especially drought, have also been suggested. However, little is known about the modern sugar maple decline, apart from the fact that it occurred extensively; no conclusive statements can yet be made about its causal factor(s).
The stand-level dieback of red spruce has been most frequent in high-elevation sites of the northeastern United States, especially in upstate New York, New England, and the mid- and southern-Appalachian states. These sites are variously subject to acidic precipitation (mean annual pH about 4.0–4.1), to very acidic fog water (pH as low as 3.2–3.5), to large depositions of sulfur and nitrogen from the atmosphere, and to stresses from metal toxicity in acidic soil. Declines of red spruce are anecdotally known from the 1870s and 1880s in the same general area where the modern decline is occurring. Up to one-half of the mature red spruce in the Adirondacks of New York was lost during that early episode of dieback, and there was also extensive damage in New England. As with the European forest
decline, the “old” and “new” episodes appear to have similar symptoms, and it is possible that both occurrences are examples of the same kind of disease. The hypotheses suggested to explain the initiation of the modern decline of red spruce are similar to those proposed for the European forest decline. They include acidic deposition, soil acidification, aluminum toxicity, drought, winter injury exacerbated by insufficient hardiness due to nitrogen fertilization, heavy metals in soil, nutrient imbalance, and gaseous air pollution. Climate change, in particular a long-term warming that has occurred subsequent to the end of the Little Ice Age in the early 1800s, may also be important.
At present, not enough is known about the etiology of the forest declines in Europe and eastern North America to allow an understanding of the possible role(s) of air pollution and of natural environmental factors. This does not necessarily mean that air pollution is not involved. Rather, it suggests that more information is required before any conclusive statements can be made regarding the causes and effects of the phenomenon of forest decline. See also Forest management [Bill Freedman Ph.D.]
RESOURCES
BOOKS
Barnard, J. E. “Changes in Forest Health and Productivity in the United States and Canada.” In Acidic Deposition: State of Science and Technology. Vol. 3, Terrestrial, Materials, Health, and Visibility Effects. Washington, DC: U.S. Government Printing Office, 1990.
Freedman, B. Environmental Ecology. 2nd ed. San Diego: Academic Press, 1995.
PERIODICALS
Mueller-Dombois, D. “Natural Dieback in Forests.” BioScience 37 (1987): 575–583.
Forest management
The question of how forest resources should be used goes beyond the science of growing and harvesting trees; forest management must solve the problem of balancing the economic, aesthetic, and biological values of entire ecosystems. The earliest forest managers in North America were native peoples, who harvested trees for building and burned forests to make room for grazing animals. But many native populations were wiped out by European diseases soon after Europeans arrived. By the mid-nineteenth century, it became apparent to many Americans that overharvesting of timber, along with wasteful practices such as uncontrolled burning of logging waste, was denuding forests and threatening future ecological and economic stability. The Forest Service (established in 1905) began studying ways to preserve
forest resources for their economic as well as aesthetic, recreational, and wilderness value.
From the 1600s to the 1820s, 370 million acres (150 million ha) of forests—about 34% of the United States’ total—were cleared, leaving about 730 million acres (296 million ha) today. Only 10–15% of the forests have never been cut. Many previously harvested areas, however, have been replanted, and annual growth now exceeds harvest overall. But the nature of the forests has been altered, many believe for the worse. If logging of old-growth forests were to continue at the rate maintained during the 1980s, all remaining unprotected stands would be gone by 2015. Some 33,000 timber-related jobs could also be lost during that time, not just from environmental protection but also from over-harvesting, increased mechanization, and increasing reliance on foreign processing of whole logs cut from private lands. Recent federal and court decisions, most notably to protect the northern spotted owl in the United States, have slowed the pace of old-growth harvesting and for now have put more old forests under protection. But the question of how to use forest resources is still under fierce debate.
For decades, clear-cutting of tracts has been the standard forestry management practice. Favored by timber companies, clear-cutting takes virtually all material from a tract. But clear-cutting has come under increasing criticism from environmentalists, who point out that the practice replaces mixed-age, biologically diverse forests with single-age, single- or few-species plantings. Clear-cutting also relies heavily on roads to haul out timber, causing root damage, topsoil erosion, and siltation of streams. Industry standards such as “best management practices” (BMPs) prevent most erosion and siltation by keeping roads away from stream beds. But BMPs only address water quality. Clear-cutting also removes small trees, snags, boles, and woody debris that are important to invertebrates and fungi.
Rather than focusing on what is removed from a forest, sustainable forest management focuses on what is left behind. In sustainable forestry, tracts are never clear-cut; instead, individual trees are selected and removed to maintain the diversity and health of the remaining ecosystem. Such methods avoid artificial replanting, herbicides, insecticides, and fertilizers. However, much debate remains on which trees, and how many, are chosen for harvesting under sustainable forestry.
In a new management style known most commonly as new forestry, 85–90% of trees on a site are harvested, and the land is left alone for decades to recover. Proponents say this method would cut down on erosion and increase the diversity left behind on a tract, especially where one or two species dominate. The Forest Service and some Northwest states are studying new forestry, but environmentalists say too little is known about its effects on old-growth stands to use the
practice. Timber companies say more and larger tracts would have to be harvested under new forestry to meet demand. Those who make their living from America’s forests, and those who place value on the biological ecosystems forests support, must resolve the debate over how best to preserve them. One-third of forest resources now comes from Forest Service lands, and the debate is an increasingly public one, involving interests ranging from the Sierra Club and sporting clubs to the Environmental Protection Agency (EPA), the Agriculture Department (and its largest agency, the Forest Service), and timber companies and their employees. The future of our forests depends on balancing real short-term needs with the high price of long-term forest health. [L. Carol Ritchie]
RESOURCES BOOKS Lansky, M. Beyond the Beauty Strip: Saving What’s Left of Our Forests. Gardiner, ME: Tilbury House Publishers, 1992.
PERIODICALS Franklin, K. “Timber! The Forest Disservice.” The New Republic 200 (January 2, 1989): 12–14. Gillis, A. M. “The New Forestry: An Ecosystem Approach to Land Management.” BioScience 40 (September 1990): 558–62. McLean, H. E. “Paying the Price for Old Growth.” American Forests 97 (September-October 1991): 22–25. Steen, H. K. “Americans and Their Forests: A Love-Hate Story.” American Forests 98 (September-October 1992): 18–20.
Forest Service The national forest system in the United States must be considered one of the great success stories of the conservation movement. This remains true despite the continual controversies that seem to accompany the administration of national forest lands by the United States Forest Service. The roots of the Forest Service go back to the appointment in 1876 of Franklin B. Hough as a “forestry agent” in the U.S. Department of Agriculture to gather information about the nation’s forests. Ten years later, Bernhard E. Fernow was appointed chief of a fledgling Division of Forestry. Partway through Fernow’s tenure, Congress passed the Forest Reserve Act of 1891, which authorized the president to withdraw lands from the public domain to establish federal forest reserves. The public lands were to be administered, however, by the General Land Office in the U.S. Department of the Interior. Gifford Pinchot succeeded Fernow in 1899 and was Chief Forester when President Theodore Roosevelt approved the transfer of 63 million acres (25 million ha) of
forest reserves into the Department of Agriculture in 1905. That same year, the name of the Bureau of Forestry was changed to the United States Forest Service. Two years later, the reserves were redesignated national forests. The Forest Service today is organized into four administrative levels: the office of the Chief Forester in Washington, D.C.; nine regional offices; 155 national forests; and 637 ranger districts. The Forest Service also administers twenty national grasslands. In addition, a research function is served by a forest products laboratory in Madison, Wisconsin, and eight other field research stations. These lands are used for a wide variety of purposes, uses that were given official statutory status with the passage of the Multiple Use-Sustained Yield Act of 1960. That act officially listed five uses (timber, water, range, wildlife, and recreation) to be administered on national forest lands. Wilderness was later included, and the Forest Service now administers more than 34 million acres (14 million ha) in 387 units of the wilderness preservation system. Despite a professionally trained staff and a sustained spirit of public service, Forest Service administration and management of national forest lands has been controversial from the beginning. The agency has experienced repeated attempts to transfer it to the Department of the Interior (or, once, to a new Department of Natural Resources); its authority to regulate grazing and timber use has been frequently challenged, including through attempts to transfer national forest lands into private hands; and some of the Service’s management policies have been the center of conflict. These policies have included clear-cutting and “subsidized” logging, various recreation uses, preservation of the northern spotted owl, and the cutting of old-growth forests in the Pacific Northwest. [Gerald L. Young Ph.D.]
RESOURCES BOOKS Clary, D. A. Timber and the Forest Service. Lawrence: University Press of Kansas, 1986. O’Toole, R. Reforming the Forest Service. Washington, DC: Island Press, 1988. Steen, H. K. The United States Forest Service: A History. Seattle: University of Washington Press, 1976. ———. The Origins of the National Forests. Durham, NC: Forest History Society, 1992.
ORGANIZATIONS Forest Service, U. S. Department of Agriculture, Sidney R. Yates Federal Building 201 14th Street, SW at Independence Ave., SW, Washington, D.C. USA 20250 Email:
[email protected],
Forestry Canada see Canadian Forest Service
Forests see Coniferous forest; Deciduous forest; Hubbard Brook Experimental Forest; National forest; Old-growth forest; Rain forest; Taiga; Temperate rain forest; Tropical rain forest
Dr. Dian Fossey (1932–1985) American naturalist and primatologist Dian Fossey is remembered by her fellow scientists as the world’s foremost authority on mountain gorillas. But to the millions of wildlife conservationists who came to know Fossey through her articles and book, she will always be remembered as a martyr. Throughout the nearly 20 years she spent studying mountain gorillas in central Africa, the American primatologist tenaciously fought the poachers and bounty hunters who threatened to wipe out the endangered primates. She was brutally murdered at her research center in 1985 by what many believe was a vengeful poacher. Fossey’s dream of living in the wilds of Africa dated back to her lonely childhood in San Francisco. She was born in 1932, the only child of George Fossey, an insurance agent, and Kitty (Kidd) Fossey, a fashion model. The Fosseys divorced when Dian was six years old. A year later, Kitty married a wealthy building contractor named Richard Price. Price was a strict disciplinarian who showed little affection for his stepdaughter. Although Fossey loved animals, she was allowed to have only a goldfish; when it died, she cried for a week. Fossey began her college education at the University of California at Davis in the preveterinary medicine program. She excelled in writing and botany but failed chemistry and physics. After two years, she transferred to San Jose State University, where she earned a bachelor of arts degree in occupational therapy in 1954. While in college, Fossey became a prize-winning equestrian. In 1955 her love of horses drew her from California to Kentucky, where she directed the occupational therapy department at the Kosair Crippled Children’s Hospital in Louisville. Fossey’s interest in Africa’s gorillas was aroused by primatologist George Schaller’s 1963 book, The Mountain Gorilla: Ecology and Behavior. Through Schaller’s book, Fossey became acquainted with the largest and rarest of three subspecies of gorillas, Gorilla gorilla beringei. She learned that these giant apes make their home in the mountainous forests of Rwanda, Zaire, and Uganda. Males grow
up to 6 ft (1.8 m) tall and weigh 400 lb (182 kg) or more. Their arms span up to 8 ft (2.4 m). The smaller females weigh about 200 lb (91 kg). Schaller’s book inspired Fossey to travel to Africa to see the mountain gorillas in their homeland. Against her family’s advice, she took out a three-year bank loan for $8,000 to finance the seven-week safari. While in Africa, Fossey met the celebrated paleoanthropologist Louis Leakey, who had encouraged Jane Goodall in her research on chimpanzees in Tanzania. Leakey was impressed by Fossey’s plans to visit the mountain gorillas. Those plans were nearly destroyed when she shattered her ankle on a fossil dig with Leakey. But just two weeks later, she hobbled on a walking stick up a mountain in the Congo (now Democratic Republic of the Congo) to her first encounter with the great apes. The sight of six gorillas set the course for her future. “I left Kabara (gorilla site) with reluctance but with never a doubt that I would, somehow, return to learn more about the gorillas of the misted mountains,” Fossey wrote in her book, Gorillas in the Mist. Her opportunity came three years later, when Leakey was visiting Louisville on a lecture tour. Fossey urged him to hire her to study the mountain gorillas. He agreed, if she would first undergo a preemptive appendectomy. Six weeks later, he told her the operation was unnecessary; he had only been testing her resolve. But it was too late: Fossey had already had her appendix removed. The L.S.B. Leakey and the Wilkie Brothers foundations funded her research, along with the National Geographic Society. Fossey began her career in Africa with a brief visit to Jane Goodall in Tanzania to learn the best methods for studying primates and collecting data. Early in 1967 she set up camp at the Kabara meadow in Zaire’s Parc National des Virungas, where Schaller had conducted his pioneering research on mountain gorillas a few years earlier. The site was ideal for Fossey’s research; because Zaire’s park system protected them against human intrusion, the gorillas showed little fear of Fossey’s presence. Unfortunately, civil war in Zaire forced Fossey to abandon the site six months after she arrived. She established her permanent research site on September 24, 1967, on the slopes of the Virunga Mountains in the tiny country of Rwanda. She called it the Karisoke Research Centre, named after the neighboring Karisimbi and Visoke mountains in the Parc National des Volcans. Although Karisoke was just 5 mi (8 km) from the first site, Fossey found a marked difference in Rwanda’s gorillas. They had been harassed so often by poachers and cattle grazers that they initially rejected all her attempts to make contact. Theoretically, the great apes were protected from such intrusion within the park.
Dian Fossey photographing mountain gorillas in Zaire. (AP/Wide World Photos. Reproduced by permission.)
But the government of the impoverished, densely populated country failed to enforce the park rules. Native Batusi herdsmen used the park to trap antelope and buffalo, sometimes inadvertently snaring a gorilla. Most trapped gorillas escaped, but not without seriously mutilated limbs that sometimes led to gangrene and death. Poachers who caught gorillas could earn up to $200,000 for one by selling the skeleton to a university and the hands to tourists. From the start, Fossey’s mission was to protect the endangered gorillas from extinction—indirectly, by researching and writing about them, and directly, by destroying traps and chastising poachers. Fossey focused her studies on some 51 gorillas in four family groups. Each group was dominated by a sexually mature silverback, named for the characteristic gray hair on its back. Younger bachelor males served as guards for the silverback’s harem and their juvenile offspring. When Fossey began observing the reclusive gorillas, she followed the advice of earlier scientists by concealing herself and watching from a distance. But she soon realized that the only way she would be able to observe their behavior as closely as she wanted was by “habituating” the gorillas to her presence. She did so by mimicking their sounds and behavior. She learned to imitate the belches that signal contentment, the barks of curiosity, and a dozen other
sounds. To convince them she was their kind, Fossey pretended to munch on the foliage that made up their diet. Her tactics worked. One day early in 1970, Fossey made history when a gorilla she called Peanuts reached out and touched her hand. Fossey called it her most rewarding moment with the gorillas. She endeared Peanuts and the other gorillas she studied to lay readers through her articles in National Geographic magazine. The apes became almost human through her descriptions of their nurturing and playing. Her early articles dispelled the myth that gorillas are vicious. In her 1971 National Geographic article she described the giant beasts as ranking among “the gentlest animals, and the shiest.” In later articles, Fossey acknowledged a dark side to the gorillas. Six of 38 infants born during a 13-year period were victims of infanticide. She speculated that the practice was a silverback’s means of perpetuating his own lineage: by killing another male’s offspring, he could mate with the victim’s mother. Three years into her study, Fossey realized she would need a doctoral degree to continue receiving support for Karisoke. She temporarily left Africa to enroll at Cambridge University, where she earned her Ph.D. in zoology in 1974. In 1977, Fossey suffered a tragedy that would permanently alter her mission at Karisoke. Digit, a young male she had grown to love, was slaughtered by poachers. Walter Cronkite focused national attention on the gorillas’ plight when he reported Digit’s death on the CBS Evening News. Interest in gorilla conservation surged. Fossey took advantage of that interest by establishing the Digit Fund, a non-profit organization to raise money for anti-poaching patrols and equipment. Unfortunately, the money wasn’t enough to save the gorillas from poachers. Six months later, a silverback and his mate from one of Fossey’s study groups were shot and killed defending their three-year-old son, who had been shot in the shoulder. The juvenile later died from his wounds. It was rumored that the gorilla deaths caused Fossey to suffer a nervous breakdown, although she denied it. What is clear is that the deaths prompted her to step up her fight against the Batusi poachers by terrorizing them and raiding their villages. “She did everything short of murdering those poachers,” Mary Smith, senior assistant editor at National Geographic, told contributor Cynthia Washam in an interview. A serious calcium deficiency that causes bones to snap and teeth to rot forced Fossey to leave Africa in 1980. She spent her three-year sojourn as a visiting associate professor at Cornell University, where she completed her book, Gorillas in the Mist; it was published in 1983. Although some scientists criticized the book for its abundance of anecdotes and lack of scientific discussion, lay readers and reviewers received it warmly.
When Fossey returned to Karisoke in 1983, her scientific research was virtually abandoned. Funding had run dry, and she was operating Karisoke with her own savings. “In the end, she became more of an animal activist than a scientist,” Smith said. “Science kind of went out the window.” On December 27, 1985, Fossey, 53, was found murdered in her bedroom at Karisoke, her skull split diagonally from her forehead to the corner of her mouth. Her murder remains a mystery that has prompted much speculation. Rwandan authorities jointly charged American research assistant Wayne McGuire, who discovered Fossey’s body, and Emmanuel Rwelekana, a Rwandan tracker Fossey had fired several months earlier. McGuire maintains his innocence. At the urging of U.S. authorities, he left Rwanda before the charges against him were made public. He was convicted in absentia and sentenced to die before a firing squad if he ever returns to Rwanda. Farley Mowat, the Canadian author of Fossey’s biography, Woman in the Mists, believes McGuire was a scapegoat. He had no motive for killing her, Mowat wrote, and the evidence against him appeared contrived. Rwelekana’s story will never be known; he was found dead, apparently having hanged himself, a few weeks after he was charged with the murder. Smith, among others, believes Fossey’s death came at the hands of a vengeful poacher. “I feel she was killed by a poacher,” Smith said. “It definitely wasn’t any mysterious plot.” Fossey’s final resting place is at Karisoke, surrounded by the remains of Digit and more than a dozen other gorillas she had buried. Her legacy lives on in the Virungas, as her followers have taken up her battle to protect the endangered mountain gorillas. The Dian Fossey Gorilla Fund, formerly the Digit Fund, finances scientific research at Karisoke and employs camp staff, trackers, and anti-poaching patrols. The Rwanda government, which for years had ignored Fossey’s pleas to protect its mountain gorillas, on September 27, 1990, recognized her scientific achievement with the Ordre (sic) National des Grandes Lacs, the highest award it has ever given a foreigner. Gorillas in Rwanda are still threatened by cattle ranchers and hunters squeezing in on their habitat. According to the Colorado-based Dian Fossey Gorilla Fund, by the early 1990s fewer than 650 mountain gorillas remained in Rwanda, Zaire, and Uganda, and the Virunga Mountains are home to about 320 of them. Smith is among those convinced that the number would be much smaller if not for Fossey’s 18 years of dedication to saving the great apes. “Her conservation efforts stand above everything else (she accomplished at Karisoke),” Smith said. “She single-handedly saved the mountain gorillas.” [Cynthia Washam]
RESOURCES BOOKS Fossey, D. Gorillas in the Mist. Boston: Houghton Mifflin, 1983. Hayes, H. T. P. The Dark Romance of Dian Fossey. New York: Simon and Schuster, 1990. Mowat, Farley. Woman in the Mists. New York: Warner Books, 1987. Shoumatoff, Alex. African Madness. New York: Alfred A. Knopf, 1988.
PERIODICALS Brower, Montgomery. “The Strange Death of Dian Fossey.” People, February 17, 1986, 46–54.
Fossil fuels In early societies, wood or other biological fuels were the main energy source, and in many non-industrial societies they continue to be used widely. Biological fuels may be seen as part of a solar economy in which energy is extracted from the sun in a way that makes them renewable. Industrialization, however, requires energy sources of much higher density, a demand that has generally been met through the use of fossil fuels such as coal, gas, or oil. In the twentieth century a number of other options, such as nuclear power or higher-density renewable energy sources (wind power, hydro-electric power, etc.), have also become available. Nevertheless, fossil fuels represent the principal source of energy for most of the industrialized world. Fossil fuels are types of sedimentary organic materials, often loosely called bitumens, with asphalt the solid and petroleum the liquid form. More correctly, bitumens are sedimentary organic materials that are soluble in carbon disulfide; it is this that distinguishes asphalt from coal, which is an organic material largely insoluble in carbon disulfide. Petroleum can probably be produced from any kind of organism, but the fact that these sedimentary deposits are more frequent in marine sediments has suggested that oils arise from the fats and proteins in material deposited on the sea floor. These fats would be stable enough to survive the initial decay and burial but sufficiently reactive to undergo conversion to petroleum hydrocarbons at low temperature. Petroleum consists largely of paraffins, or simple alkanes, with smaller amounts of naphthenes. Aromatic compounds such as benzene are also present, at around the percent level in most crude oils. Natural gas is an abundant fossil fuel that consists largely of methane and ethane, although traces of higher alkanes are present. In the past, natural gas was regarded very much as a waste product of the petroleum industry and was simply burnt or flared off; increasingly it is seen as the favored fuel. Coal, unlike petroleum, contains only a little hydrogen. Fossil evidence shows that coal is mostly derived from the burial of terrestrial vegetation, with its high proportion of lignin and cellulose.
Most sediments contain some organic matter, and this can rise to several percent in shales. Here the organic matter can consist of both coals and bitumens. This organic material, often called sapropel, can be distilled to yield petroleum. Oil shales containing the sapropel kerogen are very good sources of petroleum. Shales are considered to have formed where organic matter was deposited along with fine-grained sediments, perhaps in fjords, where restricted circulation keeps oxygen concentrations low enough to prevent decay of the organic material. Fossil fuels are mined or pumped from geological reservoirs where they have been stored for long periods of time. The more viscous fuels, such as heavy oils, can be quite difficult to extract and refine, which is why lighter oils were favored through the latter half of the twentieth century. In recent decades, however, natural gas has been popular because it is easy to pipe and has a somewhat less damaging impact on the environment. These fossil fuel reserves, although large, are limited and non-renewable. The total recoverable light to medium oil reserves are estimated at about 1.6 trillion barrels, of which about a third has already been used. Natural gas reserves are estimated at the equivalent of 1.9 trillion barrels of oil, and about a sixth has already been used. Heavy oil and bitumen amount to about 0.6 and 0.34 trillion barrels respectively, most of which remains unutilized. The gas and lighter oil reserves lie predominantly in the eastern hemisphere, which accounts for the enormous petroleum industries of the Middle East. The heavier oil and bitumen reserves lie mostly in the western hemisphere; these are more costly to use and have been for the most part untapped. Of the 7.6-trillion-ton coal reserve, only 2.5% has been used. Almost two thirds of the available coal is shared between China, the former Soviet Union, and the United States.
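To put these reserve figures side by side, the short Python sketch below simply restates them and computes what remains in each category. It is a back-of-the-envelope restatement of the estimates quoted above, not an authoritative dataset; the near-zero used fractions assumed for heavy oil and bitumen stand in for the statement that most of those reserves remain unutilized.

# Reserve estimates as quoted in this entry (trillion barrels of oil or
# oil equivalent, except coal, which is in trillion tons).
reserves = {
    "light/medium oil": (1.6, 1 / 3),    # (total recoverable, fraction used)
    "natural gas (oil equivalent)": (1.9, 1 / 6),
    "heavy oil": (0.6, 0.0),             # assumed ~0: "most ... unutilized"
    "bitumen": (0.34, 0.0),              # assumed ~0: "most ... unutilized"
    "coal (trillion tons)": (7.6, 0.025),
}

for fuel, (total, used) in reserves.items():
    remaining = total * (1 - used)
    print(f"{fuel:30s} used {used:5.1%}   remaining {remaining:5.2f}")

Run as written, the sketch reports, for example, that roughly 1.07 trillion barrels of the light to medium oil and about 7.4 trillion tons of the coal remain.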
Petroleum is not burnt in its crude form but must be refined, which is essentially a distillation process that splits the fuel into batches of different volatility. The lighter fuels are used in automobiles, with heavier fuels used as diesel and fuel oils. Modern refining can use chemical techniques in addition to distillation to help meet changing demands in fuel type and volatility. The combustion of fuels represents an important source of air pollutants. Although the combustion process itself can lead to the production of pollutants such as carbon or nitrogen oxides, it has often been the trace impurities in fossil fuels that have been the greatest source of air pollution. Natural gas is a much favored fuel because it has only traces of impurities such as hydrogen sulfide, and many of these are removed from the gas by relatively simple scrubbing techniques before it is distributed. Oil contains more impurities than natural gas, but refining redistributes them among the various products. Sulfur compounds tend to be found only in trace amounts in the light automotive fuels, so automobiles are only a minor source of sulfur dioxide in the atmosphere. Diesel oil can contain as much as a percent of sulfur, and heavier fuel oils can contain even more, so in some situations these can represent important sources of sulfur dioxide in the atmosphere. Oils also dissolve metals from the rocks in which they are found. Some of the organic compounds in oil have a high affinity for metals, most notably nickel and vanadium. Such metals can reach high concentrations in oils, and refining means that most become concentrated in the heavier fuel oils. Combustion of fuel oil yields ashes that contain substantial fractions of the trace metals present in the original oil, which makes an element like vanadium a useful marker of fuel oil combustion. Coal is often seen as the most polluting fuel because low-grade coals can contain large quantities of ash, sulfur, and chlorine. However, it should be emphasized that the quantity of impurities in coal can vary widely, depending on where it is mined. The sulfur present in coal is found both as iron pyrites (inorganic) and bound up with organic matter. The nitrogen in coal is almost all organic nitrogen. Coal users are often careful to choose a fuel that meets their requirements in terms of the amount of ash, smoke, or pollution risk it imposes. High-rank coals such as anthracite have a high carbon content; they are mined in locations such as Pennsylvania and South Wales, contain little volatile matter, and burn almost smokelessly. Much of the world’s coal reserve is bituminous, which means that it contains about 20–25% volatile matter. The fuel industry is often seen as responsible for pollutants and environmental risks that go beyond those produced by the combustion of its products. Mining and extraction processes result in spoil heaps, huge holes in opencast mining, and the potential for slumping of land in conventional mining. Petroleum refineries are large sources of hydrocarbons, although not usually the largest anthropogenic source of volatile organic compounds in the atmosphere. Refineries also release sulfur, carbon, and nitrogen oxides from the fuel that they burn. Liquid natural gas and oil spills occur both in the refining and in the transport of petroleum. Being a solid, coal presents somewhat less risk when being transported, although wind-blown coal dust can cause localized problems. Coal is sometimes converted to coke or other refined products such as Coalite, a smokeless coal. These derivatives are less polluting, although much concern has been expressed about the pollution damage that occurs near the factories that manufacture them. Despite this, the conversion of coal to less polluting synthetic solid, liquid, and gaseous fuels would appear to offer much opportunity for the future.
One of the principal concerns about the current reliance on fossil fuels relates not so much to their limited supply as to the fact that combustion releases such large amounts of carbon dioxide. Our use of fossil fuels over the last century has increased the concentration of carbon dioxide in the atmosphere, and there is already mounting evidence that this has increased the temperature of the earth through an enhanced greenhouse effect. [Peter Brimblecombe]
RESOURCES BOOKS Campbell, I. M. Energy and the Atmosphere. New York: Wiley, 1986.
Fossil water Water that occurs in an aquifer or zone of saturation protected or isolated from the current hydrologic cycle. Because this water is old, it does not carry the chemicals and contaminants associated with industrialized society, and its unblemished nature often makes it prized drinking water. In other cases, however, these aquifers are so isolated, and the water naturally so high in inorganic salts, that scientists have suggested using them for waste disposal or containment areas. See also Drinking-water supply; Groundwater; Hazardous waste siting; Water quality
Four Corners The Hopi believe that the Four Corners—where Colorado, Utah, New Mexico, and Arizona meet—is the center of the universe and holds all life on Earth in balance. The region also has some of the largest deposits of coal, uranium, and oil shale in the world. According to the National Academy of Sciences, the Four Corners is a “national sacrifice area.” This ancestral home of the Hopi and Dineh (Navajo) people is the center of the most intense energy development in the United States. Traditional grazing and farm land are being swallowed up by uranium mines, coal mines, and power plants. The Four Corners, sometimes referred to as the “joint-use area,” comprises 1.8 million acres (729,000 ha) of high desert plateau where Navajo sheep herders have grazed their flocks on idle Hopi land for generations. In 1974 Congress passed Public Law (PL) 93-531, which established the Navajo/Hopi Relocation Commission, with the power to enforce livestock reduction and the removal of over 10,000 traditional Navajo and Hopi, the largest forced relocation within the United States since the Japanese internment during World War II.
Elders of both Nations issued a joint statement that officially opposed the relocation: “The traditional Hopi and Dineh (Navajo) realize that the so-called dispute is used as a disguise to remove both people from the JUA (joint use area), and for non-Indians to develop the land and mineral resources...Both the Hopi and Dineh agree that their ancestors lived in harmony, sharing land and prayers for more than four hundred years...and cooperation between us will remain unchanged.” The traditional Navajo and Hopi leaders have been replaced by Bureau of Indian Affairs (BIA) tribal councils. These councils, in association with the U.S. Department of the Interior, Peabody Coal, the Mormon Church, attorneys, and public relations firms, created what is commonly known as the “Hopi-Navajo land dispute” to divide the joint-use area so that it could be opened up for energy development. In 1964, 223 Southwest utility companies formed a consortium known as the Western Energy and Supply Transmission Associates (WEST), which includes water and power authorities on the West Coast as well as Four Corners area utility companies. WEST drafted plans for massive coal surface-mining operations and six coal-fired, electricity-generating plants on Navajo and Hopi land. By 1966 John S. Boyden, attorney for the Bureau of Indian Affairs Hopi Tribal Council, had secured lease arrangements with Peabody Coal to surface mine 58,000 acres (23,490 ha) of Hopi land and had contracted WEST to build the power plants. This was done despite objections by the traditional Hopi leaders and the self-sufficient Navajo shepherds. Later that same year Kennecott Copper, owned in part by the Mormon Church, bought Peabody Coal. Peabody supplies the Four Corners power plant with coal. The plant burned 5 million tons of coal a year, the equivalent of roughly ten tons per minute, and it emits over 300 tons of fly ash and other particles into the San Juan River Valley every day. Since 1968 the coal mining operations and the power plant have extracted over 60 million gal (227 million l) of water a year from the Black Mesa water table, which has caused extreme desertification of the area and has caused the ground in some places to sink by up to 12 ft (3.6 m). The worst nuclear accident in American history occurred at Church Rock, New Mexico, on July 26, 1979, when a Kerr-McGee uranium tailings pond spilled over into the Rio Puerco. The spill contaminated drinking water from Church Rock to the Colorado River, over 200 mi (322 km) to the west. The mill tailings dam broke—cracks in the dam structure had been detected two months earlier, yet repairs were never made—and discharged over 100 million gal (379 million l) of highly radioactive water directly into the Rio Puerco. The main source of potable water for over 1,700 Navajos was contaminated.
When Kerr-McGee abandoned the Shiprock site in 1980, it left behind 71 acres (29 ha) of “raw” uranium tailings, which retained 85% of the original radioactivity of the ore at the mining site. The tailings were at the edge of the San Juan River and have since contaminated communities located downstream. What is the future of the Four Corners area, with its 100-plus uranium mines, uranium mills, five power plants, depleted watershed, and radioactive contamination? One “solution” offered by the United States government is to zone the land into uranium mining and milling districts so as to forbid human habitation. [Debra Glidden]
RESOURCES BOOKS Garrity, M. “The U.S. Colonial Empire Is As Close As the Nearest Reservation.” In Trilateralism: The Trilateral Commission and Elite Planning For World Management. Boston: South End Press, 1980. Kammer, J. The Second Long Walk: The Navajo-Hopi Land Dispute. Albuquerque: University of New Mexico Press, 1980. Moskowitz, M. Everybody’s Business. New York: Harper and Row, 1980. Scudder, T., et al. Expected Impacts of Compulsory Relocation on Navajos, with Special Emphasis on Relocation from the Former Joint Use Area Required by Public Law 93-531. Binghamton, NY: Institute for Development of Anthropology, 1979.
PERIODICALS Tso, H., and L. Shields. “Navajo Mining Operations: Early Hazards and Recent Interventions.” New Mexico Journal of Science 20 (June 1980): 13.
Fox hunting Fox hunting is the sport of mounted riders chasing a wild fox with a pack of hounds. The sport is also known as riding to hounds: the mounted riders follow a pack of specially trained hounds that pursue the fox by following its scent. The riders are called the “field,” and their leader is called the “master of foxhounds.” A huntsman manages the pack of hounds. Fox hunting originated in England and dates back to the Middle Ages. People hunted foxes because they were predators that killed farm animals such as chickens and sheep. Rules were established reserving the hunt to royalty, the aristocracy (people given titles by royalty), and landowners. As the British Empire expanded, the English brought fox hunting to the lands they colonized. The first fox hunt in the United States was held in Maryland in 1650, according to the Masters of Foxhounds Association (MFHA), the organization that governs fox hunting in the United States. Although the objective of most fox hunts is to kill the fox, some hunts do not involve any killing. In a drag hunt,
hounds chase the scent of a fox along a trail prepared before the hunt. In the United States, a hunt ends successfully when the fox goes into a hole in the ground called the “earth.” In Great Britain, a campaign to outlaw fox hunting started in the late twentieth century. The subject drew strong debate, and the issue had not been resolved as of March 2002. Organized hunt supporters like the Countryside Alliance said a hunting ban would result in the loss of 14,000 jobs. In addition, advocates said that hunts help to eliminate a rural threat by controlling the fox population. Hunt supporters described foxes as vermin, a category of destructive, disease-bearing animals, and saw hunting as less cruel than other methods of eliminating foxes. Opponents called fox hunting a “blood sport” that was cruel to animals. According to the International Fund for Animal Welfare (IFAW), more than 15,000 foxes are killed during the 200 hunts held each year. The IFAW reported that young dogs are trained to hunt by pitting them against fox cubs. The group wants the British Parliament to outlaw fox hunting; drag hunting was suggested as a more humane alternative. Another opposition group, the Nottingham Hunt Saboteurs Association, attempts to disrupt “blood sports” through “non-violent direct action.” Saboteurs’ methods include trying to distract hounds by laying false scent trails, shouting, and blowing horns. Both supporters and opponents of fox hunting claim public support for their positions. In 1997, the issue of a ban was brought to Parliament, the British national legislative body consisting of the House of Lords and the House of Commons. That year, the Labour Party won the general election and proposed a bill to ban hunting with hounds. The following year, the bill passed through some legislative readings; however, time ran out before a final vote was taken. In July 1999, Prime Minister Tony Blair promised to make fox hunting illegal. That November, Home Secretary Jack Straw called for an inquiry to study the effect of a ban on the rural economy. The Burns Inquiry concluded in June 2000 that a hunting ban would result in the loss of between 6,000 and 8,000 jobs. The inquiry did not find evidence that being chased was painful for foxes, but it did state that foxes did not die immediately. This conclusion about a slower, painful death echoed opponents’ charges that hunting was a cruel practice. Two months before the inquiry was released, Straw proposed that lawmakers should have several options to vote on. One choice was a ban on fox hunting; another was to make no changes. A third option was to tighten fox hunting regulations.
In March 2002, Parliament cast non-binding opinion votes on the options. The House of Commons voted for a ban; the House of Lords voted for licensed hunting. After the vote, Rural Affairs Minister Alun Michael said that the government would try to find common ground before trying to legislate fox hunting. The process of trying to reach agreement was expected to take six months at most.
Fox hunting in other countries
The Scottish Parliament banned hunting in February of 2002; the ban was to take effect on August 1, 2002, and the Countryside Alliance announced plans to challenge the ruling legally. Fox hunting is legal in Ireland, Belgium, Portugal, Italy, and Spain. Hunting with hounds is banned in Switzerland. In the United States, the MFHA was established in 1907. In March of 2002, the MFHA reported that there were 171 organized hunt clubs in North America. [Liz Swain]
RESOURCES BOOKS Pool, Daniel. What Jane Austen Ate and Charles Dickens Knew: From Fox Hunting to Whist—The Facts of Daily Life in Nineteenth-Century England. Carmichael, CA: Touchstone Books, 1994. Robards, Hugh J. Foxhunting in England, Ireland, and North America. Lanham, MD: Derrydale Press, 2000. Thomas, Joseph B., and Mason Houghland. Hounds and Hunting Through the Ages. Lanham, MD: Derrydale Press, 2001.
ORGANIZATIONS British Government Committee of Inquiry into Hunting with Dogs in England and Wales, England Email:
[email protected], Countryside Alliance, The Old Town Hall, 367 Kensington Road, London SE11 4PT, England (011) 44-020-7840-9200, Fax: (011) 44-020-7793-8899, Email:
[email protected], International Fund for Animal Welfare, 411 Main Street, P.O. Box 193, Yarmouth Port, MA USA 02675 (508) 744-2000, Fax: (508) 744-2009, Toll Free: (800) 932-4329, Email:
[email protected], Masters of Foxhounds Association, Morven Park, P.O. Box 2420, Leesburg, VA USA 20177 Email:
[email protected], Nottingham Hunt Saboteurs, The Sumac Centre, 245 Gladstone Street, Nottingham NG7 6HX, England
Free riders A free rider, in the broad sense of the term, is anyone who enjoys a benefit provided, probably unwittingly, by others. In the narrow sense, a free rider is someone who receives the benefits of a cooperative venture without contributing to the provision of those benefits. A person who does not participate in a cooperative effort to reduce air pollution by driving less, for instance, will still breathe cleaner air—and thus be a free rider—if the effort succeeds. In this sense, free riders are a major concern of the theory of collective action. As developed by economists and social theorists, this theory rests on a distinction between private and public (or collective) goods. A public good differs from a private good because it is indivisible and nonrival. A public good, such as clean air or national defense, is indivisible because it cannot be divided among people the way food or money can. It is nonrival because one person’s enjoyment of the good does not diminish anyone else’s enjoyment of it. Smith and Jones may be rivals in their desire to win a prize, but they cannot be rivals in their desire to breathe clean air, for Smith’s breathing clean air will not deprive Jones of an equal chance to do the same. Problems arise when a public good requires the cooperation of many people, as in a campaign to reduce pollution or conserve resources. In such cases, individuals have little reason to cooperate, especially when cooperation is burdensome. After all, one person’s contribution—using less gasoline or electricity, for example—will make no real difference to the success or failure of the campaign, but it will be a hardship for that person. So the rational course of action is to try to be a free rider who enjoys the benefits of the cooperative effort without bearing its burdens. If everybody tries to be a free rider, however, no one will cooperate and the public good will not be provided. If people are to prevent this from happening, some way of providing selective or individual incentives must be found, either by rewarding people for cooperating or punishing them for failing to cooperate. The free rider problem posed by public goods helps to illuminate many social and political difficulties, not the least of which are environmental concerns. It may explain why voluntary campaigns to reduce driving and to cut energy use so often fail, for example. As formulated in Garrett Hardin’s Tragedy of the Commons, moreover, collective action theory accounts for the tendency to use common resources—grazing land, fishing banks, perhaps the earth itself—beyond their carrying capacity. The solution, as Hardin puts it, is “mutual coercion, mutually agreed upon” to prevent the overuse and destruction of vital resources. Without such action, the desire to ride free may lead to irreparable ecological damage.
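The incentive structure described above can be made concrete with a standard public goods game, sketched below in Python. The group size, endowment, and multiplier are arbitrary illustrative assumptions, not figures from this entry; the point is only that free riding yields the higher individual payoff no matter what the others do, even though universal cooperation beats universal defection.

# Minimal public goods game: each of n players holds an endowment and may
# contribute it to a common pool. The pool is multiplied by r (with
# 1 < r < n) and shared equally, so a contribution returns only r/n < 1
# of its cost to the contributor, although it enriches the group.

def payoff(contributes: bool, others_contributing: int,
           n: int = 10, endowment: float = 10.0, r: float = 3.0) -> float:
    pool = (others_contributing + int(contributes)) * endowment
    share = r * pool / n                        # everyone gets an equal share
    kept = 0.0 if contributes else endowment    # free riders keep their endowment
    return kept + share

for k in (0, 5, 9):  # number of *other* players who contribute
    print(f"{k} others contribute: cooperate -> {payoff(True, k):5.1f}, "
          f"free-ride -> {payoff(False, k):5.1f}")

With these assumed numbers, a player facing nine cooperators earns 37 by free riding against 30 by cooperating, yet a group of ten free riders earns only 10 each while ten cooperators would earn 30 each. Closing exactly this gap is what Hardin’s “mutual coercion, mutually agreed upon” and other selective incentives are meant to do.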
[Richard K. Dagger]
RESOURCES BOOKS Hardin, R. Collective Action. Baltimore: Johns Hopkins University Press, 1982. Olson, M. The Logic of Collective Action. New York: Schocken Books, 1971.
PERIODICALS Hardin, G. “The Tragedy of the Commons.” Science 162 (December 13, 1968): 1243–48.
Freon The generic name for several chlorofluorocarbons (CFCs) widely used in refrigerators and air conditioners, including the systems in houses and cars. Freon—composed of chlorine, fluorine, and carbon atoms—is a non-toxic gas at room temperature. It is environmentally significant because it is extremely long-lived in the atmosphere, with a typical residence time of 70 years. This long life-span permits CFCs to disperse, ultimately reaching the stratosphere 19 mi (30 km) above the earth’s surface. Here, high-energy photons in sunlight break down freon, and chlorine atoms liberated during this process participate in other chemical reactions that consume ozone. The final result is to deplete the stratospheric ozone layer that shields the earth from damaging ultraviolet radiation. Under the 1987 Montreal Protocol, 31 industrialized countries agreed to phase out CFC freon production. Freon substitutes such as hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs) replace some or all of the chlorine atoms with hydrogen, providing refrigeration compounds that appear less damaging to the ozone layer, although considerably more expensive and less energy efficient.
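The 70-year residence time quoted above implies considerable persistence, which a simple calculation can illustrate. The Python sketch below uses a one-box model with a constant removal rate, a common first approximation for long-lived trace gases rather than anything stated in this entry, to estimate what fraction of a CFC release is still airborne after a given number of years.

import math

TAU = 70.0  # approximate atmospheric residence time of freon, in years

def fraction_remaining(years: float, tau: float = TAU) -> float:
    """Fraction of an initial release still airborne after `years`,
    assuming a single well-mixed reservoir with removal rate 1/tau."""
    return math.exp(-years / tau)

for t in (10, 70, 100, 200):
    print(f"after {t:3d} years: {fraction_remaining(t):.0%} remains")

Under this simplification, about 37% of a release is still present after 70 years and roughly a quarter after a century, which is why emissions cut today continue to affect stratospheric ozone for decades.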
Fresh water ecology The study of fresh water habitats is called limnology, from the Greek word limnos, meaning “pool, lake, or swamp.” Fresh water habitats are normally divided into two groups: standing bodies of water such as lakes and ponds (called lentic ecosystems) and rivers, streams, and other moving waters (called lotic ecosystems). A third important area is fresh water wetlands. The historical roots of limnology go back to F. A. Forel, who studied Lake Geneva, Switzerland, in the late 1800s, and to E. A. Birge and C. Juday, who studied lakes in Wisconsin in the early 1900s. More recently, the title of modern “father” of limnology arguably belongs to G. Evelyn Hutchinson, who died in 1991 after teaching at Yale University for more than 40 years. Among his prolific writings is a four-volume treatise on limnology that offers the most detailed descriptions of lakes yet published. Fresh water ecology is an intriguing field because of the great diversity of aquatic habitats. For example, lakes can be formed in different ways: volcanic origin, such as Crater Lake, Oregon; tectonic (earth movement) origin, like Lake Tahoe in California/Nevada and Lake Baikal in Siberia; glacially derived lakes like the Great Lakes or smaller
kettle hole or cirque lakes; oxbow lakes, which form as rivers change their meandering paths; and human-created reservoirs. Aquatic habitats are strongly influenced by the surrounding watershed, and lakes in the same geographic area tend to be of the same origin and have similar water quality. Lakes are characteristically non-homogeneous. Physical, chemical, and biological factors contribute to both horizontal and vertical zonation. For example, light penetration creates an upper photic (lighted) zone and a deeper aphotic (unlit) zone. Phytoplankton (microscopic algae such as diatoms, desmids, and filamentous algae) inhabit the photic zone and produce oxygen through photosynthesis. This creates an upper productive area called the trophogenic (productive) zone. The deeper area where respiration prevails is called the tropholytic (unproductive) zone. Zooplankton (microscopic invertebrates such as cladocerans, copepods, and rotifers) and nekton (free-swimming animals such as fish) inhabit both of these zones. The boundary where oxygen produced by photosynthesis equals that consumed through respiration is called the compensation depth. Nearshore areas, where light penetrates to the bottom and aquatic macrophytes such as cattails and bulrushes grow, are called the littoral zone. This is typically the most productive area, and it is more pronounced in ponds than in lakes. Open water areas, where most plankton live, are called the limnetic zone. Some species of zooplankton are concentrated in deeper waters during the day and found in greater numbers in the upper waters during the night. One explanation for this vertical migration is that these zooplankton, often large and sometimes pigmented, are avoiding visually feeding planktivorous fish. These zooplankton are thus able to feed on phytoplankton in the trophogenic zone during periods of darkness, and then swim to deeper waters during daylight hours. Phytoplankton are also adapted for existence in the limnetic zone. Some species are quite small (less than 20 microns in diameter, and called nannoplankton), allowing them to be competitive at nutrient uptake due to their high surface-to-volume ratio. Other groups form large colonies, often with spines, lessening the negative impacts of herbivory and sinking. Blue-green algae produce oils that help them float on or near the water’s surface. Some are able to fix atmospheric nitrogen (called nitrogen fixation), giving them a competitive advantage in low-nitrogen conditions. Other species of blue-greens produce toxic chemicals, making them inedible to herbivores. There have even been reports of cattle deaths following ingestion of water with dense growths of these algae. Lakes can be isothermal (uniform temperature from top to bottom) at some times of the year, but during the summer months they are typically thermally stratified, with an upper, warm layer called the epilimnion (upper lake) and a colder, deeper layer called the hypolimnion (lower lake). These zones are separated by the metalimnion, the region over which temperature changes by more than 1°C per meter of depth; this zone of rapid temperature change is called the thermocline. The summer temperature stratification creates a density gradient that effectively prevents mixing between zones.
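The 1°C-per-meter criterion can be applied directly to a measured temperature profile. The short Python sketch below runs the test on an invented mid-summer profile (the depths and temperatures are illustrative, not field data) and flags the interval that constitutes the metalimnion.

# Flag the metalimnion using the >1°C-per-meter criterion described above.
# Depths in meters, temperatures in °C; an invented mid-summer profile.
depths = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
temps = [24.0, 23.8, 23.5, 23.1, 21.5, 19.0, 16.5, 15.2, 14.8, 14.6, 14.5]

metalimnion = []
for i in range(len(depths) - 1):
    gradient = (temps[i] - temps[i + 1]) / (depths[i + 1] - depths[i])
    if gradient > 1.0:  # temperature falls faster than 1°C per meter
        metalimnion.append((depths[i], depths[i + 1], gradient))

for top, bottom, g in metalimnion:
    print(f"{top}-{bottom} m: {g:.1f} °C per m")
# Water above the first flagged interval is the epilimnion; water below
# the last flagged interval is the hypolimnion.

For this profile the flagged intervals run from 3 m down to 7 m, placing the epilimnion above 3 m and the hypolimnion below 7 m.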
Wind is another physical factor that influences aquatic habitats, particularly in lakes with a broad surface exposed to the prevailing wind; the distance of open water over which the wind blows is called the fetch. Strong winds can produce internal standing waves called seiches, which create a rocking motion in the water once the wind dies down. Other types of wind-generated water movements include Ekman spirals and Langmuir cells. Deep or chemically stratified lakes that never completely mix are called meromictic. Lakes in tropical regions that mix several times a year are called polymictic. In regions with severe winters, where ice covers the surface of the lake, mixing normally occurs only during the spring and fall when the water is isothermal. These lakes are called dimictic, and the mixing process is called overturn. People living downwind of these lakes often notice a rotten-egg smell caused by the release of hydrogen sulfide. This gas is a product of benthic (bottom-dwelling) bacteria that inhabit the anaerobic muds of productive lakes (discussed below in more detail). Lakes that receive a low nutrient input remain fairly unproductive and are called oligotrophic (low nourished). These lakes typically have low concentrations of phytoplankton, with diatoms being the main representative. Moderately productive lakes are called mesotrophic. Eutrophic (well nourished) lakes receive more nutrient input and are therefore more productive. They are typically shallower than oligotrophic lakes and have more accumulated bottom sediments, which often experience summer anoxia. These lakes have an abundance of algae, particularly blue-greens (Cyanobacteria), which are often considered nuisance algae because they float on the surface and out-compete the other algae for nutrients and light. Most lakes naturally age and become more productive over time; however, large, deep lakes may remain oligotrophic. The maturing process is called eutrophication, and it is regulated by the input of the nutrients needed by algae for growth. Definitive limnological studies done in the 1970s concluded that phosphorus is the key limiting nutrient in most lakes. Thus, the accelerated input of this chemical into streams, rivers, and eventually lakes by excess fertilization, sewage input (both human and animal), and erosion is called cultural eutrophication. Much debate and research has been devoted to how to slow or control this process. One interesting management tool is called biomanipulation, in which piscivorous (fish-eating) fish are added to lakes to consume planktivorous (plankton-eating) fish. Because the planktivores are visual feeders on the largest prey, this allows higher numbers of large zooplankton to
thrive in the water; these grazers consume more phytoplankton, particularly non-toxic blue-green algae. A more practical approach to controlling cultural eutrophication is to limit the nutrient loading into our bodies of water. Although this isn’t easy, we must consider ways of limiting excessive uses of fertilizers, both at home and on farms, as well as more effectively regulating the release of treated sewage into rivers and lakes, particularly those that are vulnerable to eutrophication. Another lake management tool is to aerate the bottom (hypolimnetic) water so that it remains oxygenated. This keeps iron in the oxidized state (Fe3+), which chemically binds with phosphate (PO4) and prevents it from being available for algal uptake. Lakes that have anoxic bottom water keep iron in the reduced state (Fe2+), and phosphate is released from the sediment into the water. Fall and spring turnover then returns this limiting nutrient to the photic zone, promoting high algal growth. When these organisms eventually die and sink to the bottom, decomposers use up more oxygen, and we get a “snowball” effect. Thus, productive lakes can become more eutrophic with time, and may eventually develop into hypertrophic (overly productive) systems. Lotic ecosystems differ from lakes and ponds in that currents are more of a factor and primary production inputs are generally external (allochthonous) instead of internal (autochthonous). Thus, a river or stream is considered a heterotrophic ecosystem along most of its length. Gradients in physical and chemical parameters also tend to be more horizontal than vertical in running-water habitats, and organisms living in lotic ecosystems are specially adapted for surviving in these conditions. For example, trout require higher amounts of dissolved oxygen and are primarily found in relatively cold, fast-moving water with low nutrient input. Carp are able to tolerate warmer, slower, more productive bodies of running water. Darters are fish that quickly dart back and forth behind rocks on the bottom of fast-moving streams and rivers as they feed on aquatic insects. Many of these insects are shredders and detritivores that feed on organic material, such as leaves, that enters the water. Other groups specialize in scraping algae and bacteria off rocks in the water. Recently, ecologists have begun to take a greater interest in studying fresh water wetlands. These areas are defined as being inundated or saturated by surface or ground water for most of the year, and they therefore support characteristic wetland vegetation. Although some people consider these areas a nuisance and would prefer to see them drained, ecologists recognize that they are valuable habitats for migrating waterfowl. They also serve as major “adsorptive” areas for nutrients, which is particularly useful around sewage lagoons. We must therefore take greater care in preserving these habitats. [John Korstad]
RESOURCES BOOKS Horne, A. J., and C. R. Goldman. Limnology. New York: McGraw Hill, 1994. Hutchinson, G.E. A Treatise on Limnology. vol. 1: Geography, Physics, Chemistry; vol. 2: Introduction to Lake Biology and Limnoplankton; vol. 3: Limnological Botany; vol. 4: The Zoobenthos. New York: Wiley, 1993. Smith, R. L. Ecology and Field Biology. 4th ed. New York: Harper Collins, 1996. Wetzel, R. Limnology. 2nd ed. Philadelphia: Saunders, 1983.
Friends of the Earth Friends of the Earth (FOE) is a public interest environmental group committed to the conservation, restoration, and rational use of the environment. Founded by David Brower and other militant environmentalists in San Francisco in 1969, FOE works on the local, national, and international levels to prevent and reverse environmental degradation, and to promote the wise use of natural resources. FOE has an international membership of one million. Its particular areas of interest include ozone layer depletion, the greenhouse effect, toxic chemical safety, coal mining, coastal and ocean pollution, the destruction of tropical forests, groundwater contamination, corporate accountability, and nuclear weapons production. In addition to its efforts to influence policy and increase public awareness of environmental issues, FOE’s ongoing activities include the operation of the Take Back the Coast Project and the administration of the Oceanic Society. Over the years, FOE has published numerous books and reports on various topics of concern to environmentalists. FOE was originally organized to operate internationally and now has national organizations in some 63 countries. In several of these, most notably the United Kingdom, FOE is considered to be the best-known and most effective public interest group concerned with environmental issues. The organization has changed its strategies considerably over the years, and not without considerable controversy within its own ranks. Under Brower’s leadership, FOE’s tactics were media-oriented and often confrontational, sometimes taking the form of direct political protests, boycotts, sit-ins, marches, and demonstrations. Taking a holistic approach to the environment, the group argued that fundamental social change was required for lasting solutions to many environmental problems. FOE eventually moved away from confrontational tactics and towards a new emphasis on lobbying and legislation, which helped provoke the departure of Brower and some of the group’s more radical members. FOE began downplaying several of its more controversial stances (for example, on the control of nuclear weapons) and moved its headquarters from
San Francisco to Washington, D.C. More recent controversies have concerned FOE’s endorsement of so-called green products and its acceptance of corporate financial contributions. FOE remains committed, however, to most of its original goals, even if it has forsworn its earlier illegal and disruptive tactics. Relying more on the technical competence of its staff and the technical rationality of its arguments than on idealism, FOE has been highly successful in influencing legislation and in creating networks of environmental, consumer, and human rights organizations worldwide. Its publications and educational campaigns have been quite effective in raising public consciousness of many of the issues with which FOE is concerned. [Lawrence J. Biskowski]
RESOURCES ORGANIZATIONS Friends of the Earth, 1025 Vermont Ave. NW, Washington, D.C. USA 20005 (202) 783-7400, Fax: (202) 783-0444, Email:
[email protected],
Frogs Frogs are amphibians belonging to the order Anura. The anuran group has nearly 2,700 species throughout the world and includes both frogs and toads. The word anura means “without a tail,” and the term applies to most adult frogs. Anurans are distinguished from the tailed amphibians (urodeles), such as salamanders, which retain a tail as adults. One of the most studied and best understood frogs is the northern leopard frog, Rana pipiens. This species is well-known to most children, to people who love the outdoors, and to scientists. Leopard frogs live throughout much of the United States as well as Canada and northern Mexico. Inhabiting a diverse array of environmental conditions, the order Anura exhibits an impressive display of anatomical and behavioral variation among its members. Despite such diversity, the leopard frog is often used as a model that represents all members of the group. Leopard frogs mate in the early spring, depositing their eggs in jelly-like masses. These soft, formless clumps may be seen in temporary ponds where frogs are common. Early embryonic development occurs within the jelly mass, after which the eggs hatch, releasing small swimming tadpoles. The tadpoles feed on algal periphyton, fine organic detritus, and yolk reserves through much of the spring and early summer. Metamorphosis then begins a few weeks to as long as two years after hatching, depending
on the species. Metamorphosis is the process whereby amphibious tadpoles lose their gills and tails and develop arms and legs. The process of metamorphosis is complex and involves not only the loss of the tail and the development of limbs, but also a fundamental reorganization of the gut. For example, in the leopard frog, the relatively long intestine of the vegetarian tadpole is reorganized to form the short intestine of the carnivorous adult frog, because nutrients are more difficult to extract from plant sources. Additionally, metamorphosis profoundly changes the method of respiration in the leopard frog. As the animal loses its gills, air-breathing lungs form and become functional. A significant portion of respiration and gas exchange will occur through the skin of the adult frog. When metamorphosis of a tadpole is complete, the result is a terrestrial, insect-eating, air-breathing frog.

Frogs are important tools for biological learning and research. Many students first encounter vertebrate anatomy with the dissection of an adult leopard frog. Physiologists have used frogs for the study of muscle contraction and of the circulation of blood, which is easily seen in the webbing between the toes of a frog. Embryologists have used frogs for study because they lay an abundance of eggs. A mature R. pipiens female may release 3,000 or more eggs during a single spawning. Frog eggs are relatively large and abundant, which simplifies manipulation and experimentation. Another anuran, the South African clawed frog, Xenopus laevis, is useful in research because it can be reared through metamorphosis easily and can be bred at any time of year. For these reasons, frogs have emerged as extremely useful laboratory animals that provide valuable information about human biology.

Frogs and biomedical research
Many biological discoveries have been made or enhanced by using frogs. For example, the role of sperm in development was studied in frogs in the late 1700s. Amphibians in general, including frogs, have significantly more deoxyribonucleic acid (DNA) per cell than do other chordates. Thus, their chromosomes are large and easy to see with a microscope. In addition, induced ovulation in frogs by pituitary injection was developed during the early 1930s. Early endocrinologists studied the role of the hormone thyroxine (also found in human beings) in vertebrate development in 1912. The role of viruses in animal and human cancer is receiving renewed interest—the first herpes virus known to cause a cancer was the frog cancer herpes virus. Furthermore, mammalian and human cloning, a controversial topic, has its foundations in the cloning of frogs. The first vertebrate ever cloned was Rana pipiens, in an experiment published by Thomas King and Robert Briggs in 1952. As such, experimentation with frogs has contributed greatly to biomedical research.
Unfortunately, the future of amphibians like the northern leopard frog appears to be jeopardized. Most amphibians in the world are frogs and toads. Since the 1980s, scientists have noted a distinct decline in amphibian populations. Worldwide, over 200 species of amphibians have experienced recent population declines, and at least 32 documented species extinctions have occurred. Of the 242 native North American amphibian species, the U.S. Nature Conservancy has identified three species as presumed extinct, three more as possibly extinct, and an additional 38% as vulnerable to extinction. According to the United States Fish and Wildlife Service, four frog species are listed as endangered and three as threatened.

The actual cause of the reductions in frog and amphibian populations remains unclear, but many well-supported hypotheses exist. One hypothesis is that the run-off of chemicals poisons ponds, producing deformed frogs that cannot survive well. A second possibility is an increase in amphibian disease caused by virulent pathogens; viral and fungal infections of frogs have led to recent declines in many populations. A third explanation involves parasitic infection of frogs with flatworms, causing decreased survival. Another theory blames atmospheric ozone depletion for frog decline. Yet another implicates a toxic combination of intensified agriculture, drainage of habitat, and predator changes as the cause of the drastic declines in frog populations. While each argument has merits of its own, it is unlikely that any single cause can adequately explain all amphibian decline.

The poisoned pond hypothesis brought the issue of frog population decline to the forefront. Several years ago, students on a field trip at a pond near the Minnesota community of Henderson discovered a number of frogs with extremely abnormal limbs. Some frogs had three or more hind legs, some had fused appendages, and some had no legs at all. Concerned with what they had found, their teacher, Cindy Reinitz, and her students contacted the Minnesota Pollution Control Agency. Soon, state agency biologists and officials from the University of Minnesota confirmed the presence of many abnormal frogs at the site. The word spread, and by late 1996, many sites in Minnesota were known to have abnormal frogs. Frog developmental abnormalities are not new to science. What made the Minnesota case of frog malformations different, however, was the extraordinary concentration of abnormal frogs in an unusually large number of locations. Concern became more urgent when abnormal animals were also reported in several other states, then in Canada, and finally in Japan. Many of the abnormal frogs were found in agricultural areas where large amounts of fertilizers and pesticides were used.
One reason for concern is that frogs and humans metabolize toxic xenobiotic chemicals, such as fertilizers or pesticides, in similar ways. Also, human beings and frogs have very similar early developmental stages, which is why frogs are used as model animals in embryology. Because of this, worry exists regarding the potential for human developmental abnormalities from such chemicals. Some of the chemicals implicated in the frog malformations were retinoids. Retinoids are derivatives of vitamin A that are potent and crucial hormonal regulators of vertebrate embryological development. Numerous laboratory studies using retinoic acid (a form of retinoid) have reproduced the abnormalities seen in the Minnesota frogs, and the role of retinoids in the regulation of genes involved in limb development is well-characterized. Frog anatomical defects are regarded as a warning sign for potential problems in humans exposed to the same chemicals, since retinoic acid regulates human limb development as well.

Disease is a growing problem for frogs. Recently, two important frog pathogens have been identified and are suspected to play a role in global frog decline. Iridovirus, a type of virus that commonly infects fish and insects, has now been found to infect frogs. An aquatic fungus, the chytrid, is also implicated in frog decline. Infection with the fungus is called chytridiomycosis. The chytrid fungus, Batrachochytrium dendrobatidis, which normally resides in decaying plant material, causes the degeneration of tadpole mouthparts. This results in the death of post-metamorphic frogs and is reportedly involved in amphibian declines in Australia, Europe, and, recently, the Sierra Nevada region of California. In Australia the depletion of the frog population may have been caused by a disease introduced through infected fish. The cane toad, Bufo marinus, may be responsible for spreading a virus to other frogs outside its native range.

Another potential cause of the striking decline of frog populations involves parasitism. Parasitism is a type of symbiosis in which a parasitic organism gains benefit from living within or upon a host organism. The host, in turn, gains no benefit and in fact may be harmed by the symbiosis. Scientists have discovered that a parasitic trematode, or flatworm, is threatening many frog populations. Trematode larvae emerge from snails inhabiting ponds and infect frog tadpoles. Once inside the tadpoles, the flatworm larvae multiply and physically scramble the developing limb bud cells in the hindquarters of the tadpole. If the limb bud cells are not in their proper places, malformations are the consequence. The result of such parasitism by flatworms is adult frogs with multiple legs or fused legs. It is believed that the flatworms derive benefit from the relationship because frogs with defective legs are easier for birds to prey upon. Birds that eat infected frogs in turn
become infected. The cycle is complete when bird droppings laden with flatworm larvae fall into pond water.

A fourth possible explanation for the decline of frogs involves ozone depletion. Some scientists believe that atmospheric ozone loss has led to an increase in ultraviolet light penetration to the surface of the earth. Frogs are exquisitely sensitive to ultraviolet light, and it is believed that increased UV-B penetration has increased mutations in frogs, resulting in limb malformations. Evidence for this hypothesis exists in laboratory experiments that have reliably replicated the limb malformations observed in the Minnesota ponds using UV-B radiation on experimental frogs.

Many scientists believe, however, that multiple factors are to blame for frog decline. They believe that synergy, or the combination of many factors, is the most plausible explanation for the decrease in the number and diversity of frogs. Climate change, urbanization, prolonged drought, global warming, secondary succession, drainage of habitat for housing developments, habitat fragmentation, introduced predators, loss of territory, and the aforementioned infectious and poisoning factors may all simultaneously contribute to the decline of frog populations worldwide. Humans perpetuate the decline by hunting frogs for food. Because the decrease in the numbers and diversity of frogs is so striking, conservationists are concerned that it is an early indicator of the consequences of human progress and overpopulation. [Robert G. McKinnell]
RESOURCES BOOKS Behler, J. L., and F. W. King. National Audubon Society Field Guide to North American Reptiles & Amphibians. New York: Alfred A. Knopf, Inc., 1997. DiBerardino, M. A. Genomic Potential of Differentiated Cells. New York: Columbia University Press, 1997. Duellman, W. E., and L. Trueb. Biology of Amphibians. New York: McGraw-Hill Book Company, 1986. Gilbert, S. F. Developmental Biology. 5th ed. Sunderland, MA: Sinauer Associates, Inc., 1997. Stebbins, R. C., and N. W. Cohen. A Natural History of Amphibians. Princeton: Princeton University Press, 1995.
PERIODICALS Carlson, D. L., L. A. Rollins-Smith, and R. G. McKinnell. "The Lucké Herpesvirus Genome: Its Presence in Neoplastic and Normal Kidney Tissue," Journal of Comparative Pathology 110 (1995): 349–355. Cheh, A. M., et al. "A Comparison of the Ability of Frog and Rat S-9 to Activate Promutagens in the Ames Test," Environmental Mutagenesis 2 (1980): 487–508.
OTHER CSIRO Australian Animal Health Laboratory. December 1995 [cited June 2002].
Tennessee Wildlife Resources Agency. March 22, 2002 [cited June 2002]. United States Fish and Wildlife Service, Division of Endangered Species. December 7, 2001 [cited June 2002].
Frontier economy An economy similar to that which was prevalent at the “frontier” of European settlement in North America in the eighteenth and nineteenth centuries. A frontier economy is characterized by relative scarcities (and high prices) of capital equipment and skilled labor, and by a relative abundance (and low prices) of natural resources. Because of these factors, producers will look to utilize natural resources instead of capital and skilled labor whenever possible. For example, a sawmill might use a blade that creates large amounts of wood waste since the cost of extra logs is less than the cost of a better blade. The long-term environmental effects of high natural resource use and pollution from wastes are ignored since they seem insignificant compared to the vastness of the natural resource base. A frontier economy is sometimes contrasted with a spaceship economy, in which resources are seen as strictly limited and need to be endlessly recycled from waste products.
Frost heaving The lifting of earth by soil water as it freezes. Freezing water expands by approximately nine percent and exerts a pressure of about fifteen tons per square inch. Although this pressure and accompanying expansion are exerted equally in all directions, movement takes place in the direction of least resistance, namely upward. As a result, buried rocks, varying from pebbles to boulders, can be raised to the ground surface; small mounds and layers of soil can be heaved up; young plants can be ripped from the earth or torn apart below ground; and pavement and foundations can be cracked and lifted. Newly planted tree seedlings, grass, and agricultural crops are particularly vulnerable to being lifted by acicular ice crystals during early fall and late spring frosts. Extreme cases in cold climates at high latitudes or high elevation at mid-latitudes result in characteristic patterned ground.
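Both figures can be sanity-checked against standard handbook values; the short calculation below is our own illustration and is not part of the original entry. The expansion follows from the densities of liquid water and ice, and the pressure figure converts as

\[
\frac{V_{\mathrm{ice}}}{V_{\mathrm{water}}} = \frac{\rho_{\mathrm{water}}}{\rho_{\mathrm{ice}}}
\approx \frac{1.000\ \mathrm{g/cm^3}}{0.917\ \mathrm{g/cm^3}} \approx 1.09,
\qquad
15\ \mathrm{tons/in^2} = 30{,}000\ \mathrm{lb/in^2} \approx 207\ \mathrm{MPa},
\]

so freezing expands water by roughly 9%, consistent with the figure quoted above.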
Fuel cells
Fuel cells produce energy through electrochemical reactions rather than through the process of combustion. They convert hydrogen and oxygen into electricity and heat. Fuel cells are sometimes compared to batteries because, like batteries, they have two electrodes, an anode and a cathode, through which an electrical current flows into and out of the cell. But fuel cells are fundamentally different electrical devices from batteries, since the latter simply store electrical energy, while the former are a source of electrical energy. In a fuel cell, chemical energy is converted directly into electrical energy by means of an oxidation-reduction reaction.

The British physicist Sir William Grove developed the fuel cell concept in 1839. A practical, working model of the concept was not constructed until a century later, however. The earliest fuel cells carried out this energy conversion by means of the reaction between hydrogen gas and oxygen gas, and formed water as a by-product. In this chemical reaction, each hydrogen atom loses one electron to an oxygen atom. The exchange of electrons, from hydrogen to oxygen, is what characterizes an oxidation-reduction reaction. A fuel cell thus creates a pathway through which electrons lost by hydrogen atoms must flow before they reach oxygen atoms. The cell typically consists of four basic parts: an anode, a cathode, an electrolyte, and an external circuit.

In the simplest fuel cell, hydrogen gas is pumped into one side of the fuel cell, where it passes into a hollow, porous anode. At the anode, hydrogen reacts with hydroxide ions present in the electrolyte, giving up electrons. The source of the hydroxide ions is a solution of potassium hydroxide in water. The electrons released in this reaction travel up the anode, out of the fuel cell, and into the external circuit, which carries the flow of electrons (an electric current) to some device, such as a light bulb, where it can be used. Meanwhile, a second reaction takes place at the opposite pole of the fuel cell. Oxygen gas is pumped into this side of the fuel cell, where it passes into the hollow, porous cathode. Oxygen atoms pick up electrons from the cathode and react with water in the electrolyte to regenerate hydroxide ions. As a result of the two chemical reactions taking place at the two poles of the fuel cell, electrons are released at the anode, passed through the external circuit where they can be used to do work, and returned to the cell through the cathode, where they regenerate hydroxide ions in the electrolyte. Meanwhile, oxygen and hydrogen are used up in the production of water. A fuel cell such as the one described here should have a voltage of 1.23 volts and a theoretical efficiency (based on the heat of combustion of hydrogen) of 83%. The actual voltage of a typical hydrogen/oxygen fuel cell normally ranges between 0.6 and 1.1 volts, depending on operating conditions.
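The chemistry just described can be summarized in two half-reactions. The following equations are the standard textbook formulation of the alkaline hydrogen/oxygen cell, added here for clarity rather than quoted from the original entry:

\begin{align*}
\text{Anode:}\quad & 2\,\mathrm{H_2} + 4\,\mathrm{OH^-} \longrightarrow 4\,\mathrm{H_2O} + 4\,e^- \\
\text{Cathode:}\quad & \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \longrightarrow 4\,\mathrm{OH^-} \\
\text{Overall:}\quad & 2\,\mathrm{H_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{H_2O} \qquad (E^\circ \approx 1.23\ \mathrm{V})
\end{align*}

The quoted figures are mutually consistent: with standard thermodynamic values for liquid water (\(\Delta G^\circ \approx -237\ \mathrm{kJ/mol}\), \(\Delta H^\circ \approx -286\ \mathrm{kJ/mol}\), and \(n = 2\) electrons per molecule of hydrogen), the cell potential is \(E^\circ = -\Delta G^\circ/(nF) \approx 237{,}000/(2 \times 96{,}485) \approx 1.23\ \mathrm{V}\), and the theoretical efficiency is \(\Delta G^\circ/\Delta H^\circ \approx 237/286 \approx 83\%\).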
Fuel cells have many advantages as energy sources. They are significantly more efficient than energy sources such as nuclear or fossil-fueled power plants. In addition, a fuel cell is technically simple and lightweight. Also, the product of the fuel cell reaction —water— is of course harmless to humans and the rest of the environment. Finally, both hydrogen and oxygen, the raw materials used in a fuel cell, are abundant in nature. They can both be obtained from water, the most common single compound on the planet.

Until recently, electricity produced from fuel cells was more expensive than that obtained from other sources, and fuel cells have been used in only specialized applications. One of these is in spacecraft, where their light weight represents an important advantage. For example, the fuel cell used on an 11-day Apollo moon flight weighed 500 lb (227 kg), while a conventional generator would have weighed several tons. In addition, the water produced in the cell was purified and then used for drinking.

A great many variations on the simple hydrogen/oxygen fuel cell have been investigated. In theory, any fuel that contains hydrogen can be used at the anode, while any oxidizing agent can be used at the cathode. Elemental hydrogen and oxygen are only the simplest, most fundamental examples of each. Among the hydrogen-containing compounds explored as possible fuels are hydrazine, methanol, ammonia, and a variety of hydrocarbons. The order in which these fuels are listed here corresponds to the efficiency with which they react in a fuel cell, with hydrazine being most reactive (after hydrogen itself) and the hydrocarbons being least reactive. Each of these potential alternatives has serious disadvantages. Hydrazine, for example, is expensive to manufacture and dangerous to work with. In addition to oxygen, liquid oxidants such as hydrogen peroxide and nitric acid have also been investigated as possible cathode reactants. Again, neither compound works as efficiently as oxygen itself, and each presents problems of its own as a working fluid in a fuel cell.

The details of fuel cell construction often differ depending on the specific use of the cell. Cells used in spacecraft, for example, use cathodes made of nickel oxide or gold and anodes made of a platinum alloy. The hydrogen and oxygen used in such cells are supplied in liquid form, which must be maintained at high pressure and very low temperature. Fuel cells can also operate with an acidic electrolyte, such as phosphoric acid or a fluorinated sulfonic acid. The chemical reactions that occur in such a cell differ from those described for the alkaline (potassium hydroxide) cell above. Fuel cells of the acidic type are more commonly used in industrial applications. They operate at a higher temperature than alkaline cells, with slightly less efficiency.

Another type of fuel cell makes use of a molten salt instead of a water solution as the electrolyte. In a typical cell of this kind, hydrogen is supplied at the anode of the cell, carbon dioxide is supplied at the cathode, and molten potassium carbonate is used as the electrolyte.
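In the molten carbonate arrangement, carbonate ions rather than hydroxide ions carry charge through the electrolyte. The half-reactions below are the standard textbook description of such a cell, supplied here as a clarifying sketch rather than taken from the entry:

\begin{align*}
\text{Anode:}\quad & \mathrm{H_2} + \mathrm{CO_3^{2-}} \longrightarrow \mathrm{H_2O} + \mathrm{CO_2} + 2\,e^- \\
\text{Cathode:}\quad & \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{CO_2} + 2\,e^- \longrightarrow \mathrm{CO_3^{2-}}
\end{align*}

Note that the carbon dioxide consumed at the cathode is regenerated at the anode, which is why carbon dioxide is supplied on the cathode side of the cell.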
In such a cell, the fundamental process is the same as in an aqueous electrolyte cell. Electrons are released at the anode. The electrons then travel through the external circuit, where they can do work. They return to the cell through the cathode, where they make the cathodic reaction possible.

Yet a third type of fuel cell is now being explored, one that makes use of state-of-the-art and sometimes exotic solid-state technology. This is the high-temperature solid ionic cell. In one design of this cell, the anode is made of nickel metal combined with zirconium oxide, while the cathode is composed of a lanthanum-manganese alloy doped with strontium. The electrolyte in the cell is a mixed oxide of yttrium and zirconium. The fuel provided to the cell is carbon monoxide or hydrogen, either of which is oxidized by the oxygen in ordinary air. A series of these cells is connected together by interconnections made of lanthanum chromite doped with magnesium metal. The solid ionic cell is particularly attractive to electric utilities and other industries because, unlike the other fuel cell designs, it contains no liquid. The presence of such liquids creates problems in the handling and maintenance of conventional fuel cells that are eliminated with the all-solid cell.

The Proton Exchange Membrane (PEM) fuel cell and the Direct Methanol Fuel Cell (DMFC) are similar in that they both use a polymer membrane as the electrolyte, but a DMFC has an anode catalyst that draws hydrogen from liquid methanol rather than from hydrogen gas. This design avoids the storage problems inherent in the use of hydrogen gas. Other possibilities are made up of stacks of fuel cells that use PEM-type technology. In 2001, the U.S. Department of Energy started a new program designed to speed the development of efficient and low-cost versions of these "planar solid-oxide fuel cells," or SOFCs.

Some experts are enthusiastic about the future role of fuel cells in the world's energy equation. If costs for these emerging technologies can be reduced, their high efficiency should make them attractive alternatives to fossil fuel- and nuclear-generated electricity. The major concerns about fuel cell technologies include the inability of fuel cells to store power and the associated requirement that energy be generated on an "as needed" basis. Proposals have been made to use pumped water, compressed air, and batteries to store energy that is generated when demand is low. For that reason, still more variations on the fuel cell are being explored. One of the most intriguing of these future possibilities is a cell no more than 0.04–0.07 in (1–2 mm) in diameter. These cells have a greater electrode surface area, on which oxidation and reduction occur, relative to their size than do conventional cells. The distance that electrons have to travel in such cells is also much shorter, resulting in a greater power output per unit volume than that of conventional cells. The technology for constructing and maintaining such small cells is not, however, fully developed.

Because of more stringent clean air standards, the transportation industry is especially interested in increasing the energy efficiency of automobiles. The utility of hybrid vehicles that use conventional fuels when accelerating and fuel cells for highway driving is being investigated. These cars would use gasoline or methanol both as a fuel source for the conventional engine and as a source of hydrogen for the fuel cells. Fuel cells are also being researched for the aviation industry. An innovative approach is to couple photovoltaic or solar cell technology with fuel cell technology to provide a source of energy for the plane that could be used day or night. [David E. Newton and Marie H. Bundy]
RESOURCES BOOKS Hoffmann, Peter, and Tom Harkin. Tomorrow’s Energy: Hydrogen, Fuel Cells, and the Prospects for a Cleaner Planet. Cambridge, MA: MIT Press, 2001. Hoogers, G., ed. Fuel Cell Technology Handbook. Boca Raton: CRC Press, 2002. Joesten, M. D., et al. World of Chemistry. Philadelphia: Saunders College Publishing, 1991. Larminie, James, and Andrew Dicks. Fuel Cell Systems Explained. New York: John Wiley & Sons, 2000.
ORGANIZATIONS Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL USA 60439 (630) 252-2000, DOE Office of Energy Efficiency and Renewable Energy (EERE), Department of Energy, Mail Stop EE-1, Washington, DC USA 20585 (202) 586-9220, The Online Fuel Cell Information Center,
Fuel switching
Fuel switching is the substitution of one energy source for another in order to meet requirements for heat, power, and/or electrical generation. Generally, this term refers to the practices of some industries that can substitute among natural gas, electricity, coal, and LPG within 30 days without modifying their fuel-consuming equipment, and that can resume the same level of production following the change. The U.S. Department of Energy estimates that among manufacturers in 1991, 2.8 quadrillion Btus of energy consumption could be switched from a total consumption of about 20.3 quadrillion Btus, representing about 14%. Price is the primary reason for fuel switching; however, additional factors may include environmental regulations, agreements with energy or fuel suppliers, and equipment capabilities.
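The 14% figure above is simply the ratio of the two consumption estimates; as a quick check (the figures come from the entry, the arithmetic is ours):

\[
\frac{2.8\ \text{quadrillion Btu}}{20.3\ \text{quadrillion Btu}} \approx 0.138 \approx 14\%.
\]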
Fugitive emissions
Contaminants that enter the air without passing through a smokestack and, thus, are often not subject to control by conventional emission control equipment or techniques. Most fugitive emissions are caused by activities that generate dust, such as soil erosion, strip mining, and building demolition, or by the use of volatile compounds. In a steel-making complex, for example, there are several identifiable smokestacks from which emissions come, but there are also numerous sources of fugitive emissions, such as coke production, for which there is no identifiable smokestack. The control of fugitive emissions is generally much more complicated and costly than the control of smokestack emissions, for which known add-on technologies have been developed. Baghouses and other costly mechanisms typically are needed to control fugitive emissions.
Fumigation Most commonly, fumigation refers to the process of disinfecting a material or an area by using some type of toxic material in gaseous form. The term has a more specialized meaning in environmental science, where it refers to the process by which pollutants are mixed in the atmosphere. Under certain conditions, emitted pollutants rise above a stable layer of air near the ground. These pollutants remain aloft until convective currents develop, often in the morning, at which time the cooler pollutants “trade places” with air at ground level as it is warmed by the sun and rises. The resulting damage to ecosystems from the pollutants is most obvious around metal smelters.
Fund for Animals
Founded in 1967 by author and humorist Cleveland Amory, the Fund is one of the most activist of the national animal protection groups. Formed "to speak for those who can't," it has led sometimes militant campaigns against sport hunting, trapping, and the wearing of furs, as well as the killing of whales, seals, bears, and other creatures. Amory, in particular, has campaigned tirelessly against these activities on television and radio, and in lectures, articles, and books. In the early 1970s, the Fund worked effectively to rally public opinion in favor of passage of the Marine Mammal Protection Act, which was signed into law in October 1972. This act provides strong protection to whales, seals and sea lions, dolphins, sea otters, polar bears, and other ocean mammals. In 1978, the Fund bought a British trawler and renamed it Sea Shepherd. Under the direction of its captain,
Paul Watson, the Fund used the ship to interfere with the baby seal kill on the ice floes off Canada. Activists sprayed some 1,000 baby harp seals with a harmless dye that destroyed the commercial value of their white coats as fur, and the ensuing publicity helped generate worldwide opposition to the seal kill and a ban on imports into Europe. In 1979, Sea Shepherd hunted down and rammed the Sierra, an outlaw whaling vessel that was illegally killing protected and endangered species of whales. After Sea Shepherd was seized by Portuguese authorities, Watson and his crew scuttled the ship to prevent it from being given to the owners of the Sierra for use as a whaler.

Also in 1979, the Fund used helicopters to airlift from the Grand Canyon almost 600 wild burros that were scheduled to be shot by the National Park Service. The airlift was so successful, and generated so much favorable publicity, that it led to similar rescues of feral animals on public lands that the government wanted removed to prevent damage to vegetation. Burros were also airlifted by the Fund from Death Valley National Monument, as were some 3,000 wild goats on San Clemente Island, off the coast of California, that were scheduled to be shot by the United States Navy. Many of the wild horses, burros, goats, and other animals rescued by the Fund end up, at least temporarily before adoption, at Black Beauty Ranch, a 1,430-acre (578-ha) sanctuary near Arlington, Texas. The ranch has provided a home for abused race and show horses, a non-performing elephant, and Nim, the famous signing chimpanzee who was saved from a medical laboratory.

Legal action initiated by the Fund has resulted in the addition of almost 200 species to the U.S. Department of the Interior's list of threatened and endangered species, including the grizzly bear, the Mexican wolf, the Asian elephant, and several species of kangaroos. The Fund is also active on the grassroots level, working on measures to restrict hunting and trapping. A recent example is the passage of an initiative in Colorado in November 1992 banning the use of dogs and bait to hunt bears, and halting the spring bear hunt, when mothers are still nursing their cubs. [Lewis G. Regenstein]
RESOURCES ORGANIZATIONS The Fund for Animals, 200 West 57th Street, New York, NY USA 10019 (212) 246-2096, Fax: (212) 246-2633, Email:
[email protected],
Fungi
Fungi make up one of the five kingdoms of organisms. Fungi are broadly characterized by cells that possess nuclei and rigid
cell walls but lack chlorophyll. Fungal spores germinate and grow slender, tube-like structures called hyphae, divided by cross-walls called septa. The vegetative biomass of most fungi in nature consists of masses of hyphae, or mycelia. Most species of fungi inhabit soil, where they are active in the decomposition of organic matter. The most biologically complex fungi periodically form spore-producing fruiting structures, known as mushrooms. Some fungi occur in close associations, known as mycorrhizae, with the roots of many species of vascular plants. The plant benefits mostly through an enhancement of nutrient uptake, while the fungus benefits through access to metabolites. Certain fungi are also partners in the symbioses with algae known as lichens. See also Fungicide
Fungicide
A fungus is a plant-like organism that obtains its nourishment from dead or living organic matter. Some examples of fungi include mushrooms, toadstools, smuts, molds, rusts, and mildew. Fungi have long been recognized as a serious threat to natural plants and human crops. They attack food both while it is growing and after it has been harvested and placed into storage. One of the great agricultural disasters of the second half of the twentieth century was caused by a fungus. In 1970, the fungus that causes southern corn-leaf blight swept through the southern and midwestern United States and destroyed about 15% of the nation's corn crop. Potato blight, wheat rust, wheat smut, and grape mildew are other important disasters caused by fungi.

Chestnut blight is another example of the devastation that can be caused by fungi. Until 1900, chestnut trees were common in many parts of the United States. In 1904, however, chestnut trees from Asia were imported and planted in parts of New York. The imported trees carried with them a fungus that attacked and killed the native chestnut trees. Over a period of five decades, the native trees were all but totally eliminated from the eastern part of the country.

It is hardly surprising that humans began looking for fungicides—substances that will kill or control the growth of fungi—early on in history. The first of these fungicides was a naturally occurring substance, sulfur. One of the most effective of all fungicides, Bordeaux mixture, was invented in 1885. Bordeaux mixture is a combination of two inorganic compounds, copper sulfate and lime. With the growth of the chemical industry during the twentieth century, a number of synthetic fungicides have been invented; these include ferbam, ziram, captan, nabam, the dithiocarbamates, quinone, and 8-hydroxyquinoline.
For a period of time, compounds of mercury and cadmium were very popular as fungicides. Until quite recently, for example, the compound methylmercury was widely used by farmers in the United States to protect growing plants and to treat stored grains. During the 1970s, however, evidence began to accumulate about a number of adverse effects of mercury- and cadmium-based fungicides. The most serious effects were observed among birds and small animals that were exposed to sprays and dusting or that ate treated grain. A few dramatic incidents of methylmercury poisoning among humans, however, were also recorded. The best known of these was the 1953 disaster at Minamata Bay, Japan. At first, scientists were mystified by an epidemic that spread through the Minamata Bay area between 1953 and 1961. Some unknown factor caused serious nervous disorders among residents of the region. Some sufferers lost the ability to walk, others developed mental disorders, and still others were permanently disabled. Eventually researchers traced the cause of these problems to methylmercury in fish eaten by residents of the area. For the first time, the terrible effects of the compound had been confirmed.

As a result of the problems with mercury and cadmium compounds, scientists have tried to develop less toxic substitutes for the more dangerous fungicides. Dinocap, binapacryl, and benomyl are three examples of such compounds. Another approach has been to use integrated pest management and to develop plants that are resistant to fungi. The latter approach was used with great success during the corn blight disaster of 1970. Researchers worked quickly to develop strains of corn that were resistant to the corn-leaf blight fungus and by 1971 had provided farmers with seeds of the new strain. See also Minamata disease [David E. Newton]
RESOURCES BOOKS Chemistry and the Food System. A Study by the Committee on Chemistry and Public Affairs. Washington, DC: American Chemical Society, 1980. Fletcher, W. W. The Pest War. New York: Wiley, 1974. Selinger, B. Chemistry in the Marketplace. 4th ed. Sydney: Harcourt Brace Jovanovich, 1989.
Furans Furans are by-products of natural and industrial processes and are considered environmental pollutants. They are chemical substances found in small amounts in the environment, including air, water and soil. They are also present in some foods. Although the amounts are small, they are persistent and remain in the environment for long periods
of time, also accumulating in the food chain. The U.S. Environmental Protection Agency's (EPA) Persistent Bioaccumulative and Toxic (PBT) Chemical Program classifies furans as priority PBTs.

Furans belong to a class of organic compounds known as heterocyclic aromatic hydrocarbons. The basic furan structure is a five-membered ring consisting of four atoms of carbon and one of oxygen. Various types of furans have additional atoms and rings attached to the basic furan structure. Some furans are used as solvents or as raw materials for synthesizing chemicals. Polychlorinated dibenzofurans (PCDFs) are of particular concern as environmental pollutants. These are three-ringed structures, with two rings of six carbon atoms each (benzene rings) attached to the furan. Between one and eight chlorine atoms are attached to the rings. There are 135 types of PCDFs, whose properties are determined by the number and position of the chlorine atoms.

PCDFs are closely related to the polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated biphenyls (PCBs). These three types of toxic compounds often occur together, and PCDFs are major contaminants of manufactured PCBs. In fact, the term dioxin commonly refers to a subset of these compounds that have similar chemical structures and toxic mechanisms. This subset includes 10 of the PCDFs, as well as seven of the PCDDs and 12 of the PCBs. Less frequently, the term dioxin is used to refer to all 210 structurally related PCDFs and PCDDs, regardless of their toxicities.

Furans are present as impurities in various industrial chemicals. PCDFs are trace by-products of most types of combustion, including the incineration of chemical, industrial, medical, and municipal waste; the burning of wood, coal, and peat; and automobile emissions. Thus most PCDFs are released into the environment through smokestacks. However, the backyard burning of common household trash in barrels has been identified as potentially one of the largest sources of dioxin and furan emissions in the United States. Because of the lower temperatures and inefficient combustion in burn barrels, they release more PCDFs than municipal incinerators. Some industrial chemical processes, including chlorine bleaching in pulp and paper mills, also produce PCDFs.

PCDFs that are released into the air can be carried by currents to all parts of the globe. Eventually they fall to earth and are deposited in soil, sediments, and surface water. Although furans are slow to volatilize and have a low solubility in water, they can wash from soils into bodies of water, evaporate, and be re-deposited elsewhere. Furans have been detected in soils, surface waters, sediments, plants, and animals throughout the world, even in arctic organisms. They are very resistant to both chemical breakdown and biological degradation by microorganisms.
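The congener counts quoted above can be verified with a short symmetry argument; the calculation is our own illustration and is not part of the original entry. Dibenzofuran has eight ring positions available for chlorine, and its single mirror symmetry pairs them (1↔9, 2↔8, 3↔7, 4↔6), so by Burnside's lemma the number of distinct chlorination patterns with at least one chlorine is

\[
\frac{2^8 + 2^4}{2} - 1 = \frac{256 + 16}{2} - 1 = 135 .
\]

The more symmetric dibenzo-p-dioxin skeleton (a symmetry group of order 4 acting on its eight positions) gives \((2^8 + 3 \cdot 2^4)/4 - 1 = 75\) congeners, and \(135 + 75 = 210\), matching the total cited for PCDFs and PCDDs together.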
Most people have low but detectable levels of PCDDs and PCDFs in their tissues. Furans enter the food chain from soil, water, and plants. They accumulate at the higher levels of the food chain, particularly in fish and animal fat. The concentrations of PCDDs and PCDFs may be hundreds or thousands of times higher in aquatic organisms than in the surrounding waters. Most humans are exposed to furans through animal fat, milk, eggs, and fish. Some of the highest levels of furans are found in human breast milk. The presence of dioxins and furans in breast milk can lead to the development of soft, discolored molars in young children. Industrial workers can be exposed to furans while handling chemicals or during industrial accidents.

Furans bind to aryl hydrocarbon (Ah) receptors in cells throughout the body, causing a wide range of deleterious effects, including developmental defects in fetuses, infants, and children. Furans also may adversely affect the reproductive and immune systems. At high exposure levels, furans can cause chloracne, a serious acne-like skin condition. Furan itself, as well as the PCDFs, is a potential cancer-causing agent.

The switch from leaded to unleaded gasoline, the halting of PCB production in 1977, changes in paper manufacturing processes, and new air and water pollution controls have reduced the emissions of furans. In 1999 the United States, Canada, and Mexico agreed to cooperate to further reduce the release of dioxins and furans. On May 23, 2001, EPA Administrator Christie Whitman, along with representatives from more than 90 other countries, signed the global treaty on Persistent Organic Pollutants. The treaty phases out the manufacture and use of 12 toxic chemicals, the so-called "dirty dozen," which include furans. The United States opposed a complete ban on furans and dioxins; thus, unlike eight of the other chemicals, which were banned outright, the treaty calls for releases of dioxins and furans to be minimized and, where feasible, eliminated. [Margaret Alic, Ph.D.]
RESOURCES BOOKS Lippmann, Morton, ed. Environmental Toxicants: Human Exposures and their Health Effects. New York: Wiley-Interscience, 2000. Paddock, Tod. Dioxins and Furans: Questions and Answers. Philadelphia: Academy of Natural Sciences, 1989. Wittich, Rolf-Michael, ed. Biodegradation of Dioxins and Furans. Austin: R. G. Landes Co., 1998.
OTHER "Dioxins, PCBs, Furans, and Mercury." Fox River Watch [cited May 15, 2002]. "Polychlorinated Dibenzo-p-dioxins and Related Compounds Update: Impact on Fish Advisories." EPA Fact Sheet. United States Environmental Protection Agency. September 1999 [cited May 8, 2002].
U.S. Environmental Protection Agency. "Priority PBTs: Dioxins and Furans." Persistent Bioaccumulative and Toxic (PBT) Chemical Program. March 9, 2001 [cited May 13, 2002].
ORGANIZATIONS Clean Water Action Council, 1270 Main Street, Suite 120, Green Bay, WI USA 54302 (920) 437-7304, Fax: (920) 437-7326, Email:
[email protected], United Nations Environment Programme: Chemicals, 11-13, chemin des Anémones, 1219 Châtelaine, Geneva, Switzerland, Email: [email protected], United States Environmental Protection Agency, 1200 Pennsylvania Avenue, NW, Washington, DC USA 20460 Toll Free: (800) 490-9198, Email:
[email protected],
Fusion see Nuclear fusion
Future generations According to demographers, a generation is an age-cohort of people born, living, and dying within a few years of each other. Human generations are roughly defined categories, and the demarcations are not as distinct as they are in many other species. As the Scottish philosopher David Hume noted in the eighteenth century, generations of human beings are not like generations of butterflies, who come into existence, lay their eggs, and die at about the same time, with the next generation hatching thereafter. But distinctions can still be made, and future generations are all age-cohorts of human beings who have not yet been born. The concept of future generations is central to environmental ethics and environmental policy, because the health and well-being—indeed the very existence—of human beings depends on how people living today care for the natural environment. Proper stewardship of the environment affects not only the health and well-being of people in the future but their character and identity. In The Economy of the Earth, Mark Sagoff compares environmental damage to the loss of our rich cultural heritage. The loss of all our art and literature would deprive future generations of the benefits we have enjoyed and render them nearly illiterate. By the same token, if we destroyed all our wildernesses and dammed all our rivers, allowing environmental degradation to proceed at the same pace, we would do more than deprive people of the pleasures we have known. We would make them into what Sagoff calls “environmental illiterates,” or “yahoos” who would neither know nor wish to experience the beauties and pleasures of the natural world. “A pack of yahoos,” says Sagoff, “will like a junkyard environment” because they will have known nothing better.
The concept of future generations emphasizes both our ethical and aesthetic obligations to our environment. In relations between existing and future generations, however, the present generation holds all the power. While we can affect them, they can do nothing to affect us. Though, as some environmental philosophers have argued, our moral code is in large degree based on reciprocity, the relationship between generations cannot be reciprocal. Adages such as "like for like" and "an eye for an eye" can apply only among contemporaries. Since an adequate environmental ethic would require that moral consideration be extended to include future people, views of justice based on the norm of reciprocity may be inadequate. A good deal of discussion has gone into what an alternative environmental ethic might look like and on what it might be based. But perhaps the important point to note is
that the treatment of generations yet unborn has now become a lively topic of philosophical discussion and political debate. See also Environmental education; Environmentalism; Intergenerational justice [Terence Ball]
RESOURCES BOOKS Barry, B., and R. I. Sikora, eds. Obligations to Future Generations. Philadelphia: Temple University Press, 1978. Fishkin, J., and P. Laslett, eds. Justice Between Age Groups and Generations. New Haven, CT: Yale University Press, 1992. Partridge, E., ed. Responsibilities to Future Generations. Buffalo, NY: Prometheus Books, 1981. Sagoff, M. The Economy of the Earth: Philosophy, Law and the Environment. Cambridge and New York: Cambridge University Press, 1988.
G
Gaia hypothesis The Gaia hypothesis was developed by British biochemist James Lovelock, and it incorporates two older ideas. First, the idea implicit in the ancient Greek term Gaia, that the earth is the mother of all life, the source of sustenance for all living beings, including humans. Second, the idea that life on earth and many of earth’s physical characteristics have coevolved, changing each other reciprocally as the generations and centuries pass. Lovelock’s theory contradicts conventional wisdom, which holds “that life adapted to the planetary conditions as it and they evolved their separate ways.” The Gaia hypothesis is a startling break with tradition for many, although ecologists have been teaching the coevolution of organisms and habitat for at least several decades, albeit more often on a local than a global scale. The hypothesis also states that Gaia will persevere no matter what humans do. This is undoubtedly true, but the question remains: in what form, and with how much diversity? If humans don’t change the nature and scale of some of their activities, the earth could change in ways that people may find undesirable—loss of biodiversity, more “weed” species, increased desertification, etc. Many people, including Lovelock, take the Gaia hypothesis a step further and call the earth itself a living being, a long-discredited organismic analogy. Recently a respected environmental science textbook defined the Gaia hypothesis as a “proposal that Earth is alive and can be considered a system that operates and changes by feedbacks of information between its living and nonliving components.” Similar sentences can be found quite commonly, even in the scholarly literature, but upon closer examination they are not persuasive. A furnace operates via a positive and negative feedback system—does that imply it is alive? Of course not. The important message in Lovelock’s hypothesis is that the health of the earth and the health of its inhabitants are inextricably intertwined. See also Balance of nature; Biological community; Biotic community; Ecology; Ecosystem; Environment;
Environmentalism; Evolution; Nature; Sustainable biosphere [Gerald L. Young]
RESOURCES BOOKS Schneider, S. H., and P. J. Boston, eds. Scientists on Gaia. Cambridge: MIT Press, 1991. Joseph, L. E. Gaia: The Growth of an Idea. New York: St. Martin’s Press, 1990. Lovelock, J. E. Gaia: A New Look at Life on Earth. Oxford: Oxford University Press, 1979. ———. The Ages of Gaia: A Biography of Our Living Earth. New York: Norton, 1988.
PERIODICALS Lyman, F. “What Gaia Hath Wrought: The Story of a Scientific Controversy.” Technology Review 92 (July 1989): 54–61.
Galápagos Islands
Within the theory of evolution, the concept of adaptive radiation (evolutionary development of several species from a single parental stock) has had as its prime example a group of birds known as Darwin's finches. Charles Darwin discovered and collected specimens of these birds from the Galápagos Islands in 1835 on his five-year voyage around the world aboard the HMS Beagle. His cumulative experiences, copious notes, and vast collections ultimately led to the publication of his monumental work, On the Origin of Species, in 1859. The Galápagos Islands and their unique assemblage of plants and animals were an instrumental part of the development of Darwin's evolutionary theory.

The Galápagos Islands are located at 90° W longitude and 0° latitude (the equator), about 600 mi (965 km) west of Ecuador. These islands are volcanic in origin and are about 10 million years old. The original colonization of the Galápagos Islands occurred by chance transport over the ocean, as indicated by the gaps in the flora and fauna of this archipelago
compared to the mainland. Of the hundreds of species of birds along the northwestern South American coast, only seven species colonized the Galápagos Islands. These evolved, through adaptive radiation, into 57 resident species, 26 of which are endemic to the islands. The only native land mammals are a rat and a bat. The land reptiles include iguanas, a single species each of snake, lizard, and gecko, and the Galápagos giant tortoise (Geochelone elephantopus). No amphibians and few insects or mollusks are found in the Galápagos. The flora has large gaps as well—no conifers or palms have colonized these islands. Many of the open niches have been filled by the colonizing groups. The tortoises and iguanas are large and have filled niches normally occupied by mammalian herbivores. Several plants, such as the prickly pear cactus, have attained large size and occupy the ecological position of tall trees.

The most widely known and often used example of adaptive radiation is Darwin's finches, a group of 14 species of birds that arose from a single ancestor in the Galápagos Islands. These birds have specialized on different islands or into niches normally filled by other groups of birds. Some are strictly seed eaters, while others have evolved more warbler-like bills and eat insects; still others eat flowers, fruit, and/or nectar, and others find insects for their diet by digging under the bark of trees, having filled the niche of the woodpecker. Darwin's finches are named in honor of their discoverer, but they are not referred to as Galápagos finches because one of their number has colonized Cocos Island, located 425 mi (684 km) north-northeast of the Galápagos.

Coastline in the Galápagos Islands. (Photograph by Anthony Wolff. Phototake NYC. Reproduced by permission.)

Because of the Galápagos Islands' unique ecology, scenic beauty, and tropical climate, they have become a mecca for tourists and some settlement. These human activities have introduced a host of environmental problems, including introduced species of goats, pigs, rats, dogs, and cats, many of which become feral and damage or destroy nesting bird colonies by preying on the adults, young, or eggs. Several races of giant tortoise have been extirpated or are severely threatened with extinction, primarily due to exploitation for food by humans, destruction of their food resources by goats, or predation of their hatchlings by feral animals. Most of the 13 recognized races of tortoise have populations numbering only in the hundreds. Three races are tenuously maintaining populations in the thousands; one race has not been seen since 1906 and is thought to have disappeared due to natural causes; another race has a population of about 25 individuals; and the Abingdon Island tortoise is represented today by only one individual, "Lonesome George," a captive male at the Charles Darwin Research Station. For most of these tortoises to survive, an active capture or extermination program aimed at the feral animals will have to continue.

One other potential threat to the Galápagos Islands is tourism. Thousands of tourists visit these islands each year, and their numbers can exceed the limit deemed sustainable by the Ecuadoran government. These tourists have had, and will continue to have, an impact on the
fragile habitats of the Galápagos. See also Endemic species; Ecotourism [Eugene C. Beckham]
RESOURCES BOOKS Harris, M. A Field Guide to the Birds of the Galápagos. London: Collins, 1982. Root, P., and M. McCormick. Galápagos Islands. New York: Macmillan, 1989. Steadman, D. W., and S. Zousmer. Galápagos: Discovery on Darwin's Islands. Washington, DC: Smithsonian Institution Press, 1988.
Birute Marija Filomena Galdikas (1948 – )
Lithuanian/Canadian primatologist
The world's leading expert on orangutans, Birute Galdikas has dedicated much of her life to studying the orangutans of Indonesia's Borneo and Sumatra islands. Her work, which has complemented that of such other scientists as Dian Fossey and Jane Goodall, has led to a much greater understanding of the primate world and to more effective efforts to protect orangutans from the effects of human encroachment. Galdikas has also been credited with providing valuable insights into human culture through her decades of work with primates. She discusses this aspect of her work in her 1995 autobiography, Reflections of Eden: My Years with the Orangutans of Borneo.

Galdikas was born on May 10, 1948, in Wiesbaden in what was then West Germany, while her family was en route from their native Lithuania. She was the first of four children. The family moved to Toronto, Canada, when she was two, and she grew up in that city. As a child, Galdikas was already enamored of the natural world, and she spent much of her time in local parks and reading books on jungles and their inhabitants. She was already especially interested in orangutans. The Galdikas family eventually moved to Los Angeles, where Birute attended the local campus of the University of California. She earned a BA there in 1966 and immediately began work on a master's degree in anthropology. Galdikas had already decided to start a long-term study of orangutans in the rain forests of Indonesia, where most of the world's last remaining wild individuals live.

Galdikas began to realize her dream in 1969, when she approached famed paleoanthropologist Louis Leakey after he gave a lecture at UCLA. Leakey had helped launch the research efforts of Fossey and Goodall, and she asked him to do the same for her. He agreed, and by 1971 he had helped her raise enough money to get started. With her first husband, Galdikas traveled to southern Borneo's Tanjung Puting National Park, in Central Kalimantan, to start setting up her
research station. Such challenges as huge leeches, extremely toxic plants, perpetual dampness, swarms of insects, and aggressive viruses slowed Galdikas down but did not dampen her enthusiasm for her new project. After finally locating the area's elusive orangutan population, Galdikas faced the difficulty of getting the shy animals accustomed enough to her presence that they would permit her to watch them, even from a distance.

Once Galdikas accomplished this, she was able to begin documenting some of the traits and habits of the little-studied orangutans. She compiled a detailed list of staples in the animals' diets, discovered that they occasionally eat meat, and recorded their complex behavioral interactions. Eventually, the animals came to accept Galdikas and her husband so thoroughly that their camp was often overrun by them. Galdikas recalled in a 1980 National Geographic article that she sometimes felt as though she were "surrounded by wild, unruly children in orange suits who had not yet learned their manners." Meanwhile, she applied her findings to her UCLA education, earning both her master's degree and doctorate in 1978.

During her first decade on Borneo, Galdikas founded the Orangutan Project, which has since been funded by such organizations as the National Geographic Society, the World Wildlife Fund, and Earthwatch. The Project not only carries out primate research but also rehabilitates hundreds of former captive orangutans. She also founded the Los Angeles-based nonprofit Orangutan Foundation International in 1987. From 1996 to 1998, Galdikas served as a senior adviser to the Indonesian Forestry Ministry on orangutan issues as that government attempted to rectify the mistreatment of the animals and the mismanagement of their dwindling rain forest habitat. As part of these efforts, the Jakarta government also helped Galdikas establish the Orangutan Care Center and Quarantine near Pangkalan Bun, which opened in 1999. This center has since cared for many of the primates injured or displaced by the devastating fires in the Borneo rain forest in 1997–1998.

Divorced in 1979, Galdikas married a native Indonesian man of the Dayak tribe in 1981. She has one son with her first husband and two children with her second. Galdikas and her second husband currently live in a Dayak village, but Galdikas travels to Canada once a year to visit her first son. She became a visiting professor at Simon Fraser University in 1981 and has since been appointed full professor. Besides her work, Galdikas reportedly most enjoys playing with her children, reading, walking, and listening to native Indonesian music. She has been featured on such popular television shows as "Good Morning, America" and "Eye to Eye," using such exposure to increase public awareness of her programs and the danger faced by the world's remaining orangutan population.
Game animal
Environmental Encyclopedia 3 with state agencies for regulating harvests of migratory game animals, principally waterfowl.
Game preserves
Birute Galdikas being embraced by two orangutans in Borneo. (The Liaison Network. ©Liaison Agency. Reproduced by permission.)
ness of her programs and the danger faced by the world’s remaining orangutan population. RESOURCES BOOKS Galdikas, Birute. Reflections of Eden: My Years with the Orangutans of Borneo. Little, Brown: New York, 1995. Montgomery, Sy. Walking with the Great Apes: Jane Goodall, Dian Fossey, Birute Galdikas. 1991. Notable Scientists: From 1900 to the Present. Farmington Hills, MI: Gale Group, 2001.
Game animal

Birds and mammals commonly hunted for sport. The major groups include upland game birds (quail, pheasant, and partridge), waterfowl (ducks and geese), and big game (deer, antelope, and bears). Game animals are protected to varying degrees throughout most of the world, and hunting levels are regulated through the licensing of hunters as well as by seasons and bag limits. In the United States, state wildlife agencies assume primary responsibility for enforcing hunting regulations, particularly for resident or non-migratory species. The Fish and Wildlife Service shares responsibility with state agencies for regulating harvests of migratory game animals, principally waterfowl.
Game preserves

Game preserves (also known as game reserves or wildlife refuges) are a type of protected area in which hunting of certain species of animals is not allowed, although other kinds of resource harvesting may be permitted. Game preserves are usually established to conserve populations of larger game species of mammals or waterfowl. The protection from hunting allows the hunted species to maintain relatively large populations within the sanctuary. However, animals may be legally hunted when they move outside of the reserve during their seasonal migrations or when searching for additional habitat.

Game preserves help to ensure that populations of hunted species do not become depleted through excessive harvesting throughout their range. This conservation allows the species to be exploited in a sustainable fashion over the larger landscape. Although hunting is not allowed, other types of resource extraction may be permitted in game reserves, such as timber harvesting, livestock grazing, some types of cultivated agriculture, mining, and exploration and extraction of fossil fuels. However, these land-uses are managed to ensure that the habitat of game species is not excessively damaged. Some game preserves are managed as true ecological reserves, where no extraction of natural resources is allowed. However, low-intensity types of land-use may be permitted in these more comprehensively protected areas, particularly non-consumptive recreation such as hiking and wildlife viewing.

Game preserves as a tool in conservation

The term "conservation" refers to the wise (i.e., sustainable) use of natural resources. Conservation is particularly relevant to the use of renewable resources, which are capable of regenerating after a portion has been harvested. Hunted species of animals are one type of renewable resource, as are timber, flowing water, and the ability of land to support the growth of agricultural crops. These renewable resources have the potential to be harvested forever, as long as the rate of exploitation is equal to or less than the rate of regeneration. However, potentially renewable resources can also be harvested at a rate exceeding their regeneration. This is known as over-exploitation, a practice that causes the stock of the resource to decline and may even result in its irretrievable collapse.

Wildlife managers can use game preserves to help conserve populations of hunted species. Other methods of conservation of game animals include: (1) regulation of the
time of year when hunting can occur; (2) setting of "bag limits" that restrict the maximum number of animals that any hunter can harvest; (3) limiting the total number of animals that can be harvested in a particular area; and (4) restricting the hunt to certain elements of the population. Wildlife managers can also manipulate the habitat of game species so that larger, more productive populations can be sustained, for example by increasing the availability of food, water, shelter, or other necessary elements of habitat. In addition, wildlife managers may cull the populations of natural predators to increase the numbers of game animals available to be hunted by people. Some or all of these practices, including the establishment of game preserves, may be used as components of an integrated game management system. Such systems may be designed and implemented by government agencies that are responsible for managing game populations over relatively large areas such as counties, states, provinces, or entire countries.

Conservation is intended to benefit humans in their interactions with other species and ecosystems, which are utilized as valuable natural resources. When defined in this way, conservation is very different from the preservation of indigenous species and ecosystems for their ecocentric and biocentric values, which are considered important regardless of any usefulness to humans or their economic activities.

Examples of game preserves

The first national wildlife refuge in the United States, a breeding site for brown pelicans (Pelecanus occidentalis) and other birds in Florida, was established by President Theodore Roosevelt in 1903. The U.S. national system of wildlife refuges now totals some 437 sites covering 91.4 million acres (37 million ha); an additional 79 million acres (32 million ha) of habitat are protected in national parks and monuments. The largest single wildlife reserve is the Alaska Maritime Wildlife Refuge, which covers 3.5 million acres (1.4 million ha); in fact, about 85% of the national area of wildlife refuges is in Alaska. Most of the national wildlife refuges protect migratory, breeding, and wintering habitats for waterfowl, but others are important for large mammals and other species. Some wildlife refuges have been established to protect critical habitat of endangered species, such as the Aransas Wildlife Refuge in coastal Texas, which is the primary wintering ground of the whooping crane (Grus americana). Since 1934, sales of Migratory Bird Hunting Stamps, or "duck stamps," have been critical in providing funds for the acquisition and management of federal wildlife refuges in the United States.

Although hunting is not permitted in many national wildlife refuges, in 1988 closely regulated hunting was permitted in 60% of the refuges, and fishing was allowed in 50%. In addition, some other resource-related activities are allowed in some refuges. Depending on the site, it may be possible to harvest timber, graze livestock, engage in other kinds of agriculture, or explore for or mine metals or fossil fuels. The various components of the multiple-use plans of particular national wildlife refuges are determined by the Secretary of the Interior. Any of these economically important activities may cause damage to wildlife habitats, and this has resulted in intense controversy between economic interests and some environmental groups. Environmental organizations such as the Sierra Club, Ducks Unlimited, and the Audubon Society have lobbied federal legislators to further restrict exploration and extraction in national wildlife refuges, while business interests demand greater access to valuable resources within them.

Many states and provinces also establish game preserves as a component of wildlife-management programs on their lands. For example, many jurisdictions in eastern North America have set up game preserves for management of populations of white-tailed deer (Odocoileus virginianus), a widely hunted species. Game preserves are also used to conserve populations of mule deer (Odocoileus hemionus) and elk (Cervus canadensis) in more western regions of North America.

Some other types of protected areas, such as state, provincial, and national parks, are also effective as wildlife preserves. These protected areas are not primarily established for the conservation of natural resources—rather, they are intended to preserve natural ecosystems and wild places for their intrinsic value. Nevertheless, relatively large and productive populations of hunted species often build up within parks and other large ecological reserves, and the surplus animals are commonly hunted in the surrounding areas. In addition, many protected areas are established by nongovernmental organizations, such as The Nature Conservancy, which has preserved more than 16 million acres (6.5 million ha) of natural habitat throughout the United States.

Yellowstone National Park is one of the most famous protected areas in North America. Hunting is not allowed in Yellowstone, and this has allowed the build-up of relatively large populations of various species of big-game mammals, such as white-tailed deer, elk, bison (Bison bison), and grizzly bear (Ursus arctos). Because of the large populations in the park, the overall abundance of game species on the greater landscape is also larger. This means that a relatively high intensity of hunting can be supported. This is considered important because it provides local people with meat and subsistence as well as economic opportunities through guiding and the marketing of equipment, accommodations, food, fuel, and other necessities to non-local hunters.

By providing a game-preserve function for the larger landscape, wildlife refuges and other kinds of protected areas
help to ensure that hunting can be managed to allow populations of exploited species to be sustained, while providing opportunities for people to engage in subsistence and economic activities. [Bill Freedman Ph.D.]
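The sustainability criterion described above, that harvest must not exceed regeneration, can be illustrated with a toy model. The sketch below is an added illustration, not part of the original entry; it uses the standard logistic-growth equation with arbitrary, assumed parameter values.

```python
# Toy model (added illustration): logistic population growth with a
# constant annual harvest. A harvest at or below the regeneration rate
# is sustainable; a harvest above it drives the stock toward collapse.
def stock_after(harvest, r=0.5, K=1000.0, n0=500.0, years=50):
    """Population size after `years` of growth minus harvest."""
    n = n0
    for _ in range(years):
        n = max(n + r * n * (1 - n / K) - harvest, 0.0)
    return n

msy = 0.5 * 1000 / 4  # maximum sustainable yield for logistic growth: rK/4 = 125
print(round(stock_after(harvest=120)))  # below MSY -> population persists (~600)
print(round(stock_after(harvest=140)))  # above MSY -> population collapses (0)
```

The two runs differ by only 20 animals per year, yet one harvest is sustained indefinitely while the other exhausts the stock, which is why regulated quotas and preserves that buffer the breeding population matter.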
RESOURCES
BOOKS
Freedman, B. Environmental Ecology. 2nd ed. San Diego: Academic Press, 1995.
Miller, G. T. Resource Conservation and Management. Belmont, CA: Wadsworth Publishing Co., 1990.
Owen, O. S., and D. D. Chiras. Natural Resource Conservation: Management for a Sustainable Future. Englewood Cliffs, NJ: Prentice Hall, 1995.
Robinson, W. L., and E. G. Bolen. Wildlife Ecology and Management. 3rd ed. New York: Macmillan Publishing Co., 1995.
Gamma ray

High-energy forms of electromagnetic radiation with very short wavelengths. Gamma rays are emitted by cosmic sources or by the radioactive decay of atomic nuclei, which occurs during nuclear reactions or the detonation of nuclear weapons. Gamma rays are the most penetrating of all forms of nuclear radiation: they travel about 100 times deeper into human tissue than beta particles and 10,000 times deeper than alpha particles. Gamma rays cause chemical changes in the cells through which they pass. These changes can result in the cells' death or the loss of their ability to function properly. Organisms exposed to gamma rays may suffer illness, genetic damage, or death. Cosmic gamma rays do not usually pose a danger to life because they are absorbed as they travel through the atmosphere. The penetrating power of a beam of gamma rays can be described quantitatively by the attenuation relation shown below. See also Ionizing radiation; Radiation exposure; Radioactive fallout
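The entry's comparison of penetrating power can be made quantitative with the standard exponential attenuation law, a textbook relation added here for illustration (it is not part of the original entry). For a narrow beam of initial intensity $I_0$ entering a material with linear attenuation coefficient $\mu$ (which depends on the material and on the gamma-ray energy), the intensity remaining at depth $x$ is

$$I(x) = I_0\, e^{-\mu x}, \qquad x_{1/2} = \frac{\ln 2}{\mu},$$

where $x_{1/2}$ is the half-value thickness, the depth of material that absorbs half the beam. The much smaller $\mu$ of tissue for gamma rays, compared with the effective stopping power for beta and alpha particles, is what the "100 times" and "10,000 times" comparisons above reflect.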
Gandhi, Mohandas Karamchand (1869 – 1948)

Indian religious leader

Mohandas Karamchand Gandhi led the movement that freed India from colonial occupation by the British. His leadership was based not only on his political vision but also on his moral, economic, and personal philosophies. Gandhi's beliefs have influenced many political movements throughout the world, including the civil rights movement in the United States, but their relevance to the modern environmental movement has not been widely recognized or understood until recently.
In developing the principles that would enable the Indian people to form a united independence movement, one of Gandhi's chief concerns was preparing the groundwork for an economy that would allow India to be both self-sustaining and egalitarian. He did not believe that an independent economy in India could be based on the Western model; he considered a consumer economy of unlimited growth impossible in his country because of its huge population base and high level of poverty. He argued instead for the development of an economy based on the careful use of indigenous natural resources. His was a philosophy of conservation, and he advocated a lifestyle based on limited consumption, sustainable agriculture, and the utilization of labor resources instead of imported technological development.

Gandhi's plans for India's future were firmly rooted both in moral principles and in a practical recognition of the country's economic strengths and weaknesses. He believed that the key to an independent national economy and a national sense of identity was not only indigenous resources but also indigenous products and industries. Gandhi made a point of wearing only homespun, undyed cotton clothing that had been handwoven on cottage looms. He anticipated that the practice of wearing homespun cotton cloth would create an industry for a product that had a ready market, for cotton was a resource that was both indigenous and renewable. He recognized that India's major economic strength was its vast labor pool, and the low level of technology needed for this product would encourage the development of an industry that was highly decentralized. It could provide employment without encouraging mass migration from rural to urban areas, thus stabilizing rural economies and national demography. The use of cotton textiles would also prevent dependence on expensive synthetic fabrics that had to be imported from Western nations, consuming scarce foreign exchange. He also believed that synthetic textiles were not suited to India's climate, and that they created an undesirable distinction between the upper classes that could afford them and the vast majority that could not.

The essence of his economic planning was a philosophical commitment to living a simple lifestyle based on need. He believed it was immoral to kill animals for food and advocated vegetarianism; he advocated walking and other simple forms of transportation, contending that India could not afford a car for every individual; and he advocated the integration of ethical, political, and economic principles into individual lifestyles. Although many of his political tactics, particularly his strategy of civil disobedience, have been widely embraced in many countries, his economic philosophies have had a diminishing influence in a modern, independent India, which has been pursuing sophisticated technologies and a place in the global economy. But to some, his work seems increasingly relevant to a world with limited resources and a rapidly growing population. [Usha Vedagiri and Douglas Smith]

Mohandas Karamchand Gandhi. (AP/Wide World Photos. Reproduced by permission.)
RESOURCES
BOOKS
Chadha, Yogesh. Gandhi: A Life. Wiley, 1998.
Fischer, L. The Life of Mahatma Gandhi. New York: Harper, 1950.
Mehta, V. Mahatma Gandhi and His Apostles. New York: Viking, 1977.
Garbage

In 1999, the United States generated 230 million tons of municipal solid waste, compared with 195 million tons in 1990, according to Environmental Protection Agency (EPA) estimates. On average, each person generated 4.6 lb (2.1 kg) of such waste per day in 1999, and the EPA expects that amount to continue to increase. That waste includes cans, bottles, newspapers, paper and plastic packages, uneaten food, broken furniture and appliances, old tires, lawn clippings, and other refuse. This waste can be placed in landfills, incinerated, recycled, or, in some cases, composted.

Landfilling—disposing of waste on land in a series of layers that are compacted and covered, usually with soil—is the main method of waste management in this country, accounting for about 57% of the waste. But old landfills are being closed, and new ones are hard to site because of community opposition. Landfills once were open dumps, causing unsanitary conditions, methane explosions, and releases of hazardous chemicals into groundwater and air. Old dumps make up 22% of the sites on the Superfund National Priorities List. Today, landfills must have liners, gas collection systems, and other controls mandated under Subtitle D of the Resource Conservation and Recovery Act (RCRA).

Incineration has been popular among solid waste managers because it helps to destroy bacteria and toxic chemicals and to reduce the volume of waste. But public opposition, based on fears that toxic metals and other chemical emissions will be released from incinerators, has made the siting of new facilities extremely difficult. In the past, garbage burning was done in open fields, in dumps, or in backyard drums, but the Clean Air Act (1970) banned open burning, leading to new types of incinerators, most of which are designed to generate energy.

Recycling, which consists of collecting materials from waste streams, preparing them for market, and using those materials to manufacture new products, is catching national attention as a desirable waste management method. As of 1999, all states and the District of Columbia had some type of statewide recycling law aimed at promoting greater recycling of glass, paper, metals, plastics, and other materials. Used oil, household batteries, and lead-acid automotive batteries are recyclable waste items of particular concern because of their toxic constituents.

Composting is a waste management approach that relies on heat and microorganisms—mostly bacteria and fungi—to decompose yard wastes and food scraps, turning them into a nutrient-rich mix called humus or compost. This mix can be used as fertilizer. However, as with landfills and incinerators, composting facilities have been difficult to site because of community opposition, in part because of the disagreeable smell generated by some composting practices.

Recently, waste managers have shown interest in source reduction: reducing either the amount of garbage generated in the first place or the toxic ingredients of garbage. Reusable blankets instead of throw-away cardboard packaging for protecting furniture are one example of source reduction. Businesses are regarded as a prime target for source reduction, such as implementing double-sided photocopying
to save paper, because the approach offers potentially large cost savings to companies. [David Clarke]
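As a quick consistency check on the figures at the top of this entry (the 1999 U.S. population of roughly 273 million used below is an assumption supplied here, not a number from the entry):

```python
# Back-of-envelope check (added illustration): total waste vs. per-person rate.
tons_per_year = 230e6   # EPA estimate for 1999, from the entry
population = 273e6      # approximate 1999 U.S. population (assumed)
lb_per_person_per_day = tons_per_year * 2000 / population / 365
print(round(lb_per_person_per_day, 1))  # -> 4.6, matching the entry's figure
```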
RESOURCES
BOOKS
Blumberg, L., and R. Gottlieb. War on Waste: Can America Win Its Battle With Garbage? Washington, DC: Island Press, 1989.
Underwood, J., A. Hershkowitz, and M. de Kadt. Garbage—Practices, Problems, Remedies. New York: INFORM, 1988.
U.S. Office of Technology Assessment. Facing America's Trash: What Next for Municipal Solid Waste? Washington, DC: U.S. Government Printing Office, 1989.
Garbage Project

The Garbage Project was founded in 1973, shortly after the first Earth Day, by William Rathje, professor of anthropology, and fellow archaeologists at the University of Arizona. The objective was to apply the techniques and tools of their science to the study of modern civilization by analyzing its garbage. Using sample analysis and assessing biodegradation, they also hoped to increase their understanding of resource depletion and of environmental and landfill-related problems. Because biodegradation requires sunlight, moisture, and oxygen, as well as organic material and bacteria, little of it actually takes place in landfills, resulting in perfectly preserved heads of lettuce, 40-year-old hot dogs, and completely legible 50-year-old newspapers.

In Rubbish! The Archaeology of Garbage, published in 1992, Rathje and Atlantic Monthly managing editor Cullen Murphy discuss some of the data gleaned from the project. For example, the accumulation of refuse has raised the City of New York 6–30 ft (1.8–9 m) since its founding. Also, the largest proportion—40%—of landfilled garbage is paper, followed by the leftovers from building construction and demolition. In fact, newspapers alone make up about 13% of the total volume of trash.

Just as interesting as what they found was what they did not find. Contrary to much of public opinion, fast-food packaging made up only one-third of 1% of the total volume of trash landfilled between 1980 and 1989, while expanded polystyrene foam accounted for no more than 1%. Even disposable diapers averaged out at only 1% by weight of the total solid waste contents (1.4% by volume). Of all the garbage examined, plastics constituted 20–24%. Surveys of several national landfills revealed that organic materials made up 40–52% of the total volume of waste.

The Garbage Project also debunked the idea that the United States is running out of space for landfills. While it
is true that many landfills have been shut down, it is also true that many of those were quite small to begin with and that the landfills that remain pose fewer environmental hazards. It is estimated that one landfill 120 ft (36.6 m) deep and measuring 44 mi² (114 km²) would adequately handle the needs of the entire nation for the next 100 years (assuming current levels of waste production).

In "A Perverse Law of Garbage," Rathje extrapolated from "Parkinson's Law" to define his Parkinson's Law of Garbage: "Garbage expands so as to fill the receptacles available for its containment." As evidence he cites a Garbage Project study of the recent mechanization of garbage pickup in some larger cities and the ensuing effects. As users were provided with increasingly larger receptacles (in order to accommodate the mechanized trucks), they continued to fill them up. Rathje attributes this to the newfound convenience of disposing of that which previously had been consigned to the basement or secondhand store, and concludes that the move to automation may be counterproductive to any attempt to reduce garbage and increase recycling. [Ellen Link]
RESOURCES
BOOKS
Rathje, W. L., and C. Murphy. Rubbish!: The Archaeology of Garbage. New York: Harper Collins, 1992.
PERIODICALS
Lilienfeld, R. M. "Six Enviro-Myths: Recycling is the Key." New York Times (January 21, 1995): 23(L).
Rathje, W. L. "A Perverse Law of Garbage." Garbage 4, no. 6 (December–January 1993): 22.
Garbology

The study of garbage, through either archaeological excavation of landfills or analysis of fresh garbage, to determine what the composition of municipal solid waste says about the society that generated it. The term is associated with the Garbage Project of the University of Arizona (Tucson), co-directed by William Rathje and Wilson Hughes, which began studying trash in 1973 and excavating landfills in 1987. They found that little degradation occurs in landfills; New York newspapers from 1949 and an ear of corn discarded in 1971 were found intact. Unexpectedly, the project also found that plastics make up less than one percent of the total volume of landfills.
Gardens see Botanical garden; Organic gardening and farming
Gasohol

Gasohol is a term used for the mixture of 10% ethyl alcohol (also called ethanol or grain alcohol) with gasoline. Ethanol raises the octane rating of lead-free automobile fuel and significantly decreases the carbon monoxide released from tailpipes. It has also been promoted as a means of reducing corn surpluses. By 2001, 2.2 billion gal (8.3 billion l) were being produced per year, and this number is expected to rise to 4.6 billion gal (17.4 billion l). However, ethanol also raises the vapor pressure of gasoline, and it has been reported to increase the release of "evaporative" volatile hydrocarbons from the fuel system and of oxides of nitrogen from the exhaust. These substances are components of urban smog, and thus the role of ethanol in reducing pollution is controversial.
Gasoline

Crude oil in its natural state has very few practical uses. However, when it is separated into its component parts by the process of fractionation, or refining, those parts have an almost unlimited number of applications. In the first 60 years after the process of petroleum refining was invented, the most important fraction produced was kerosene, widely used as a home heating product. The petroleum fraction slightly lighter than kerosene—gasoline—was regarded as a waste product and discarded. Not until the 1920s, when the automobile became popular in the United States, did manufacturers find any significant use for gasoline. From then on, however, the importance of gasoline has increased with automobile use.

The term gasoline refers to a complex mixture of liquid hydrocarbons that condense in a fractionating tower at temperatures between 100°F and 400°F (40°C and 205°C). The hydrocarbons in this mixture are primarily single- and double-bonded compounds containing five to 12 carbon atoms. Gasoline that comes directly from a refining tower, known as naphtha or "straight-run" gasoline, was an adequate fuel for the earliest motor vehicles. But as improvements in internal combustion engines were made, problems began to arise. The most serious problem was "knocking." If a fuel burns too rapidly in an internal combustion engine, it generates a shock wave that makes a "knocking" or "pinging" sound. The shock wave will, over time, also cause damage to the engine. The hydrocarbons that make up straight-run gasolines proved to burn too rapidly for automotive engines developed after 1920.

Early in the development of automotive fuels, engineers adopted a standard for the amount of knocking caused by a fuel and, hence, for the fuel's efficiency. That standard was known as the "octane number." To establish a fuel's octane number, it is compared with a very poor fuel (n-heptane), assigned an octane number of zero, and a very good fuel (isooctane), assigned an octane number of 100. The octane number of straight-run gasoline is anywhere from 50 to 70.

As engineers made more improvements in automotive engines after the 1920s, chemists tried to keep pace by developing better fuels. One approach they used was to subject straight-run gasoline (as well as other crude oil fractions) to various treatments that changed the shape of hydrocarbon molecules in the gasoline mixture. One such method, called cracking, involves heating straight-run gasoline or another petroleum fraction to high temperatures. The process produces a better fuel from the newly formed hydrocarbon molecules. Another method for improving the quality of gasoline is catalytic reforming, in which the cracking reaction takes place over a catalyst such as copper, platinum, rhodium, or another "noble" metal, or a form of clay known as zeolite. Again, the hydrocarbon molecules formed in the fraction are better fuels than straight-run gasoline. Gasoline produced by catalytic cracking or reforming has an octane number of at least eighty.
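Because the octane scale is defined by reference blends of isooctane and n-heptane, a rating can be read as a blend fraction: a fuel with octane number 90 knocks like a 90:10 isooctane/n-heptane mixture. The sketch below is an added illustration, not part of the entry; it computes the rating of a reference blend as a volume-weighted average (for real fuels, blending is only approximately linear).

```python
# Sketch (added illustration): octane number of a reference-fuel blend.
# The octane scale assigns n-heptane 0 and isooctane 100 by definition,
# so a reference blend's rating is its volume-weighted average of the two.
def blend_octane(parts):
    """parts: list of (volume, octane) pairs; returns the blend's rating."""
    total = sum(volume for volume, _ in parts)
    return sum(volume * octane for volume, octane in parts) / total

# 90 parts isooctane (100) + 10 parts n-heptane (0) -> octane number 90
print(blend_octane([(90, 100), (10, 0)]))  # 90.0
```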
A very different approach to improving gasoline quality is the use of additives, chemicals added to gasoline to improve the fuel's efficiency. Automotive engineers learned more than 50 years ago that adding as little as two grams of tetraethyl lead, the best-known additive, to one gallon of gasoline raises its octane number by as much as ten points. Until the 1970s, most gasolines contained tetraethyl lead. Then concerns began to grow about the release of lead to the environment during the combustion of gasoline. Lead concentrations in urban air had reached a level five to 10 times that of rural air. Residents of countries with few automobiles, such as Nepal, had only one-fifth the lead in their bodies as did residents of nations with many automotive vehicles, such as the United States. The toxic effects of lead on the human body have been known for centuries, and the risks posed by leaded gasoline became a major concern. In addition, leaded gasoline became a problem because it damaged a car's catalytic converter, which reduces air pollutants in exhaust. Finally, in 1973, the Environmental Protection Agency (EPA) acted on the problem and set a timescale for the gradual elimination of leaded fuels. According to this schedule, the amount of lead was to be reduced from 2 to 3 grams per gallon (the 1973 average) to 0.5 g/gal by 1979. Ultimately, the additive was to be totally eliminated from all gasoline.

The elimination of leaded fuels has been made possible by the invention of new and safer additives. One of the most popular is methyl-t-butyl ether (MTBE). By 1988 MTBE had become so popular that it was among the 40 most widely produced chemicals in the United States. In 2001, however, California moved to phase MTBE out of gasoline by 2003. Yet another approach to improving fuel efficiency is the mixing of gasoline with ethyl or methyl alcohol. This product, known as gasohol, has the advantages of a high octane rating, lower cost, and reduced emission of pollutants compared to normal gasoline. See also Air pollution; Alternative fuels [David E. Newton]
RESOURCES
BOOKS
Joesten, M. D., et al. World of Chemistry. Philadelphia: Saunders, 1991.
Lapedes, D. N., ed. McGraw-Hill Encyclopedia of Energy. New York: McGraw-Hill, 1976.
PERIODICALS
"MTBE Growth Limited Despite Lead Phasedown in Gasoline." Chemical & Engineering News (July 15, 1985): 12.
Williams, R. "On the Octane Trail." Technology Illustrated (May 1983): 52–53.
Gasoline tax

Gasoline taxes include federal, state, county, and municipal taxes imposed on gasoline motor vehicle fuel. In the United States, most of the federal tax is used to fund maintenance of and improvements to such transportation infrastructure as the interstate highways. As of mid-2002, the federal excise tax for gasoline stood at 18.4 cents per gallon, and state excise taxes ranged from 7.5 cents in Georgia to 29 cents in Rhode Island (making the weighted national average state tax 19.97 cents per gallon). In total, the U.S. national average gasoline tax (combining federal and state) was 38 cents per gallon.

Oregon became the first state to institute a tax on gasoline, in 1919. By the time the federal government established its own 1-cent gas tax in 1932, every state had a gas tax. After several small increases in the 1930s and 1940s, the gas tax was raised to 3 cents in 1956 to finance the Highway Trust Fund, which was earmarked to pay for federal interstate construction and other road work. In 1982, the federal gasoline tax was increased to 9 cents to fund road maintenance and mass transit. The tax was hiked again in 1990, to 14.1 cents, and in 1993, to 18.4 cents, where it remained as of July 2002. Over this time, gasoline prices increased from about 20 cents per gallon in 1938 to a U.S. national average of 139.2 cents in May of 2002. The average national gasoline tax (federal plus a weighted average of state taxes) accounts for 30.2% of the retail price of a gallon of gas.
In some countries, diesel fuels are taxed and priced lower than gasoline. Commercial vehicles are major consumers of diesel, and the lower taxes avoid undue impacts on trucking and commerce. In the United States, however, diesel is taxed at a higher rate than gasoline—an average of 44 cents per gallon (including the 24.3-cent federal tax and a weighted average of state taxes). In contrast, gasohol, an alternative fuel consisting of 90% gasoline and 10% ethanol, has a lower rate of taxation (with a 13.1-cent federal excise tax).

Although federal gasoline taxes are a manufacturer's excise tax, meaning that the government collects the tax directly from the manufacturer, rate hikes are often passed on to consumers at the pump. In this light, gasoline taxes have been criticized as regressive and thus inequitable—i.e., lower-income individuals pay a greater share of their income as tax than higher-income individuals. Also, the tax as a share of the pump price has been increasing. However, it should be noted that the general trend of the real price of gasoline (adjusted for inflation and including taxes) has been downward for many decades. In addition, automobile efficiency (mileage) has been improving, and thus fuel requirements and the cost per mile traveled have declined. These factors can be viewed as largely or completely offsetting the impact of gasoline taxes.

For example, Congress attempted in May 1996 to roll back the 4.3-cent tax increase of 1993. The impact of this repeal for a family of four who drive 12,000 miles a year at 20 miles per gallon is a savings of about $26 (see the short calculation following this entry), which in the House debates was compared to the cost of a family dinner at McDonald's. On the other hand, a rollback could have bigger consequences for the future upkeep of the country's highways and interstates; a 2000 Congressional report estimated that a repeal of the federal gasoline tax would mean a $5.2 billion annual loss in revenues for the Highway Trust Fund.

Outside of the United States, both gasoline prices and gas tax rates are typically far higher (e.g., gasoline taxes were 307 cents per gallon in the United Kingdom as of March 2002). In addition to funding governments, high gasoline taxes form part of a strategy to encourage the use of public transportation, reduce pollution, conserve energy, and improve national security (since most gasoline is imported). [Stuart Batterman and Paula Anne Ford-Martin]
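The $26 figure cited above follows directly from the stated driving assumptions:

```python
# Arithmetic behind the ~$26 savings cited in the entry.
miles_per_year = 12_000
miles_per_gallon = 20
tax_cut = 0.043  # the 4.3-cent-per-gallon increase of 1993, in dollars

gallons = miles_per_year / miles_per_gallon  # 600 gallons per year
savings = gallons * tax_cut                  # $25.80, rounded to $26
print(f"{gallons:.0f} gallons -> ${savings:.2f} per year")
```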
RESOURCES
PERIODICALS
Talley, Louis Allen. Congressional Research Service. "The Federal Excise Tax on Gasoline and the Highway Trust Fund: A Short History." CRS Report for Congress. Washington, DC: Congressional Research Service, March 2000.
OTHER
American Petroleum Institute, Policy Analysis and Statistics Department. How Much We Pay for Gasoline: April 2001 Review. Washington, DC: API, 2001.
U.S. Department of Energy, Energy Information Administration. Gasoline and Diesel Fuel Update. [cited July 9, 2002].
U.S. Department of Transportation, Federal Highway Administration. Our Nation's Highways: Selected Facts and Figures 2000. Publication No. FHWAPL-01-1012. [cited July 6, 2002].
Gastropods

Gastropods are invertebrate animals that make up the largest class in the phylum Mollusca. Examples of common gastropods include all varieties of snails, abalone, limpets, and land and sea slugs. There are over 35,000 existing species, as well as an additional 15,000 separate fossil species. Gastropods first appeared in the fossil record during the early Cambrian period, approximately 550 million years ago.

This diverse group of animals is characterized by a soft body made up of three main parts: the head, foot, and visceral mass. The head contains a mouth and often sensing tentacles. The lower portion of the body makes up the foot, which allows slow creeping along rocks and other solid surfaces. The visceral mass is the main part of the body, containing most of the internal organs. In addition to these body parts, gastropods possess a mantle, a fold that secretes a hard, calcium carbonate shell. The single, asymmetrical shell of a gastropod is most often spiral shaped; however, it can also be flattened or cone-like. This shell is an important source of protection. Predators have a difficult time accessing the soft flesh inside, especially if there are sharp points on the outside, as there are on the shells of some of the more ornate gastropods. There are also some gastropods, such as slugs and sea hares, that do not have shells or have greatly reduced shells. Some of the shell-less types that live in the ocean (i.e., nudibranchs or sea slugs) are able to use stinging cells from prey that they have consumed as a means of protection.

In addition to a spiraling of their shells, the soft bodies of most gastropods undergo 180 degrees of twisting, or torsion, during early development, when one side of the visceral mass grows faster than the other. This characteristic distinguishes gastropods from other molluscs. Torsion results in a U-shaped digestive tract, with the anal opening slightly behind the head. The torsion of the soft body and the spiraling of the shell are thought to be unrelated evolutionary events.

Gastropods have evolved to live in a wide variety of habitats. The great majority are marine, living in the world's oceans. Numerous species live in fresh water, while others live entirely on land. Of those that live in water, most are found on the bottom, attached to rocks or other surfaces. There are even a few species of gastropods without shells, including sea butterflies, that are capable of swimming. Living in different habitats has resulted in a wide variety of structural adaptations within the class Gastropoda. For example, those gastropods that live in water use gills to obtain the oxygen necessary for respiration, while their terrestrial relatives have evolved lungs to breathe.

Gastropods are important links in the food webs of the habitats in which they live, employing a wide variety of feeding strategies. For example, most gastropods scrape microscopic algae off rocks or the surfaces of plants by moving a rasping row of teeth, set on a tongue-like organ called a radula, back and forth. Because the teeth on the radula gradually wear away, new teeth are continuously secreted. Other gastropods have evolved a specialized radula for drilling through the shells of animals to get at their soft flesh. For example, the oyster drill, a small East Coast gastropod, bores a small hole in the shell of neighboring molluscs such as oysters and clams so that it can consume the soft flesh. In addition, some terrestrial gastropods, such as snails, use their radula to cut through pieces of leaves for food.

Gastropods are eaten by numerous animals, including various types of fish, birds, and mammals. They are also eaten by humans throughout the world. Abalone, muscular shelled gastropods that cling to rocks, are consumed on the west coast of the United States and in Asia. Fritters and chowder are made from the large, snail-like queen conch on many Caribbean islands. Escargot (snails in a garlicky butter sauce) are a European delicacy. [Max Strieb]
RESOURCES
BOOKS
Barnes, R. D. Invertebrate Zoology. 5th ed. Philadelphia: Saunders College Publishing, 1987.
Gene bank

The term gene bank refers to any system by which the genetic composition of some population is identified and stored. Many different kinds of gene banks have been established for many different purposes. Perhaps the most numerous gene banks are those that consist of plant seeds, known as germ banks.

The primary purpose for establishing a gene bank is to preserve examples of threatened or endangered species. Each year, untold numbers of plant and animal species become extinct because of natural processes and, more commonly, as the result of human activities. Once those species become extinct, their gene pools are lost forever. Scientists want to retain those gene pools for a number of reasons. For example, agriculture has been undergoing a dramatic revolution in many parts of the world over the past half century. Scientists have been making available to farmers plants that grow larger, yield more fruit, are more disease-resistant, and have other desirable characteristics. These plants have been produced by agricultural research in the United States and other nations. Such plants are very attractive to farmers, and they are also important to governments as a way of meeting the food needs of growing populations, especially in Third World countries.

When farmers switch to these new plants, however, they often abandon older, more traditional crops that may then become extinct. Although the traditional plants may be less productive, they have other desirable characteristics. They may, for example, be able to survive droughts or other extreme environmental conditions that new plants cannot. Placing seeds from traditional plants in a gene bank allows them to be preserved. At some later time, scientists may want to study these plants further and perhaps identify the genes that are responsible for various desirable properties. The U.S. Department of Agriculture (USDA) has long maintained a seed bank of plants native to the United States. About 200,000 varieties of seeds are stored at the USDA's station at Fort Collins, Colorado, and another 100,000 varieties are kept at other locations around the country.

Efforts are now underway to establish gene banks for animals, too. Such banks consist of small colonies of the animals themselves. Animal gene banks are desirable as a way of maintaining species whose natural populations are very low. Sometimes the purpose of the bank is simply to maintain the species to prevent its becoming extinct. In other cases, breeds are being preserved because they were once used as farm animals although they have since been replaced by more productive modern hybrids. The Fayoumi chicken native to Egypt, for example, has now been abandoned by farmers in favor of imported breeds. The Fayoumi, without some form of protection, is likely to become extinct. Nonetheless, it may well have some characteristics (genes) that are worth preserving.

In recent years, another type of gene bank has become possible. In this kind of gene bank, the actual base sequences of important genes in the human body are determined, collected, and catalogued. This effort, begun in 1990, is a part of the Human Genome Project effort to map all human genes. See also Agricultural revolution; Extinction; Genetic engineering; Population growth [David E. Newton]
RESOURCES
PERIODICALS
Anderson, C. "Genetic Resources: A Gene Library That Goes 'Moo'." Nature 355 (January 30, 1992): 382.
Crawford, M. "USDA Bows to Rifkin Call for Review of Seed Bank." Science 230 (December 6, 1985): 1146–1147.
Roberts, L. "DOE to Map Expressed Genes." Science 250 (November 16, 1990): 913.
Gene pool

The term gene pool refers to the sum total of all the genetic information stored within any given population. A gene is a specific portion of a DNA (deoxyribonucleic acid) molecule, so a gene pool is the sum total of all of the DNA contained within a population of individuals. The concept of the gene pool is important in ecological studies because it reveals changes that may or may not be taking place within a population.

In a population living in an environment ideal for its needs, the gene pool is likely to undergo little or no change. If individuals are able to obtain all the food, water, energy, and other resources they need, they experience relatively little stress, and there is no pressure selecting for one or another characteristic. Changes in gene frequency do still occur because of natural factors in the environment. For example, natural radiation exposure causes changes in DNA molecules that are revealed as genetic changes. These natural mutations are one of the factors that allow continuous change in the genetic constitution of a population and, in turn, make evolution possible.

Natural populations seldom live in ideal situations, however, and so they experience various kinds of stress that lead to changes in the gene pool. A classical example of this kind of change was reported by J. B. S. Haldane in 1937. Haldane found that a population of moths gradually became darker in color over time as the trees on which they lived also became darker because of pollution from factories. Moths in the population who carried genes for darker color were better able to survive and reproduce than were their lighter-colored cousins, so the composition of the gene pool shifted in response to this selection pressure.

Humans have an ability that no other species has: to make conscious changes in gene pools. Sometimes we make those changes in the gene pools of plants or animals to serve our own needs for food or other resources. Plants are hybridized, for example, to produce populations that have some desirable quality such as resistance to disease, a shorter growing season, or better-tasting fruit. The modern science of genetic engineering is perhaps the most specific and deliberate way of changing gene pools today.
Humans can also change the gene pool of their own species. For example, individuals with various genetic disorders were at one time doomed to an early death. Our inability to treat diabetes, sickle-cell anemia, phenylketonuria, and other hereditary conditions meant that the frequency of the genes causing those disorders in the human gene pool was kept under control by natural forces. Today, many of those same disorders can be treated by medical or genetic techniques. That is a positive benefit for the individuals who are treated, but it raises questions about the quality of the human gene pool overall. Instead of being lost naturally through an individual's death, many of those deleterious genes are now retained as part of the gene pool. This fact has at times raised questions about the best way in which medical science should deal with genetic disorders. See also Agricultural revolution; Birth defects; Extinction; Gene bank; Population growth [David E. Newton]
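The persistence of deleterious recessive genes in a gene pool can be made concrete with the standard Hardy-Weinberg calculation, a textbook illustration added here (it is not part of the original entry): even when the affected genotype is rare, unaffected carriers keep the allele common.

```python
# Hardy-Weinberg illustration (added example): genotype frequencies for a
# recessive allele at frequency q in a randomly mating population.
q = 0.01               # frequency of a deleterious recessive allele (assumed)
p = 1 - q              # frequency of the normal allele
affected = q * q       # homozygous recessive individuals: 1 in 10,000
carriers = 2 * p * q   # unaffected heterozygous carriers: about 1 in 50
print(f"affected: {affected:.2%}, carriers: {carriers:.2%}")
```

Because roughly 99% of the recessive alleles in this example reside in healthy carriers, even strong selection against affected individuals removes the allele from the gene pool only very slowly.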
RESOURCES
BOOKS
Patt, D. I., and G. R. Patt. An Introduction to Modern Genetics. Reading, MA: Addison-Wesley, 1976.
Genetic engineering

Genetic engineering is the manipulation of the hereditary material of organisms at the molecular level. The hereditary material of most cells is found in the chromosomes, and it is made of deoxyribonucleic acid (DNA). The total DNA of an organism is referred to as its genome. In the 1950s, scientists first discovered how the structure of DNA molecules worked and how they stored and transmitted genetic information.

Genetic engineering relies on recombinant DNA technology to manipulate genes. Methods are now available for rapidly sequencing the nucleotides of pieces of DNA, as well as for identifying particular genes of interest and for isolating individual genes from complex genomes. This allows genetic engineers to alter genetic materials to produce new substances or create new functions. The biochemical tools used by genetic engineers or molecular biologists include a series of enzymes that can "cut and paste" genes: enzymes are used to cut a piece of DNA, insert into it a new piece of DNA from another organism, and then seal the joint.

One important group of these is the restriction enzymes, of which well over 500 are known. Most restriction enzymes
are endonucleases—enzymes that break the double helix of DNA within the molecule, rather than attacking the ends of the helix. Every restriction enzyme is given a specific name to identify it uniquely. The first three letters, in italics, indicate the biological source of the enzyme, the first letter being the initial of the genus and the second and third letters being the first two letters of the species name. Thus restriction enzymes from Escherichia coli are called Eco, those from Haemophilus influenzae are Hin, from Diplococcus pneumoniae comes Dpn, and so on.

The genetic engineer can use a restriction enzyme to locate and cut almost any sequence of bases. Cuts can be made anywhere along the DNA, dividing it into many small fragments or a few longer ones. The results are repeatable: cuts made by the same enzyme on a given sort of DNA will always be the same. Some enzymes recognize sequences as long as six or seven bases; these are used for opening a circular strand of DNA at just one point. Other enzymes have a smaller recognition site, three or four bases long; these produce small fragments that can then be used to determine the sequence of bases along the DNA. The cut that each enzyme makes also varies from enzyme to enzyme. Some, like HindII, make a clean cut straight across the double helix, leaving DNA fragments with ends that are flush. Other enzymes, such as EcoRI, make a staggered break, leaving single strands with protruding cohesive ends ("sticky ends") that are complementary in base sequence. Following breakage, and under the right conditions, the complementary ends from different sources can be rejoined to form recombinant DNA.

Another important biochemical tool used by genetic engineers is DNA polymerase, an enzyme that normally catalyzes the growth of a nucleic acid chain. DNA polymerase is used by genetic engineers to fill the gaps between the two sets of fragments in newly joined chimeric molecules of recombinant DNA. DNA polymerase is also used to label DNA fragments: because it labels practically every base, it allows minute quantities of DNA to be studied in detail. If a piece of ribonucleic acid (RNA) of the target gene is the starting point, then the enzyme reverse transcriptase is used to produce a strand of complementary DNA (cDNA).

Genetic engineers usually need large numbers of genetically identical copies of the DNA fragment of interest. One way of producing them is to insert the gene into a suitable gene carrier, called a cloning vector. Common cloning vectors include bacterial plasmids (small circles of DNA found in bacterial cells independently of the main DNA molecule) and viruses such as the bacteriophage lambda. When the cloning vectors divide, they replicate both themselves and the foreign DNA segment linked to them.
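A recognition site is simply a fixed subsequence of bases, which makes the idea easy to demonstrate in code. The sketch below is an added illustration, not from the entry; it scans a DNA string for the EcoRI site GAATTC, and the example sequence is invented.

```python
# Illustration (added example): finding EcoRI recognition sites. EcoRI
# recognizes GAATTC and makes a staggered cut between G and A, leaving
# complementary single-stranded "sticky ends".
def find_sites(dna: str, site: str = "GAATTC") -> list[int]:
    """Return the 0-based positions at which the recognition site begins."""
    dna = dna.upper()
    return [i for i in range(len(dna) - len(site) + 1)
            if dna[i:i + len(site)] == site]

plasmid = "TTGAATTCAGGCCGAATTCTT"  # invented sequence with two EcoRI sites
print(find_sites(plasmid))  # -> [2, 13]
```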
DNA injected into a mouse embryo. (Photograph by Jon Gordon. Phototake. Reproduced by permission.)
In the plasmid insertion method, restriction enzymes are used to cleave the plasmid double helix so that a stretch of DNA (previously cleaved with the same enzyme) can be inserted into the plasmid. As a result, the "sticky ends" of the plasmid DNA and the foreign DNA are complementary and base-pair when mixed together. The fragments held together by base pairing are then permanently joined by DNA ligase. The host bacterium, with its 20- to 30-minute reproductive cycle, is like a manufacturing plant: with repeated doublings of its offspring on a controlled culture medium, millions of clones of the purified DNA fragments can be produced overnight. Similarly, if viruses (bacteriophages) are used as cloning vectors, the gene of interest is inserted into the phage DNA, and the virus is allowed to enter the host bacterial cell, where it multiplies. A single parental lambda phage particle containing recombinant DNA can multiply to several hundred progeny particles inside the bacterial cell (E. coli) within roughly 20 minutes.

Cosmids are another type of viral cloning vehicle; they attach foreign DNA to the packaging sites of a virus and thus introduce the foreign DNA into an infective viral particle. Cosmids allow researchers to insert very long stretches of DNA into host cells, where cell multiplication amplifies the amount of DNA available. Large artificial chromosomes of yeast (called megaYACs) are also used as cloning vehicles, since they can store even larger pieces of DNA, 35 times more than can be stored conventionally in bacteria. The polymerase chain reaction (PCR) technique is an important newer development in the field of genetic engineering, since it allows the mass production of short segments of DNA directly and offers the advantage of bypassing the several steps involved in using bacteria and viruses as cloning vectors.

To introduce DNA fragments into mammalian cells, a different method must be used. Here, genes packed in solid calcium phosphate are placed next to a cell membrane that surrounds the fragment and transfers it to the cytoplasm. The gene is delivered to the nucleus during mitosis (when the nuclear membrane has disappeared), and the DNA fragments are incorporated into daughter nuclei, then into daughter cells. A mouse containing human cancer genes (the oncomouse) was patented in 1988.
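PCR's power to mass-produce DNA comes from the fact that, ideally, each thermal cycle doubles the number of copies of the target segment. The arithmetic below is an added back-of-envelope illustration, assuming perfect doubling every cycle.

```python
# Added illustration: ideal PCR amplification doubles the target per cycle.
for cycles in (10, 20, 30):
    print(f"{cycles} cycles -> {2 ** cycles:,} copies from one template")
# 30 cycles -> 1,073,741,824 copies: about a billion from a single molecule
```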
The potential benefits of recombinant DNA research are enormous. In recent years, scientists have identified the genetic basis of a number of medical disorders. Genetic engineering helps scientists replace a particular missing or defective gene with correct copies of that gene. If that gene then begins functioning in an individual, a genetic disorder may be cured. Researchers have already met with great success in finding the single genes that are responsible for common diseases like cystic fibrosis and hemochromatosis. In the early twenty-first century, scientists were applying knowledge from the Human Genome Project to try to map the multiple genes responsible for diseases like diabetes, hypertension, and schizophrenia.

Genetic engineering has also led to a great deal of controversy, however. Since scientists can duplicate genes, they can also duplicate, or "clone," animals and humans. In December 2001, scientists announced the birth of the first cloned cat, following the headline-making cloning of sheep in the 1990s. The announcement was made in April 2002 that the first human baby produced by a human cloning program would be born in November, sparking considerable medical and ethical controversy. In August 2001, President George W. Bush struggled with the debate over allowing federal funding for embryonic stem cell research. He allowed funding only for existing lines of cells, leaving further research in the hands of those who could seek private funding.

Genetic engineering also presents positive and negative controversy in its application to agriculture and the environment. Recombinant DNA techniques help scientists produce plants that offer medicinal or nutritional value: calcium-fortified orange juice and vitamin-enriched milk, for example, boost the nutritional value of these foods. Genetically modified foods offer both benefits and risks to the environment; the benefits include possibly cheaper and easier production of certain medicines, while the risks include the unnatural introduction of plants and animals produced in a laboratory environment. At the University of Arizona, scientists have modified tomatoes to produce vaccines for diarrhea and hepatitis B, and these vaccines will likely be much cheaper to produce than current drugs. Some critics worry that once the new crops move out of the safety of locked greenhouses and into crop fields, they may cross-pollinate with conventional tomato crops and contaminate them with modified genes.

A poll reported in 2002 showed that Americans were fairly evenly split in their feelings about the risks and benefits of genetically modified foods and biotechnology. Most like the idea that scientists can create plants that will help clean up toxic soils, reduce soil erosion, and reduce fertilizer run-off into streams and lakes. They also favor genetically engineered methods to reduce the amount of water used to grow crops and the development of disease-resistant tree varieties to replace those that might be threatened or endangered. Americans also favor the use of genetic engineering to reduce the need to log in native forests and to reduce the amount of chemical pesticides used by farmers. On the other hand, Americans express concern over the possible environmental effects of genetically modified plants or fish contaminating ordinary plants and fish, reducing genetic diversity, increasing the number of insects that might become resistant to pesticides, or changing ecosystems. See also Gene bank; Gene pool [Neil Cumberlidge Ph.D.]
RESOURCES
BOOKS
Cherfas, J. Man Made Life: An Overview of the Science and Technology and Commerce of Genetic Engineering. New York: Pantheon Books, 1982.
Watson, J. D., J. Tooze, and D. T. Kurtz. Recombinant DNA: A Short Course. San Francisco: W. H. Freeman, 1983.
Wheale, P. R., and R. M. McNally. Genetic Engineering: Catastrophe or Utopia? New York: St. Martin's Press, 1988.
PERIODICALS
"Americans Evenly Divided Over Environmental Risks, Benefits of Genetically Modified Food and Biotech." Health and Medicine Week, March 4, 2002, 18.
Coghlan, A. "Engineering the Therapies of Tomorrow." New Scientist 137 (April 24, 1993): 26–31.
Kahn, P. "Genome on the Production Line." New Scientist 137 (April 24, 1993): 32–36.
Miller, S. K. "To Catch a Killer Gene." New Scientist 137 (April 24, 1993): 37–40.
Primedia Intertec, Inc. "Cloning Controversy." Better Nutrition 64 (July 2002): 18–21.
Verma, I. M. "Gene Therapy." Scientific American 263 (November 1990): 68–84.
OTHER
"Genetic Engineers Blurring Lines Between Kitchen Pantry and Medicine Cabinet." Pew Initiative on Food and Biotechnology Newsroom. [cited July 9, 2002].
ORGANIZATIONS
Pew Initiative on Food and Biotechnology, 1331 H Street, Suite 900, Washington, DC USA 20005, (202) 347-9044, Fax: (202) 347-9047, Email: [email protected], http://pewagbiotech.org
Genetic resistance (or genetic tolerance) Genetic resistance (or genetic tolerance) refers to the ability of certain organisms to endure environmental conditions that are extremely stressful or lethal to non-adapted individuals of the same species. Such tolerance has a genetic basis, and 625
it evolves at the population level in response to intense selection pressures. Genetic resistance occurs when genetically variable populations contain some individuals that are relatively tolerant of exposure to some environmental factor, such as the presence of a high concentration of a specific chemical. If the tolerance is genetically based (i.e., due to specific information embodied in the DNA of the organism’s chromosomes), some or all of the offspring of these individuals will also be tolerant. Under conditions in which the chemical occurs in concentrations high enough to cause toxicity to non-tolerant individuals, the resistant ones will be relatively successful, and as time passes their offspring will become increasingly prominent in the population. The acquisition of genetic resistance is thus an evolutionary process: a genetically based increase in tolerance within a population, occurring in response to selection for resistance to the effects of a toxic chemical. Some of the best examples of genetic resistance involve the tolerance of certain bacteria to antibiotics and of certain pests to pesticides.

Resistance to antibiotics
Antibiotics are chemicals used to treat bacterial infections of humans and domestic animals. Examples of commonly used antibiotics include various kinds of penicillins, streptomycins, and tetracyclines, all of which are metabolic byproducts of certain microorganisms, especially fungi and bacteria. There are also many synthetic antibiotics. Antibiotics are extremely toxic to non-resistant strains of bacteria, and this has been very beneficial in the control of bacterial infections and diseases. However, if even a tiny fraction of a bacterial population has a genetically based tolerance to a specific antibiotic, evolution will quickly result in the development of a population that is resistant to that chemical.

Bacterial resistance to antibiotics was first demonstrated for penicillin, but the phenomenon is now quite widespread. This is an important medical problem because some serious pathogens are now resistant to virtually all of the available antibiotics, which means that infections by these bacteria can be extremely difficult to control. Bacterial resistance has recently become the cause of infections by some virulent strains of Staphylococcus and other potentially deadly bacteria. Some biologists believe that this problem has been made worse by the failure of many people to finish their course of prescribed antibiotic treatments, which can allow tolerant bacteria to survive and flourish. Also possibly important has been the routine use of antibiotics to prevent diseases in livestock kept under crowded conditions in industrial farming. The small residues of antibiotics in meat, eggs, and milk may be resulting in low-level selection for resistant bacteria in exposed populations of humans and domestic animals.
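The population-level arithmetic behind this kind of selection, whether for antibiotic or pesticide resistance, can be illustrated with a minimal simulation. The sketch below uses a standard one-locus haploid selection model; the starting allele frequency and the relative survival rates are hypothetical, chosen only to show how quickly a rare resistance trait can come to dominate a population under strong selection.

    # Minimal sketch of selection for a resistance allele. If resistant
    # individuals survive chemical exposure at a higher rate than
    # susceptible ones, the allele's frequency p rises each generation.
    # Starting frequency and survival rates below are hypothetical.
    p = 0.001             # initial frequency of the resistance allele
    w_resistant = 1.0     # relative survival of resistant individuals
    w_susceptible = 0.4   # relative survival of susceptible individuals

    for generation in range(1, 11):
        mean_fitness = p * w_resistant + (1 - p) * w_susceptible
        p = p * w_resistant / mean_fitness  # haploid selection step
        print(f"generation {generation}: frequency = {p:.4f}")

Under these assumed survival rates, an allele carried by one organism in a thousand comes to exceed 90% of the population within ten generations.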
Resistance to pesticides
The insecticide dichlorodiphenyltrichloroethane (DDT) was the first pesticide to which insect pests developed resistance. This occurred because the exposure of insect populations to toxic DDT results in intense selection for resistant genotypes. Tolerant populations can evolve because genetically resistant individuals are not killed by the pesticide and therefore survive to reproduce. Almost 500 species of insects and mites have populations that are known to be resistant to at least one insecticide. There are also more than 100 examples of fungicide-resistant plant pathogens and about 50 herbicide-resistant weeds. Insecticide resistance is most frequent among species of flies and their relatives (order Diptera), including more than 50 resistant species of malaria-carrying Anopheles mosquitoes. In fact, the progressive evolution of insecticide resistance by Anopheles has been an important factor in the recent resurgence of malaria in countries with warm climates. In addition, the protozoan Plasmodium, which actually causes malaria, has become resistant to some of the drugs that used to effectively control it.

Crop geneticists have recently managed to breed varieties of some plant species that are resistant to glyphosate, a commonly used agricultural herbicide that is effective against a wide range of weeds, including both monocots and dicots. The development of glyphosate-tolerant varieties of such crops as rapeseed means that this effective herbicide can be used to control difficult weeds in planted fields without causing damage to the crop. [Bill Freedman Ph.D.]
RESOURCES BOOKS Freedman, B. Environmental Ecology. 2nd ed. San Diego: Academic Press, 1995. Hayes, W. C., and E. R. Laws, eds. Handbook of Pesticide Toxicology. San Diego: Academic Press, 1991. National Research Council (NRC). Pesticide Resistance. Washington, DC: National Academy Press, 1986. Raven, P. H., and G. B. Johnson. Biology. 3rd ed. St. Louis: Mosby Year Book, 1992.
Genetically engineered organism
The modern science of genetics began in the mid-nineteenth century with the work of Gregor Mendel, but the nature of the gene itself was not understood until James Watson and Francis Crick announced their findings in 1953. According to the Watson and Crick model, genetic information is stored in molecules of DNA (deoxyribonucleic acid) by means of certain patterns of nitrogen bases that occur in such molecules. Each set of three such nitrogen bases, they said, is a code for some particular amino acid, and a long series of nitrogen bases is a code for a long series of amino acids, or a protein.
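A toy translation routine makes the triplet idea concrete. The sketch below includes only four codons of the real genetic code (AUG, UUU, GGC, and the stop codon UAA); the sample sequence is hypothetical.

    # Toy illustration of the triplet code: each three-base codon maps
    # to one amino acid. Only four real codons are included here.
    CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

    sequence = "AUGUUUGGCUAA"  # hypothetical message
    protein = []
    for i in range(0, len(sequence), 3):
        amino_acid = CODON_TABLE[sequence[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)

    print("-".join(protein))  # Met-Phe-Gly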
Deciphering the genetic code and discovering how it is used in cells has taken many years of work since that of Watson and Crick. The basic features of that process, however, are now well understood. The first step involves the construction of an RNA (ribonucleic acid) molecule in the nucleus of a cell, using the code stored in DNA as a template. The RNA molecule then migrates out of the nucleus to a ribosome in the cell cytoplasm. At the ribosome, the sequence of nitrogen bases stored in RNA acts as a map that determines the sequence of amino acids to be used in constructing a new protein.

This knowledge is of critical importance to biologists because of the primary role played by proteins in an organism. In addition to acting as the major building materials of which cells are made, proteins have a number of other crucial functions. All hormones and enzymes, for example, are proteins, and therefore nearly all of the chemical reactions that occur within an organism are mediated by one protein or another.

Our current understanding of the structure and function of DNA makes it at least theoretically possible to alter the biological characteristics of an organism. By changing the kind of nitrogen bases in a DNA molecule, their sequence, or both, a scientist can change the genetic instructions stored in a cell and thus change the kind of protein produced by the cell. One of the most obvious applications of this knowledge is in the treatment of genetic disorders. A large majority of genetic disorders occur because an organism is unable to correctly manufacture a particular protein molecule. An example is Lesch-Nyhan syndrome, a condition characterized by self-mutilation, mental retardation, and cerebral palsy, which arises because a person’s body is unable to manufacture an enzyme known as hypoxanthine guanine phosphoribosyl transferase (HPRT).

The general principles of the techniques required to make such changes are now well understood. The technique is referred to as genetic engineering or genetic surgery because it involves changes in an organism’s gene structure. When used to treat a particular disorder in humans, the procedure is also called human gene therapy. Developing specific experimental techniques for carrying out genetic engineering has proved to be an imposing challenge, yet impressive strides have been made. A common procedure is known as recombinant DNA (rDNA) technology. The first step in an rDNA procedure is to collect a piece of DNA that carries a desired set of instructions. For a genetic surgery procedure for a person with Lesch-Nyhan syndrome, a researcher would need a piece of DNA that codes for the production of HPRT.
That DNA could be removed from the healthy DNA of a person who does not have Lesch-Nyhan syndrome, or the researcher might be able to manufacture it by chemical means in the laboratory.

One of the fundamental tools used in rDNA technology is a closed circular piece of DNA found in bacteria called a plasmid. Plasmids are the vehicle or vector that scientists use for transferring new pieces of DNA into cells. The next step in an rDNA procedure, then, would be to insert the correct DNA into the plasmid vector. Cutting open the plasmid can be accomplished using certain types of enzymes that recognize specific base sequences in a DNA molecule. When these enzymes, called restriction enzymes, encounter the recognized sequence in a DNA molecule, they cleave the molecule. After the plasmid DNA has been cleaved and the correct DNA mixed with it, a second type of enzyme is added. This kind of enzyme inserts the correct DNA into the plasmid and closes it up. The process is known as gene splicing.

In the final step, the altered plasmid vector is introduced into the cell where it is expected to function. In the case of a Lesch-Nyhan patient, the plasmid would be introduced into the cells, where it would start producing HPRT from instructions in the correct DNA. Many technical problems remain with rDNA technology, and this last step has caused some of the greatest obstacles: even when the plasmid vector with its new DNA gets into a cell, the introduced DNA may never actually begin to function.

Any organism whose cells contain DNA altered by this or some other technique is called a genetically engineered organism. The first human patient with a genetic disorder who is treated by human gene therapy will be a genetically engineered organism. The use of genetic engineering on human subjects has gone forward very slowly for a number of reasons. One reason is that humans are very complex organisms. Another is that changing the genetic make-up of a human raises more numerous and more difficult ethical questions than does the genetic engineering of bacteria, mice, or cows. Most of the existing examples of genetically engineered organisms, therefore, involve plants, non-human animals, or microorganisms.

One of the earliest success stories in genetic engineering involved the altering of DNA in microorganisms to make them capable of producing chemicals they do not normally produce. Recombinant DNA technology can be used, for instance, to insert the DNA segment or gene that codes for insulin production into bacteria. When these bacteria are allowed to grow and reproduce in large fermentation tanks, they produce insulin. The list of chemicals produced by this mechanism now includes somatostatin, alpha interferon, tissue plasminogen activator
(tPA), Factor VIII, erythropoietin, and human growth hormone, and this list continues to grow each year. [David E. Newton]
RESOURCES PERIODICALS Hoffman, C. A. “Ecological Risks of Genetic Engineering of Crop Plants.” BioScience 40 (June 1990): 434–437. Kessler, D. A., et al. “The Safety of Foods Developed by Biotechnology.” Science 256 (June 1992): 1747–1749+. Kieffer, G. H. Biotechnology, Genetic Engineering, and Society. Reston, VA: National Association of Biology Teachers, 1987. Mellon, M. Biotechnology and the Environment. Washington, DC: National Biotechnology Policy Center of the National Wildlife Federation, 1988. Pimentel, D., et al. “Benefits and Risks of Genetic Engineering in Agriculture.” BioScience 39 (October 1989): 606–614. Weintraub, P. “The Coming of the High-Tech Harvest.” Audubon 94 (July–August 1992): 92–94+. Wheale, P. R., and R. M. McNally. Genetic Engineering: Catastrophe or Utopia? New York: St. Martin’s Press, 1988.
Genetically modified organism
A genetically modified organism, or GMO, is an organism whose genetic structure has been altered by incorporating a single gene or multiple genes—from another organism or species—that adds, removes, or modifies a trait in the organism by a technique called gene splicing. An organism that has been genetically modified—or engineered—to contain a gene from another species is also called a transgenic organism (because the gene has been transferred) or a living modified organism (LMO). Most often the transferred gene allows an organism—such as a bacterium, fungus, virus, plant, insect, fish, or mammal—to express a trait that enhances its desirability to producers or consumers of the final product.

Overview
Plants and livestock have been bred for desired qualities (selective breeding) for thousands of years—long before people knew anything about the science of genetics. As technology advanced, however, so did the means by which people could select desired traits. Modern biotechnology represents a significant step in the history of genetic modification. Until the final decades of the twentieth century, breeding techniques were limited to the transfer of desired traits within the same or closely related species. Genetically modified organisms, however, may contain traits transferred from completely dissimilar species. For example, before modern biotechnology, apple breeders could only cross-breed apples with apples or other closely related species. So, if a breeder wanted to make a certain tasty apple variety that was more
tolerant to the cold, a cold-tolerant apple variety had to be hybridized with a tasty variety. This process usually involved significant trial and error because there was little assurance the cold-tolerance ability would be transferred in any individual attempt to hybridize the two varieties.

Development of modern biotechnology
The characteristics of all organisms are determined by genes—the basic units of heredity. A gene—a segment of deoxyribonucleic acid (DNA)—is capable of replication and mutation, occupies a fixed position on a chromosome (a group of several thousand genes; humans have 22 pairs of autosomes plus the X and Y sex chromosomes), and is passed on from parents to offspring during reproduction. A gene determines the structure of a protein or a ribonucleic acid (RNA) molecule. Found in all cells, DNA carries the genetic instructions for creating proteins; RNA decodes those instructions. Proteins perform diverse biological functions in the body, from helping muscles contract, to enabling blood to clot, to allowing biochemical reactions to proceed quickly enough to sustain life. By modifying a protein, particular phenotypic (physical) or physiologic changes—such as the color of a rose or the ability to bioluminesce (glow like a firefly)—are created.

A fundamental aspect of modern biotechnology is the belief that the essential genetic elements of all life are the same. Since the 1950s, molecular biologists have known that the DNA in every organism is made up of pairs of four nitrogen-containing bases, or building blocks: adenine (A), thymine (T), cytosine (C), and guanine (G). In 1953, scientists James Watson and Francis Crick discovered that DNA is constructed in a double helix pattern—sort of a twisted ladder. Although A always pairs with T, and G with C, there is significant variety in how the pairs stack. The variable sequence of these DNA base pairs constitutes, in effect, the variety of life. So even though all organisms are made from the same basic building blocks, their differences are a result of varying DNA sequences. A principle of modern molecular biology and biotechnology is that because these genetic building blocks are the same for all species, DNA can be extracted and inserted across species.

Genetic engineering techniques
The tools of modern biotechnology allow the transfer of specific genes, hence specific traits, to occur with more precision than ever before. The individual gene conveying a trait can be identified, isolated, replicated, and then inserted into another organism. This process is called genetic engineering, or recombinant DNA (rDNA) technology—that is, recombining DNA from multiple organisms. Through such engineering, the apple breeder who wants the tasty apple variety to be cold-tolerant as well has the potential to find a gene that conveys cold tolerance and insert that gene directly into the tasty variety. Although
there is still some trial and error in this process, overall there is greater precision in the ability to move genes from one organism to another. The gene that conveys a cold-tolerance ability does not need to come from another apple variety; it may come from any other organism. A cold-tolerant fish, for example, might have a suitable gene.

There are multiple methods by which genetic material may be transferred from one organism to another. Before a gene is moved between organisms, however, it is necessary to identify the individual gene that confers the desired trait. This stage is often quite time-consuming and difficult because often it is not clear what gene is needed or where to find it. Finding the gene entails using or creating a library of the species’ DNA, specifying the amino acid sequence of the desired protein, and then devising a probe (any biochemical agent labeled or tagged in some way so that it can be used to identify or isolate a gene, RNA, or protein) for that sequence. As of 2002, isolating the desired gene is one of the most limiting aspects of creating a GMO.

Once the desired gene has been identified, it must be extracted from the organism, which is usually done with a restriction endonuclease (an enzyme). Restriction endonucleases recognize particular base sequences in a DNA molecule and cut and isolate these sequences in a predictable and consistent manner. Once isolated, the gene must be replicated to generate sufficient usable material, as more than one copy of the gene is needed for the next steps in the engineering process. One common method of gene replication is called the polymerase chain reaction (PCR). Through the PCR method, the strands of the DNA are broken apart—in effect, the ladder is divided down the middle—and then exact copies of the opposite sides of the ladder are produced, creating thousands or millions of copies of the complete gene.

After replication, the gene must be inserted into the new organism via a vector (an agent that transfers material—typically DNA—from one host to another). A common vector is Agrobacterium tumefaciens, a bacterium that normally inserts itself into plants, causing a tumor. By genetically engineering A. tumefaciens, however, the bacterium can be used to insert the desired gene into a plant, replacing its tumor-causing genetic material. Another insertion method, called ballistics, involves coating a microprojectile—usually a heavy metal such as tungsten—with the desired gene, then literally shooting the microprojectile into material from the new host organism. The microprojectile breaks apart the DNA of the host organism; then, when the DNA reassembles, some of the new, desired genetic material is inserted. Once the DNA has been inserted, the host organism can be grown, raised, or produced normally, and tests can be performed to observe whether the desired trait manifests.
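The replication step rests on simple doubling arithmetic: each PCR thermal cycle roughly doubles the copy count, so n cycles yield about 2^n copies per starting molecule. A minimal sketch (real reactions fall somewhat short of perfect doubling):

    # PCR copy arithmetic: n cycles of doubling give about 2**n copies
    # of the target sequence per starting template molecule.
    for cycles in (10, 20, 30):
        print(f"{cycles} cycles -> about {2 ** cycles:,} copies per template")

Ten cycles give about a thousand copies, twenty about a million, and thirty about a billion, which is why a trace amount of DNA can quickly become a usable quantity.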
Varieties of GMOs
In 2000, of the total land planted with GMO crops worldwide, the United States occupied 68%, Argentina 23%, and China 1%. The United States produces many types of GMOs for commercial and research purposes.

Agricultural crops
A common type of GMO is the modified agricultural seed. Corn, soybeans, and cotton are a few examples of staple agricultural GMO products grown in the United States. Genetic modifications to these products may, for example, alter a crop’s nutritional content, storage ability, or taste. With the advent of genetic engineering, common hybrid crops such as the tomato were launched into a new era. The first food produced from gene splicing and evaluated by the Food and Drug Administration (FDA) was the Flavr Savr tomato in 1994. Tomatoes usually get softer as they ripen because of a protein that breaks down their cell walls. Because it is difficult to ship a quality ripe tomato across the country before it spoils, tomatoes are usually shipped unripened. Engineers of the Flavr Savr tomato spliced a gene into its DNA to prevent the breakdown of the tomato’s cell walls. The result was a firm ripe tomato that was more desirable to consumers than the tasteless variety typically found on store shelves, particularly in the winter months.

Genetic modifications may also confer to a species an ability to produce its own pesticide biologically, thereby potentially reducing or even eliminating the need to apply external pesticides. For example, Bacillus thuringiensis (Bt) is a soil bacterium that produces toxins against insects (mainly in the orders Lepidoptera, Diptera, and Coleoptera). When they are genetically modified to carry genetic material from the Bt bacterium, plants such as soybeans will be able to produce their own Bt toxin and be resistant to insects such as the cornstalk borer and velvetbean caterpillar. Researchers at the Monsanto chemical company estimate that Bt soybeans will be commercially available by about 2006.

By means of genetic engineering, a gene from a soil bacterium called Agrobacterium sp. confers glyphosate resistance to a plant. Glyphosate (brand names include Roundup, Rodeo, and Accord) is a broad-spectrum herbicide (it kills all green plants). As of 2002, Monsanto was the sole developer of all glyphosate-resistant crops on the market. These crops (often called “Roundup Ready”) include corn, soybeans, cotton, and canola.

Pharmaceuticals
Genetically engineered pharmaceuticals are also useful GMO products. One of the first GMOs was a bacterium with a human gene inserted into its genetic code to produce a very high quality human insulin for diabetes. Vaccines
against diseases such as meningitis or hepatitis B, for example, are produced by genetically engineered yeast or bacteria. Other pharmaceuticals produced by using GMO microbes include interferon for cancer, erythropoietin for anemia, growth hormone for the treatment of dwarfism, and tissue plasminogen activator for heart attack victims. Through genetic engineering, transgenic plants are likely to become a commercially viable source of pharmaceuticals.

Benefits and risks
Although the technological advances that allowed the creation of GMOs are impressive, as with any new technology the benefits must be weighed against the risks. Useful pharmaceutical products have been created as a result of genetic engineering. GMOs have also produced new and improved agricultural products that are resistant to crop pests, thus improving production and reducing chemical pesticide usage. These developments have had a major impact on food quality and nutrition. The possibility that biotech crops could make a substantial contribution to providing sufficient food for an expanding world is also a reason given for engaging in the research that underlies their development. However, the debate over GMOs continues among scientists and between consumers and modern agricultural producers throughout the world regarding issues such as regulation, labeling, human health risk, and environmental impact.

In the United States, for example, there has been much controversy over the health risks of canola oil. Canola oil is genetically engineered rapeseed oil, or “LEAR” oil (low erucic acid rape), a semi-drying industrial oil used as a lubricant, fuel, soap, and synthetic rubber base, and as an illuminant to give color pages in magazines their slick look. It was first developed in Canada (thus the name “canola oil”). Canola oil is derived from rapeseed, a plant of the mustard family that has been considered a toxic and poisonous weed. In 1998, the EPA classified canola oil as a biopesticide with “low chronic toxicities,” yet placed it on the “Generally Considered Safe” list of foods. Proponents, who tout the oil’s health benefits, claim that due to genetic engineering and irradiation it is completely safe, pointing to its unsaturated structure and digestibility. Because it is so inexpensive, canola oil is widely used as a cooking oil and in thousands of processed foods across North America. Thus, millions of people have been exposed—most of them unknowingly—to genetically engineered foods. However, there has been little research on the potential adverse effects of these products on humans.

When processed, canola oil becomes rancid very easily. It has been shown to cause health problems such as lung cancer and has been associated with loss of vision, disruption of the central nervous system, respiratory illness, anemia,
constipation, increased incidence of heart disease and cancer, low birth weight in infants, and irritability. It has a tendency to inhibit proper metabolism of foods and curbs normal enzyme function. Generally, rapeseed has a cumulative effect, often taking nearly ten years for symptoms to manifest.

The danger of introducing genes that may cause undesirable effects in the environment is also a concern among many. For example, when farmers spray an herbicide to remove weeds growing among crops, the sprayed chemical often damages the crop plants. If the crop is genetically engineered to be resistant to the chemical, the weeds are killed but the crop plants remain undamaged. Although this situation appears to be beneficial, it is likely to lead to greater use of the particular herbicide, which would have several negative effects: the crop is likely to contain greater herbicide residues, and the increased spraying contaminates the rest of the environment. Although not all herbicides are dangerous, the safer choice seems to be to minimize rather than maximize their use. Also, if genes added to produce herbicide resistance in crop plants jumped to a weed species, those weeds would thrive and be difficult to control. Thus, as debate over GMOs rages, it is important to note that a wide variety of ecological and human health concerns exist side by side with the new advances made possible by genetic engineering. [Paul R. Phifer Ph.D.]
RESOURCES BOOKS Anderson, Luke. Genetic Engineering, Food, and Our Environment. White River Junction, VT: Chelsea Green Publishing Co., 1999. Ho, Mae-Wan. Genetic Engineering Dream or Nightmare: Turning the Tide on the Brave New World of Bad Science and Big Business. New York: Continuum Publishing Group, 2000. Kreuzer, H., and A. Massey. Recombinant DNA and Biotechnology: A Guide for Students. 2nd ed. Washington, DC: ASM Press, 2001.
PERIODICALS Brown, K. “Seeds of Concern.” Scientific American (April 2001): 52–57. Nemecek, S. “Does the World Need GM Foods?” Scientific American (April 2001): 48–51. Wolfenbarger, L., and P. Phifer. “The Ecological Risks and Benefits of Genetically Engineered Plants.” Science 290 (2000): 2088–2093.
ORGANIZATIONS Environmental Protection Agency (EPA), 1200 Pennsylvania Avenue, NW, Washington, DC USA 20460 (202) 260-2090, Email:
[email protected], Human Genome Project Information, Oak Ridge National Laboratory, 1060 Commerce Park MS 6480, Oak Ridge, TN USA 37830 (865) 576-6669, Fax: (865) 574-9888, Email:
[email protected],
Geodegradable
The term geodegradable refers to a material that could degrade in the environment over a geologic time period. While biodegradable generally refers to items that may degrade within our lifetime, geodegradable material does not decompose readily and may take hundreds or thousands of years. Radioactive waste, for example, is degraded only over thousands of years. The glass formed as an end result of a hazardous waste treatment technology known as “in situ vitrification” is considered geodegradable only on a timescale of a million years. See also Half-life; Hazardous waste site remediation; Hazardous waste siting; Waste management
Geographic information systems
A Geographic Information System (GIS) is a computer system capable of assembling, storing, mapping, modeling, manipulating, querying, analyzing, and displaying geographically referenced information (i.e., data that are identified according to their locations). Some practitioners expand the definition of GIS to include the personnel involved, the data that are entered into the system, and the uses, decisions, and interpretations that are made possible by the system. A GIS can be used for scientific investigations, resource management, and planning.

The development of GIS was made possible by innovations in many different disciplines, including geography, cartography, remote sensing, surveying, civil engineering, statistics, computer science, operations research, artificial intelligence, and demography. Even early man used GIS-type systems. Thirty-five thousand years ago, Cro-Magnon hunters in Lascaux, France, drew pictures of the animals they hunted. Along with the animal drawings were track lines that are thought to show migration routes. These early records included two essential elements of modern GIS: a graphic file linked to an attribute database.

In a GIS, maps and other data from many different sources and in many different forms are stored or filed as layers of information. A GIS makes it possible to link and integrate information that is difficult to associate through other means. A GIS can combine mapped variables to build and analyze new variables. For example, by knowing and entering data for water use for a specific residence, predictions can be made on the amount of wastewater contaminants generated and released to the environment from that residence. The primary requirement for the source data is that the locations for the variables are known. Any variable that can be located spatially can be entered into a GIS. Location may be determined by x, y, and z coordinates of longitude,
latitude, and elevation, or by other systems such as ZIP codes or highway mile markers. A GIS can convert existing digital information, which may not be in map form, into forms it can recognize and utilize. For example, census data can be converted into map form, so that different types of information can be presented in layers.

If data are not in digital form (i.e., not in a form that the computer can utilize), various techniques are available to capture the information. Maps can be hand traced with a computer mouse to collect the coordinates of features. Electronic scanning devices can be used to convert map lines and points to digits. Data capture, i.e., entering the information into the system, is the most time-consuming component of a GIS. Identities of the objects on the map and their spatial relationships must be specified. Editing of information that is captured can be difficult. For example, an electronic scanner will record blemishes on a map just as it will record actual map features. A fleck of dirt that is scanned may connect two lines that should not be connected. Such extraneous information must be identified and removed from the digital data file.

Different maps may be at different scales, so map information in a GIS must be manipulated so that it fits with information collected from other maps. Data may also have to undergo projection conversion before being integrated into a GIS. Projection, a fundamental component of map making, is a mathematical means of transferring information from the earth’s three-dimensional curved surface to a two-dimensional medium (i.e., paper or a computer screen). Different projections are used for different types of maps based on the type of projection appropriate for a specific use. Because much of the information in a GIS comes from existing maps, a GIS uses its computing power to integrate digital information from sources with different projections into a common projection.

Digital data are collected and stored in various ways, and so different data sources may not be compatible. A GIS must be able to convert data from one structure to another. For example, image data from a satellite that has been interpreted by a computer to produce a land use map can be entered into the GIS in raster format. Raster format is like a spreadsheet, with rows and columns into which numbers are placed. These rows and columns are linked to x,y coordinates, with the intersection of each row and column forming a cell that corresponds to a specific point in the world. These cells contain numbers that can represent such features as elevation, soils, archeological sites, etc. Maps in a raster GIS can be handled like numbers (e.g., added, subtracted, etc.) to form new maps. Raster data files can be quickly manipulated by computer but are less detailed and may be less visually appealing than vector data files, which appear more like traditional hand-drafted maps.
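A minimal sketch of this raster “map algebra,” using NumPy arrays as layers; the cell values and the scoring rule are hypothetical:

    # Raster "map algebra": co-registered layers are grids whose cells
    # refer to the same ground locations, so layers can be added,
    # subtracted, or multiplied cell by cell to form a new layer.
    import numpy as np

    slope   = np.array([[2, 5], [9, 14]])  # slope in degrees, per cell
    wetland = np.array([[1, 0], [0, 1]])   # 1 = wetland present in cell

    # New layer: a toy sensitivity score, nonzero only in wetland cells
    # and higher where the surrounding terrain is gentler.
    sensitivity = wetland * (15 - slope)
    print(sensitivity)  # wetland cells score 13 and 1; all others 0

Each arithmetic operation applies cell by cell, which is how overlay products such as the wetland-sensitivity ranking described below can be computed.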
Vector digital data are captured as points, lines (a series of point coordinates), or areas (shapes bounded by lines). A vector GIS output looks more like a traditional paper map.

A GIS can be used to depict two- and three-dimensional characteristics of the earth’s surface, subsurface, and atmosphere from information points. Each thematic map (i.e., a map displaying information about one characteristic of a region) is referred to as a layer, coverage, or level. A specific thematic map can be overlain and analyzed with any other thematic map covering the same area. Not all analyses may require using all of the map layers at the same time. A researcher may use information selectively to consider relationships among specific layers. Information from two or more layers may be combined and transformed into a new layer for use in subsequent analyses. For example, with maps of wetlands, slopes, streams, land use, and soils, a GIS can produce a new overlay that ranks the wetlands according to their relative sensitivity to damage from nearby factories or homes. This process of combining and transforming information from different layers is referred to as map “algebra,” as it involves adding and subtracting information.

Recorded information from off-screen files can be retrieved from the maps by pointing at a location, object, or area on the screen. Conditions of adjacency (what is next to what), containment (what is enclosed by what), and proximity (how close something is to something else) can be determined using a GIS. A GIS can also be used for “what if” scenarios by simulating the route of materials along a linear network. For example, an evaluation could be made of how long it would take for chemicals accidentally released into a river from a factory to move into a wetland area. Direction and speed can be assigned to the digital stream, and the contaminants can be traced through the stream system.
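A toy version of such a network trace, with hypothetical reach names, lengths, and flow velocities, sums travel time over the reaches between the release point and the wetland:

    # "What if" trace along a linear stream network: arrival time at the
    # wetland is the sum of length / velocity over the reaches en route.
    reaches = [
        ("outfall reach", 1200, 0.6),  # name, length (m), velocity (m/s)
        ("main stem",     4800, 0.9),
        ("wetland inlet",  700, 0.3),
    ]

    seconds = sum(length / velocity for _, length, velocity in reaches)
    print(f"Estimated travel time to the wetland: {seconds / 3600:.1f} hours")

A real GIS would pull the reach geometry and flow attributes from the network layers themselves, but the underlying arithmetic is this simple.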
An important component of a GIS is the ability to produce graphics on the computer screen or on paper (e.g., wall maps) to inform resource decision makers about the results of analyses. The viewers of such materials can visualize and thus better understand the results of analyses or simulations of potential events. A GIS can be used to produce not only maps, but also drawings and animations that allow different ways of viewing information. These types of images are especially helpful in conveying technical concepts to non-scientists.

GIS technology is an improvement in the efficiency and analytical power of cartographic science. Traditional maps are abstractions of the real world, with important elements portrayed on a sheet of paper with symbols to represent physical objects. For example, topographic maps show the shape of the land surface with contour lines, but the actual shape of the land can only be imagined. Graphic display techniques in a GIS illustrate relationships among map elements more clearly, increasing the ability to extract and analyze information.

Many commercial GIS software systems are available, some of which are specialized for use in specific types of decision-making situations. Other programs are more general and can be used for a wide number of applications or can be customized to meet individual requirements. GIS applications are useful for a wide range of disciplines, including urban planning, environmental and natural resource management, facilities management, habitat studies, archaeological analyses, hazards management, emergency planning, marketing and demographic analyses, and transportation planning. The ability to separate information in layers and then combine the layers with other layers of information is the key to making GIS technology a valuable tool for the analysis and display of large volumes of data, thus allowing better management and understanding of information and increased scientific productivity. [Judith L. Sims]
RESOURCES BOOKS Heywood, D. Ian, Sarah Cornelius, and Steve Carver. An Introduction to Geographical Information Systems. Prentice Hall, 2000.
OTHER “GIS WWW Resource List.” University of Edinburgh Web Page. 1996 [cited June 1, 2002]. “Geographic Information Systems.” United States Geological Survey Web Page. April 24, 2001 [cited June 1, 2002]. Foote, Kenneth E., and Margaret Lynch. “Geographic Information Systems as an Integrating Technology: Context, Concepts, and Definitions.” The Geographer’s Craft Project, Department of Geography, University of Colorado at Boulder. October 12, 1997 [cited June 1, 2002]. “Geographic Information Systems: Internet Web Sites.” Federal Facilities Restoration and Reuse Office, U.S. Environmental Protection Agency, Washington, DC. November 5, 2000 [cited June 23, 2002].
Geological Survey
The United States Geological Survey (USGS) is the federal agency responsible for surveying and publishing maps of topography (giving landscape relief and elevation), geology, and natural resources—including minerals, fuels, and water. The USGS, part of the U. S. Department of the Interior, was formed in 1879 as the United States began systematically to explore its newly expanded western territories. Today it has an annual budget of about $700 million, which is devoted to primary research, resource assessment and monitoring,
map production, and providing information to the public and to other government agencies.

The United States Geological Survey, now based in Reston, Virginia, originated in a series of survey expeditions sent to explore and map western territories and rivers after the Civil War. Four principal surveys were authorized between 1867 and 1872: Clarence King’s exploration of the fortieth parallel, Ferdinand Hayden’s survey of the Rocky Mountain territories, John Wesley Powell’s journey down the Colorado River and through the Rocky Mountains, and George Wheeler’s survey of the 100th meridian. Twelve years later, in 1879, these four ongoing survey projects were combined to create a single agency, the United States Geological Survey. The USGS’ first director was Clarence King. In 1881 his post was taken by John Wesley Powell, whose name is most strongly associated with the early Survey. It was Powell who initiated the USGS topographic mapping program, a project that today continues to produce the most comprehensive map series available of the United States and associated territories.

In addition to topographic mapping, the USGS began detailed surveys and mapping of mineral resources in the 1880s. Mineral exploration led to mapping geologic formations and structures and a gradual reconstruction of geologic history in the United States. Research and mapping of glacial history and fossil records naturally followed from mineral explorations, so that the USGS became the primary body in the United States involved in geologic field research and laboratory research in experimental geophysics and geochemistry. During World Wars I and II, the USGS’ role in identifying and mapping tactical and strategic resources increased. Water and fuel resources (coal, oil, natural gas, and finally uranium) were now as important as copper, gold, and mineral ores, so the Survey took on responsibility for assessing these resources as well as topographic and geologic mapping.

Today the USGS is one of the world’s largest earth science research agencies and the United States’ most important map publisher. The Survey conducts and sponsors extensive laboratory and field research in geology, hydrology, oceanography, and cartography. The agency’s three divisions, Water Resources, Geology, and National Mapping, are responsible for basic research. They also publish, in the form of maps and periodic written reports, information on the nation’s topography, geology, fuel and mineral resources, and other aspects of earth sciences and natural resources. Most of the United States’ hydrologic records and research, including streamflow rates, aquifer volumes, and water quality, are produced by the USGS. In addition, the USGS publishes information on natural hazards, including earthquakes, volcanoes, landslides, floods, and droughts. The Survey is the primary body responsible for providing basic earth
science information to other government agencies, as well as to the public. In addition, the USGS undertakes or assists research and mapping in other countries whose geologic survey systems are not yet well developed. [Mary Ann Cunningham Ph.D.]
RESOURCES BOOKS U.S. Geological Survey. Maps for America. Reston, VA: U.S. Government Printing Office, 1981. USGS Yearbook: Fiscal Year 1985. Washington, DC: U.S. Government Printing Office, 1985.
Georges Bank (collapse of the ground fishery)
Until the 1990s Georges Bank, off the coasts of New England and Nova Scotia, was one of the world’s most valuable fisheries. A bank is a plateau found under the surface of shallow ocean water. Georges Bank is the southernmost and the most productive of the banks that form the continental shelf. A majority of the $800 million northeastern fishery industry comes from Georges Bank. The oval-shaped bank is 149 mi long and 74.5 mi wide (240 km by 120 km). Georges Bank covers an area larger than the state of Massachusetts. The ocean bottom of Georges Bank formed ideal habitat for large quantities of groundfish (demersal finfishes, which feed on or near the ocean’s floor): cod, Gadidae; clams, Pelecypoda; haddock, Melanogrammus aeglefinus; hake, Merlucciidae; herring, Clupea harengus; lobster, Homarus; pollock, Pollachius; flounder; and scallops, Pectinidae. But by 1994 the Georges Bank ground fishery had collapsed.

Cod were by far the most numerous and valuable of the Georges Bank’s fish. Atlantic cod form distinct stocks, and the Georges Bank stock grows faster than those of the colder waters further north. They are the world’s largest and thickest cod: in 1938 a cod weighing 180 lb (82 kg) was caught off the bank. The cod move in schools from feeding to spawning grounds, in dense aggregates of hundreds of millions of fish, making them easy prey for fishing nets.

During the second half of the twentieth century, gigantic trawlers towing enormous nets could haul in 200 tons (181.4 metric tons) of fish an hour off Georges Bank. At its peak in 1968, 810,000 tons (734,827 metric tons) of cod were harvested. By the 1970s, fleets of Soviet, European, and Japanese factory ships were trawling the cod-spawning grounds, scooping up the fish before they could reproduce. If the catch was of mixed species or the wrong size, the nets were dumped, leaving the ocean surface teeming with dead fish. After the catch was sorted, many species of dead
bycatch, including young cod, flounder, and crabs, were
discarded. For every three tons of processed fish, at least a ton of bycatch died. These ships also trawled for herring, capelin, mackerel, and other small fish that the cod and other groundfish depend on for food.

The Fisheries Conservation and Management Act of 1976 extended exclusive American fishing rights from 12–200 mi (19–322 km) offshore. Since much of Georges Bank is within the 200-mi (322-km) limit of Nova Scotia, conflict erupted between American and Canadian fishermen. International arbitration eventually gave Canada the northeast corner of the bank. The legislation also established the New England Fishery Management Council to regulate fishing. Although the goal was to conserve fisheries as well as to create exclusive American fishing grounds, the council was controlled by commercial interests. The result was the development of financial incentives and boat-building subsidies to modernize the fishing fleet. Soon the New England fleet surpassed the fishing capacities of the foreign fleets it replaced, and every square foot of Georges Bank had been scraped with the heavy chains that hold down the trawling nets and stir up fish. This destroyed the rocky bottom structure of the bank and the vegetation and marine invertebrates that created habitat for the groundfish. Cod, pollock, and haddock were replaced by dogfish and skates, so-called “trash fish.”

During the 1990s tiny hydroids, similar to jellyfish, began appearing off Georges Bank, in concentrations as high as 100 per gal of water. Although they were drifting in the water, they were in their sedentary life-stage form, indicating that they may have been ripped from their attachments by storms or commercial trawlers. These hydroids ate most of the small crustaceans that the groundfish larvae depend on. They also directly killed cod larvae.

In 1994 the National Marine Fisheries Service found that the Georges Bank cod stock had declined by 40% since 1990, the largest decline ever recorded. Furthermore, the yellowtail flounder stock had collapsed. In a given year, only eight out of 100 flounder survived, and the breeding population had fallen 94% in three years. The last successful flounder spawning was in 1987, but 60% of the catch from that year’s group were too small to sell and were discarded. In response, the Fisheries Service closed large areas of Georges Bank, but fishing continued in the Canadian sector and western portions of the American sector. With the goal of annually harvesting only 15% of the remaining stock, each vessel was restricted to 139 days of ground fishing. Nevertheless, by 1996, 55% of the remaining Georges Bank cod stock — the only surviving North Atlantic population — had been caught. Fishing was then restricted to 88 days. A satellite-based vessel monitoring system is used to detect fishing boats that enter closed areas of Georges Bank.

At the time of the cod moratorium, it was argued that the population would recover in five years; however, there were few signs of recovery as of 2002. Not only is the cod stock near an all-time low, but so are populations of other commercial fish and many other species. The average size of the bottom-dwelling fish of Georges Bank is a fraction of what it was twenty years ago. Georges Bank is just one example of an eastern coastal area negatively affected by excessive trawling. Even though $800 million worth of fish is extracted from Georges Bank and the surrounding area, there is an overall decline in groundfish stock along the entire boreal and sub-arctic coast of eastern North America. American and Canadian moratoriums on gas and oil exploration and extraction from Georges Bank—activities that could further disrupt the fishery—are in effect until at least 2012.
[Margaret Alic Ph.D.]
RESOURCES BOOKS Dobbs, David. The Great Gulf: Fishermen, Scientists, and the Struggle to Revive the World’s Greatest Fishery. Washington, DC: Island Press, 2000. Kurlansky, Mark. Cod: A Biography of the Fish that Changed the World. New York: Walker and Company, 1997.
PERIODICALS Hattam, Jennifer. “Victory at Sea.” Sierra 85, no. 3 (May/June 2000): 91. Molyneaux, Paul. “Vessel Monitor Convicts New Bedford Scalloper.” National Fisherman 82, no. 11 (March 2002): 50.
OTHER American Museum of Natural History. Georges Bank—The Sorry Story of Georges Bank. [cited June 2002]. Public Broadcasting System. Empty Oceans, Empty Nets. 2002 [cited May 2002]. Status of the Fishery Resources off the Northeastern United States. Resource Evaluation and Assessment Division, Northeast Fisheries Science Center. June 2001 [cited May 2002]. <www.nefsc.nmfs.gov/sos/index.html>. United States Geological Survey. Geology and the Fishery of Georges Bank. January 3, 2001 [cited June 2002].
ORGANIZATIONS Cape Cod Commercial Hook Fishermen’s Association, 210 Orleans Road, North Chatham, MA USA 02650 (508) 945-2432, Fax: (508) 945-0981, Email:
[email protected], Coastal Waters Project/Task Force Atlantis, 418 Main Street, Rockland, ME USA 04841 (207) 594-5717, Email:
[email protected], Northeast Fisheries Science Center, 166 Water Street, Woods Hole, MA USA 02543-1026 (508) 495-2000, Fax: (508) 495-2258, U.S. GLOBEC Georges Bank Program, Woods Hole Oceanographic Institution, Woods Hole, MA USA 02543-1127 (508) 289-2409, Fax: (508) 457-2169, Email:
[email protected],
Geosphere
The solid portion of the earth. It is also known as the lithosphere. From a technical standpoint, the geosphere includes inner parts of the earth that are virtually inaccessible to human study (the inner and outer core and the mantle), as well as the outermost crust. For the most part, however, environmental scientists are primarily interested in the relatively thin outer layer of the crust on which plants and animals live, in the ores and minerals that occur within the crust, and in the changes that take place in the crust as a result of erosion and mountain-building.
Geothermal energy
Geothermal energy is obtained from hot rocks beneath the earth’s surface. The planet’s core, which may generate temperatures as high as 8,000°F (4,500°C), heats the earth’s interior, whose temperature increases, on average, by about 1°C (2°F) for every 60 ft (18 m) of depth. Some heat is also generated in the mantle and crust as a result of the radioactive decay of uranium and other elements. In some parts of the earth, rocks at temperatures in excess of 212°F (100°C) are found only a few miles beneath the surface. Water that comes into contact with such rock will be heated above its boiling point. Under some conditions the water becomes super-heated, that is, it is prevented from boiling even though its temperature is greater than 212°F (100°C). Regions of this kind are known as wet steam fields. In other situations the water is able to boil normally, producing steam. These regions are known as dry steam fields.

Humans have long been aware of geothermal energy. Geysers and fumaroles are obvious indications of water heated by underground rock. The Maoris of New Zealand, for example, have traditionally used hot water from geysers to cook their food. Natural hot spring baths and spas are a common feature of many cultures where geothermal energy is readily available. The first geothermal well was apparently opened accidentally by a drilling crew in Hungary in 1867. Eventually, hot water from such wells was used to heat homes in some parts of Budapest. Geothermal heat is still an important energy source in some parts of the world. More than 99% of the buildings in Reykjavik, the capital of Iceland, are heated with geothermal energy.

The most important application of geothermal energy today is in the generation of electricity. In general, hot steam or super-heated water is pumped to the planet’s surface, where it is used to drive a turbine. Cool water leaving the generator is then pumped back underground.
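The gradient quoted above supports a rough back-of-the-envelope depth estimate; the assumed mean surface temperature here is hypothetical and varies by region:

    # Rough depth at which groundwater reaches 100 C, using an average
    # gradient of about 1 degree C per 18 m of depth. At real depths,
    # pressure raises the boiling point, so this is only indicative.
    surface_temp_c = 15.0      # assumed mean surface temperature
    gradient_c_per_m = 1 / 18  # about 1 C of warming per 18 m
    boiling_point_c = 100.0

    depth_m = (boiling_point_c - surface_temp_c) / gradient_c_per_m
    print(f"Roughly {depth_m:.0f} m ({depth_m / 1609:.1f} mi) deep")

With these assumptions the answer is roughly 1,500 m, or about a mile, consistent with the statement that usable hot rock lies only a few miles down in favorable regions.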
Some water is lost by evaporation during this process, so the energy that comes from geothermal wells is actually non-renewable. However, most zones of heated water and steam are large enough to allow a geothermal mine to operate for a few hundred years.

A dry steam well is the easiest and least expensive geothermal well to drill. A pipe carries steam directly from the heated underground rock to a turbine. As steam drives the turbine, the turbine drives an electrical generator. The spent steam is then passed through a condenser, where much of it is converted to water and returned to the earth. Dry steam fields are relatively uncommon. One, near Larderello, Italy, has been used to produce electricity since 1904. The geysers and fumaroles in the region are said to have inspired Dante’s Inferno. The Larderello plant is a major source of electricity for Italy’s electric railway system. Other major dry steam fields are located near Matsukawa, Japan, and at The Geysers, California. The first electrical generating plant at The Geysers was installed in 1960. It and companion plants now provide about 5% of all the electricity produced in California.

Wet steam fields are more common, but the cost of using them as sources of geothermal energy is greater. The temperature of the water in a wet steam field may be anywhere from 360–660°F (180–350°C). When a pipe is sunk into such a reserve, some water immediately begins to boil, changing into very hot steam. The remaining water is carried out of the reserve with the steam. At the surface, a separator is used to remove the steam from the hot water. The steam is used to drive a turbine and a generator, as in a dry steam well, and is then condensed to a liquid, mixed with the separated water (now also cooled), and returned to the earth. The largest existing geothermal well using wet steam is in Wairakei, New Zealand. Other plants have been built in Russia, Japan, and Mexico. In the United States, pilot plants have been constructed in California and New Mexico. The technology used in these plants is not yet adequate, however, to allow them to compete economically with fossil-fueled power plants.

Hot water (in contrast to steam) from underground reserves can also be used to generate electricity. Plants of this type make use of a binary (two-step) process. Hot water is piped from underground into a heat exchanger at the surface. The heat exchanger contains some low-boiling-point liquid (the “working fluid”), such as a freon or isobutane. Heat from the hot water causes the working fluid to evaporate. The vapor thus produced is used to drive the turbine and generator. The hot water is further cooled and then returned to the rock reservoir from which it came.

In addition to dry and wet steam fields, a third kind of geothermal reserve exists: pressurized hot water fields located deep under the ocean floors. These reserves contain natural gas mixed with very hot water.
Some experts believed that these geopressurized zones are potentially rich energy sources, although no technology currently exists for tapping them.

Another technique for the capture of geothermal energy makes use of a process known as hydrofracturing. In hydrofracturing, water is pumped from the surface into a layer of heated dry rock at pressures of about 7,000 lb/in² (500 kg/cm²). The pressurized water creates cracks over a large area in the rock layer. Then some material, such as sand or plastic beads, is injected into the cracked rock to help keep the cracks open. Subsequently, additional cold water can be pumped into the layer of hot rock, where it is heated just as natural groundwater is heated in a wet or dry steam field. The heated water is then pumped back out of the earth and into a turbine-generator system. After cooling, the water can be re-injected into the ground for another cycle. Since water is continually re-used in this process and the earth’s heat is essentially infinite, the hydrofracturing system can be regarded as a renewable source of energy. Considerable enthusiasm was expressed for the hydrofracturing approach during the 1970s, and a few experimental plants were constructed. But as oil prices dropped and interest in alternative energy sources decreased in the 1980s, these experiments were terminated.

Geothermal energy clearly has some important advantages as a power source. The raw material—heated water and steam—is free and readily available, albeit in only certain limited areas. The technology for extracting hot water and steam is well developed from petroleum-drilling experience, and its cost is relatively modest. Geothermal mining, in addition, produces almost no air pollution and seems to have little effect on the land where it occurs. On the other hand, geothermal mining does have its disadvantages. One is that it can be pursued in only limited parts of the world. Another is that it results in the release of gases, such as hydrogen sulfide, sulfur dioxide, and ammonia, that have offensive odors and are mildly irritating. Some environmentalists also object that geothermal mining is visually offensive, especially in some areas that are otherwise aesthetically attractive. Pollution of water by runoff from a geothermal well and the large volume of cooling water needed in such plants are also cited as disadvantages.

At their most optimistic, proponents of geothermal energy claim that up to 15% of the United States’ power needs can be met from this source. Lagging interest and research in this area over the past decade have made this goal unreachable. Today, no more than 0.1% of the nation’s electricity comes from geothermal sources. Only in California is geothermal energy a significant power source. As an example, GeoProducts Corporation, of Moraga, California,
has constructed a $60 million geothermal plant near Lassen National Park that generates 30 megawatts of power. Until the government and the general public become more concerned about the potential of various types of alternative energy sources, however, geothermal is likely to remain a minor energy source in the country as a whole.

See also Alternative fuels; Fossil fuels; Renewable resources; Water pollution

[David E. Newton]
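The binary (two-step) process described above lends itself to a back-of-the-envelope output estimate. In the sketch below, the flow rate, temperatures, and 10% conversion efficiency are illustrative assumptions, not figures from this entry:

```python
# Rough estimate of the electrical output of a binary-cycle geothermal
# plant. Every number here is an illustrative assumption.

WATER_SPECIFIC_HEAT = 4186.0  # J/(kg * degC), specific heat of water

def binary_plant_power_mw(flow_kg_s, t_in_c, t_out_c, efficiency):
    """Electrical output (MW) when hot water is cooled from t_in_c to
    t_out_c in the heat exchanger, at the stated overall efficiency of
    the working-fluid (vapor) cycle."""
    heat_w = flow_kg_s * WATER_SPECIFIC_HEAT * (t_in_c - t_out_c)
    return heat_w * efficiency / 1e6

# Example: 500 kg/s of 180 degC water cooled to 80 degC, converted at
# an assumed 10% overall efficiency, gives roughly 21 MW electrical.
print(binary_plant_power_mw(500.0, 180.0, 80.0, 0.10))
```

The low assumed efficiency reflects the modest temperatures involved; it is the price paid for being able to use hot water, rather than steam, as the heat source.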
RESOURCES

BOOKS
Moran, J. M., M. D. Morgan, and J. H. Wiersma. Environmental Science. Dubuque, IA: W. C. Brown, 1993.
National Academy of Sciences. Geothermal Energy Technology. Washington, DC: National Academy Press, 1988.
Rickard, G. Geothermal Energy. Milwaukee, WI: Gareth Stevens, 1991.
U.S. Department of Energy. Geothermal Energy and Our Environment. Washington, DC: U.S. Government Printing Office, 1980.

PERIODICALS
Fishman, D. J. “Hot Rocks.” Discover 12 (July 1991): 22–23.
Giant panda

Today, the giant panda (Ailuropoda melanoleuca) is one of the best known and most popular large mammals among the general public. Although the panda was mentioned in a 2,500-year-old Chinese geography text, Europeans did not learn of its existence until a French missionary discovered it in 1869. The first living giant panda did not reach the Western Hemisphere until 1937.

The giant panda, variously classified with the true bears or, often, in a family of its own, once ranged throughout much of China and Burma, but is now restricted to a series of 13 wildlife reserves totaling just over 2,200 mi2 (5,700 km2) in three central and western Chinese provinces. The giant panda population has been decimated over the past 2,000 years by hunting and habitat destruction. In the years since 1987, pandas have lost more than 30% of their habitat. Giant pandas are one of the rarest mammals in the world, with current estimates of their population size at about 1,000 individuals (150 in captivity). Today human pressure on giant panda populations has diminished, although poaching continues. Giant pandas are protected by tradition and sentiment, as well as by law in the Chinese mountain forest reserves. Despite this progress, however, IUCN—The World Conservation Union—and the U. S. Fish and Wildlife Service consider the giant panda to be endangered. Some of this species’ unique requirements and habits do seem to put it in jeopardy.

The anatomy of the giant panda indicates that it is a carnivore; however, its diet consists almost entirely of bamboo, whose cellulose cannot be digested by the panda. Since the giant panda obtains so little nutrient value from the bamboo, it must eat enormous quantities of the plant each day, about 35 lb (16 kg) of leaves and stems, in order to satisfy its energy requirements. Whenever possible, it feeds solely on the young succulent shoots of bamboo, which, being mostly water, require it to eat almost 90 lb (41 kg) per day. This translates into 10–12 hours per day that pandas spend eating. Giant pandas have been known to supplement their diet with other plants such as horsetail and pine bark, and they will even eat small animals, such as rodents, if they can catch them, but well over 95% of their diet consists of the bamboo plant.

Bamboo normally grows by sprouting new shoots from underground rootstocks. At intervals of 40 to 100 years, the bamboo plants blossom, produce seeds, then die. New bamboo then grows from the seed. In some regions it may take up to six years for new plants to grow from seed and produce enough food for the giant panda. Undoubtedly this has produced large shifts in panda population size over the centuries. Within the last quarter century, two bamboo flowerings have caused the starvation of nearly 200 giant pandas, a significant portion of the current population. Although the wildlife reserves contain sufficient bamboo, much of the vast bamboo forests of the past have been destroyed for agriculture, leaving no alternative areas to move to should bamboo blossoming occur in the pandas’ current range.

Low fecundity and limited success in captive breeding programs in zoos do not bode well for replenishing any significant losses in the wild population. Although there are 150 pandas in captivity, only about 28% are breeding. In 1999, the first captive-born giant panda cub in the United States to survive more than a few days was born. For the time being, the giant panda population appears stable, a positive sign for one of the world’s scarcest and most popular animals.

[Eugene C. Beckham]
RESOURCES

BOOKS
Nowak, R. M., ed. Walker’s Mammals of the World. 5th ed. Baltimore: Johns Hopkins University Press, 1991.

PERIODICALS
“China goes High Tech to Help Panda Population.” USA Today, August 13, 2001.
Drew, L. “Are We Loving the Panda to Death?” National Wildlife 27 (1989): 14–17.
“Pandas Still under Threat of Extinction.” USA Today, February 16, 2001.
Giardia. (Photograph by J. Paulin. Visuals Unlimited. Reproduced by permission.)
Giardia

Giardia is the genus (and common) name of a protozoan parasite in the phylum Sarcomastigophora. It was first described in 1681 by Antoni van Leeuwenhoek (called “the Father of Microbiology”), who discovered it in his own stool. The most common species is Giardia intestinalis (also called Giardia lamblia), a fairly common parasite of humans. The disease it causes is called giardiasis.

The trophozoite (feeding) stage is easily recognized by its pear-shaped, bilaterally symmetrical form with two internal nuclei and four pairs of external flagella; the thin-walled cyst (infective) stage is oval. Both stages are found in the mucosal lining of the upper part of the small intestine. The anterior region of the ventral surface of the troph stage is modified into a sucking disc used to attach to the host’s intestinal epithelial tissue. Each troph attaches to one epithelial cell. In extreme cases, nearly every cell will be covered, causing severe symptoms.

Infection usually occurs through drinking contaminated water. Symptoms include diarrhea, flatulence (gas), abdominal cramps, fatigue, weight loss, anorexia, and/or nausea and may last for more than five days. Diagnosis is usually made by detecting cysts or trophs of this parasite in fecal specimens.

Giardia has a worldwide distribution. It is more common in warm, tropical regions than in cold regions. Hosts include frogs, cats, dogs, beaver, muskrat, horses, and humans. Children as well as adults can be affected, although infection is more common in children. It is highly contagious. The normal infection rate in the United States ranges from 1.5 to 20%. In one case involving scuba divers from the New York City police and fire departments, 22–55% were found to be infected, presumably after they accidentally drank contaminated water in the local rivers while diving. In another case, an epidemic of giardiasis occurred in Aspen, Colorado, in 1965 during the popular ski season, and 120 people were infected. Higher infection rates are common in some areas of the world, including Iran and countries in Sub-Saharan Africa.

Giardia can typically withstand sophisticated forms of sewage treatment, including filtration and chlorination. It is therefore hard to eradicate and may potentially increase in polluted lakes and rivers. For this reason, health officials should make concerted efforts to prevent contaminated feces from infected animals (including humans) from entering lakes used for drinking water. The most effective treatment for giardiasis is the drug Atabrine (quinacrine hydrochloride). Adult dosage is 0.1 g taken after meals three times each day. Side effects are rare and minimal.

See also Cholera; Coliform bacteria

[John Korstad]
RESOURCES

BOOKS
Markell, E. K., M. Voge, and D. T. John. Medical Parasitology. 7th ed. Philadelphia: W. B. Saunders, 1992.
Schmidt, G. D., and L. S. Roberts. Foundations of Parasitology. 4th ed. St. Louis: Times Mirror/Mosby, 1989.
U.S. Department of Health and Human Services. Health Information for International Travel. Washington, DC: U.S. Government Printing Office, 1991.
Gibbons

Gibbons (genus Hylobates, meaning “dweller in the trees”) are the smallest members of the ape family, which also includes gorillas, chimpanzees, and orangutans. They spend most of their lives at the tops of trees in the jungle, eating leaves and fruit. They are extremely agile, using their long arms to swing from branch to branch at speeds of up to 35 mph (56 km/h), moving between trees that can be as much as 50 ft (15 m) apart. They have no tails and are often seen walking upright on tree branches. Gibbons are known for their loud calls and songs, which they use to announce their territory and warn away others. They are devoted parents, raising usually one or two offspring at a time and showing extraordinary affection in caring for them. Conservationists and animal protectionists who have worked with gibbons describe them as extremely intelligent, sensitive, and affectionate.

Gibbons have long been hunted for food, for medical research, and for sale as pets and zoo specimens. A common method of collecting them is to shoot the mother and capture the nursing or clinging infant, if it is still alive. The mortality rate in collecting and transporting gibbons to areas where they can be sold is extremely high.
A gibbon. (©Breck P. Kent/JLM Visuals. Reproduced by permission.)
This, coupled with the fact that their jungle habitat is being destroyed at a rate of 32 acres (13 ha) per minute, has resulted in severe depletion of their numbers.

Gibbons are found in Southeast Asia, China, and India, and nine species are recognized. All nine species are considered endangered by the U.S. Department of the Interior and are listed in the most endangered category of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). IUCN—The World Conservation Union considers three species of gibbon to be endangered and two species to be vulnerable. Despite the ban on international trade in gibbons conferred by listing in Appendix I of CITES, illegal trade in gibbons, particularly babies, continues on a wide scale in markets throughout Asia.

[Lewis G. Regenstein]
RESOURCES

BOOKS
Benirschke, K. Primates: The Road to Self-sustaining Populations. New York: Springer-Verlag, 1986.
Preuschoft, H., et al. The Lesser Apes: Evolutionary and Behavioral Biology. Edinburgh: Edinburgh University Press, 1984.

OTHER
International Center for Gibbon Studies. [cited May 2002].
Lois Marie Gibbs

(1951 – )

American environmentalist and community organizer

An activist dedicated to protecting communities from hazardous wastes, Lois Gibbs began her political career as a housewife and homeowner near Love Canal, New York. She was born in Buffalo on June 25, 1951, the daughter of a bricklayer and a full-time homemaker. Gibbs was 21 and a mother when she and her husband bought their house near a buried dump containing hazardous materials from industry and the military, including wastes from the research and manufacture of chemical weapons.

From the time the first articles about Love Canal began appearing in newspapers in 1978, Gibbs petitioned for state and federal assistance. She began when she discovered that the school her son was attending had been built directly on top of the buried canal. Her son had developed epilepsy, and there were many similar, unexplained disorders among other children at the school, yet the superintendent refused to transfer anyone. The New York State Health Department then held a series of public meetings in which officials appeared more committed to minimizing the community’s perception of the problem than to solving the problem itself. The governor made promises he was unable to keep, and Gibbs herself was flown to Washington to appear at the White House for what she later decided was little more than political grandstanding. In the book she wrote about her experience, Love Canal: My Story, Gibbs describes her frustration and her increasing disillusionment with government, as the threats to the health of both adults and children in the community became more obvious and as it became clearer that no one would be able to move because no one could sell their homes.

While state and federal agencies delayed, the media took an increasing interest in the community’s plight, and Gibbs became more involved in political action. To force federal action, Gibbs and a crowd of supporters took two officials from the Environmental Protection Agency (EPA) hostage. A group of heavily armed FBI agents occupied the building across the street and gave her seven minutes before they stormed the offices of the Homeowners’ Association, where the men were being held. With less than two minutes left in the countdown, Gibbs appeared outside and released the hostages in front of a national television audience. By the middle of the next week, the EPA had announced that the Federal Disaster Assistance Administration would fund immediate evacuation for everyone in the area.

But the families who left the Love Canal area still could not sell their homes, and Gibbs fought to force the federal government to purchase them and underwrite low-interest loans. After she accused President Jimmy Carter of inaction on a national talk show, in the midst of an approaching election, he agreed to purchase the homes. But he refused to meet with her to discuss the loans. Carter signed the appropriations bill in a televised ceremony at the Democratic National Convention in New York City, and Gibbs simply walked onstage in the middle of it and repeated her request for mortgage assistance. The president could do nothing but promise his political support, and the assistance she had been asking for was soon provided.

Gibbs was divorced soon after her family left the Love Canal area. She moved to Washington, D.C., with her two children and in 1981 founded the Citizen’s Clearinghouse for Hazardous Wastes (renamed the Center for Health, Environment and Justice in 1997). Its purpose is to assist communities in fighting toxic waste problems, particularly plans for toxic waste dumping sites, and the organization has worked with over 7,000 neighborhood and community groups. Gibbs has also published Dying from Dioxin: A Citizen’s Guide to Reclaiming Our Health and Rebuilding Democracy. She has appeared on many television and radio shows and has been featured in hundreds of newspaper and magazine articles. Gibbs has also been the subject of several documentaries and television movies. She often speaks at conferences and seminars and has been honored with numerous awards, including the prestigious Goldman Environmental Prize in 1991. Because of Gibbs’s activist work, no commercial sites for hazardous wastes have been opened in the United States since 1978.

[Lewis G. Regenstein and Douglas Smith]
RESOURCES

BOOKS
Gibbs, L. Love Canal: My Story. Albany: State University of New York Press, 1982.
Wallace, A. Eco-Heroes. San Francisco: Mercury House, 1993.
Lois Gibbs at her desk during her fight to win permanent relocation for the families living at Love Canal. (Corbis-Bettmann. Reproduced by permission.)
Gill nets

Gill nets are panels of diamond-shaped mesh netting used for catching fish. When fish attempt to swim through the net, their gill covers get caught and they cannot back out. Depending on the target species, different mesh sizes are available for use. The top line of the net has a series of floats attached for buoyancy, and the bottom line has lead weights to hold the net vertically in the water column.

Gill nets have been in use for many years. They became popular in commercial fisheries in the nineteenth century, evolving from cotton twine netting to the more modern nylon twine netting and monofilament nylon netting. As with many other aspects of commercial fishing, the use of gill nets has grown from minor utilization into a major environmental issue. Coupled with overfishing, the use of gill nets has caused serious concern throughout the world.

Because gill nets are so efficient at catching fish, they are just as efficient at catching many non-target species, including other fishes, sea turtles, sea mammals, and sea birds. Gill nets have been used extensively in the commercial fishery for salmon and capelin (Mallotus villosus). Dolphins, seals, and sea otters (Enhydra lutris) get tangled in the nets, as do diving sea birds such as murres, guillemots, auklets, and puffins that rely on capelin as a mainstay in their diet. Sea turtles are also entangled and drown. The problem has gotten worse over the last decade with the introduction and extensive use, primarily by foreign fishing fleets, of drift nets. Described as “the most indiscriminate killing device used at sea,” drift nets are monofilament gill nets up to 40 mi (64 km) in length. Left at sea for several days and then hauled on board a fishing vessel, these drift nets contain vast numbers of dead marine animals, besides the target species, that are simply discarded over the side of the boat. The outrage expressed regarding these “curtains of death” led to a United Nations resolution banning their use in commercial fisheries after the end of 1992.

Commercial fishermen who use other types of nets for catching fish, such as the purse seines used in the tuna fishing industry and the bag trawls used in the shrimping industry, have modified their nets and fishing techniques to attempt to eliminate the killing of dolphins and sea turtles, respectively. Unfortunately, such modifications of gill nets are nearly impossible due to the nets’ design and the way these nets are used.

See also Turtle excluder device

[Eugene C. Beckham]
RESOURCES

PERIODICALS
Norris, K. “Dolphins in Crisis.” National Geographic 182 (1992): 2–35.
GIS see Geographic information systems
Glaciation

The covering of the earth’s surface with glacial ice. The term also includes the alteration of the surface of the earth by glacial erosion or deposition. With the passage of time, evidence of ice erosion can become almost unidentifiable; the weathering of hard rock surfaces often eliminates minor scratches and other evidence of such glacial activities as the carving of deep valleys. The evidence of deposition, known as depositional imprints, can vary. It may consist of specialized features a few meters above the surrounding terrain, or it may consist of ground materials several meters in thickness covering wide areas of the landscape.

Only 10% of the earth’s surface is currently covered with glacial ice, but it is estimated that 30% has been covered with glacial ice at some time. During the last major glacial period, most of Europe and more than half of the North American continent were covered with ice. The glacial ice of modern day is much thinner than it was in the ice age, and the majority of it (85%) is found in Antarctica. About 11% of the remaining glacial ice is in Greenland, and the rest is scattered in high altitudes throughout the world.

Moisture and cold temperatures are the two main factors in the formation of glacial ice. Glacial ice in Antarctica is the result of relatively small quantities of snow deposition and low loss of ice because of the cold climate. In the middle and low latitudes, where the loss of ice, known as ablation, is higher, snowfall tends to be much higher, and the glaciers are able to overcome ablation by generating large amounts of ice. These types of systems tend to be more active than the glaciers in Antarctica, and the most active of these are often located at high altitudes and in the path of prevailing winds carrying marine moisture.

The topography of the earth has been shaped by glaciation. Hills have been reduced in height and valleys created or filled in by the movement of glacial ice. Moraine is a French term used to describe the ridges and earthen dikes formed near the edges of regional glaciers. Ground moraine is material that accumulates beneath a glacier and has low-relief characteristics, and end moraine is material that builds up along the extremities of a glacier in a ridge-like appearance.

In England, early researchers found stones that were not common to the local ground rock and decided they must have “drifted” there, carried by icebergs on water. Though geology has changed since that time, the term remains, and all deposits made by glacial ice are usually identified as drift. These glacial deposits, also known as till, are highly varied in composition. They can be fine-grained deposits, or very coarse with rather large stones present, or a combination of both. Rock and other soil-like debris are often crushed and ground into very small particles, and they are commonly found as sediment in waters flowing from a glacial mass. This material is called glacial flour, and it is carried downstream to form another kind of glacial deposit. During certain cold, dry periods of the year, winds can pick up portions of this deposit and scatter it for miles. Many of the different soils in the American “Corn Belt” originated in this way, and they have become some of the more important agricultural soils in the world.

[Royce Lambert]
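The balance between accumulation and ablation described above is what glaciologists call a glacier’s mass balance. As a summary formulation in standard notation (not taken from this entry):

```latex
% Net annual mass balance B of a glacier: accumulation rate c(t)
% minus ablation rate a(t), integrated over one balance year.
\[
  B \;=\; \int_{\text{year}} \bigl( c(t) - a(t) \bigr)\, dt
\]
% B > 0: the glacier thickens (as in cold, snowy Antarctica);
% B < 0: ablation wins and the glacier retreats.
```

On this formulation, the active mid-latitude glaciers described above are systems in which both terms are large, so modest shifts in either snowfall or melt produce comparatively rapid advance or retreat.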
RESOURCES

BOOKS
Flint, R. F. Glacial and Pleistocene Geology. New York: Wiley, 1972.
Henry A. Gleason
(1882 – 1975)
American ecologist

Henry A. Gleason was a half generation after that small group of midwesterners who founded ecology as a discipline in the United States. He was a student of Stephen Forbes, and his early work in ecology was influenced strongly by Cowles and Frederic E. Clements. He did later, however, in 1935, claim standing—bowing only to Cowles and Clements and for some reason not including Forbes—as “the only other original ecologist in the country.”

And he was original. His work built on that of the founders, but he quickly and actively questioned their ideas and concepts, especially those of Clements, in the process creating controversy and polarization in the ecological community. Gleason called himself an “ecological outlaw” and probably over-emphasized his early lack of acceptance in ecology, but in a resolution of respect from the Ecological Society of America after his death, he was described as a revolutionary and a heretic for his skepticism toward “established” ideas in ecology. Stanley Cain said that Gleason was never “impressed nor fooled by the philosophical creations of other ecologists, for he has always tested their ideas concerning the association, succession, the climax, environmental controls, and biogeography against what he knew in nature.”

Gleason was a critical rather than a negative thinker and never fully rejected the utility of the idea of community, but the early marginalization of his ideas by mainstream ecologists, and the controversy they created, may have played a role in his later concentration on taxonomy over ecology. He claimed that if plant associations did exist, they were individualistic and different from area to area, even where most of the same species were present. Gleason did clearly reject, however, Clements’ idea of a monoclimax, proclaiming that “the Clementsian concept of succession, as an irreversible trend leading to the climax, was untenable.” His field observations also led him to repudiate Clements’ organismic concept of the plant community, asking “are we not justified in coming to the general conclusion, far removed from the prevailing opinion, that an association is not an organism?” He went on to say that it is “scarcely even a vegetation unit.” He also pointed out the errors in “Raunkiaer’s Law” on frequency distribution, which, as Robert McIntosh noted, was “widely interpreted [in early ecology] as being a fundamental community characteristic indicating homogeneity,” and questioned Jaccard’s comparison of two communities through a coefficient of similarity that Gleason believed unduly gave as much weight to rare as to common species.

Gleason’s own approach to the study of vegetation emerged from his skills as a floristic botanist, an approach rejected as “old botany” by the founders of ecology. As Nicolson suggests, “a floristic approach entailed giving primacy to the study of the individual plants and their species. This was the essence of [Gleason’s] individualistic concept.” In hindsight, somewhat ironically then, Gleason used old botany to create a new alternative to what had quickly become dogma in ecology, the centrality of the idea that units of vegetation were real, that the plant association was indispensable to an ecological approach.

Clements was more accepted in the early part of the twentieth century than Gleason, though many ecologists at the time considered both too extreme, just in opposite ways. Today, Clements’ theories remain out of favor and some of Gleason’s have been revived, though not all of them. Contrary to his own observations, he was persuaded that plants are distributed randomly, at least over small areas, which is seldom if ever the case, though he later backed away from this assertion. He could not accept the theory of continental drift, stating that “the theory requires a shifting of the location of the poles in a way which does considerable violence to botanical and geological facts,” and therefore should have few adherents among botanists.

Despite Gleason’s skepticism about some of Clements’ major ideas, the older botanist was a major influence, especially early in Gleason’s career. Especially influential was Clements’ rudimentary development of the quadrat method of sampling vegetation, which shaped Gleason’s approach to field work; Gleason took the method much further than Clements, and though not trained in mathematics, was the first ecologist to employ a number of quantitative approaches and methods. As McIntosh demonstrated, Gleason, following Forbes’s lead in aquatic ecology, “was clearly one of the earliest and most insightful proponents of the use of quantitative methods in terrestrial ecology.”

Gleason was born in the heart of the area where ecology first flourished in the United States. His interest in vegetation and his contributions to ecology were both stimulated by growing up in and doing research on the dynamics of the prairie-forest border. He won bachelor’s and master’s degrees from the University of Illinois and a Ph.D. from Columbia University. He returned to the University of Illinois as an instructor in botany (1901–1910), where he worked with Stephen Forbes at one of the major American centers of ecological research at the time. In 1910, he moved to the University of Michigan, and while in Ann Arbor he married Eleanor Mattei. Then, in 1919, he moved to the New York Botanical Garden, where he spent the rest of his career, sometimes (reluctantly) as an administrator,
always as a research taxonomist. He retired from the Garden in 1951.

Moving out of the Midwest, Gleason also moved out of ecology. Most of his work at the Botanical Garden was taxonomic. He did some ecological work, such as a three-month ecological survey of Puerto Rico in 1926, and a restatement of his “individualistic concept of the plant association” (also in 1926, and also in the Bulletin of the Torrey Botanical Club), in which he posed what Nicolson described as “a radical challenge” to the basis of contemporary ecological practice. Gleason’s challenge to his colleagues and critics in ecology was to “demolish our whole system of arrangement and classification and start anew with better hope of success.” His reasoning was that ecologists had “attempted to arrange all our facts in accordance with older ideas, and have come as a result into a tangle of conflicting ideas and theories.” He anticipated twenty-first-century thinking that the identification on the ground of communities and ecosystems as ecological units is arbitrary, noting that vegetation was too continuously varied to identify recurrent associations. He claimed, for example, that “no ecologist would refer the alluvial forests of the upper and lower Mississippi to the same association, yet there is no place along their whole range where one can logically mark a boundary between them.” As McIntosh suggests, “one of Gleason’s major contributions to ecology was that he strove to keep the conceptual mold from hardening prematurely.”

In his work as a taxonomist for the Garden, Gleason traveled as a plant collector, becoming what he described as “hooked” on tropical American botany, specializing in the large family of melastomes, tropical plants ranging from black-mouth fruits to handsome cultivated flowers, a group which engaged him for the rest of his career. His field work, on this family but especially many others, was reinforced by extensive study and identification of material collected by others and made available to him at the Garden. A major assignment during his New York years, emblematic of his work as a taxonomist, was a revision of the Britton and Brown Illustrated Flora of the Northeastern United States (1952), which Maguire describes as a “heavy duty [that] intervened and essentially brought to a close Gleason’s excellent studies of the South American floras and [his] detailed inquiry into the Melastomataceae...this great work...occupied some ten years of concentrated, self-disciplined attention.” He did publish a few brief pieces on the melastomes after the Britton and Brown, and also two books with Arthur Cronquist, Manual of Vascular Plants of Northeastern United States and Adjacent Canada (1963) and the more general The Natural Geography of Plants (1964). The latter, though coauthored, was an overt attempt by Gleason to summarize a life’s work and make it accessible to a wider public.
Gleason’s early ecological work on species-area relations and the problem of rare species, together with his extensive taxonomic work, laid an initial base for the contemporary concern, especially among biologists, about the threat to the earth’s biodiversity. Gleason wrote that analysis of “the various species in a single association would certainly show that their optimum environments are not precisely identical,” a foreshadowing of later work on niche separation. McIntosh claimed in 1975 that Gleason’s individualistic concept “must be seen not simply as one of historical interest but very likely as one of the key concepts of modern and, perhaps, future ecological thought.”

A revival of Gleason’s emphasis on the individual at mid-twentieth century became one of the foundations for what some scientists in the second half of the twentieth century called a “new ecology,” one that rejects imposed order and system and emphasizes the chaos, the randomness, the uncertainty, and the unpredictability of natural systems. A call for “adaptive” resource and environmental management policies flexible enough to respond to unpredictable change in individually variant natural systems is one outgrowth of such changes in thinking in ecology and the environmental sciences.

[Gerald L. Young]
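Jaccard’s coefficient of similarity, which Gleason criticized above, is simple to state: the number of species shared by two communities divided by the number of species found in either. A minimal sketch in Python (the species lists are hypothetical, invented only to illustrate the point):

```python
# Jaccard's coefficient of similarity between two plant communities,
# computed from presence/absence lists. The species sets below are
# hypothetical, chosen only to show the weighting issue Gleason raised.

def jaccard(a, b):
    """Shared species divided by total species found in either community."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

prairie_1 = {"big bluestem", "switchgrass", "rare orchid"}
prairie_2 = {"big bluestem", "switchgrass", "rare sedge"}

# Two shared species out of four total gives 0.5, whether the unshared
# species are abundant dominants or single rare stragglers.
print(jaccard(prairie_1, prairie_2))
```

Because the coefficient works on presence and absence alone, a species represented by a single rare individual moves the value exactly as much as a community dominant, which is precisely the weighting Gleason objected to.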
FURTHER READING

BOOKS
Gleason, H. A. “Twenty-Five Years of Ecology, 1910–1935.” Vol. 4, Memoirs, Brooklyn Botanic Garden. Brooklyn: Brooklyn Botanic Garden, 1936.

PERIODICALS
Cain, Stanley A. “Henry Allan Gleason: Eminent Ecologist 1959.” Bulletin of the Ecological Society of America 40, no. 4 (December 1959): 105–110.
Gleason, H. A. “Delving Into the History of American Ecology—Reprint of 1952 Letter to C. H. Muller.” The Bulletin of the Ecological Society of America 56, no. 4 (December 1975): 7–10.
Maguire, Bassett. “Henry Allan Gleason—1881–1975.” Bulletin of the Torrey Botanical Club 102, no. 5 (September/October 1975): 274–282.
McIntosh, Robert P. “H. A. Gleason—‘Individualistic Ecologist’ 1882–1975: His Contributions to Ecological Theory.” Bulletin of the Torrey Botanical Club 102, no. 5 (September/October 1975): 253–273.
Nicolson, Malcolm. “Henry Allan Gleason and the Individualistic Hypothesis: The Structure of a Botanist’s Career.” The Botanical Review 56, no. 2 (April/June 1990): 91–161.
Glen Canyon Dam

Until 1963, Glen Canyon was one of the most beautiful stretches of natural scenery in the American West. The canyon had been cut over thousands of years as the Colorado River flowed over sandstone that once formed the floor of an ancient sea. The colorful walls of Glen Canyon were often compared to those of the Grand Canyon, only about 50 mi (80 km) downstream.
Humans have long seen more than beauty in the canyon, however. They have envisioned the potential value of a water reservoir that could be created by damming the Colorado. In a region where water can be as valuable as gold, plans for the construction of a giant irrigation project with water from a Glen Canyon dam go back to at least 1850. Flood control was a second argument for the construction of such a dam. Like most western rivers, the Colorado is wild and unpredictable. When fed by melting snows and rain in the spring, its natural flow can exceed 300,000 ft3 (8,400 m3) per second. At the end of a hot dry summer, flow can fall to less than 1% of that value. The river’s water temperature can also fluctuate widely, by more than 36°F (20°C) in a year. A dam in Glen Canyon held the promise of moderating this variability.

By the early 1900s, yet a third argument for building the dam was proposed—the generation of hydroelectric power. Both the technology and the demand were reaching the point that power generated at the dam could be supplied to Phoenix, Los Angeles, San Diego, and other growing urban areas in the Far West.

Some objections were raised in the 1950s when construction of a Glen Canyon Dam was proposed, and environmentalists fought to protect this unique natural area. The 1950s and early 1960s were not, however, an era of high environmental sensitivity, and plans for the dam eventually were approved by the U. S. Congress. Construction of the dam, just south of the Utah-Arizona border, was completed in 1963, and the new lake it created, Lake Powell, began to develop. Seventeen years later, the lake was full, holding a maximum of 27 million acre-feet of water.

The environmental changes brought about by the dam are remarkable. The river itself has changed from a muddy brown color to a clear crystal blue as the sediments it carries are deposited behind the dam in Lake Powell. Erosion of river banks downstream from the dam has lessened considerably as spring floods are brought under control. Natural beaches and sandbars, once built up by deposited sediment, are washed away. River temperatures have stabilized at an annual average of about 50°F (10°C). These physical changes have brought about changes in flora and fauna also. Four species of fish native to the Colorado have become extinct, but at least 10 species of birds are now thriving where they barely survived before. The biotic community below the dam is significantly different from what it was before construction.

During the 1980s, questions about the dam’s operation began to grow. A number of observers were especially concerned about the fluctuations in flow through the dam, a pattern determined by electrical needs in distant cities. During peak periods of electrical demand, operators increase the flow of water through the dam to a maximum of 30,000 ft3 (840 m3) per second. At periods of low demand, that flow may be reduced to 1,000 ft3 (28 m3) per second. As a result of these variations, the river below the dam can change by as much as 13 ft (4 m) in height in a single 24-hour period. This variation can severely damage riverbanks and can have unsettling effects on wildlife in the area as, for example, fish are stranded on the shore or swept away from spawning grounds. River-rafting is also severely affected by changing river levels, as rafters can never be sure from day to day what water conditions they may encounter. Operation of the Glen Canyon Dam is made more complex by the fact that control is divided among at least three different agencies in the U. S. Department of the Interior: the Bureau of Reclamation, the Fish and Wildlife Service, and the National Park Service, all with somewhat different missions.

In 1982, a comprehensive re-analysis of the Glen Canyon area was initiated. A series of environmental studies, called the Glen Canyon Environmental Studies, was designed and carried out over much of the following decade. In addition, Interior Secretary Manuel Lujan announced in 1989 that an environmental impact statement on the downstream effects of the dam would be conducted. The purpose of the environmental impact statement was to find out if other options were available for operating the dam that would minimize harmful effects on the environment, recreational opportunities, and Native American activities while still allowing the dam to produce sufficient levels of hydroelectric power. The effects studied included water, sediment, fish, vegetation, wildlife and habitat, endangered and other special-status species, cultural resources, air quality, recreation, hydropower, and non-use value (i.e., general appreciation of natural resources). Nine different operating options for the dam were considered. These options fell into three general categories: unrestricted fluctuating flows (two alternative modes); restricted fluctuating flows (four modes); and steady flows (three modes). The final choice was one that involves “periodic high, steady releases of short duration” that reduce the dam’s performance significantly below its previous operating level. The criterion for this decision was the protection and enhancement of downstream resources while continuing to permit a certain level of flexibility in the dam’s operation.

A later series of experiments was designed to see what could be done to restore certain downstream resources that had been destroyed or damaged by the dam’s operation. Between March 26 and April 2, 1996, the U.S. Bureau of Reclamation released unusually large amounts of water from the dam. The intent was to reproduce the large-scale flooding on the Colorado River that had normally occurred every spring before the dam was built.
The primary focus of this project was to see if downstream sandbars could be restored by the flooding. The sandbars have traditionally been used as campsites and have been a major mechanism for the removal of silt from backwater channels used by native fish. Depending on the final results of this study, the Bureau will determine what changes, if any, should be made in adjusting flow patterns over the dam to provide for maximum environmental benefit downstream along with power output.

See also Alternative energy sources; Riparian land; Wild river

[David E. Newton]
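Because this entry pairs English and metric flow figures, they can be checked with a one-line unit conversion. The script below is illustrative only; the conversion constant is standard:

```python
# Quick check of the flow figures quoted in this entry, converting
# cubic feet per second to cubic meters per second.
CUBIC_FEET_PER_CUBIC_METER = 35.3147

def cfs_to_cms(flow_cfs):
    """Convert a flow in ft3/s to m3/s."""
    return flow_cfs / CUBIC_FEET_PER_CUBIC_METER

# Spring flood peak, peak power release, and minimum release:
for cfs in (300_000, 30_000, 1_000):
    print(f"{cfs:>7} ft3/s = {cfs_to_cms(cfs):8.1f} m3/s")
# Prints about 8495.0, 849.5, and 28.3, close to the entry's rounded
# values of 8,400, 840, and 28 m3/s.
```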
RESOURCES

PERIODICALS
Elfring, C. “Conflict in the Grand Canyon.” BioScience (November 1990): 709–711.
Udall, J. R. “A Wild, Swinging River.” Sierra (May 1990): 22–26.
Global Environment Monitoring System

A data-gathering project administered by the United Nations Environment Programme. The Global Environment Monitoring System (GEMS) is one aspect of the modern understanding that environmental problems ranging from the greenhouse effect and ozone layer depletion to the preservation of biodiversity are international in scope. The system was inaugurated in 1975, and it monitors weather and climate changes around the world, as well as variations in soils, the health of plant and animal species, and the environmental impact of human activities. GEMS was not intended to replace any existing systems; it was designed to coordinate the collection of data on the environment, encouraging other systems to supply information it believed was being omitted. In addition to coordinating the gathering of this information, the system also publishes it in a uniform and accessible fashion, where it can be used and evaluated by environmentalists and policy makers. GEMS operates 25 information networks in over 142 countries. These networks monitor air pollution, including the release of greenhouse gases and changes in the ozone layer, and air quality in various urban centers; they also gather information on water quality and food contamination in cooperation with the World Health Organization and the Food and Agriculture Organization of the United Nations.
Global Forum
The Global Forum of Spiritual and Parliamentary Leaders on Human Survival is a worldwide organization of scientists, leaders of world religions, and parliamentarians who are attempting to change environmental and developmental values in their countries. Members include local, national, and international leaders in the arts, business, community action, education, faith, government, media, and youth sectors.

Historically, lawmakers and spiritual leaders have differed in their views toward stewardship of the earth. A conference held in Oxford, England, in 1988, and attended by 200 spiritual and legislative leaders, brought these groups together with scientists to discuss solutions to worldwide environmental problems. Speakers included the Dalai Lama, Mother Teresa, and the Archbishop of Canterbury, who conferred with experts such as Carl Sagan, Kenyan environmentalist Wangari Maathai, and Gaia hypothesis scientist James Lovelock. As a result of the Oxford conference, the Soviet Union invited the Global Forum to convene an international meeting on critical survival issues. The Moscow conference, called the Global Forum on Environment and Development, took place in January 1990. Over 1,000 spiritual and parliamentary leaders, scientists, artists, journalists, businessmen, and young people from eighty-three countries attended the Moscow Forum. One initiative of the Moscow Forum was a joint commitment by scientists and religious leaders to preserve and cherish the earth.

The Global Forum tries not to duplicate the activities of other environmental groups but works to relate global issues to local environments. For example, participants at the first U.S.-based Global Forum conference, held in Atlanta in May 1992, learned about the local effects of global problems such as tropical rain forest destruction, global warming, and waste management. The Global Forum has initiated seminars worldwide on the ethical implications of the environmental crisis. Artists learn about the role of the arts in communicating global survival issues. Business leaders promote sustainable development at the highest levels of business and industry. Young people petition their schools to include environmental issues as required subjects in the curriculum.
[Linda Rehkopf]
RESOURCES

ORGANIZATIONS
Global Forum, East 45th St., 4th Floor, New York, NY USA 10017
Global Releaf
Global Releaf, an international citizen action and education program, was initiated in 1988 by the 115-year-old American Forestry Association in response to the worldwide concern over global warming and the greenhouse effect. Campaigning under the slogan “Plant a tree, cool the globe,” its over 112,000 members began the effort to reforest the earth one tree at a time. In 1990, Global Releaf began Global Releaf Forest, an effort to restore damaged habitat on public lands through tree plantings; Global Releaf Fund is its urban counterpart. Using each one-dollar donation to plant one tree resulted in the planting of more than four million trees on 70 sites in 33 states. By involving local citizens and resource experts in each project, the program ensures that the right species are planted in the right place at the right time. Results include the protection of endangered and threatened animals, restoration of native species, and improvement of recreational opportunities.

Funding for the program has come largely from government agencies, corporations, and non-profit organizations. Chevrolet-Geo celebrated the planting of its millionth tree in October 1996. The Texaco/Global Releaf Urban Tree Initiative, utilizing more than 6,000 Texaco volunteers, has helped local groups plant more than 18,000 large trees and invested over $2.5 million in projects in twelve cities. Outfitter Eddie Bauer began an “Add a Dollar, Plant a Tree” program to fund eight Global Releaf Forest sites in the United States and Canada, planting close to 350,000 trees. The Global Releaf Fund also helps finance urban and rural reforestation on foreign soil in projects undertaken with its international partners. Engine manufacturer Briggs & Stratton, for example, has made possible tree plantings both in the United States and in Ecuador, England, Germany, Poland, Romania, Slovakia, South Africa, and Ukraine, while Costa Rica, Gambia, and the Philippines have benefitted from picture-frame manufacturer Larsen-Juhl.

Unfortunately, not enough funding exists to grant all the requests; in 1996, only 40% of the proposed projects received financial backing. Forced to pick and choose, the review board favors those projects which aim to protect endangered and threatened species. Burned forests and natural disaster areas—like the Francis Marion National Forest in South Carolina, devastated by 1989’s Hurricane Hugo—are also high on the priority list, as are streamside woodlands and landfills.

Looking to the future, Global Releaf 2000 was launched in 1996 with the aim of encouraging the planting of 20 million trees, increasing the canopy in select cities by 20%, and expanding the program to include private lands and sanitary landfills. A 20-city survey done in 1985 by American Forests showed that four trees die for every one planted in United States cities and that the average city tree lives only 32 years (just seven years downtown). With these facts in mind, Global Releaf asks that communities plant twice as many trees as are lost in the next decade. By August 2001, more than 19 million trees had been planted.
[Ellen Link]
RESOURCES

BOOKS
Sobel, K. L., S. Orrick, and R. Honig. Environmental Profiles: A Global Guide to Projects and People. New York: Garland, 1993.

PERIODICALS
“Global Releaf 2000.” American Forests 103, no. 4 (Autumn 1996): 30.
“A Helping Hand for Damaged Land.” American Forests 102, no. 3 (Summer 1996): 33–35.
“Planting One for the Millennium.” American Forests 102, no. 3 (Summer 1996): 13–15.

ORGANIZATIONS
American Forests, P.O. Box 2000, Washington, D.C. USA 20013, (202) 955-4500, Fax: (202) 955-4588
Global 2000 Report see The Global 2000 Report
Global warming see Greenhouse effect
GOBO see Child survival revolution
Goiter

Generally refers to any abnormal enlargement of the thyroid gland. The most common type of goiter, the simple goiter, is caused by a deficiency of iodine in the diet. In an attempt to compensate for this deficiency, the thyroid gland enlarges and may become the size of a large softball in the neck. The general availability of table salt to which potassium iodide has been added (“iodized” salt) has greatly reduced the incidence of simple goiter in many parts of the world. A more serious form of goiter, toxic goiter, is associated with hyperthyroidism. The etiology of this condition is not well understood. A third form of goiter occurs primarily in women and is believed to be caused by changes in hormone production.
Golf courses

The game of golf appears to be derived from ancient stick-and-ball games long played in western Europe. However, the first documented rules of golf were established in 1744, in Edinburgh, Scotland. Golf was first played in the United States in the 1770s, in Charleston, South Carolina. It was not until the 1880s, however, that the game began to become widely popular, and it has increasingly flourished since then. In 2002, there were about 16,000 golf courses in the United States, and thousands more in much of the rest of the world.

Golf is an excellent form of outdoor recreation. There are many health benefits of the game, associated with the relatively mild form of exercise and extensive walking that can be involved. However, the development and management of golf courses also results in environmental damage of various kinds. The damage associated with golf courses can engender intense local controversy, both for existing facilities and when new ones are proposed for development.

The most obvious environmental effect of golf courses is associated with the large amounts of land that they appropriate from other uses. Depending on its design, a typical 18-hole golf course may occupy an area of about 100–200 acres (40–80 ha). If the previous use of the land was agricultural, then conversion to a golf course results in a loss of food production. Alternatively, if the land previously supported forest or some other kind of natural ecosystem, then the conversion results in a large, direct loss of habitat for native species of plants and animals. In fact, some particular golf courses have been extremely controversial because their development caused the destruction of the habitat of endangered species or rare kinds of natural ecosystems. For instance, the Pebble Beach Golf Links course, one of the most famous in the world, was developed in 1919 on the Monterey Peninsula of central California, in natural coastal and forest habitats that harbor numerous rare and endangered species of plants and animals. Several additional golf courses and associated tourist facilities were subsequently developed nearby, all of them also displacing natural ecosystems and destroying the habitat of rare species. Most of those recreational facilities were developed at a time when not much attention was paid to the needs of endangered species. Today, however, the conservation of biodiversity is considered an important issue. It is quite likely that if similar developments were now proposed in such critical habitats, citizen groups would mount intense protests and government regulators would not allow the golf courses to be built.

The most intensively modified areas on golf courses are the fairways, putting greens, aesthetic lawns and gardens, and other highly managed areas. Because these kinds of areas are intrinsic to the design of golf courses, a certain amount of loss of natural habitat is inevitable. To some degree, however, the net amount of habitat loss can be decreased by attempting, to the degree possible, to retain natural community types within the golf course. This can be done particularly effectively in the brushy and forested areas between the holes and their approaches. The habitat quality in these less intensively managed areas can also be enhanced by providing nesting boxes and brush piles for use by birds and small mammals, and by other management practices known to favor wildlife. Habitat quality is also improved by planting native species of plants wherever it is feasible to do so.

In addition to land appropriation, some of the management practices used on golf courses carry the risk of causing local environmental damage. This is particularly the case for putting greens, which are intensively managed to maintain an extremely even and consistent lawn surface. For example, to maintain a monoculture of desired species of grasses on putting greens and lawns, intensive management practices must be used. These include frequent mowing, fertilizer application, and the use of a variety of pesticidal chemicals to deal with various pests affecting the turfgrass. This may involve the application of such herbicides as Roundup (glyphosate), 2,4-D, MCPP, or dicamba to deal with undesirable weeds. Herbicide application is particularly necessary when putting greens and lawns are first being established. Afterward, their use can be greatly reduced by using only spot applications directly onto turfgrass weeds. Similarly, fungicides might be used to combat infestations of turfgrass disease fungi, such as fusarium blight (Fusarium culmorum), take-all patch (Gaeumannomyces graminis), and rhizoctonia blight (Rhizoctonia solani). Infestations by turf-damaging insects may also be a problem, which may be dealt with by one or more insecticide applications. Some important insect pests of golf-course turfgrasses include the Japanese beetle (Popillia japonica), chafer beetles (Cyclocephala spp.), June beetles (Phyllophaga spp.), and the armyworm (Pseudaletia unipuncta). Similarly, rodenticides may be needed to get rid of moles (Scalopus aquaticus) and their burrows.

Golf courses can also be a major user of water, mostly for the purposes of irrigation in dry climates or during droughty periods. This can be an important problem in semi-arid regions, such as much of the southwestern U.S., where water is a scarce and valuable commodity with many competing users. To some degree, water use can be decreased by ensuring that irrigation is only practiced when necessary, and only in specific places where it is needed, rather than according to a fixed schedule and in a broadcast manner. In some climatic areas, nature-scaping and other low-maintenance practices can be used over extensive areas of golf courses. This can result in intensive irrigation only being practiced in key areas, such as putting greens, and to a lesser degree fairways and horticultural lawns.

Many golf courses have ponds and lakes embedded in their spatial design. If not carefully managed, these waterbodies can become severely polluted by nutrients, pesticides, and eroded materials. However, if care is taken with golf-course management practices, their ponds and lakes can sustain healthy ecosystems and provide refuge habitat for local native plants and animals.

Increasingly, golf-course managers and industry associations are attempting to find ways to support their sport while not causing an unacceptable amount of environmental damage. One of the most important initiatives of this kind is the Audubon Cooperative Sanctuary Program for Golf Courses, run by Audubon International, a private conservation organization. Since 1991, this program has been providing environmental education and conservation advice to golf-course managers and designers. By 2002, membership in this Audubon program had grown to more than 2,300 courses in North America and elsewhere in the world. The Audubon Cooperative Sanctuary Program for Golf Courses provides advice to help planners and managers with: (a) environmental planning; (b) wildlife and habitat management; (c) chemical use reduction and safety; (d) water conservation; and (e) outreach and education about environmentally appropriate management practices. If a golf course completes recommended projects in all of the components of the program, it receives recognition as a Certified Audubon Cooperative Sanctuary. This allows the golf course to claim that it is conducting its affairs in a certifiably “green” manner. This results in tangible environmental benefits of various kinds, while being a source of pride of accomplishment for employees and managers, and providing a potential marketing benefit to a clientele of well-informed consumers.

There are many specific examples of environmental benefits that have resulted from golf courses engaged in the Audubon Cooperative Sanctuary Program. For instance, seven golf courses in Arizona and Washington have allowed the installation of 150 artificial nesting burrows for burrowing owls (Athene cunicularia), an endangered species, on suitable habitat on their land. In 2000, Audubon International conducted a survey of cooperating golf courses, and the results were rather impressive. About 78% of the respondents reported that they had decreased the total amount of turfgrass area on their property; 73% had taken steps to increase the amount of wildlife habitat; 45% were engaged in an ecosystem restoration project; 90% were attempting to use native plants in their horticulture; 85% had decreased their use of pesticides; and 91% had switched to lower-toxicity chemicals. Just as important, about half of the respondents believed that there had been an improvement in the playing quality of their golf course and in the satisfaction of both employees and their client golfers. Moreover, none of the respondents believed that any of these values had been degraded as a result of adopting the management practices advised by the Audubon International program.
These are all highly positive indicators. They suggest that the growing and extremely popular sport of golf can, within limits, potentially be practiced in ways that do not cause unacceptable levels of environmental and ecological damage.

[Bill Freedman Ph.D.]
RESOURCES BOOKS Balogh, J. C., and W. J. Walker, eds. Golf Course Management and Construction: Environmental Issues. Leeds, UK: Lewis Publishers, 1992. Gillihan, S. W. Bird Conservation on Golf Courses: A Design and Management Manual. Ann Arbor, MI: Ann Arbor Press, 1992. Sachs, P. D., and R.T. Luff. Ecological Golf Course Management. Ann Arbor, MI: Ann Arbor Press, 2002.
OTHER “Audubon Cooperative Sanctuary Program for Golf.” Audubon International. 2002 [cited July 2002]. <http://www.audubonintl.org/programs/acss/golf.htm>. United States Golf Association. [cited July 2002].
ORGANIZATIONS United States Golf Association, P.O. Box 708, Far Hills, NJ USA 07931-0708, Fax: 908-781-1735, http://www.usga.org/
Good wood

Good wood, or smart wood, is a term certifying that the wood is harvested from a forest operating under environmentally sound and sustainable practices. A “certified wood” label indicates to consumers that the wood they purchase comes from a forest operating within specific guidelines designed to ensure future use of the forest. A well-managed forestry operation takes into account the overall health of the forest and its ecosystems, the use of the forest by indigenous people and cultures, and the economic influences the forest has on local communities. Certification allows the wood to be traced from harvest through processing to the final product (i.e., raw wood or an item made from wood) in an attempt to reduce uncontrolled deforestation while meeting the demand for wood and wood products by consumers around the world. Public concern regarding the disappearance of tropical forests initially spurred efforts to reduce the destruction of vast acres of rainforests by identifying environmentally responsible forestry operations and encouraging such practices by paying foresters higher prices. Certification, however, is not limited to tropical forests. All forest types—tropical, temperate, and boreal (those located in northern climes)—from all countries may apply for certification. Plantations (stands of timber that have been planted for the purpose of logging or that have been altered so that they no longer
support the ecosystems of a natural forest) may also apply for certification. Certification of forests and forest owners and managers is not required; the process is entirely voluntary. Several organizations currently assess forests and forest management operations to determine whether they meet the established guidelines of a well-managed, sustainable forest. The Forest Stewardship Council (FSC), founded in 1993, is an organization of international members with environmental, forestry, and socioeconomic backgrounds that monitors these organizations and verifies that the certification they issue is legitimate. A set of 10 guiding principles known as Principles and Criteria (P&C) was established by the FSC for certifying organizations to use when evaluating forest management operations. The P&C address a wide range of issues, including compliance with local, national, and international laws and treaties; review of the forest operation’s management plans; the religious or cultural significance of the forest to its indigenous inhabitants; maintenance of the rights of the indigenous people to use the land; provision of jobs for nearby communities; the presence of threatened or endangered species; control of excessive erosion when building roads into the forest; reduction of the potential for lost soil fertility as a result of harvesting; protection against the invasion of non-native species; pest management that limits the use of certain chemical types and of genetically altered organisms; and protection of forests when deemed necessary (for example, a forest that protects a watershed or that contains threatened and/or endangered species). Guarding against illegal harvesting is a major hurdle for forest managers working to operate within the established regulations for certification. Forest devastation occurs not only from harvesting timber for wood sales but also when forests are clear-cut to make way for cattle grazing or farming, or to provide a fuel source for local inhabitants. Illegal harvesting often occurs in developing countries where enforcement against such activities is limited (for example, the majority of trees harvested in Indonesia are cut illegally). Critics question the worthiness of managing forests, suggesting instead that the logging of select trees from a forest should be allowed and that, once completed, the remaining forest should be placed off limits to future logging. Nevertheless, certified wood products are in the marketplace; large wood and wood product suppliers are offering certified wood and wood products to their consumers. In 2001 the Forest Leadership Forum (a group of environmentalists, forest industry representatives, and retailers) met to identify how wood retailers can promote sustainable forests. It is hoped that consumer demand for good wood will increase the number of forests participating in the certification program,
thereby reducing the rate of irresponsible deforestation of the world’s forests. [Monica Anderson]
RESOURCES BOOKS Bass, Stephen, et al. Certification’s Impact on Forests, Stakeholders and Supply Chains. London: IIED, 2001.
ORGANIZATIONS Forest Stewardship Council United States, 1155 30th Street, NW, Suite 300, Washington, DC USA 20007 (202) 342-0413, Fax: (202) 342-6589, Email: [email protected]

5 mg/l; barium >100.0 mg/l; cadmium >1 mg/l; chromium >5 mg/l; lead >5 mg/l). Additional codes under toxicity include “acute hazardous waste” (hazard code “H”): a substance which has been found to be fatal to humans in low doses, or fatal to laboratory animals at doses corresponding to low human doses. Toxic waste (hazard code “T”) designates wastes which have been found through laboratory studies to be carcinogenic, mutagenic, or teratogenic for humans or other life forms. Certain wastes are specifically excluded from classification as hazardous wastes under RCRA, including domestic sewage, irrigation return flows, household waste, and nuclear waste; the latter is controlled via other legislation. The impetus for this effort at legislation and classification comes from several notable cases (Love Canal, New York; Bhopal, India; the Stringfellow Acid Pits in Glen Avon, California; and Seveso, Italy) which brought media and public attention to the need for identification and classification of dangerous substances, their effects on health and the environment, and the importance of knowing the potential risks associated with various wastes. A notable feature of the legislation is its attempt at defining terms so that professionals in the field and government officials will share the same vocabulary. For example, the difference between “toxic” and “hazardous” has been established: the former denotes the capacity of a substance to produce injury, and the latter denotes the probability that injury will result from the use of (or contact with) a substance. The RCRA legislation on hazardous waste is targeted toward larger generators of hazardous waste rather than small operations. The small generator is one who generates less than 2,205 lb (1,000 kg) per month; accumulates less than
2,205 lb (1,000 kg); produces wastes which contain no more than 2.2 lb (1 kg) of acutely hazardous waste; has containers no larger than 5.3 gal (20 l), or liners containing less than 22 lb (10 kg), of acutely hazardous waste; and has no more than 220 lb (100 kg) of residue or soil contaminated from a spill. The purpose of this exclusion is to enable the system of regulations to concentrate on the most egregious and sizeable of the entities that contribute to hazardous waste and thus provide the public with the maximum protection within the resources of the regulatory and legal systems. [Malcolm T. Hepworth]
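Because the exclusion is defined by simple quantity thresholds, it can be expressed as a rule check. The following sketch (Python) is illustrative only; the function and parameter names are ours, and the thresholds are just those quoted in this entry, not a complete restatement of the regulation.

```python
# Illustrative only: the quantity thresholds quoted in this entry,
# expressed in kilograms; the actual RCRA rules contain further
# conditions not captured here.
def is_small_generator(generated_kg_per_month: float,
                       accumulated_kg: float,
                       acutely_hazardous_kg: float,
                       spill_residue_kg: float) -> bool:
    """Rough check of the small-generator exclusion described above."""
    return (generated_kg_per_month < 1000   # less than 1,000 kg/month generated
            and accumulated_kg < 1000       # less than 1,000 kg accumulated
            and acutely_hazardous_kg <= 1   # no more than 1 kg acutely hazardous
            and spill_residue_kg <= 100)    # no more than 100 kg spill residue/soil

print(is_small_generator(800, 400, 0.5, 20))    # True: below every threshold
print(is_small_generator(1500, 400, 0.5, 20))   # False: monthly generation too high
```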
RESOURCES BOOKS Dawson, G. W., and B. W. Mercer. Hazardous Waste Management. New York: Wiley, 1986. Dominguez, G. S., and K. G. Bartlett. Hazardous Waste Management. Vol. 1, The Law of Toxics and Toxic Substances. Boca Raton: CRC Press, 1986. U.S. Environmental Protection Agency. Hazardous Waste Management: A Guide to the Regulations. Washington, DC: U.S. Government Printing Office, 1980. Wentz, C. A. Hazardous Waste Management. New York: McGraw-Hill, 1989.
Hazardous waste site remediation

The overall objective in remediating hazardous waste sites is the protection of human health and the environment by reducing risk. There are three primary approaches which can be used in site remediation to achieve acceptable levels of risk:
• the hazardous waste at a site can be contained to preclude additional migration and exposure;
• the hazardous constituents can be removed from the site to make them more amenable to subsequent ex situ treatment, whether in the form of detoxification or destruction;
• the hazardous waste can be treated in situ (in place) to destroy or otherwise detoxify the hazardous constituents.
Each of these approaches has positive and negative ramifications. Combinations of the three principal approaches may be used to address the various problems at a site. There is a growing menu of technologies available to implement each of these remedial approaches. Given the complexity of many of the sites, it is not uncommon to have treatment trains with a sequential implementation of various in situ and/or ex situ technologies to remediate a site. Hazardous waste site remediation usually addresses soils and groundwater. However, it can also include wastes, surface water, sediment, sludges, bedrock, buildings, and other man-made items. The hazardous constituents may be organic, inorganic and, occasionally, radioactive. They may
be elemental, ionic, dissolved, sorbed, liquid, gaseous, vaporous, solid, or any combination of these. Hazardous waste sites may be identified, evaluated, and, if necessary, remediated by their owners on a voluntary basis to reduce environmental and health effects or to limit prospective liability. However, in the United States, there are two far-reaching federal laws which may mandate entry into the remediation process: the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, also called the Superfund law), and the Resource Conservation and Recovery Act (RCRA). In addition, many of the states have their own programs concerning abandoned and uncontrolled sites, and there are other laws that involve hazardous site remediation, such as the cleanup of polychlorinated biphenyls (PCBs) under the auspices of the federal Toxic Substances Control Act (TSCA). Potential sites may be identified by their owners, by regulatory agencies, or, in some cases, by the public. Site evaluation is usually a complicated and lengthy process. In the federal Superfund program, sites at which there has been a release of one or more hazardous substances that might result in a present or potential future threat to human health and/or the environment are first evaluated by a Preliminary Assessment/Site Inspection (PA/SI). The data collected at this stage are evaluated, and a recommendation for further action may be formulated. The Hazard Ranking System (HRS) of the U.S. Environmental Protection Agency (EPA) may be employed to score the site with respect to the potential hazards it may pose and to determine whether it warrants inclusion on the National Priorities List (NPL) of sites most deserving of attention and resources. Regardless of the HRS score or NPL status, the EPA may require the parties responsible for the release (including the present property owner) to conduct a further assessment, in the form of the two-phase Remedial Investigation/Feasibility Study (RI/FS). The objective of the RI is to determine the nature and extent of contamination at and near the site. The RI data are next considered in a baseline risk assessment, which evaluates the potential threats to human health and the environment in the absence of any remedial action, considering both present and future conditions. Both exposure and toxicity are considered at this stage. The baseline risk assessment may support a decision of no action at a site. If remedial actions are warranted, the second phase of the RI/FS, an engineering Feasibility Study, is performed to allow for educated selection of an appropriate remedy. The final alternatives are evaluated on the basis of nine criteria in the federal Superfund program. The EPA selects the remedial action it deems to be most appropriate and describes it, and the process which led to its selection, in the Record of Decision (ROD). Public comments are solicited on the proposed clean-up plan before the ROD is
issued. There are also other public comment opportunities during the RI/FS process. Once the ROD is issued, the project moves to the Remedial Design/Remedial Action (RD/RA) phase unless there is a decision of no action. Upon design approval, construction commences. Then, after construction is complete, long-term operation, maintenance, and monitoring activities begin. For Superfund sites, the EPA may allow one or more of the responsible parties to conduct the RI/FS and RD/RA under its oversight. If potentially responsible parties are not willing to participate or are unable to be involved for technical, legal, or financial reasons, the EPA may choose to conduct the project with government funding and then later seek to recover costs in lawsuits against the parties. Other types of site remediation programs often replicate or approximate the approaches described above. Some states, such as Massachusetts, have well-defined programs, while others are less structured. Containment is one of the available treatment options. There are several reasons for using containment techniques. A primary reason is difficulty in excavating the waste or treating the hazardous constituents in place. This may be caused by construction and other man-made objects located over and in the site. Excavation could also result in uncontrollable releases at concentrations potentially detrimental to the surrounding area. At many sites, the low levels of risk posed, in conjunction with the relative costs of treatment technologies, may result in the selection of a containment remedy. One means of site containment is the use of an impermeable cap to reduce rainfall infiltration and to prevent exposure of the waste through erosion. Another means of containment is the use of cut-off walls to restrict or direct the movement of groundwater. In situ solidification can also be used to limit the mobility of contaminants. Selection among alternatives is very site specific and reflects such things as the site hydrogeology, the chemical and physical nature of the contamination, and the proposed land use. Of course, the resultant risk must be acceptable. As with any in situ approach, there is less control and knowledge of the performance and behavior of the technology than is possible with off-site treatment. Since the use of containment techniques leaves the waste in place, it usually results in long-term monitoring programs to determine whether the remediation remains effective. If a containment remedy were to fail, the site could require implementation of another type of technology. The ex situ treatment of hazardous waste provides the most control over the process and permits the most detailed assessments of its efficacy. Ex situ treatment technologies offer the biggest selection of options, but include an additional risk factor during transport. Examples of treatment options include incineration; innovative thermal destruction, such as infrared incineration; bioremediation; stabilization/solidification; soil washing; chemical extraction; chemical destruction; and thermal desorption. Another approach to categorizing the technologies available for hazardous waste site remediation is based upon their respective levels of demonstration. There are existing technologies, which are fully demonstrated and in routine commercial use; performance and cost information is available. Examples of existing technologies include slurry walls, caps, incineration, and conventional solidification/stabilization. The next level, innovative technology, has grown rapidly as the number of sites requiring remediation has grown. Innovative technologies are characterized by limited availability of cost and performance data; more site-specific testing is required before an innovative technology can be considered ready for use at a site. Examples of innovative technologies are vacuum extraction, bioremediation, soil washing/flushing, chemical extraction, chemical destruction, and thermal desorption. Vapor extraction and in situ bioremediation are expected to be the next innovative technologies to reach “existing” status as a result of the growing base of cost and performance information generated by their use at many hazardous waste sites. The last category is that of emerging technologies. These technologies are at a very early stage of development and therefore require additional laboratory and pilot-scale testing to demonstrate their technical viability; no cost or performance information is available. An example of an emerging technology is electrokinetic treatment of soils for metals removal. Groundwater contaminated by hazardous materials is a widespread concern. Most hazardous waste site remediations use a pump-and-treat approach as a first step. Once the groundwater has been brought to the surface, various treatment alternatives exist, depending upon the constituents present. In situ air sparging of the groundwater using pipes, wells, or curtains is also being developed for removal of volatile constituents. The vapor is either treated above ground with technologies for off-gas emissions, or biologically in the unsaturated or vadose zone above the aquifer. While this approach eliminates the costs and difficulties of treating the relatively large volumes of water (with relatively low contaminant concentrations) generated during pump-and-treat, it does not necessarily speed up remediation. Contaminated bedrock frequently serves as a source of groundwater or soil recontamination. Constituents with densities greater than water enter the bedrock at fractures, joints, or bedding planes. From these locations, the contamination tends to diffuse in all directions. After many years of accumulation, bedrock contamination may account for the majority of the contamination at a site. Currently, little can be done to remediate contaminated bedrock. Specially
designed vapor stripping applications have been proposed when the constituents of concern are volatile. Efforts are ongoing to develop means of enhancing the fracturing of the bedrock and thereby promoting removal. In all cases, the ultimate remediation will be driven by the diffusion of contaminants back out of the rock, a very slow process. The remediation of buildings contaminated with hazardous waste offers several alternatives. Given the cost of disposal of hazardous wastes, the limited disposal space available, and the volume of demolition debris, it is beneficial to determine the extent of contamination of construction materials. This contamination can then be removed through traditional engineering approaches, such as scraping or sand blasting. It is then only this reduced volume of material that requires treatment or disposal as hazardous waste. The remaining building can be reoccupied or disposed of as nonhazardous waste. See also Hazardous material; Hazardous waste siting; Solidification of hazardous materials; Vapor recovery system [Ann N. Clarke and Jeffrey L. Pintenich]
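The Superfund sequence described in this entry is strictly ordered, so it can be summarized as a simple pipeline. The sketch below (Python) is schematic only; the stage names are taken from the entry, while the representation itself is ours and carries no regulatory weight.

```python
from typing import Optional

# Schematic only: the federal Superfund stages named in this entry, in order.
SUPERFUND_STAGES = [
    "PA/SI",                     # Preliminary Assessment / Site Inspection
    "HRS scoring",               # Hazard Ranking System; may lead to NPL listing
    "RI",                        # Remedial Investigation: nature/extent of contamination
    "Baseline risk assessment",  # threats assuming no remedial action
    "FS",                        # Feasibility Study; remedies judged on nine criteria
    "ROD",                       # Record of Decision, after public comment
    "RD/RA",                     # Remedial Design / Remedial Action
    "O&M",                       # long-term operation, maintenance, monitoring
]

def next_stage(current: str) -> Optional[str]:
    """Return the stage that follows `current`, or None at the end."""
    i = SUPERFUND_STAGES.index(current)
    return SUPERFUND_STAGES[i + 1] if i + 1 < len(SUPERFUND_STAGES) else None

print(next_stage("ROD"))  # -> RD/RA
```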
RESOURCES BOOKS U.S. Environmental Protection Agency. Office of Emergency and Remedial Response. Guidance for Conducting Remedial Investigations and Feasibility Studies Under CERCLA. Washington, DC: U.S. Government Printing Office, 1988. U.S. Environmental Protection Agency. Office of Emergency and Remedial Response. Handbook: Remedial Action at Waste Disposal Sites. Washington, DC: U.S. Government Printing Office, 1985. U.S. Environmental Protection Agency. Office of Emergency and Remedial Response. Guidance on Remedial Actions for Contaminated Groundwater at Superfund Sites. Washington, DC: U.S. Government Printing Office, 1988. U.S. Environmental Protection Agency. Office of Environmental Engineering. Guide for Treatment Technologies for Hazardous Waste at Superfund Sites. Washington, DC: U.S. Government Printing Office, 1989. U.S. Environmental Protection Agency. Office of Solid Waste and Emergency Response. Innovative Treatment Technologies. Washington, DC: U.S. Government Printing Office, 1991. U.S. Environmental Protection Agency. Risk Reduction Engineering Laboratory. Handbook on In Situ Treatment of Hazardous Waste: Contaminated Soils. Washington, DC: U.S. Government Printing Office, 1990. U.S. Environmental Protection Agency. Technology Screening Guide for Treatment of CERCLA Soils and Sludges. Washington, DC: U.S. Government Printing Office, 1988.
Hazardous waste siting

Regardless of the specific technologies to be employed, there are many technical and nontechnical considerations to be addressed before hazardous waste can be treated or disposed of at a given location. The specific nature and relative importance of these considerations to successful siting reflect the chemodynamic behavior (i.e., the transport and fate of the waste and/or treated residuals in the environment after emission) as well as the specifics of the location and associated, proximate areas. Examples of these considerations are: the nature of the soil and hydrogeological features, such as the depth to and quality of groundwater; the quality, use, and proximity of surface waters; ambient air quality and meteorological conditions; and nearby critical environmental areas (wetlands, preserves, etc.), if any. Other considerations include surrounding land use; proximity of residences and other potentially sensitive receptors such as schools, hospitals, and parks; availability of utilities; and the capacity and quality of the roadway system. It is also critical to develop and obtain the timely approval of all appropriate local, state, and federal permits. Associated with these permits is the required documentation of financial viability, as established by escrowed closure funds, site insurance, etc. Site-specific standard operating procedures, as well as contingency plans for use in emergencies, are also required. Additionally, baseline and ongoing monitoring plans must be developed and implemented to determine whether there are any releases to, or general degradation of, the environment. One should also anticipate public hearings before permits are granted. Several states in the United States have specific regulations which restrict the siting of hazardous waste management facilities.
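A siting review of this kind amounts to working through a structured checklist, and one plausible representation is sketched below (Python). All field names and the pass/fail framing are ours; an actual review involves agency-specific criteria and professional judgment, not boolean flags.

```python
# Illustrative only: a checklist-style view of the siting considerations
# named in this entry; field names and structure are ours.
SITING_CHECKLIST = {
    "soil and hydrogeology characterized": True,
    "groundwater depth and quality acceptable": True,
    "surface waters protected": True,
    "air quality and meteorology reviewed": True,
    "no critical environmental areas impacted": False,  # wetlands, preserves
    "sensitive receptors at safe distance": True,       # schools, hospitals, parks
    "utilities and roadway capacity adequate": True,
    "permits approved (local, state, federal)": False,
    "financial viability documented": True,             # escrowed funds, insurance
    "contingency and operating procedures in place": True,
    "baseline and ongoing monitoring planned": True,
}

def outstanding(checklist: dict) -> list:
    """Items that must still be resolved before siting can proceed."""
    return [item for item, satisfied in checklist.items() if not satisfied]

print(outstanding(SITING_CHECKLIST))
```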
Haze

An aerosol in the atmosphere of sufficient concentration and extent to decrease visibility significantly when the relative humidity is below saturation is known as haze. Haze may contain dry particles or droplets or a mixture of both, depending on the precise value of the humidity. In the use of the word, there is a connotation of some degree of permanence. For example, a dust storm is not a haze, but the coarse particles may settle rapidly and leave a haze behind once the wind velocity drops. Human activity is responsible for many hazes. Enhanced emission of sulfur dioxide results in the formation of aerosols of sulfuric acid. In the presence of ammonia, which is excreted by most higher animals including humans, such emissions result in aerosols of ammonium sulfate and bisulfate. Organic hazes are part of photochemical smog, such as the smog often associated with Los Angeles, and they consist primarily of polyfunctional, highly oxygenated compounds with at least five carbon atoms. Such hazes can also form if air with an enhanced nitrogen oxide content meets air containing the natural terpenes emitted by vegetation. Not all hazes, however, are products of human activity. Natural hazes can result from forest fires, dust storms, and the natural processes that convert gaseous contaminants into particles for subsequent removal by precipitation or deposi-
tion to the surface or to vegetation. Still other hazes are of mixed origin, as noted above, and an event such as a dust storm can be enhanced by human-caused devegetation of soil. Though it may contain particles injurious to health, haze is not of itself a health hazard. It can have a significant economic impact, however, when tourists cannot see scenic views, or if it becomes sufficiently dense to inhibit aircraft operations. See also Air pollution; Air quality; Air quality criteria; Los Angeles Basin; Mexico City, Mexico [James P. Lodge Jr.]
RESOURCES BOOKS Husar, R. B. Trends in Seasonal Haziness and Sulfur Emissions Over the Eastern United States. Research Triangle Park, NC: U. S. Environmental Protection Agency, 1989.
PERIODICALS Husar, R. B., and W. E. Wilson. “Haze and Sulfur Emission Trends in the Eastern United States.” Environmental Science and Technology 27 (January 1993): 12–16. Malm, W. C. “Characteristics and Origins of Haze in the Continental United States.” Earth-Science Reviews 33 (August 1992): 1–36. Raloff, J. “Haze May Confound Effects of Ozone Loss.” Science News 141 (4 January 1992): 5.
Heat (stress) index

The heat index (HI) or heat stress index—sometimes called the apparent temperature or comfort index—is a temperature measure that takes into account the relative humidity. Based on human physiology and on clothing science, it measures how a given air temperature feels to the average person at a given relative humidity. The HI temperature is measured in the shade and assumes a wind speed of 5.6 mph (9 kph) and normal barometric pressure. At low relative humidity, the HI is less than or equal to the air temperature. At higher relative humidity, the HI exceeds the air temperature. For example, according to the National Weather Service’s (NWS) HI chart, if the air temperature is 70°F (21°C), the HI is 64°F (18°C) at 0% relative humidity and 72°F (22°C) at 100% relative humidity. At 95°F (35°C) and 55% relative humidity, the HI is 110°F (43°C). In very hot weather, humidity can raise the HI to extreme levels: at 115°F (46°C) and 40% relative humidity, the HI is 151°F (66°C). This is because humidity affects the body’s ability to regulate internal heat through perspiration. The body feels warmer when it is humid because perspiration evaporates more slowly; thus the HI is higher. The HI is used to predict the risk of physiological heat stress for an average individual. Caution is advised
at an HI of 80–90°F (27–32°C): fatigue may result with prolonged exposure and physical activity. An HI of 90–105°F (32–41°C) calls for extreme caution, since sunstroke, muscle cramps, and heat exhaustion are possible. Danger warnings are issued at HIs of 105–130°F (41–54°C), when sunstroke and heat exhaustion are likely and there is a potential for heat stroke. Category IV, extreme danger, occurs at HIs above 130°F (54°C), when heatstroke and sunstroke are imminent. Individual physiology influences how people are affected by high HIs. Children and older people are more vulnerable. Acclimatization (being used to the climate) can alleviate some of the danger. However, sunburn can increase the effective HI by slowing the skin’s ability to shed excess heat from blood vessels and through perspiration. Exposure to full sunlight can increase HI values by as much as 15°F (8°C). Winds, especially hot dry winds, can also increase the HI. In general, the NWS issues excessive heat alerts when the daytime HI reaches 105°F (41°C) and the nighttime HI stays above 80°F (27°C) for two consecutive days; however, these values depend somewhat on the region or metropolitan area. In cities, high HIs often mean increased air pollution. The concentration of ozone, the major component of smog, tends to rise at ground level as the HI increases, causing respiratory problems for many people. The National Center for Health Statistics estimates that heat exposure results in an average of 371 deaths annually in the United States. About 1,700 Americans died in the heat waves of 1980. In Chicago in 1995, more than 700 people died during a five-day heat wave when the nighttime HI stayed above 89°F (32°C). At higher temperatures, the air can hold more water vapor; thus humidity and HI values increase as the atmosphere warms. Since the late nineteenth century, the mean annual surface temperature of the earth has risen between 0.5 and 1.0°F (0.3 and 0.6°C). According to the National Aeronautics and Space Administration, the five-year mean temperature increased about 0.9°F (0.5°C) between 1975 and 1999, the fastest recorded rate of increase. In 1998 global surface temperatures were the warmest since the advent of reliable measurements, and the 1990s accounted for seven of the 10 warmest years on record. Nighttime temperatures have been increasing twice as fast as daytime temperatures. Greenhouse gases, including carbon dioxide, methane, nitrous oxide, and chlorofluorocarbons, increase the heat-trapping capabilities of the atmosphere. Evaporation from the ocean surfaces increased during the twentieth century, resulting in higher humidity that enhanced the greenhouse effect. It is projected that during the twenty-first century greenhouse gas concentrations will double or even quadruple from pre-industrial levels. Increased urbanization also contributes to global warming, as
buildings and roads hold in the heat. Climate simulations predict an average surface air temperature increase of 4.5–7°F (2.5–4°C) by 2100. This will increase the number of extremely hot days and, in temperate climates, double the number of very hot days, for an average increase in summer temperatures of 4–5°F (2–3°C). More heat-related illnesses and deaths will result. The National Oceanic and Atmospheric Administration projects that the HI could rise substantially in humid regions of the tropics and sub-tropics. Warm, humid regions of the southeastern United States are expected to experience substantial increases in the summer HI due to increased humidity, even though temperature increases may be smaller than in the continental interior. Predictions for the increase in the summer HI for the Southeast United States over the next century range from 8–20°F (4–11°C). [Margaret Alic Ph.D.]
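The chart values quoted in this entry can be approximated numerically. The sketch below (Python) uses the Rothfusz regression that underlies the NWS heat index chart; treat it as an approximation, valid roughly for air temperatures of 80°F (27°C) and above, not as the official NWS lookup.

```python
def heat_index_f(temp_f: float, rh_pct: float) -> float:
    """Approximate heat index (deg F) from air temperature (deg F) and
    relative humidity (%), via the Rothfusz regression behind the NWS
    chart. Reasonable only for temp_f of about 80 and above."""
    t, r = temp_f, rh_pct
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

# Example from the entry: 95 deg F at 55% relative humidity.
print(round(heat_index_f(95, 55)))  # -> 109; the NWS chart rounds to 110
```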
RESOURCES BOOKS Wigley, T. M. L. The Science of Climate Change: Global and U.S. Perspectives. Arlington, VA: Pew Center on Global Climate Change, 1999.
PERIODICALS Delworth, T. L., J. D. Mahlman, and T. R. Knutson. “Changes in Heat Index Associated with CO2-Induced Global Warming.” Climatic Change 43 (1999): 369–86.
OTHER Darling, Allan. “Heat Wave.” Internet Weather Source. June 27, 2000 [cited May 2002]. Davies, Kert. “Heat Waves and Hot Nights.” Ozone Action and Physicians for Social Responsibility. July 26, 2000 [cited May 2002]. National Assessment Synthesis Team. Climate Change Impacts on the United States: The Potential Consequences of Climate Variability and Change. Overview: Southeast. U.S. Global Change Research Program. March 5, 2002 [cited May 2002]. Union of Concerned Scientists. Early Warning Signs of Global Warming: Heat Waves and Periods of Unusually Warm Weather. 2000 [cited May 2002]. Trenberth, Kevin E. “The IPCC Assessment of Global Warming 2001.” FailSafe. Spring 2001 [cited May 2002].
ORGANIZATIONS National Weather Service, National Oceanic and Atmospheric Administration, U.S. Department of Commerce, 1325 East West Highway, Silver Spring, MD USA 20910; Physicians for Social Responsibility, 1875 Connecticut Avenue, NW, Suite 1012, Washington, DC USA 20009 (202) 667-4260, Fax: (202) 667-4201, Email: [email protected]; Union of Concerned Scientists, 2 Brattle Square, Cambridge, MA USA 02238 (617) 547-5552, Email: [email protected]
Heavy metals and heavy metal poisoning

Heavy metals are generally defined as environmentally stable elements of high specific gravity and atomic weight. They have such characteristics as luster, ductility, malleability, and high electrical and thermal conductivity. Whether based on their physical or chemical properties, the distinction between heavy metals and non-metals is not sharp. For example, arsenic, germanium, selenium, tellurium, and antimony possess chemical properties of both metals and non-metals. Defined as metalloids, they are often loosely classified as heavy metals. The category “heavy metal” is, therefore, somewhat arbitrary and highly non-specific, because it can refer to approximately 80 of the 103 elements in the periodic table. The term “trace element” is commonly used to describe substances which cannot be precisely defined but most frequently occur in the environment in concentrations of a few parts per million (ppm) or less. Only a relatively small number of heavy metals, such as cadmium, copper, iron, cobalt, zinc, mercury, vanadium, lead, nickel, chromium, manganese, molybdenum, silver, and tin, as well as the metalloids arsenic and selenium, are associated with environmental, plant, animal, or human health problems. While the chemical forms of heavy metals can be changed, they are not subject to chemical or biological destruction; after release into the environment they are persistent contaminants. Natural processes such as bedrock and soil weathering, wind and water erosion, volcanic activity, sea salt spray, and forest fires release heavy metals into the environment. While the origins of anthropogenic releases of heavy metals are lost in antiquity, they probably began as our prehistoric ancestors learned to recover metals such as gold, silver, copper, and tin from their ores and to produce bronze. The modern age of heavy metal pollution has its beginning with the Industrial Revolution. The rapid development of industry, intensive agriculture, transportation, and urbanization over the past 150 years, however, has been the precursor of today’s environmental contamination problems. Anthropogenic utilization has also increased heavy metal distribution by removing the substances from localized ore deposits and transporting them to other parts of the environment. Heavy metal by-products result from many activities, including ore extraction and smelting; fossil fuel combustion; dumping and landfilling of industrial wastes; exhaust from leaded gasoline; steel, iron, cement, and fertilizer production; and refuse and wood combustion. Heavy metal cycling has also increased through activities such as farming, deforestation, construction, dredging of harbors, and the disposal of municipal sludges and industrial wastes on land. Thus, anthropogenic processes, especially combustion, have substantially supplemented the natural atmospheric
emissions of selected heavy metals/metalloids such as selenium, mercury, arsenic, and antimony. These can be transported as gases or adsorbed on particles. Other metals, such as cadmium, lead, and zinc, are transported atmospherically only as particles. In either state heavy metals may travel long distances before being deposited on land or water. The heavy metal contamination of soils is a far more serious problem than either air or water pollution, because heavy metals are usually tightly bound by the organic components in the surface layers of the soil and may, depending on conditions, persist for centuries or millennia. Consequently, the soil is an important geochemical sink which accumulates heavy metals rapidly and usually depletes them very slowly by leaching into groundwater aquifers or bioaccumulating into plants. However, heavy metals can also be very rapidly translocated through the environment by erosion of the soil particles to which they are adsorbed or bound and redeposited elsewhere on the land or washed into rivers, lakes, or oceans to the sediment. The cycling, bioavailability, toxicity, transport, and fate of heavy metals are markedly influenced by their physico-chemical forms in water, sediments, and soils. Whenever a heavy metal-containing ion or compound is introduced into an aquatic environment, it is subjected to a wide variety of physical, chemical, and biological processes, including hydrolysis, chelation, complexation, redox, biomethylation, precipitation, and adsorption reactions. Often heavy metals experience a change in chemical form, or speciation, as a result of these processes, and so their distribution, bioavailability, and other interactions in the environment are also affected. The interactions of heavy metals in aquatic systems are complicated because of the possible changes due to many dissolved and particulate components and non-equilibrium conditions. For example, the speciation of heavy metals is controlled not only by their chemical properties but also by environmental variables such as: 1) pH; 2) redox potential; 3) dissolved oxygen; 4) ionic strength; 5) temperature; 6) salinity; 7) alkalinity; 8) hardness; 9) the concentration and nature of inorganic ligands such as carbonate, bicarbonate, sulfate, sulfides, and chlorides; 10) the concentration and nature of dissolved organic chelating agents such as organic acids, humic materials, peptides, and polyamino-carboxylates; 11) the concentration and nature of particulate matter with surface sites available for heavy metal binding; and 12) biological activity. In addition, various species of bacteria can oxidize arsenite to arsenate or reduce arsenate to arsenite, oxidize ferrous iron to ferric iron, or convert mercuric ion to elemental mercury or the reverse. Various enzyme systems in living organisms can biomethylate a number of heavy metals. While it had been known for at least 60 years that arsenic
and selenium could be biomethylated, microorganisms capable of converting inorganic mercury into monomethyl and dimethylmercury in lake sediments were not discovered until 1967. Since then, numerous heavy metals such as lead, tin, cobalt, antimony, platinum, gold, tellurium, thallium, and palladium have been shown to be biomethylated by bacteria and fungi in the environment. As environmental factors change the chemical reactivities and speciation of heavy metals, they influence not only the mobilization, transport, and bioavailability, but also the toxicity of heavy metal ions toward biota in both freshwater and marine ecosystems. The factors affecting the toxicity and bioaccumulation of heavy metals by aquatic organisms include: 1) the chemical characteristics of the ion; 2) solution conditions which affect the chemical form (speciation) of the ion; 3) the nature of the response, such as acute toxicity, bioaccumulation, or various types of chronic effects; 4) the nature and condition of the aquatic animal, such as age or life stage, species, or trophic level in the food chain. The extent to which most of the methylated metals are bioaccumulated and/or biomagnified is limited by the chemical and biological conditions and how readily the methylated metal is metabolized by an organism. At present, only methylmercury seems to be sufficiently stable to bioaccumulate to levels that can cause adverse effects in aquatic organisms. All other methylated metal ions are produced in very small concentrations and are degraded naturally faster than they are bioaccumulated. Therefore, they do not biomagnify in the food chain. The largest proportion of heavy metals in water is associated with suspended particles, which are ultimately deposited in the bottom sediments, where concentrations are orders of magnitude higher than those in the overlying or interstitial waters. The heavy metals associated with suspended particulates or bottom sediments are complex mixtures of: 1) weathering and erosion residues such as iron and aluminum oxyhydroxides, clays, and other aluminosilicates; 2) methylated and non-methylated forms in organic matter such as living organisms, bacteria and algae, detritus, and humus; 3) inorganic hydrous oxides and hydroxides, phosphates, and silicates; and 4) diagenetically produced iron and manganese oxyhydroxides in the upper layer of sediments and sulfides in the deeper, anoxic layers. In anoxic waters the precipitation of sulfides may control the heavy metal concentrations in sediments, while in oxic waters adsorption, absorption, surface precipitation, and coprecipitation are usually the mechanisms by which heavy metals are removed from the water column. Moreover, physical, chemical, and microbiological processes in the sediments often increase the concentrations of heavy metals in the pore waters, which are released to overlying waters by diffusion or as the result of consolidation and bioturbation.
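The sediment enrichment noted above is commonly summarized by an empirical distribution coefficient, Kd = Cs/Cw, the equilibrium ratio of the particle-bound concentration to the dissolved concentration. A minimal sketch (Python) follows; the numbers are purely illustrative, since real Kd values span several orders of magnitude depending on the speciation variables listed earlier.

```python
def sediment_conc_mg_per_kg(c_water_mg_per_l: float, kd_l_per_kg: float) -> float:
    """Equilibrium particle-bound concentration via a linear
    distribution coefficient: Cs = Kd * Cw."""
    return kd_l_per_kg * c_water_mg_per_l

# Illustrative numbers only: 0.001 mg/l dissolved metal with an
# assumed Kd of 1e5 l/kg gives 100 mg/kg on the sediment, i.e. the
# orders-of-magnitude enrichment described in this entry.
print(sediment_conc_mg_per_kg(0.001, 1e5))  # -> 100.0
```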
Transport by living organisms does not represent a significant mechanism for local movement of heavy metals. However, accumulation by aquatic plants and animals can lead to important biological responses. Even low environmental levels of some heavy metals may produce subtle and chronic effects in animal populations. Despite these adverse effects, at very low levels, some metals have essential physiological roles as micronutrients. Heavy metals such as chromium, manganese, iron, cobalt, molybdenum, nickel, vanadium, copper, and selenium are required in small amounts to perform important biochemical functions in plant and animal systems. In higher concentrations they can be toxic, but usually some biological regulatory mechanism is available by means of which animals can speed up their excretion or retard their uptake of excessive quantities. In contrast, non-essential heavy metals are primarily of concern in terrestrial and aquatic systems because they are toxic and persist in living systems. Metal ions commonly bond with sulfhydryl and carboxylic acid groups in amino acids, which are components of proteins (enzymes) or polypeptides. This increases their bioaccumulation and inhibits excretion. For example, heavy metals such as lead, cadmium, and mercury bind strongly with -SH and -SCH3 groups in cysteine and methionine and so inhibit the metabolism of the bound enzymes. In addition, other heavy metals may replace an essential element, decreasing its availability and causing symptoms of deficiency. Uptake, translocation, and accumulation of potentially toxic heavy metals in plants differ widely depending on soil type, pH, redox potential, moisture, and organic content. Public health officials closely regulate the quantities and effects of heavy metals that move through the agricultural food chain to be consumed by human beings. While heavy metals such as zinc, copper, nickel, lead, arsenic, and cadmium are translocated from the soil to plants and then into the animal food chain, the concentrations in plants are usually very low and generally not considered to be an environmental problem. However, plants grown on soils either naturally enriched or highly contaminated with some heavy metals can bioaccumulate levels high enough to cause toxic effects in the animals or human beings that consume them. Contamination of soils due to land disposal of sewage and industrial effluents and sludges may pose the most significant long-term problem. While cadmium and lead are the greatest hazard, other elements such as copper, molybdenum, nickel, and zinc can also accumulate in plants grown on sludge-treated land. High concentrations can, under certain conditions, cause adverse effects in animals and human beings that consume the plants. For example, when soil contains high concentrations of molybdenum and selenium,
they can be translocated into edible plant tissue in sufficient quantities to produce toxic effects in ruminant animals. Consequently, the U. S. Environmental Protection Agency has issued regulations which prohibit and/or tightly regulate the disposal of contaminated municipal and industrial sludges on land to prevent heavy metals, especially cadmium, from entering the food supply in toxic amounts. However, presently, the most serious known human toxicity is not through bioaccumulation from crops but from mercury in fish, lead in gasoline, paints and water pipes, and other metals derived from occupational or accidental exposure. See also Aquatic chemistry; Ashio, Japan; Atmospheric pollutants; Biomagnification; Biological methylation; Contaminated soil; Ducktown, Tennessee; Hazardous material; Heavy metals precipitation; Itai-Itai disease; Methylmercury seed dressings; Minamata disease; Smelters; Sudbury, Ontario; Xenobiotic [Frank M. D’Itri]
RESOURCES BOOKS Craig, P. J. “Metal Cycles and Biological Methylation.” The Handbook of Environmental Chemistry. Vol. 1, Part A, edited by O. H. Hutzinger. Berlin: Springer-Verlag, 1980. Förstner, U., and G. T. W. Wittmann. Metal Pollution in the Aquatic Environment. 2nd ed. Berlin: Springer-Verlag, 1981. Kramer, J. R., and H. E. Allen, eds. Metal Speciation: Theory, Analysis and Application. Chelsea, MI: Lewis, 1988.
Heavy metals precipitation

The principal technology for removing metal pollutants from wastewater is chemical precipitation. Chemical precipitation includes two secondary removal mechanisms, coprecipitation and adsorption. Precipitation processes are characterized by the solubility of the metal to be removed. They are generally designed to precipitate trace metals to their solubility limits and obtain additional removal by coprecipitation and adsorption during the precipitation reaction. There are many different treatment variables that affect these processes, including the optimum pH, the type of chemical treatments used, and the number of treatment stages, as well as the temperature and volume of wastewater and the chemical speciation of the pollutants to be removed. Each of these variables directly influences treatment objectives and costs. Treatability studies must be performed to optimize the relevant variables, so that goals are met and costs minimized. In theory, the precipitation process has two steps: nucleation, followed by particle growth. Nucleation is represented by the appearance of very small particle seeds, which are generally composed of 10–100 molecules. Particle growth
involves the addition of more atoms or molecules into this particle structure. The rate and extent of this process depend upon the temperature and the chemical characteristics of the wastewater, such as the concentration of the metal initially present and of other ionic species, which can compete with or form soluble complexes with the target metal species. Heavy metals are present in many industrial wastewaters; examples include cadmium, copper, lead, mercury, nickel, and zinc. In general, these metals can be converted to insoluble species by adding sulfide, hydroxide, or carbonate ions to a solution. For example, the precipitation of copper (Cu) hydroxide is accomplished by adjusting the pH of the water to above 8, using precipitant chemicals such as lime (Ca(OH)2) or sodium hydroxide (NaOH). Precipitation of metallic carbonate and sulfide species can be accomplished by the addition of calcium carbonate or sodium sulfide. The removal of coprecipitive metals during precipitation of the soluble metals is aided by the presence of solid ferric oxide, which acts as an adsorbent during the precipitation reaction. For example, hydroxide precipitation of ferric chloride can be used as the source of ferric oxide for coprecipitation and adsorption reactions. Precipitation, coprecipitation, and adsorption reactions generate suspended solids which must be separated from the wastewater; flocculation and clarification are employed to assist in solids separation. Treatment pH is an important variable which must be optimized to effect the maximum metal removal possible. Determining the optimal pH range to facilitate the maximum precipitation of metal is a difficult task. It is typically accomplished by laboratory studies, such as jar tests, rather than by theoretical calculations. Often the actual wastestream behaves differently, and the actual metal solubilities and corresponding optimal pH ranges can vary considerably from theoretical values. See also Heavy metals and heavy metal poisoning; Industrial waste treatment; Itai-Itai disease; Minamata disease; Sludge; Waste management [James W. Patterson]
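The pH dependence discussed above follows from the solubility product of the metal hydroxide: for a divalent metal forming M(OH)2, the theoretical dissolved concentration is [M2+] = Ksp/[OH-]^2, so each one-unit rise in pH cuts the residual metal a hundredfold. A minimal sketch (Python) follows; the Ksp value is illustrative, and, as the entry cautions, actual wastestreams can deviate considerably from these ideal values.

```python
def dissolved_metal_molar(ksp: float, ph: float) -> float:
    """Theoretical dissolved M2+ (mol/l) in equilibrium with solid
    M(OH)2: [M2+] = Ksp / [OH-]^2, with [OH-] = 10**(pH - 14)."""
    oh = 10.0 ** (ph - 14.0)
    return ksp / (oh * oh)

# Illustrative Ksp of about 2e-20 (order of magnitude for Cu(OH)2):
# raising the pH from 7 to 9 lowers the theoretical dissolved copper
# by a factor of 10,000, consistent with precipitating above pH 8.
for ph in (7.0, 8.0, 9.0):
    print(ph, dissolved_metal_molar(2e-20, ph))
```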
RESOURCES BOOKS Nemerow, N. L., and A. Dasgupta. Industrial and Hazardous Waste Treatment. New York: Van Nostrand Reinhold, 1991.
Robert Louis Heilbroner (1919 – )
American economist and author

An economist by profession, Robert Heilbroner is the author of a number of books and articles that put economic theories
and developments into historical perspective and relate them to contemporary social and political problems. He is especially noteworthy for his gloomy speculations on the future of a world confronted by the environmental limits to economic growth. Born in New York City in 1919, Heilbroner received a bachelor’s degree from Harvard University in 1940 and a Bronze Star for his service in World War II. In 1963 he earned a Ph.D. in economics from the New School for Social Research in New York, and in 1972 he became the Norman Thomas Professor of Economics there. His books include The Worldly Philosophers (1955), The Making of Economic Society (1962), Marxism: For and Against (1980), and The Nature and Logic of Capitalism (1985). He has also served on the editorial board of the socialist journal Dissent. In 1974, Heilbroner published An Inquiry into the Human Prospect, in which he argues that three “external challenges” confront humanity: the population explosion, the threat of war, and “the danger...of encroaching on the environment beyond its ability to support the demands made on it.” Each of these problems, he maintains, arises from the development of scientific technology, which has increased the human life span, multiplied weapons of destruction, and encouraged industrial production that consumes natural resources and pollutes the environment. Heilbroner believes that these challenges confront all economies, and that meeting them will require more than adjustments in economic systems. Societies will have to muster the will to make sacrifices. Heilbroner goes on to argue that persuading people to make these sacrifices may not be possible. Those living in one part of the world are not likely to give up what they have for the sake of those in another part, and people living now are not likely to make sacrifices for future generations. His reluctant conclusion is that coercion is likely to take the place of persuasion. Authoritarian governments may well supplant democracies because “the passage through the gantlet ahead may be possible only under governments capable of rallying obedience far more effectively than would be possible in a democratic setting. If the issue for mankind is survival, such governments may be unavoidable, even necessary.” Heilbroner wrote An Inquiry into the Human Prospect in 1972 and 1973, but his position had not changed by the end of the decade. In a revised edition written in 1979, he continued to insist upon the environmental limits to economic growth: “the industrialized capitalist and socialist worlds can probably continue along their present growth paths” for about 25 years, at which point “we must expect...a general recognition that the possibilities for expansion are limited, and that social and economic life must be maintained within fixed...material boundaries.” Heilbroner has published a number of books, including 21st Century Capitalism
Robert L. Heilbroner. (Photograph by Jose Pelaez. W. W. Norton. Reproduced by permission.)
and Visions of the Future. He also received the New York Council for the Humanities Scholar of the Year award in 1994. Heilbroner currently holds the position of Norman Thomas Professor of Economics, Emeritus, at the New School for Social Research in New York City. [Richard K. Dagger]
RESOURCES BOOKS Heilbroner, Robert L. An Inquiry into the Human Prospect. Rev. ed. New York: Norton, 1980. ———. The Making of an Economic Society. 6th ed. Englewood Cliffs, NJ: Prentice-Hall, 1980. ———. The Nature and Logic of Capitalism. New York: Norton, 1985. ———. Twenty-First Century Capitalism. Don Mills, Ont.: Anansi, 1992. ———. The Worldly Philosophers: The Lives, Times and Ideas of the Great Economic Thinkers. 6th ed. New York: Simon & Schuster, 1986. Straub, D., ed. Contemporary Authors: New Revision Series. Vol. 21. Detroit, MI: Gale Research, 1987.
Hells Canyon

Hells Canyon is a stretch of canyon on the Snake River between Idaho and Oregon. This canyon, deeper than the Grand Canyon and formed in ancient basalt flows, contains some of
the United States’ wildest rapids and has provided extensive recreational and scenic boating since the 1920s. The narrow canyon has also provided outstanding dam sites. Hells Canyon became the subject of nationwide controversy between 1967 and 1975, when environmentalists challenged hydroelectric developers over the last stretch of free-flowing water in the Snake River from the border of Wyoming to the Pacific. Historically Hells Canyon, over 100 mi (161 km) long, filled with rapids, and averaging 6,500 ft (1,983 m) deep, presented a major obstacle to travelers and explorers crossing the mountains and deserts of southern Idaho and eastern Oregon. Nez Percé, Paiute, Cayuse, and other Native American groups of the region had long used the area as a mild wintering ground with good grazing land for their horses. European settlers came for the modest timber resources and brought cattle and sheep to graze. As early as the 1920s travelers were arriving in this scenic area for recreational purposes, with the first river runners navigating the canyon’s rapids in 1928. By the end of the Depression the Federal Power Commission was urging regional utility companies to tap the river’s hydroelectric potential, and in 1958 the first dam was built in the canyon. Falling from the mountains in southern Yellowstone National Park through Idaho, and into the Columbia River, the Snake River drops over 7,000 vertical ft (2,135 m) in 1,000 mi (1,609 km) of river. This drop and the narrow gorges the river has carved presented excellent opportunities for dams, and by the end of the 1960s there were 18 major dams along the river’s course. By that time the river was also attracting great numbers of whitewater rafters and kayakers, as well as hikers and campers in the adjacent national forests. When a proposal was developed to dam the last free-running section of the canyon, protesters brought a suit that reached the United States Supreme Court. In 1967, Justice William O. Douglas led the majority in a decision directing the utilities to consider alternatives to the proposed dam. Hells Canyon became a national environmental issue. Several members of Congress flew to Oregon to raft the river. The Sierra Club and other groups lobbied vigorously. Finally, in 1975 President Gerald Ford signed a bill declaring the remaining stretch of the canyon a National Scenic Waterway, creating a 650,000-acre (260,000-ha) Hells Canyon National Recreation Area, and adding 193,000 acres (77,200 ha) of the area to the National Wilderness Preservation System. See also Wild and Scenic Rivers Act; Wild river [Mary Ann Cunningham Ph.D.]
RESOURCES BOOKS Collins, R. O., and R. Nash. The Big Drops. San Francisco: Sierra Club Books, 1978.
OTHER Hells Canyon Recreation Area. “Hells Canyon.” Washington, DC: U.S. Government Printing Office, 1988.
Hazel Henderson (1933 – )
English/American environmental activist and writer

Hazel Henderson is an environmental activist and futurist who has called for an end to current “unsustainable industrial modes” and urges redress for the “unequal access to resources which is now so dangerous, both ecologically and socially.” Born in Clevedon, England, Henderson immigrated to the United States after finishing high school; she became a naturalized citizen in 1962. After working for several years as a free-lance journalist, she married Carter F. Henderson, former London bureau chief of the Wall Street Journal, in 1957. Her activism began when she became concerned about air quality in New York City, where she was living. To raise public awareness, she convinced the FCC and television networks to broadcast the air pollution index with the weather report. She persuaded an advertising agency to donate their services to her cause and teamed up with a New York City councilman to co-found Citizens for Clean Air. Her endeavors were rewarded in 1967, when she was commended as Citizen of the Year by the New York Medical Society. Henderson’s career as an advocate for social and environmental reform took flight from there. She argued passionately against the spread of industrialism, which she called “pathological,” and decried the use of an economic yardstick to measure quality of life. Indeed, she termed economics “merely politics in disguise” and even “a form of brain damage.” Henderson believed that society should be measured by less tangible means, such as political participation, literacy, education, and health. “Per-capita income,” she felt, is “a very weak indicator of human well-being.” She became convinced that traditional industrial development wrought little but “ecological devastation, social unrest, and downright hunger...I think of development, instead,...as investing in ecosystems, their restoration and management.” Even the fundamental idea of labor should, Henderson argued, “be replaced by the concept of ‘Good Work’—which challenges individuals to grow and develop their faculties; to overcome their ego-centeredness by joining with others in common tasks; to bring forth those goods and services needed for a becoming existence; and to do all this with an ethical concern for the interdependence of all life forms...” To advance her theories, Henderson has published several books: Creative Alternative Futures: The End of Economics (1978), The Politics of the Solar Age: Alternatives to
The Politics of the Solar Age: Alternatives to Economics (1981), Building a Win-Win World (1996), Toward Sustainable Communities: Resources for Citizens and Their Governments (1998), and Beyond Globalization: Shaping a Sustainable Global Economy (1999). She has also contributed to several periodicals and lectured at colleges and universities.

In 1972 she co-founded the Princeton Center for Alternative Futures, of which she is still a director. She is a member of the board of directors of the Worldwatch Institute and the Council on Economic Priorities, among other organizations. In 1982 she was appointed Horace Albright Professor at the University of California at Berkeley. In 1996, Henderson was awarded the Global Citizen Award.

[Amy Strumolo]
RESOURCES
BOOKS
Henderson, H. Beyond Globalization: Shaping a Sustainable Global Economy. 1999.
———. Building a Win-Win World. 1996.
———. Creating Alternative Futures: The End of Economics. 1978.
———. The Politics of the Solar Age: Alternatives to Economics. 1981.
———. Toward Sustainable Communities: Resources for Citizens and Their Governments. 1998.
PERIODICALS
Henderson, H. "The Legacy of E. F. Schumacher." Environment 20 (May 1978): 30–36.
Holden, C. "Hazel Henderson: Nudging Society Off Its Macho Trip." Science 190 (November 28, 1975): 863–64.
"Telephone Interview with Hazel Henderson." Whole Earth Review (Winter 1988): 58–59.
Herbicide

Herbicides are chemical pesticides that are used to manage vegetation. Usually, herbicides are used to reduce the abundance of weedy plants, so as to release desired crop plants from competition; this is the context of most herbicide use in agriculture, forestry, and lawn management. Sometimes herbicides are used not to protect crops but to reduce the quantity or height of vegetation, for example along highways and transmission corridors. The reliance on herbicides to achieve these ends has increased greatly in recent decades, and the practice of chemical weed control appears to have become an entrenched component of the modern technological culture of humans, especially in agroecosystems.

The total use of pesticides in the United States in the mid-1980s was 957 million lb per year (434 million kg/year), used over 592,000 mi2 (1.5 million km2). Herbicides were the most widely used class, accounting for 68% of the total quantity (646 million lb per year [293 million kg/year]) and applied to 82% of the treated land
(484,000 mi2 per year [121 million ha/year]). Note that, especially in agriculture, the same land area can be treated numerous times each year with various pesticides.

A wide range of chemicals is used as herbicides, including chlorophenoxy acids, especially 2,4-D and 2,4,5-T, which have an auxin-like growth-regulating property and are selective against broad-leaved angiosperm plants; triazines such as atrazine, simazine, and hexazinone; chloroaliphatics such as dalapon and trichloroacetate; the phosphonoalkyl chemical glyphosate; and inorganics such as various arsenicals, cyanates, and chlorates.

A "weed" is usually considered to be any plant that interferes with the productivity of a desired crop plant or some other human purpose, even though in other contexts weed species may have positive ecological and economic values. Weeds exert this effect by competing with the crop for light, water, and nutrients. Studies in Illinois demonstrated an average reduction in the yield of corn or maize (Zea mays) of 81% in unweeded plots, while a 51% reduction was reported in Minnesota. Weeds also reduce the yield of small grains, such as wheat (Triticum aestivum) and barley (Hordeum vulgare), by 25–50%.

Because several herbicides are toxic to dicotyledonous weeds but not to grasses, herbicides are used most intensively in grain crops of the Gramineae. For example, in North America almost all of the area of maize cultivation is treated with herbicides. In part this is due to the widespread use of no-tillage cultivation, a system that reduces erosion and saves fuel. Since an important purpose of plowing is to reduce the abundance of weeds, the no-tillage system would be impracticable if not accompanied by herbicide use. The most important herbicides used in maize cultivation are atrazine, propachlor, alachlor, 2,4-D, and butylate. Most of the area planted to other agricultural grasses such as wheat, rice (Oryza sativa), and barley is also treated with herbicide, mostly with the phenoxy herbicides 2,4-D or MCPA.

The intended ecological effect of any pesticide application is to control a pest species, usually by reducing its abundance to below some economically acceptable threshold. In a few situations, this objective can be attained without important nontarget damage. For example, a judicious spot-application of a herbicide can allow a selective kill of large lawn weeds in a way that minimizes exposure of nontarget plants and animals. Of course, most situations where herbicides are used are more complex and less well controlled than this. Whenever a herbicide is broadcast-sprayed over a field or forest, a wide variety of on-site, nontarget organisms is affected,
and sprayed herbicide also drifts from the target area. These applications cause ecotoxicological effects directly, through toxicity to nontarget organisms and ecosystems, and indirectly, by changing habitat or the abundance of the food species of wildlife. These effects can be illustrated by the use of herbicides in forestry, with glyphosate used as an example.

The most frequent use of herbicides in forestry is for the release of small coniferous plants from the effects of competition with economically undesirable weeds. Usually the silvicultural use of herbicides occurs within the context of an intensive harvesting-and-management system, which may include clear-cutting, scarification, planting seedlings of a single desired species, spacing, and other practices. Glyphosate is a commonly used herbicide in forestry and agriculture. The typical spray rate in silviculture is about 2.2–4.9 lb (1–2.2 kg) of active ingredient per hectare, and the typical projection is for one to two treatments per forest rotation of 40–100 years.

Immediately after an aerial application in forestry, glyphosate residues in foliage are about six times higher than in litter on the forest floor, which is physically shielded from spray by overtopping foliage. The persistence of glyphosate residues is relatively short, with typical half-lives of two to four weeks in foliage and the forest floor, and up to eight weeks in soil. The disappearance of residues from foliage is mostly due to translocation and wash-off, but in the forest floor and soil glyphosate is immobile (and unavailable for root uptake or leaching) because of binding to organic matter and clay, and residue disappearance is due to microbial oxidation. Residues in oversprayed waterbodies tend to be small and short-lived. For example, two hours after a deliberate overspray on Vancouver Island, Canada, residues of glyphosate in stream water had risen to high levels; they then rapidly dissipated through flushing to only trace amounts 94 hours later.
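Half-lives like these imply simple first-order (exponential) decay, which makes residue projections straightforward. As a rough worked illustration using only the half-lives quoted above (the symbols are generic, and the three-week value is simply the middle of the quoted foliage range):

\[
C(t) = C_0 e^{-kt}, \qquad k = \frac{\ln 2}{t_{1/2}}
\]

With \(t_{1/2} = 3\) weeks, the fraction of the initial residue remaining after 12 weeks is \((1/2)^{12/3} = 1/16\), or about 6%.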
Because glyphosate is soluble in water, it has no propensity to bioaccumulate in organisms in preference to the ambient environment, or to occur in larger concentrations at higher levels of the food chain/web. This is in marked contrast to some other pesticides such as DDT, which is soluble in organic solvents but not in water and therefore has a strong tendency to bioaccumulate in the fatty tissues of organisms.

As a plant poison, glyphosate acts by inhibiting the metabolic pathway by which aromatic amino acids are synthesized. Only plants and some microorganisms have this pathway; animals obtain these amino acids from food. Consequently, glyphosate has a relatively small acute toxicity to animals, and there are large margins of toxicological safety in comparison with environmental exposures that are realistically expected during operational silvicultural sprays.

Acute toxicity of chemicals to mammals is often indexed by the oral dose required to kill 50% of a test population, usually of rats (i.e., the rat LD50). The LD50 value for pure glyphosate is 5,600 mg/kg, and its silvicultural formulation has a value of 5,400 mg/kg. Compare these to LD50s for some chemicals that many humans ingest voluntarily: nicotine 50 mg/kg, caffeine 366 mg/kg, acetylsalicylic acid (ASA) 1,700 mg/kg, sodium chloride 3,750 mg/kg, and ethanol 13,000 mg/kg. The documented risks of longer-term, chronic exposures of mammals to glyphosate are also small, especially considering the doses that might be received during an operational treatment in forestry.
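Because the LD50 is expressed per unit of body mass, it converts to an absolute dose by simple multiplication. As a hypothetical illustration (the 0.25-kg rat mass is an assumed round figure, not a value from this entry):

\[
\mathrm{dose}_{50} = \mathrm{LD}_{50} \times m = 5{,}600\ \mathrm{mg/kg} \times 0.25\ \mathrm{kg} = 1{,}400\ \mathrm{mg}
\]

That is, on the order of 1.4 g of pure glyphosate per quarter-kilogram rat, which is the sense in which the safety margins discussed above are described as large.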
Considering the relatively small acute and chronic toxicities of glyphosate to animals, it is unlikely that wildlife inhabiting sprayed clearcuts would be directly affected by a silvicultural application. However, glyphosate causes large habitat changes through species-specific effects on plant productivity and by changing habitat structure. Wildlife such as birds and mammals could therefore be secondarily affected through changes in vegetation and in the abundance of their arthropod foods. Such indirect effects also fall within the scope of ecotoxicology: they can affect the abundance and reproductive success of terrestrial and aquatic wildlife on a sprayed site, irrespective of a lack of direct, toxic effects.

Studies of the effects of habitat changes caused by glyphosate spraying have found relatively small effects on the abundance and species composition of wildlife. Much larger effects on wildlife are associated with other forestry practices, such as clear-cutting and the broadcast spraying of insecticides. For example, in a study of clearcuts sprayed with glyphosate in Nova Scotia, Canada, only small changes in avian abundance and species composition could be attributed to the herbicide treatment. However, such studies of bird abundance are conducted by enumerating territories, and the results cannot be interpreted in terms of reproductive success. Regrettably, there are not yet any studies of the reproductive success of birds breeding on clearcuts recently treated with a herbicide. This is an important deficiency in terms of understanding the ecological effects of herbicide spraying in forestry.

An important controversy related to herbicides focused on their military use during the Vietnam War. During this conflict, the United States Air Force broadcast-sprayed herbicides to deprive its enemy of food production and forest cover. More than 5,600 mi2 (14,503 km2) were sprayed at least once, about one-seventh of the area of South Vietnam. More than 55 million lb (25 million kg) of 2,4-D, 43 million lb (21 million kg) of 2,4,5-T, and 3.3 million lb (1.5 million kg) of picloram were used in this military program. The most frequently used herbicide was a 50:50 formulation of 2,4,5-T and 2,4-D known as Agent Orange. The rate of application was relatively large, averaging about 10 times the application rate for silvicultural purposes. About 86% of spray missions were targeted against forests, and the remainder against cropland.

As was the military intention, these spray missions caused great ecological damage. Opponents of the practice labeled it "ecocide," i.e., the intentional use of anti-environmental actions as a military tactic. The broader ecological effects included severe damage to mangrove and tropical forests and a great loss of wildlife habitat.

In addition, the Agent Orange used in Vietnam was contaminated by the dioxin isomer known as TCDD, an incidental by-product of the manufacturing process of 2,4,5-T. Using post-Vietnam manufacturing technology, contamination by TCDD in 2,4,5-T solutions can be kept to a concentration well below the maximum of 0.1 parts per million (ppm) set by the United States Environmental Protection Agency (EPA). However, the 2,4,5-T used in Vietnam was grossly contaminated with TCDD, with concentrations as large as 45 ppm occurring in Agent Orange and an average of about 2.0 ppm. Perhaps 243–375 lb (110–170 kg) of TCDD was sprayed with herbicides onto Vietnam. TCDD is well known as being extremely toxic, and it can cause birth defects and miscarriage in laboratory mammals, although, as is often the case, toxicity to humans is less well understood. There has been great controversy about the effects on soldiers and civilians exposed to TCDD in Vietnam, but epidemiological studies have been equivocal about the damages. It seems likely that the effects of TCDD added little to human mortality or to the direct ecological effects of the herbicides that were sprayed in Vietnam.

A preferable approach to pesticide use is integrated pest management (IPM). In the context of IPM, pest control is achieved by employing an array of complementary approaches, including the use of natural predators, parasites, and other biological controls; the use of pest-resistant varieties of crops; environmental modifications to reduce the optimality of pest habitat; careful monitoring of pest abundance; and a judicious use of pesticides, when necessary, as a component of the IPM strategy. A successful IPM program can greatly reduce, but not necessarily eliminate, the reliance on pesticides. With specific relevance to herbicides, more research into organic systems and into procedures that are pest-specific is required for the development of IPM systems. Examples of pest-specific practices are the biological control of certain introduced weeds. For example:
St. John’s wort (Hypericum perforatum) is a serious weed of pastures of the United States Southwest because it is
toxic to cattle, but it was controlled by the introduction in 1943 of two herbivorous leaf beetles; and the prickly pear cactus (Opuntia spp.) became a serious weed of Australian rangelands after it was introduced as an ornamental plant, but it has been controlled by the release of the moth Cactoblastis cactorum, whose larvae feed on the cactus.

Unfortunately, effective IPM systems have not yet been developed for most weed problems for which herbicides are now used. Until there are alternative, pest-specific methods to achieve an economically acceptable degree of control of weeds in agriculture and forestry, herbicides will continue to be used for that purpose.

See also Agricultural chemicals
[Bill Freedman Ph.D.]
RESOURCES
BOOKS
Freedman, B. Environmental Ecology. 2nd ed. San Diego, CA: Academic Press, 1995.
McEwen, F. L., and G. R. Stephenson. The Use and Significance of Pesticides in the Environment. New York: Wiley, 1979.
PERIODICALS
Freedman, B. "Controversy Over the Use of Herbicides in Forestry, With Particular Reference to Glyphosate Usage." Environmental Carcinogenesis Reviews C8 (1991): 277–286.
Pimentel, D., et al. "Environmental and Economic Effects of Reducing Pesticide Use." BioScience 41 (1991): 402–409.
Heritage Conservation and Recreation Service

The Heritage Conservation and Recreation Service (HCRS) was created in 1978 as an agency of the U.S. Department of the Interior (Secretarial Executive Order 3017) to administer the National Heritage Program initiative of President Carter. The new agency was an outgrowth of and successor to the former Bureau of Outdoor Recreation. The HCRS resulted from the consolidation of some 30 laws, executive orders, and interagency agreements that provided federal funds to states, cities, and local community organizations to acquire, maintain, and develop historic, natural, and recreation sites.

HCRS focused on the identification and protection of the nation's significant natural, cultural, and recreational resources. It classified and established registers for heritage resources, formulated policies and programs for their preservation, and coordinated federal, state, and local resource and recreation policies and actions. In February 1981 HCRS was abolished as an agency, and its responsibilities were transferred to the National Park Service.
Hetch Hetchy Reservoir

The Hetch Hetchy Reservoir, located on the Tuolumne River in Yosemite National Park, was built to provide water and hydroelectric power to San Francisco. Its creation in the early 1900s led to one of the first conflicts between preservationists and those favoring utilitarian use of natural resources. The controversy spanned the presidencies of Roosevelt, Taft, and Wilson.

A prolonged conflict between San Francisco and its only water utility, the Spring Valley Water Company, drove the city to search for an independent water supply. After surveying several possibilities, the city decided to build a dam and reservoir in the Hetch Hetchy Valley because the river there could supply the most abundant and purest water. This option was also the least expensive, since the city planned to use the dam to generate hydroelectric power. It would also provide an abundant supply of irrigation water for area farmers, as well as the recreation potential of a new lake.

The city applied to the U.S. Department of the Interior in 1901 for permission to construct the dam, but the request was not approved until 1908. The department then turned the issue over to Congress to work out an exchange of land between the federal government and the city. Congressional debate spanned several years and produced a number of bills. Part of the controversy involved the Right of Way Act of 1901, which gave Congress power to grant rights of way through government lands; some claimed this was designed specifically for the Hetch Hetchy project.

Opponents of the project likened the valley to Yosemite on a smaller scale. They wanted to preserve its high cliff walls, waterfalls, and diverse plant species. One of the best-known opponents, John Muir, described the Hetch Hetchy Valley as "a grand landscape garden, one of Nature's rarest and most precious mountain temples." Campers and mountain climbers fought to save the campgrounds and trails that would be flooded. As the argument continued, often played out in newspapers and other public forums, overwhelming national opinion appeared to favor the preservation of the valley.

Despite this public support, a close vote in Congress led to the passage of the Raker Act, allowing the O'Shaughnessy Dam and Hetch Hetchy Reservoir to be constructed. President Woodrow Wilson signed the bill into law on December 19, 1913. The Hetch Hetchy Reservoir was completed in 1923 and still supplies water and electric power to San Francisco.
In 1987, Secretary of the Interior Donald Hodel created a brief controversy when he suggested tearing down O'Shaughnessy Dam.

See also Economic growth and the environment; Environmental law; Environmental policy

[Teresa C. Donkin]
RESOURCES
BOOKS
Jones, Holway R. John Muir and the Sierra Club: The Battle for Yosemite. San Francisco: Sierra Club, 1965.
Nash, Roderick. "Conservation as Anxiety." In The American Environment: Readings in the History of Conservation. 2nd ed. Reading, MA: Addison-Wesley, 1976.
Heterotroph

A heterotroph is an organism that derives its nutritional carbon and energy by oxidizing (i.e., decomposing) organic materials. The higher animals, fungi, actinomycetes, and most bacteria are heterotrophs. These are the biological consumers that eventually decompose most of the organic matter on the earth. The decomposition products are then available for chemical or biological recycling. See also Biogeochemical cycles; Oxidation reduction reactions
High-grading (mining, forestry)

The practice of high-grading can be traced back to the early days of the California gold rush, when miners would sneak into claims belonging to others and steal the most valuable pieces of ore. The practice remains essentially unchanged today: an individual or corporation enters an area and selectively mines or harvests only the most valuable specimens before moving on to a new area. High-grading is most prevalent in the mining and timber industries. It is not uncommon to walk into a forest, particularly an old-growth forest, and find the oldest and finest specimens marked for harvesting. See also Forest management; Strip mining
High-level radioactive waste

High-level radioactive waste consists primarily of the byproducts of nuclear power plants and defense activities. Such wastes are highly radioactive and often decay very slowly; they may release dangerous levels of radiation for hundreds or thousands of years. Most high-level radioactive wastes must be handled by remote control by workers protected by heavy shielding. They therefore present a serious health and environmental hazard. No entirely satisfactory method for disposing of high-level wastes has yet
been devised. Currently, the best approach seems to involve immobilizing the wastes in a glass-like material and then burying them deep underground. See also Low-level radioactive waste; Radioactive decay; Radioactive pollution; Radioactive waste management; Radioactivity
High-solids reactor

Solid waste disposal is a serious problem in the United
States and other developed countries. Solid waste can constitute valuable raw materials for commercial and industrial operations, however, and one of the challenges facing scientists is to develop an economically efficient method for utilizing it.

Although the concept of bacterial waste conversion is simple, achieving an efficient method for putting the technique into practice is difficult. The main problem is that efficiency of conversion requires increasing the ratio of solids to water in the mixture, and this makes mixing more difficult mechanically. The high-solids reactor was designed by scientists at the Solar Energy Research Institute (SERI) to solve this problem. It consists of a cylindrical tube on a horizontal axis and an agitator shaft running through the middle of it, which carries a number of Teflon-coated paddles oriented at 90 degrees to each other. The pilot reactors operated by SERI had a capacity of 2.6 gal (10 l). SERI scientists modeled the high-solids reactor after similar devices used in the plastics industry to mix highly viscous materials.

With the reactor, they have been able to process materials with 30–35% solids content, while existing reactors normally handle wastes with 5–8% solids content. With the higher solids content, SERI reactors have achieved a yield of methane five to eight times greater than that obtained from conventional mixers. Researchers hope to be able to process wastes with solids content ranging anywhere from zero to 100%. They believe that they can eventually achieve 80% efficiency in converting biomass to methane.

The most obvious application of the high-solids reactor is the processing of municipal solid wastes. Initial tests were carried out with sludge obtained from sewage treatment plants in Denver, Los Angeles, and Chicago. In all cases, conversion of solids in the sludge to methane was successful, and other applications of the reactor are also being considered. For example, it can be used to leach uranium from mine wastes: anaerobic bacteria in the reactor will reduce the uranium in the wastes, and the uranium will then be adsorbed onto the bacteria or onto ion-exchange resins. The use of the reactor to clean contaminated soil is also being considered, in the hope that this will provide a desirable alternative to current processes for cleaning soil,
which create large volumes of contaminated water. See also Biomass fuel; Solid waste incineration; Solid waste recycling and recovery; Solid waste volume reduction; Waste management [David E. Newton]
RESOURCES
PERIODICALS
"High Solids Reactor May Reduce Capital Costs." Bioprocessing Technology (June 1990).
"SERI Looking for Partners for Solar-Powered High Solids Reactor." Waste Treatment Technology News (October 1990).
High-voltage power lines see Electromagnetic field
High-yield crops see Borlaug, Norman E.; Consultative Group on International Agricultural Research
Hiroshima, Japan

Hiroshima is a beautiful modern city located near the southwestern tip of the main Japanese island of Honshu. Prior to the end of World War II it was a military center, housing the headquarters of the Japanese southern army and a military depot. The city is now a manufacturing center with a major university and medical school. It is most profoundly remembered because it was the first city to be exposed to the devastation of an atomic bomb.

At 8:15 on the morning of August 6, 1945, a single B-29 bomber flying in from Tinian Island in the Marianas released the bomb, which exploded about 2,000 ft (610 m) above the city. The target was a "T"-shaped bridge near the city center. The only surviving building in the city center after the atomic bomb blast was a domed cement building at ground zero, just a few yards from the bridge. An experimental bomb developed by the Manhattan Project had been exploded at Alamogordo, New Mexico, only a few weeks earlier, with an explosive force of roughly 20,000 tons of TNT; the Hiroshima uranium-235 bomb exploded with a force of about 15,000 tons of TNT.

The immediate effect of the bomb was to destroy, by blast, winds, and fire, an area of 4.4 mi2 (11 km2). Two-thirds of the city was destroyed. A portion of Hiroshima was protected from the blast by hills, and this is all that remains of the old city.
Destruction of human lives was caused immediately by the blast force of the bomb, or later by burns and radiation sickness. Seventy-five thousand people were killed or fatally wounded, and an equal number of survivors were injured. Nagasaki, to the south and west of Hiroshima, was bombed on August 9, 1945, with much loss of life. The bombing of these two cities brought World War II to a close.

The lessons that Hiroshima (and Nagasaki) teach are the horrors of war, with its random killing of civilian men, women, and children. That is the major lesson: war is horrible in its destruction of innocent lives. The why of Hiroshima should be taken in the context of the battle for Okinawa, which occurred only weeks before. American forces suffered 12,000 dead and 36,000 wounded in the battle for that small island 350 mi (563.5 km) from the mainland of Japan; the Japanese were reported to have lost 100,000 men. The determination of the Japanese to defend their homeland was well known, and it was estimated that an invasion of Japan would cost no less than 500,000 American lives. Japanese casualties were expected to be larger. It was the military judgment of the American president that a swift termination of the war would save more lives, both American and Japanese, than it would cost. Whether this rationale for the atomic bombing of Hiroshima was correct, i.e., whether more people would have died if Japan had been invaded, will never be known. However, it certainly is a fact that the war came to a swift end after the bombing of the two cities.

The second lesson to be learned from Hiroshima is that radiation exposure is hazardous to human health: radiation damage results in radiation sickness and increased cancer risk. It had been known since the discovery of x rays at the turn of the century that radiation has the potential to cause cancer. However, the thousands of survivors at Hiroshima and Nagasaki were to become the largest group ever studied for radiation damage. The Atomic Bomb Casualty Commission, now referred to as the Radiation Effects Research Foundation (RERF), was established to monitor the health effects of radiation exposure and has studied these survivors since the end of World War II. The RERF has reported a 10–15 times excess of all types of leukemia among the survivors compared with populations not exposed to the bomb. The leukemia excess peaked four to seven years after exposure but still persists among the survivors. All forms of cancer tended to develop more frequently in heavily irradiated individuals, especially children under the age of 10 at the time of exposure. Thyroid cancer was also increased in these child survivors of the bomb.

War is destructive to human life, and the particular kind of destruction visited on Hiroshima by the atomic bomb has continued to be relentlessly destructive long after the blast. The city of Hiroshima is today known as a Peace City.

[Robert G. McKinnell]
A painting by Yasuko Yamagata depicting the Japanese city of Hiroshima after the atomic bomb was dropped on it. (Reproduced by permission.)
Holistic approach

First formulated by Jan Smuts, holism has been traditionally defined as a philosophical theory that states that the determining factors in nature are wholes which are irreducible to the sum of their parts, and that the evolution of the universe is the record of the activity and making of such wholes. More generally, it is the concept that wholes cannot be analyzed into parts or reduced to discrete elements without unexplainable residuals. Holism may also be defined by what it is not: it is not synonymous with organicism, since holism does not require an entity to be alive or even a part of living processes; neither is holism confined to spiritual mysticism, inaccessible to scientific methods or study.

The holistic approach in ecology and environmental science derives from the idea proposed by Harrison Brown that "a precondition for solving [complex] problems is a realization that all of them are interlocked, with the result that they cannot be solved piecemeal." For some scholars holism is the rationale for the very existence of ecology.
As David Gates notes, "the very definition of the discipline of ecology implies a holistic study."

The holistic approach has been successfully applied to environmental management. The United States Forest Service, for example, has implemented a multi-level approach to management that takes into account the complexity of forest ecosystems, rather than the traditional focus on isolated incidents or problems.

Some people believe that a holistic approach to nature and the world will counter the effects of "reductionism"—excessive individualism, atomization, a mechanistic worldview, objectivism, materialism, and anthropocentrism. Advocates of holism claim that its emphasis on connectivity, community, processes, networks, participation, synthesis, systems, and emergent properties will undo the "ills" of reductionism. Others warn that a balance between reductionism and holism is necessary. American ecologist Eugene Odum maintained that "ecology must combine holism with reductionism if applications are to benefit society." Parts and wholes, at the macro- and micro-level, must be understood.
The basic lesson of a combined and complementary parts-whole approach is that every entity is both part and whole—an idea reinforced by Arthur Koestler's concept of a holon. A holon is any entity that is both a part of a larger system and itself a system made up of parts. It is essential to recognize that holism can include the study of any whole, the entirety of any individual in all its ramifications, without implying any organic analogy other than organisms themselves. A holistic approach alone, especially in its extreme form, is unrealistic, condemning scholars to an unproductive wallowing in unmanageable complexity. Holism and reductionism are both needed for assessing and understanding an increasingly complex world. See also Environmental ethics

[Gerald L. Young Ph.D.]
RESOURCES
BOOKS
Bowen, W. "Reductions and Holism." In Thinking About Nature: An Investigation of Nature, Value and Ecology. Athens: University of Georgia Press, 1988.
Johnson, L. E. "Holism." In A Morally Deep World: An Essay on Moral Significance and Environmental Ethics. Cambridge: Cambridge University Press, 1991.
Savory, A. Holistic Resource Management. Covelo, CA: Island Press, 1988.
PERIODICALS
Krippner, S. "The Holistic Paradigm." World Futures 30 (1991): 133–40.
Marietta Jr., D. E. "Environmental Holism and Individuals." Environmental Ethics 10 (Fall 1988): 251–58.
McCarty, D. C. "The Philosophy of Logical Wholism." Synthese 87 (April 1991): 51–123.
Van Steenbergen, B. "Potential Influence of the Holistic Paradigm on the Social Sciences." Futures 22 (December 1990): 1071–83.
Homeostasis

Humans, all other organisms, and even ecological systems live in an environment of constant change. The persistently shifting, modulating milieu would not permit survival were it not for the capacity of biological systems to respond to this constant flux by maintaining a relatively stable internal environment. An example taken from mammalian biology is body temperature, which appears to be "fixed" at approximately 98.6°F (37°C). While humans can be exposed to extreme summer heat, and arctic mammals survive intense cold, body temperature remains constant within very narrow limits. Homeostasis is the sum total of all the biological responses that provide internal equilibrium and assure the maintenance of conditions for survival. The human species has a greater variety of living conditions than any other organism.
The ability of humans to live and reproduce in such diverse circumstances is due to a combination of homeostatic mechanisms coupled with cultural (behavioral) responses.

The scientific concept of homeostasis emerged from two scientists: Claude Bernard, a French physiologist, and Walter Bradford Cannon, an American physician. Bernard contrasted the external environment, which surrounds an organism, with the internal environment of that organism. He was, of course, aware that the external environment fluctuated considerably, in contrast to the internal environment, which remained remarkably constant. He is credited with the enunciation of the constancy of the internal environment ("La fixité du milieu intérieur...") in 1859. Bernard believed that the survival of an organism depended upon this constancy, and he observed it not only in temperature control but in the regulation of all of the systems that he studied. The concept of the stable "milieu intérieur" has been accepted and extended to the many organ systems of all higher vertebrates. This precise control of the internal environment is effected through hormones, the autonomic nervous system, endocrine glands, and so on.

The term "homeostasis," derived from the Greek homoios, meaning similar, and stasis, meaning to stand, suggests an internal environment that remains relatively similar or the same through time. The term was devised by Cannon in 1929 and used many times subsequently. Cannon noted that, in addition to temperature, there were complex controls involving many organ systems that maintained internal stability within narrow limits. When those limits are exceeded, there is a reaction in the opposite direction that brings the condition back to normal; the reaction returning the system to normal is referred to as negative feedback. Both Bernard and Cannon were concerned with human physiology. Nevertheless, the concept of homeostasis is applied to all levels of biological organization, from the molecular level to ecological systems, including the entire biosphere.

Engineers design self-controlling machines known as servomechanisms, in which a sensing device feeds a signal to an amplifier that controls a servomotor, which in turn runs the operation of the device. Examples of such devices are the thermostats that control furnace heat in a home and the more complicated automatic pilots of aircraft. While these human-made servomechanisms have similarities to biological homeostasis, they are not considered here.
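The logic shared by the thermostat and the biological feedback loops described below is easy to see in a few lines of code. The sketch that follows is a minimal illustration of negative feedback in the abstract, not a physiological model; the setpoint, gain, and starting value are arbitrary assumed numbers:

```python
# Minimal sketch of a negative-feedback loop holding a variable near a setpoint.
# All numbers are illustrative assumptions, not physiological measurements.

SETPOINT = 37.0   # the "normal" value to be maintained (cf. 37 degrees C)
GAIN = 0.5        # strength of the corrective response

value = 34.0      # start displaced below the setpoint (a "cold stress")

for step in range(10):
    error = SETPOINT - value        # sensor: how far from normal?
    correction = GAIN * error       # effector: response proportional to the error
    value += correction             # the response opposes the deviation
    print(f"step {step}: value = {value:.2f}")

# The value converges toward 37.0. An overshoot would produce an error of the
# opposite sign, and hence an opposing correction -- the defining property of
# negative feedback, whether in a thermostat or in thyroxine regulation.
```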
Environmental Encyclopedia 3 hypothalamus, causes the anterior pituitary to release thyroid stimulating hormone which, in turn, causes the thyroid gland to increase production of thyroxine which results in increased metabolism and therefore heat. Sympathetic nerves from the hypothalamus stimulate the adrenal medulla to secrete epinephrine and norepinephrine into the blood which also increases body metabolism and heat. Increased muscle activity will generate heat and that activity can be either voluntary (stamping the feet for instance) or involuntary (shivering). Since heat is dissipated via body surface blood vessels, the nervous system causes surface vasoconstriction to decrease that heat loss. Further, the small quantity of blood that does reach the surface of the body, where it is chilled, is reheated by countercurrent heat exchange resulting from blood vessels containing cold blood from the limbs running adjacent to blood vessels from the body core which contain warm blood. The chilled blood is prewarmed prior to returning to the body core. A little noted response to chilling is the voluntary reaching for a jacket or coat to minimize heat loss. The body responds with opposite results when excessive heat is encountered. The individual tends to shed unnecessary clothing, and activity is reduced to minimize metabolism. Vasodilation of superficial blood vessels allows for radiation of heat. Sweat is produced, which by evaporation reduces body heat. It is clear that the maintenance of body temperature is closely controlled by a complex of homeostasis mechanisms. Each step in temperature regulation is controlled by negative feedback. As indicated above, with exposure to cold the hypothalamus, through a series of steps, induces the synthesis and release of thyroxine by the thyroid gland. What was not indicated above was the fact that elevated levels of thyroxine control the level of activity of the thyroid by negative feedback inhibition of thyroid stimulating hormone. An appropriate level of thyroid hormone is thus maintained. In contrast, with inadequate thyroxine, more thyroid stimulating hormone is produced. Negative feedback controls assure that any particular step in homeostasis does not deviate too much from the normal. Historically, biologists have been particularly impressed with mammalian and human homeostasis. Lower vertebrates have received less attention. However, while internal physiology may vary more in a frog than in a human, there are mechanisms which assure the survival of frogs. For instance, when the ambient temperature drops significantly in the autumn in northern latitudes, leopard frogs move into lakes or rivers which do not freeze. Moving into lakes and rivers is a behavioral response to a change in the external environment which results in internal temperature stability. The metabolism and structure of the frog is inadequate to protect the frog from freezing, but the specific heat of the water is such that freezing does not occur except at the
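One classical way to formalize this predator-prey feedback is the Lotka-Volterra model, given here as a minimal sketch; the model and its symbols are standard textbook material, not taken from this entry:

\[
\frac{dN}{dt} = rN - aNP, \qquad \frac{dP}{dt} = baNP - mP
\]

Here \(N\) and \(P\) are the numbers of prey and predators, \(r\) is the prey growth rate, \(a\) the rate of predation, \(b\) the efficiency with which consumed prey are converted into new predators, and \(m\) the predator mortality rate. A rise in \(P\) depresses \(N\), which in turn depresses \(P\): the negative feedback that keeps both populations fluctuating between limits.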
Many organisms encounter a negative feedback on growth rate with crowding. This density-dependent population control has been studied in larval frogs, as well as many other organisms, where excretory products seem to specifically inhibit the crowded species but not other organisms in the same environment. Even with adequate food, high-density culture of laboratory mice results in negative feedback on reproductive potential, with abnormal gonad development and delayed sexual maturity.

Density-independent factors affecting populations are important in population control but would not be considered homeostasis. Drought is such a factor, and its effects can be contrasted with those of crowding. Populations of tadpoles will drop catastrophically when breeding ponds dry. Instead of fluctuating between limits (with controls), all individuals are affected alike (i.e., they die). The area must be repopulated with immigrants at a subsequent time, and that migration can be considered a population homeostatic control: the inward migration results in maintenance of the population within the geographic area and aids in the survival of the species.

[Robert G. McKinnell]
RESOURCES
BOOKS
Hardy, R. N. Homeostasis. London: Edward Arnold, Ltd., 1976.
Langley, L. L. Homeostasis. New York: Reinhold Publishing Co., 1965.
Tortora, G. J., and N. P. Anagnostakos. Principles of Anatomy and Physiology. 5th ed. New York: Harper and Row, 1987.
Homestead Act (1862)

The Homestead Act was signed into law in 1862. It offered, on a vast scale, free homesteads on unappropriated public lands. Any citizen (or alien who had filed a declaration of intent to become a citizen) who had reached the age of 21 or was the head of a family could acquire title to a stretch of public land of up to 160 acres (65 ha) after living on it and farming it for five years. The only payment required was administrative fees. The settler could also obtain the land without the requirement of five years' residence and cultivation, against payment of $1.25 per acre. With the advent of machinery to mechanize farm labor, 160-acre (65-ha) tracts soon became uneconomical to operate, and Congress modified the original act to allow acquisition of larger tracts. The act remained in effect until it was repealed in 1976, with homesteading permitted in Alaska for a decade longer.

The Homestead Act was designed to speed development of the United States and to achieve an equitable distribution of wealth. Poor settlers, who lacked the capital to buy land, were now able to start their own farms. Indeed, the act contributed greatly to the growth and development of the country, particularly in the period between the Civil War and World War I, and it did much to speed settlement west of the Mississippi River. In all, well over a quarter of a billion acres of land has been distributed under the Homestead Act and its amendments.

However, only a small percentage of land granted under the act between 1862 and 1900 was in fact acquired by homesteaders. According to estimates, at most 1 of every 6 acres (0.4 of every 2.4 ha), and possibly only 1 in 9 acres (0.4 in 3.6 ha), passed into the hands of family farmers. The railroad companies and land speculators obtained the bulk of the land, sometimes through gross fraud using dummy entrants. Moreover, the railroads often managed to get the best land, while the homesteaders, ignorant of farming conditions on the Plains, often ended up with tracts least suitable for farming. Speculators frequently encouraged settlement on land that was too dry or had no sources of water for domestic use. When the homesteads failed, many settlers sold the land to speculators.

The environmental consequences of the Homestead Act were many and serious. The act facilitated railroad
development, often in excess of transportation needs. In many instances, competing companies built lines to connect the same cities. Railroad development contributed significantly to the destruction of the bison herds, which in turn led to the destruction of the way of life of the Plains Indians. Cultivation of the Plains caused wholesale destruction of the vast prairies, so that whole ecological systems virtually disappeared. Overfarming of semi-arid lands led to another environmental disaster, whose consequences were fully experienced only in the 1930s: the great Dust Bowl, with its terrifying dust storms, made huge areas of the country unlivable.

The Homestead Act was based on the notion that land held no value unless it was cultivated. It has now become clear that reckless cultivation can be self-destructive. In many cases, unfortunately, the damage can no longer be undone.

[William E. Larson and Marijke Rijsberman]
RESOURCES
PERIODICALS
Shimkin, M. N. "Homesteading on the Republican River." Journal of the West 26 (October 1987): 58–66.
Horizon

Layers in the soil develop because of the additions, losses, translocations, and transformations that take place as the soil ages. The layers occur as a result of water percolating through the soil and leaching substances downward. These layers are parallel to the soil surface and are called horizons. Horizons vary from the surface to the subsoil and from one soil to the next because of the differing intensities of the above processes. Soils are classified into different groups based on the characteristics of their horizons.
Horseshoe crabs

The horseshoe crab (Limulus polyphemus) is the American species of a marine animal that is only a distant relation of crustaceans like crabs and lobsters; horseshoe crabs are more closely related to spiders and scorpions. The crabs have been called "living fossils" because the genus dates back millions of years and Limulus has evolved very little over that time. Fossils found in British Columbia indicate that the ancestors of horseshoe crabs were in North America about 520 million years ago.

During the late twentieth century, the declining horseshoe crab population concerned environmentalists. Horseshoe crabs are a vital food source for dozens of species of birds that migrate from South America to the Arctic Circle. Furthermore, crabs are collected for medical
research; after blood is taken from the crabs, they are returned to the ocean.

American horseshoe crabs live along the Atlantic Ocean coastline. Crab habitat extends south from Maine to the Yucatán in the Gulf of Mexico. Several other crab species are found in Southeast Asia and Japan. The American crab is named for its helmet-like shell, which is shaped like a horseshoe. Limulus has a sharp tail shaped like a spike. The tail helps the crab move through the sand, and if the crab tips over, the tail serves as a rudder so the crab can right itself. The horseshoe crab is also unusual in that its blood is blue and contains copper, whereas the blood of most other animals is red and contains iron.

Mature female crabs measure up to 24 in (61 cm) in length; males are about two-thirds the size of females. Horseshoe crabs can live for 19 years, and they reach sexual maturity in 10 years. The crabs come to shore to spawn in late May and early June, during the phases of the full and new moon. The female digs nests in the sand and deposits from 200 to 300 eggs in each pit. The male crab fertilizes the eggs with sperm, and the egg clutch is covered with sand. During the spawning season, a female crab can deposit as many as 90,000 eggs. This spawning process coincides with the migration of shorebirds: flocks of birds like the red knot and the sandpiper eat their fill of crab eggs before continuing their northbound migration.

Through the years, people have found a variety of uses for horseshoe crabs. During the sixteenth century, Native Americans in South Carolina attached the tails to the spears that they used to catch fish. In the nineteenth century, people ground the crabs up for use as fertilizer or food for chickens and hogs. During the twentieth century, researchers learned much about the human eye by studying the horseshoe crab's compound eye. Furthermore, researchers discovered that the crab's blood contains a special clotting agent that can be used to test the purity of new drugs and intravenous solutions. The agent, called Limulus Amoebocyte Lysate, is obtained by collecting horseshoe crabs during the spawning season; the crabs are bled and then returned to the beach.

Horseshoe crabs are also used as bait. The harvesting of crabs increased sharply during the 1990s, when people in the fishing industry used crabs as bait to catch eels and conch. The annual number of crabs harvested jumped from the thousands to the millions during the 1990s, according to environmental groups and organizations like the National Audubon Society.

The declining horseshoe crab population could affect millions of migrating birds. The Audubon Society reported seeing fewer birds at the Atlantic beaches where horseshoe
A horseshoe crab (Limulus polyphemus). (©John M. Burnley, National Audubon Society Collection. Photo Researchers Inc. Reproduced by permission.)
crabs spawn. In the spring of 2000, scientists said that the birds appeared undernourished, and observers doubted that they would complete their journey to the Arctic Circle.

The Audubon Society and other environmental groups have campaigned for state and federal regulations to protect horseshoe crabs. By 2002, coastal states and the Atlantic States Marine Fisheries Commission had set limits on the number of crabs that could be harvested. The state of Virginia made bait bags mandatory when fishing with horseshoe crab bait: the hard-plastic mesh bag holds the crab and makes it more difficult for predators to eat it, so fewer Limulus crabs are needed as bait.

Furthermore, the federal government created a 1,500-square-mile refuge for horseshoe crabs in Delaware Bay. The refuge extends from Ocean City, New Jersey, to north of Ocean City, Maryland. As of March 2002, harvesting was banned in the refuge, and people who took crabs from the area faced a fine of up to $100,000, according to the National Marine Fisheries Service. Even with measures like these in place, marine biologists said that it could be several decades before the crab population increased; one reason for slow population growth is that it takes crabs 10 years to reach maturity.

[Liz Swain]
RESOURCES
BOOKS
Fortey, Richard. Trilobite!: Eyewitness to Evolution. New York: Alfred A. Knopf, 2000.
Tanacredi, John, ed. Limulus in the Limelight: 350 Million Years in the Making and in Peril? New York: Kluwer Academic Publishers, 2001.
ORGANIZATIONS
Atlantic States Marine Fisheries Commission, 1444 Eye Street, NW, Sixth Floor, Washington, DC 20005. (202) 289-6400. Fax: (202) 289-6051. Email: [email protected]. http://www.asmfc.org
National Audubon Society Horseshoe Crab Campaign, 1901 Pennsylvania Avenue NW, Suite 1100, Washington, DC 20006. (202) 861-2242. Fax: (202) 861-4290. Email: [email protected]. http://www.audubon.org/campaign/horseshoe/contacts.htm
National Marine Fisheries Service, 1315 East West Highway, SSMC3, Silver Spring, MD 20910. (301) 713-2334. Fax: (301) 713-0596. Email: [email protected]. http://www.nmfs.noaa.gov
Hospital wastes see Medical waste
Household waste

Household waste is commonly referred to as garbage or trash. As the population of the world expands, so does the amount of waste produced. Generally, the more automated and industrialized human societies become, the more waste they produce. For example, the industrial revolution introduced new manufactured products and new manufacturing processes that added to household solid waste and industrial waste. Modern consumerism and the excess packaging of many products also contribute significantly to the increasing amount of solid waste.

Much of the trash Americans produce (about 40%) is paper and paper products; paper accounts for more than 71 million tons of garbage. Yard wastes are the next most common waste, contributing more than 31 million tons of solid waste. Metals account for more than 8% of all household waste, and plastics are close behind with another 8%, or 14 million tons. America's trash also contains about 7% glass and nearly 20 million tons of other materials like rubber, textiles, leather, wood, and inorganic wastes. Much of the waste comes from packaging materials. Other types of waste produced by consumers are durable goods such as tires, appliances, and furniture, while other household solid waste is made up of non-durable goods such as paper, disposable products, and clothing. Many of these items could be recycled and reused, so they can also be considered a non-utilized resource.
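These composition figures are mutually consistent, as a quick back-of-the-envelope check shows. The sketch below simply re-derives one figure from the others; the inferred total is an estimate implied by the quoted numbers, not a figure stated in this entry:

```python
# Consistency check of the waste-composition figures quoted above.
paper_tons = 71e6     # paper: "more than 71 million tons"
paper_share = 0.40    # paper: "about 40%" of the waste stream

total_tons = paper_tons / paper_share   # implied total waste stream
plastics_tons = 0.08 * total_tons       # plastics: 8% of the stream

print(f"implied total:    {total_tons / 1e6:.0f} million tons")
print(f"implied plastics: {plastics_tons / 1e6:.0f} million tons")
# Prints ~178 and ~14 million tons; the latter matches the "8% or 14 million
# tons" quoted for plastics above.
```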
In less industrialized times, and even today in many developing countries, households and industries disposed of unwanted materials in bodies of water or in land dumps. However, this practice creates undesirable effects such as health hazards and foul odors. Open dumps serve as breeding grounds for disease-carrying organisms such as rats and insects. As the first world became more alert to environmental hazards, methods for waste disposal were studied and improved. Today, however, governments, policymakers, and individuals still wrestle with the problem of how to improve methods of waste disposal, storage, and recycling.

In 1976, the United States Congress passed the Resource Conservation and Recovery Act (RCRA) in an effort to protect human health and the environment from hazards associated with waste disposal. In addition, the act aims to conserve energy and natural resources and to reduce the amount of waste Americans generate. Further, the RCRA promotes methods to manage waste in an environmentally sound manner. The act covers regulation of solid waste, hazardous waste, and underground storage tanks that hold petroleum products and certain chemicals.

Most household solid waste is removed from homes through community garbage collection and then taken to landfills. The garbage in landfills is buried, but it can still produce noxious odors. In addition, rainwater can seep through landfill sites and leach pollutants out of the landfill trash; these are then carried into nearby bodies of water. Pollutants can also contaminate groundwater, which in turn leads to contamination of drinking water.

In order to fight this problem, sanitary landfills were developed. Clay or plastic liners are placed in the ground before garbage is buried, which helps prevent water from seeping out of the landfill and into the surrounding environment. In sanitary landfills, each time a certain amount of waste is added to the landfill, it is covered by a layer of soil. At a predetermined height the site is capped and covered with dirt. Grass and trees can be planted on top of the capped landfill to help prevent erosion and to improve the look of the site. Sanitary landfills are more expensive than open-pit dumps, and many communities do not want the stigma of having a landfill near them. These factors make it politically difficult to open new landfills. Landfills are regulated by state and local governments and must meet minimum requirements set by the United States Environmental Protection Agency (EPA). Some household hazardous wastes, such as paint, used motor oil, or insecticides, cannot be accepted at landfills and must be handled separately.

Incineration (burning) of solid waste offers an alternative to disposal in landfills. Incineration converts large amounts of solid waste to smaller amounts of ash. The ash must still be disposed of, however, and it can contain toxic materials. Incineration also releases smoke and other possible pollutants into the air, although modern incinerators are equipped with smokestack scrubbers that are quite effective
in trapping toxic emissions. Many incinerators have the added benefit of generating electricity from the trash they burn.

Composting is a viable alternative to landfills and incineration for some biodegradable solid waste. Vegetable trimmings, leaves, grass clippings, straw, horse manure, wood chippings, and similar plant materials are all biodegradable and can be composted. Compost helps the environment because it reduces the amount of waste going into landfills. Correct composting also breaks down biodegradable material into a nutrient-rich soil additive that can be used in gardens or for landscaping. In this way, nutrients vital to plants are returned to the environment. To successfully compost biodegradable wastes, the process must generate temperatures high enough to kill seeds and harmful organisms in the composted material. If done incorrectly, compost piles can give off foul odors.

Families and communities can help reduce household waste by making some simple lifestyle changes. They can reduce solid waste by recycling, repairing rather than replacing durable goods, buying products with minimal packaging, and choosing packaging made from recycled materials. Reducing packaging material is an example of source reduction. Much of the responsibility for source reduction rests with manufacturers. Businesses need to be encouraged to find smart and cost-effective ways to manufacture and package goods in order to minimize waste and reduce the toxicity of the waste created. Consumers can help by encouraging companies to create more environmentally responsible packaging through their choice of products. For example, consumers successfully pressured McDonald’s to change from serving their sandwiches in non-biodegradable Styrofoam boxes to wrapping them in biodegradable paper.

Individual households can reduce the amount of waste they send to landfills by recycling. Paper, aluminum, glass, and plastic containers are the most commonly recycled household materials. Strategies for household recycling vary from community to community. In some areas materials must be separated by type before collection; in others, the separation occurs after collection. Recycling preserves natural resources by providing an alternative supply of raw materials to industries. It also saves energy and eliminates the emissions of many toxic gases and water pollutants. In addition, recycling helps create jobs, stimulates development of more environmentally sound technologies, and conserves resources for future generations. For recycling to be successful, there must be an end market for goods made from recycled materials. Consumers can support recycling by buying “green” products made of recycled materials.

Battery recycling is also becoming increasingly common in the United States and is required by law in many
European countries. In 2001, a nonprofit organization called the Rechargeable Battery Recycling Corporation (RBRC) began offering American communities cost-free recycling of portable rechargeable batteries such as those used in cell phones, camcorders, and laptop computers. These batteries contain cadmium, which is recycled back into other batteries or used in certain coatings or color pigments.

Household waste disposal is an international problem that is being attacked in many ways in many countries. In Tripoli, Libya, a plant converts household waste to organic fertilizer, recycling 500 tons of household waste and producing 212 tons of fertilizer a day. In France, a country with less available space for landfills than the United States, incineration is proving a desirable alternative. The French are turning household waste into energy through combustion and are developing technologies to control the residues that result from incineration. In the United States, education of consumers is a key to reducing the volume and toxicity of household waste. The EPA promotes four basic principles for reducing solid waste: reduce the amount of trash discarded, reuse products and containers, recycle and compost, and reconsider activities that produce waste. [Teresa G. Norris]
RESOURCES OTHER “Communities Invited to Recycle Rechargables Cost Free.” Environmental News Network. October 11, 2001 [cited July 2002].
ORGANIZATIONS U.S. Environmental Protection Agency Office of Solid Waste, 1200 Pennsylvania Avenue NW, Washington, DC USA 20460, (703) 412-9810, Toll Free: (800) 424-9346
HRS see Hazard Ranking System
Hubbard Brook Experimental Forest
The Hubbard Brook Experimental Forest is located in West Thornton, New Hampshire. It is an experimental area established in 1955 within the White Mountain National Forest on New Hampshire’s central plateau and is administered by the U.S. Forest Service. Hubbard Brook was the site of many important ecological studies, beginning in the 1960s, which established the extent of nutrient losses when all the trees in a watershed are cut. Hubbard Brook is a north temperate watershed covered with a mature forest, and it is still accumulating biomass.
In one early study, vegetation cut in a section of
Hubbard Brook was left to decay while nutrient losses were monitored in the runoff. Total nitrogen losses in the first year were twice the amount cycled in the system during a normal year. With the rise of nitrate in the runoff, concentrations of calcium, magnesium, sodium, and potassium also rose. These increases caused eutrophication and pollution of the streams fed by this watershed. Once the higher plants had been destroyed, the soil was unable to retain nutrients. Early evidence from the studies indicated that the losses caused by clear-cutting amounted to a large fraction of the ecosystem’s total nutrient inventory. The site’s ability to support complex living systems was reduced. The lost nutrients could accumulate again, but erosion of primary minerals would limit the number of plants and animals sustained in the area.

Another study at the Hubbard Brook site investigated the effects of forest cutting and herbicide treatment on nutrients in the forest. All of the vegetation in one of Hubbard Brook’s seven watersheds was cut, and the area was then treated with herbicides. At the time the conclusions were startling: deforestation resulted in much larger runoffs into the streams. The pH of the drainage stream went from 5.1 to 4.3, along with changes in the temperature and electrical conductivity of the stream water. A combination of higher nutrient concentration, higher water temperature, and greater solar radiation due to the loss of forest cover produced an algal bloom, the first sign of eutrophication. This signaled that a change in the ecosystem of the watershed had occurred. It was ultimately demonstrated at Hubbard Brook that the use of herbicides on a cut area resulted in their transfer to the outgoing water.

Hubbard Brook Experimental Forest continues to be an active research facility for foresters and biologists. Most current research focuses on water quality and nutrient exchange. The Forest Service also maintains an acid rain monitoring station and conducts research on old-growth forests. The results from various studies done at Hubbard Brook have shown that mature forest ecosystems have a greater ability to trap and store nutrients for recycling within the ecosystem. In addition, mature forests offer a higher degree of biodiversity than do forests that are clear-cut. See also Aquatic chemistry; Cultural eutrophication; Decline spiral; Experimental Lakes Area; Nitrogen cycle [Linda Rehkopf]
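Because pH is the negative base-10 logarithm of hydrogen ion concentration, the drop in stream pH from 5.1 to 4.3 reported above is larger than it may appear. A minimal Python sketch of the conversion:

    # Convert the reported Hubbard Brook pH change (5.1 -> 4.3) into a
    # ratio of hydrogen ion concentrations.
    ph_before, ph_after = 5.1, 4.3

    h_before = 10 ** -ph_before   # mol/L
    h_after = 10 ** -ph_after     # mol/L

    print(f"[H+] increased by a factor of {h_after / h_before:.1f}")
    # About 6.3x: each whole pH unit is a tenfold change in [H+].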
RESOURCES BOOKS Bormann, F. H. Pattern and Process in a Forested Ecosystem: Disturbance Development and the Steady State Based on the Hubbard Brook Ecosystem Study. New York: Springer-Verlag, 1991.
Botkin, D. B. Forest Dynamics: An Ecological Model. New York: Oxford University Press, 1993.
PERIODICALS Miller, G. “Window Into a Water Shed.” American Forests 95 (May-June 1989): 58–61.
Hudson River
Starting at Lake Tear of the Clouds, a two-acre (0.8-ha) pond in New York’s Adirondack Mountains, the Hudson River runs 315 miles (507 km) to the Battery on Manhattan Island’s southern tip, where it meets the Atlantic Ocean. Although polluted and extensively dammed for hydroelectric power, the river still contains a wealth of aquatic species, including massive sea sturgeon (Acipenser oxyrhynchus) and short-nosed sturgeon (A. brevirostrum).

The upper Hudson is a fast-flowing trout stream, but below the Adirondack Forest Preserve, pollution from municipal sources, paper companies, and industries degrades the water. Stretches of the upper Hudson contain so-called warm water fish, including northern pike (Esox lucius), chain pickerel (E. niger), smallmouth bass (Micropterus dolomieui), and largemouth bass (M. salmoides). The latter two species swam into the Hudson through the Lake Erie and Lake Champlain canals, which were completed in the early nineteenth century.

The Catskill Mountains dominate the mid-Hudson region, which is rich in fish and wildlife, though dairy farming, a source of runoff pollution, is strong in the region. American shad (Alosa sapidissima), historically the Hudson’s most important commercial fish, spawn on the river flats between Kingston and Coxsackie. Marshes in this region support snapping turtles (Chelydra serpentina) and, in the winter, muskrat (Ondatra zibethicus) and mink (Mustela vison). Water chestnuts (Trapa natans) grow luxuriantly in this section of the river.

Deep and partly bordered by mountains, the lower Hudson resembles a fiord. The unusually deep lower river makes it suitable for navigation by ocean-going vessels for 150 miles (241 km) upriver to Albany. Because the river’s surface elevation does not drop between Albany and Manhattan, the tidal effects of the ocean are felt all the way upriver to the Federal Lock and Dam above Albany. These powerful tides make long stretches of the lower Hudson saline or brackish, with saltwater penetrating as far as 60 miles (97 km) upstream from the Battery.

The Hudson contains a great variety of botanical species. Over a dozen oaks thrive along its banks, including red oaks (Quercus rubra), black oaks (Q. velutina), pin oaks (Q. palustris), and rock chestnut oak (Q. prinus). Numerous other trees also abound, from mountain laurel (Kalmia latifolia) and red pine (Pinus resinosa) to flowering dogwood (Cornus
florida), together with a wide variety of small herbaceous plants.

The Hudson River is comparatively short—more than 80 American rivers are longer—but it plays a major role in New York’s economy and ecology. Pollution threats to the river have been caused by the discharge of industrial and municipal waste, as well as pesticides washed off the land by rain. From 1930 to 1975, one chemical company on the river manufactured approximately 1.4 billion pounds of polychlorinated biphenyls (PCBs), and an estimated 10 million pounds a year entered the environment. In all, a total of 1.3 million pounds of PCB contamination allegedly occurred during the years prior to the ban, with the pollution originating from plants at Fort Edward and Hudson Falls. A ban was put in place for a time prohibiting the possession, removal, and eating of fish from the waters of the upper Hudson River. A cleanup was proposed that would dredge and sift 2.65 million cubic yards of sediment along a 40-mile stretch north of Albany, with an anticipated yield of 75 tons of PCBs. In February of 2001 the U.S. Environmental Protection Agency (EPA), having invoked the Superfund law, required the chemical company to begin planning the cleanup. The company was given several weeks to present a viable plan of attack or else face a potential $1.5 billion fine for ignoring the directive, a penalty far exceeding the estimated $500 million cost of the cleanup itself, which was presented as the preferred alternative. The engineering phase of the cleanup project was expected to take three years of planning and was to be scheduled after the offending company filed a response to the EPA. The company responded within the allotted time frame in order to placate the EPA, although the specifics of a drafted work plan remained undetermined. The company also refused to withdraw a lawsuit, filed in November of 2000, challenging the constitutionality of the Superfund law that authorized the EPA to take action.

The river meanwhile was ranked by one environmental watchdog group as the fourth most endangered in the United States, specifically because of the PCB contamination. Environmental groups also demanded that attention be paid to the issues of urban sprawl, noise, and other pollution, while proposals for potentially polluting projects were endorsed by industrialists as a means of spurring the area’s economy. Among these projects, the construction of a cement plant in Catskill, with easy access to a limestone quarry, and the development of a power plant along the river in Athens generated controversy, pitting the industrial asset afforded by development along the river against the advantages of a less fouled environment. Additionally, the power plant, which threatened to add four new smokestacks to the skyline and to aggravate pollution, was seen as potentially detrimental to
tourism in that area. Also in recent decades, chlorinated hydrocarbons, dieldrin, endrin, DDT, and other pollutants have been linked to declines in populations of the once-common Jefferson salamander (Ambystoma jeffersonianum), fish hawk (Pandion haliaetus), and bald eagle (Haliaeetus leucocephalus).

Concerns over the condition of the lower river spread anew following the September 11, 2001, terrorist attack on New York City. In this coastal tri-state urban area, where anti-dumping laws were put in place in the mid-twentieth century to protect the river from deterioration due to pollution, new threats surfaced: the impact of the attack compromised the integrity of land-based structures, including seawalls and underwater tunnels, raising the potential for assorted types of leakage into the river. See also Agricultural pollution; Dams; Estuary; Feedlot runoff; Fertilizer runoff; Industrial waste treatment; Sewage treatment; Wastewater [David Clarke]
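The dredging figures cited above imply an average PCB concentration in the targeted sediment. A back-of-the-envelope Python sketch; the sediment bulk density here is an assumed round value, not a figure from this entry:

    # Implied average PCB concentration: 75 tons of PCBs in
    # 2.65 million cubic yards of sediment.
    pcb_tons = 75.0
    sediment_yd3 = 2.65e6
    density_tons_per_yd3 = 1.3   # assumed wet bulk density (not from the entry)

    sediment_tons = sediment_yd3 * density_tons_per_yd3
    ppm = 1e6 * pcb_tons / sediment_tons
    print(f"Average PCB concentration: about {ppm:.0f} ppm by mass")
    # Roughly 22 ppm under this assumption; actual concentrations in the
    # river vary widely, with localized hot spots far above the average.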
RESOURCES BOOKS Boyle, R. H. The Hudson River, A Natural and Unnatural History. New York: Norton, 1979. Peirce, N. R., and J. Hagstrom. The Book of America, Inside the Fifty States Today. New York: Norton, 1983.
PERIODICALS The Scientist, March 19, 2001.
Human ecology
Human ecology may be defined as the branch of knowledge concerned with relationships between human beings and their environments. Among the disciplines contributing seminal work in this field are sociology, anthropology, geography, economics, psychology, political science, philosophy, and the arts. Applied human ecology emerges in engineering, planning, architecture, landscape architecture, conservation, and public health. Human ecology, then, is an interdisciplinary study which applies the principles and concepts of ecology to human problems and the human condition. The notion of interaction—between human beings and the environment and between human beings—is fundamental to human ecology, as it is to biological ecology. Human ecology as an academic inquiry has disciplinary roots extending back as far as the 1920s. However, much work in the decades prior to the 1970s was narrowly drawn and was often carried out by a few individuals whose intellectual legacy remained isolated from the mainstream of their
disciplines. The work done in sociology offers an exception to the latter (but not the former) rule; sociological human ecology is traced to the Chicago school and the intellectual lineage of Robert Ezra Park, his student Roderick D. Mackenzie, and Mackenzie’s student Amos Hawley. Through the influence of these men and their school, human ecology, for a time, was narrowly identified with a sociological analysis of spatial patterns in urban settings (although broader questions were sometimes contemplated).

Comprehensive treatment of human ecology is first found in the work of Gerald L. Young, who pioneered the study of human ecology as an interdisciplinary field and as a conceptual framework. Young’s definitive framework is founded upon four central themes. The first of these is interaction; the other three are developed from it: levels of organization, functionalism (part-whole relationships), and holism. These four basic concepts form the foundation for a series of field derivatives (niche, community, and ecosystem) and consequent notions (institutions, proxemics, alienation, ethics, world community, and stress/capacitance). Young’s emphasis on linkages and process set his approach apart from other synthetic attempts in human ecology, which were largely cumbersome classificatory schemata. These were subject to harsh criticism because they tended to embrace virtually all knowledge, resolve themselves into superficial lists and mnemonic “building blocks,” and had little applicability to real-world problems.

Generally, comprehensive treatment of human ecology is more advanced in Europe than it is in the United States. A comprehensive approach to human ecology as an interdisciplinary field and conceptual framework gathered momentum in several independent centers during the 1970s and 1980s. Among these have been several college and university programs and research centers, including those at the University of Göteborg, Sweden, and, in the United States, at Rutgers University and the University of California at Davis. Interdisciplinary programs at the undergraduate level were first offered in 1972 by the College of the Atlantic (Maine) and The Evergreen State College (Washington). The Commonwealth Human Ecology Council in the United Kingdom, the International Union of Anthropological and Ethnological Sciences’ Commission on Human Ecology, the Centre for Human Ecology at the University of Edinburgh, the Institute for Human Ecology in California, and professional societies and organizations in Europe and the United States have been other centers of development for the field.

In recent testimony before the U.S. House of Representatives Subcommittee on Environment and the National Academy of Sciences Committee on Environmental Research, Dr. Thomas Dietz, President of the Society for Human Ecology, defined some of the priority research problems that human ecology addresses. Among these, Dietz listed
global change, values, post-hoc evaluation, and science and conflict in environmental policy. Other human ecologists would include in the list such items as commons problems, carrying capacity, sustainable development, human health, ecological economics, problems of resource use and distribution, and family systems. Problems of epistemology or cognition, such as environmental perception, consciousness, or paradigm change, also receive attention.

Our Common Future, the 1987 report of the United Nations’ World Commission on Environment and Development, has stimulated a new phase in the development of human ecology. A host of new programs, plans, conferences, and agendas have been put forth, primarily to address phenomena of global change and the challenge of sustainable development. These include the Sustainable Biosphere Initiative, published by the Ecological Society of America in 1991 and extended internationally; the United Nations Conference on Environment and Development; the proposed new United States National Institutes for the Environment; the Man and the Biosphere Program’s Human-Dominated Systems Program; the report of the National Research Council Committee on Human Dimensions of Global Change and the associated National Science Foundation Human Dimensions of Global Change Program; and green plans published by the governments of Canada, Norway, the Netherlands, the United Kingdom, and Austria. All of these programs call for an integrated, interdisciplinary approach to complex problems of human-environmental relationships. The next challenge for human ecology will be to digest and steer these new efforts and to identify the perspectives and tools they supply. [Jeremy Pratt]
RESOURCES BOOKS Jungen, B. “Integration of Knowledge in Human Ecology.” In Human Ecology: A Gathering of Perspectives, edited by R. J. Borden, et al. Selected papers from the First International Conference of the Society for Human Ecology, 1986. Young, G. L., ed. Origins of Human Ecology. Stroudsburg, PA: Hutchinson & Ross, 1983.
PERIODICALS Young, G. L. “Human Ecology As An Interdisciplinary Concept: A Critical Inquiry.” Advances in Ecological Research 8 (1974): 1–105. ———. “Conceptual Framework For An Interdisciplinary Human Ecology.” Acta Oecologiae Hominis 1 (1989): 1–136.
Humane Society of the United States
The largest animal protection organization in the United States, the Humane Society of the United States (HSUS) works
to preserve wildlife and wilderness, save endangered species, and promote humane treatment of all animals. Formed in 1954, HSUS specializes in education, cruelty investigations and prosecutions, wildlife and nature preservation, environmental protection, federal and state legislative activities, and other actions designed to protect animal welfare and the environment.

Major projects undertaken by HSUS in recent years have included campaigns to stop the killing of whales, dolphins, elephants, bears, and wolves; to help reduce the number of animals used in medical research and to improve the conditions under which they are used; to oppose the use of fur by the fashion industry; and to address the problem of pet overpopulation. The group has worked extensively to ban the use of tuna caught in ways that kill dolphins, largely eliminating the sale of such products in the United States and western Europe. It has tried to stop international airlines from transporting exotic birds into the United States. Other high-priority projects have included banning the international trade in elephant ivory, especially imports into the United States, and securing and maintaining a general worldwide moratorium on commercial whaling.

HSUS’s companion animals section works on a variety of issues affecting dogs, cats, birds, horses, and other animals commonly kept as pets, striving to promote responsible pet ownership, particularly the spaying and neutering of dogs and cats to reduce the tremendous overpopulation of these animals. HSUS works closely with local shelters and humane societies across the country, providing information, training, evaluation, and consultation.

Several national and international environmental and animal protection groups are affiliated and work closely with HSUS. Humane Society International works abroad to fulfill HSUS’s mission and to institute reform and educational programs that will benefit animals. EarthKind, a global environmental protection group that emphasizes wildlife protection and humane treatment of animals, has been active in Russia, India, Thailand, Sri Lanka, the United Kingdom, Romania, and elsewhere, working to preserve forests, wetlands, wild rivers, natural ecosystems, and endangered wildlife. The National Association for Humane and Environmental Education is the youth education division of HSUS, developing and producing periodicals and teaching materials designed to instill humane values in students and young people, including KIND (Kids in Nature’s Defense) News, a newspaper for elementary school children, and KIND TEACHER, an 80-page annual full of worksheets and activities for use by teachers. The Center for Respect of Life and the Environment works with academic institutions, scholars, religious leaders
and organizations, arts groups, and others to foster an ethic of respect and compassion towards all creatures and the natural environment. Its quarterly publication, Earth Ethics, examines such issues as earth education, sustainable communities, ecological economics, and other values affecting our relationship with the natural world. The Interfaith Council for the Protection of Animals and Nature promotes conservation and education mainly within the religious community, attempting to make religious leaders, groups, and individuals more aware of our moral and spiritual obligations to preserve the planet and its myriad life forms. HSUS has been quite active, hard-hitting, and effective in promoting its animal protection programs, such as leading the fight against the fur industry. It accomplishes its goals through education, lobbying, grassroots organizing, and other traditional, legal means of influencing public opinion and government policies. With over 3.5 million members or “constituents” and an annual budget of over $35 million, HSUS is considered the largest and one of the most influential animal protection groups in the United States and, perhaps, the world. [Lewis G. Regenstein]
RESOURCES ORGANIZATIONS The Humane Society of the United States, 2100 L Street, NW, Washington, D.C. USA 20037, (202) 452-1100
Humanism
A perspective or doctrine that focuses primarily on the interests, capacities, and achievements of human beings. This focus on human concerns has led some to conclude that human beings have rightful dominion over the earth and that their interests and well-being are paramount and take precedence over all other considerations. Religious humanism, for instance, generally holds that God made human beings in His own image and put them in charge of His creation. Secular humanism views human beings as the source of all value or worth. Some environmentally minded critics, such as Lynn White Jr. and David Ehrenfeld, claim that much environmental destruction can be traced to “the arrogance of humanism.”
Human-powered vehicles
Finding easy modes of transportation seems to be a basic human need, but finding easy and clean modes is becoming imperative. Traffic congestion, overconsumption of fossil fuels, and air pollution are all direct results of automotive lifestyles around the world.
Innovative bicycle designs. (McGraw-Hill Inc. Reproduced by permission.)
The logical alternative is human-powered vehicles (HPVs), perhaps best exemplified in the bicycle, the most basic HPV. New high-tech developments in HPVs are not yet ready for mass production, nor are they able to compete with cars. Pedal-propelled HPVs in the air, on land, or under the sea are still in the expensive, design-and-race-for-a-prize category. But the challenge of human-powered transport has inspired a lot of inventive thinking, both amateur and professional.

Bicycles and rickshaws are the most basic HPVs. Of the two, bicycles are clearly the more popular, and production of bicycles has surpassed production of automobiles in recent years. The number of bicycles in use throughout the world is roughly double that of cars; China alone contains 270 million bicycles, or one-third of the total worldwide. Indeed, the bicycle has overtaken the automobile as the preferred mode of transportation in many nations. There are many reasons for the popularity of the bike: it fulfills both recreational and functional needs, it is an economical alternative to the automobile, and it does not contribute to the problems facing the environment.
Although the bicycle provides a healthy and scenic form of recreation, people also find it useful for basic transportation. In the Netherlands, bicycle transportation accounts for 30% of work trips and 60% of school trips. One-third of commuting to work in Denmark is by bicycle. In China, the vast majority of all trips are made by bicycle. A surge in bicycle production occurred in 1973, when, in conjunction with rising oil costs, production doubled to 52 million per year. Soaring fuel prices in the 1970s inspired people to find inexpensive, economical alternatives to cars, and many turned to bicycles. Besides being efficient transportation, bikes are simply cheaper to purchase and to maintain than cars. There is no need to pay for parking or tolls, no expensive upkeep, and no high fuel costs.

The lack of fuel costs associated with bicycles leads to another benefit: bicycles do not harm the environment. Cars consume fossil fuels and in so doing release more than two-thirds of the United States’ smog-producing chemicals. They are furthermore considered responsible for many other environmental ailments: depletion of the ozone layer
through release of chlorofluorocarbons from automobile air conditioning units; cause of cancer through toxic emissions; and consumption of the world’s limited fuel resources. With human energy as their only requirement, bicycles have none of these liabilities.

Nevertheless, in many cases—such as long trips or travelling in inclement weather—cars are the preferred form of transportation. Bicycles are not the optimal choice in many situations. Thus engineers and designers seek to improve on the bicycle and make machines suitable for transport under many different conditions. They are striving to produce new human-powered vehicles—HPVs that take advantage of air and sea currents, that have reasonable interior ergonomics, and that can be inexpensively produced. Several machines designed to fit these criteria exist.

As for developments in human-powered aircraft, success is judged on distance and speed, which depend on the strength of the pedaller and the lightness of the craft. The current world record holder is Greek Olympic cyclist Kanellos Kanellopoulos, who flew Daedalus 88, a craft created by engineer John Langford and a team of MIT engineers and funded by American corporations. Kanellopoulos flew Daedalus 88 for 3 hours and 54 minutes across the Aegean Sea between Crete and Santorini, a distance of 74 mi (119 km), in April 1988. The craft averaged 18.5 mph (29 kph) and flew 15 ft (4.6 m) above the water. Upon arrival at Santorini, however, the sun began to heat up the black sands and generate erratic shore winds, and Daedalus 88 plunged into the sea. It was a few yards short of its goal, and the tailboom of the 70-lb (32-kg) vehicle was snapped by the wind. But to cheering crowds on the beach, Kanellopoulos rose from the sea with a victory sign and strode to shore.

Students at California Polytechnic State University had been working on perfecting a human-powered helicopter since 1981. In 1989 they achieved liftoff, with Greg McNeil, a member of the United States National Cycling Team, pedalling an astounding 1.0 hp. The graphite epoxy, wood, and Mylar craft, Da Vinci III, rose 7 in (17.7 cm) for 6.8 seconds. But rules for the $10,000 Sikorsky prize, sponsored by the American Helicopter Society, stipulate that the winning craft must rise nearly 10 ft (3 m) and stay aloft one minute.

On land, recumbent vehicles, or recumbents, are wheeled vehicles in which the driver pedals in a semi-recumbent position, contained within a windowed enclosure. The world record was set in 1989 by American Fred Markham at the Michigan International Speedway in an HPV named Goldrush, in which Markham pedalled at more than 44 mph (72 kph). Unfortunately, the realities of road travel cast a long shadow over recumbent HPVs. Crews discovered that they tended to be unstable in crosswinds, distracted other drivers
and pedestrians, and lacked the speed to correct course safely in the face of oncoming cars and trucks.

In the sea, the problems faced by HPV submersible engineers are enabling the pilot to maneuver at a comfortable pace, stay in control of the vehicle, and beat a fast retreat undersea. Human-powered subs are not a new idea. The Revolutionary War created a need for a bubble sub that was to plant an explosive in the belly of a British ship in New York Harbor. (The naval officer, breathing one-half hour’s worth of air, failed in his night mission, but survived.) The special design problems of modern two-person HP-subs involve controlling buoyancy and ballast, pitch and yaw (nose up/down/sideways), reducing drag, increasing thrust, and positioning the pedaller and the propulsor in the flooded cockpit (called “wet”) in ways that maximize air intake from scuba tanks and muscle power from arms and legs. Depending on the design, the humans in HP-subs lie prone, foot to head or side by side, or sit, using their feet to pedal and their hands to control the rudder through the underwater currents. Studies by the United States Navy Experimental Dive Unit indicate that a well-trained athlete can sustain 0.5 hp for 10 minutes underwater.

On the surface of the water, fin-propelled watercraft—lightweight inflatables that are powered by humans kicking with fins—are ideal for fishermen, for whom maneuverability, not speed, is the goal. Paddling with the legs, which does not disturb fish, leaves the hands free to cast. In most designs, the fisherman sits on a platform between tubes, his feet in the water. Controllability is another matter, however: in open windy water, the craft is at the mercy of the elements in its current design state. Top speed is about 50 yd (46 m) in three minutes.

Finally, over the surface of the water, the first human-powered hydrofoil, Flying Fish, pedalled by national track sprinter Bobby Livingston, broke a world record in September 1989 when it traveled 100 m over Lake Adrian, Michigan, at 16.1 knots (18.5 mph). Pedalled like a bicycle, the vehicle resembled a model airplane with a two-blade propeller and a 6-ft (1.8-m) carbon graphite wing, and it sped across the surface of the lake on two pontoons. [Stephanie Ocko and Andrea Gacki]
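The horsepower figures quoted in this entry are easier to compare when expressed in watts (1 hp is approximately 746 W). A small Python sketch using the outputs cited above:

    # Convert the human power outputs cited in this entry to watts.
    HP_TO_WATTS = 745.7

    outputs_hp = {
        "Da Vinci III liftoff (Greg McNeil)": 1.0,
        "trained athlete for 10 min underwater (Navy study)": 0.5,
    }
    for label, hp in outputs_hp.items():
        print(f"{label}: {hp} hp = {hp * HP_TO_WATTS:.0f} W")
    # For scale, a fit recreational cyclist can sustain roughly 150-250 W.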
RESOURCES BOOKS Lowe, M. “Bicycle Production Outpaces Autos.” In Vital Signs 1992: The Trends That Are Shaping Our Future, edited by L. R. Brown, C. Flavin, and H. Kane. New York: Norton, 1992.
PERIODICALS Banks, R. “Sub Story.” National Geographic World (July 1992): 8–11. Blumenthal, T. “Outer Limits.” Bicycling, December 1989, 36. Britton, P. “Muscle Subs.” Popular Science, June 1989, 126–129. ———. “Technology Race Beneath the Waves.” Popular Science, June 1991, 48–54. Horgan, J. “Heli-Hopper: Human-powered Helicopter Gets Off the Ground.” Scientific American 262 (March 1990): 34. Kyle, C. R. “Limits of Leg Power.” Bicycling, October 1990, 100–101. Langley, J. “Those Flying Men and Their Magnificent Machines.” Bicycling, April 1992, 74–76. “Man-Powered Helicopter Makes First Flight.” Aviation Week and Space Technology, December 1989, 115. Martin, S. “Cycle City 2000.” Bicycling, March 1992, 130–131.
Humus
Humus is essentially decomposed organic matter in soil. It can vary in color but is often dark brown. Besides containing valuable nutrients, humus benefits soil in many other ways: it stabilizes soil mineral particles into aggregates, improves pore-space relationships and so aids air and water movement, increases water-holding capacity, and acts as a pH regulator by influencing the absorption of hydrogen ions.
Hunting and trapping
Wild animals are a potentially renewable natural resource. This means that they can be harvested in a sustainable fashion, as long as their birth rate is greater than the rate of exploitation by humans. In the sense meant here, “harvesting” refers to the killing of wild animals as a source of meat, fur, antlers, or other useful products, or as an outdoor sport. The harvesting can involve trapping, or hunting using guns, bows-and-arrows, or other weapons. (Fishing is also a kind of hunting, but it is not dealt with here.) From the ecological perspective, it is critical that the exploitation is undertaken in a sustainable fashion; otherwise, serious damage is caused to the resource and to ecosystems more generally.

Unfortunately, there have been numerous examples in which wild animals have been harvested at grossly unsustainable rates, causing their populations to decline severely. In a few cases this caused species to become extinct—they no longer occur anywhere on Earth. For example, commercial hunting in North America resulted in the extinctions of the great auk (Pinguinus impennis), passenger pigeon (Ectopistes migratorius), and Steller’s sea cow (Hydrodamalis stelleri). Unsustainable commercial hunting also brought other species to the brink of extinction, including the Eskimo curlew (Numenius borealis), northern right whale (Eubalaena glacialis), northern fur seal (Callorhinus ursinus), grey
whale (Eschrichtius robustus), and American bison or buffalo (Bison bison). Fortunately, these and many other examples of overexploitation of wild animals by humans are regrettable cases from the past. Today, the exploitation of wild animals in North America is undertaken with a view to the longer-term conservation of their stocks; that is, an attempt is made to manage the harvesting in a sustainable fashion. This means that trapping and hunting are much more closely regulated than they used to be. If harvests of wild animals are to be undertaken in a sustainable manner, it is critical that harvest levels are determined using the best available understanding of population-level productivity and stock sizes. It is also essential that harvest quotas are respected by trappers and hunters and that illegal exploitation (or poaching) does not compromise what might otherwise be a sustainable activity. The challenge of modern wildlife management is to ensure that good conservation science is sensibly integrated with effective monitoring and management of the rates of exploitation.

Ethics of trapping and hunting
From the strictly ecological perspective, sustainable trapping and hunting of wild animals is no more objectionable than the prudent harvesting of timber or agricultural crops. However, people have widely divergent attitudes about the killing of wild (or domestic) animals for meat, sport, or profit. At one end of the ethical spectrum are people who see no problem with killing wild animals as a source of meat or cash. At the other extreme are individuals with a profound respect for the rights of all animals, who believe that killing any sentient creature is ethically wrong. Many of these latter people are animal-rights activists, and some of them are involved in organizations that undertake high-profile protests and other forms of advocacy to prevent or restrict trapping and hunting. In essence, these people object to the lethal exploitation of wild animals, even under closely regulated conditions that would not deplete their populations. Most people, of course, have attitudes that are intermediate to those just described.

Trapping
The fur trade was once a very important commercial activity during the initial phase of the colonization of North America by Europeans. During those times, as now, furs were a valuable commodity that could be obtained from nature and sold at a great profit in urban markets. In fact, the quest for furs was the most important reason for much of the early exploration of the interior of North America, as fur traders penetrated all of the continent’s great rivers seeking new sources of pelts and profit. Most furbearing animals are harvested by a form of hunting known as trapping.
Until recently, most trapping involved leg-hold traps, a relatively crude method that results in many animals enduring cruel and lingering deaths. Fortunately, other, more humane alternatives now exist, in which most trapped animals are killed quickly and do not suffer unnecessarily. In large part, the movement towards more merciful trapping methods has occurred in response to effective, high-profile lobbying by organizations that oppose trapping, and the trapping industry has responded by developing and using more humane methods of killing wild furbearers.

Various species of furbearers are trapped in North America, particularly in relatively remote, wild areas, such as the northern and montane forests of the continental United States, Alaska, and Canada. Among the most valuable furbearing species are beaver (Castor canadensis), muskrat (Ondatra zibethicus), mink (Mustela vison), river otter (Lontra canadensis), bobcat (Lynx rufus), lynx (Lynx canadensis), red fox (Vulpes vulpes), wolf (Canis lupus), and coyote (Canis latrans). The hides of other species are also valuable, such as black bear (Ursus americanus), white-tailed deer (Odocoileus virginianus), and moose (Alces alces), but these species are not hunted primarily for their pelage. Some species of seals are hunted for their fur, although this is largely done by shooting, clubbing, or netting, rather than by trapping. The best examples of this are the harp seal (Phoca groenlandica) of the northwestern Atlantic Ocean and the northern fur seal (Callorhinus ursinus) of the Bering Sea. Many seal pups are killed by commercial hunters in the spring, when their coats are still white and soft. This harvest has been highly controversial and is the subject of intense opposition from animal rights groups.

Game mammals
Hunting is a popular sport in North America, enjoyed by about 16 million people each year, most of them men. In 1991, hunting contributed more than $12 billion to the United States economy, about half of which was spent by big-game hunters. Various species of terrestrial animals are hunted in large numbers. This is mostly done by stalking the animals and shooting them with rifles, although shotguns and bows and arrows are sometimes used. Some hunting is done for subsistence purposes; that is, the meat of the animals is used to feed the family or friends of the hunters. Subsistence hunting is especially important in remote areas and for aboriginal hunters. Commercial or market hunts also used to be common, but these are no longer legal in North America (except under exceptional circumstances) because they have generally proven to be unsustainable. However, illegal, semicommercial hunting (or poaching) still takes place in many remote areas where game animals are relatively abundant and where there are local markets for wild meat.
In addition, many people hunt as a sport, that is, for the excitement and accomplishment of tracking and killing wild animals. In such cases, using the meat of the hunted animals may be only a secondary consideration, and in fact the hunter may seek to retain only the head, antlers, or horns of the prey as a trophy (although the meat may be kept by the hunter’s guide). Big-game hunting is an economically important activity in North America, with large amounts of money being spent on the equipment, guides, and transportation necessary to undertake this sport.

The most commonly hunted big-game mammal in North America is the white-tailed deer. Other frequently hunted ungulates include the mule deer (Odocoileus hemionus), moose, elk or wapiti (Cervus canadensis), caribou (Rangifer tarandus), and pronghorn antelope (Antilocapra americana). Black bear and grizzly bear (Ursus arctos) are also hunted, as are bighorn sheep (Ovis canadensis) and mountain goat (Oreamnos americanus). Commonly hunted small-game species include various rabbits and hares, such as the cottontail rabbit (Sylvilagus floridanus), snowshoe hare (Lepus americanus), and jackrabbit (Lepus californicus), as well as the grey or black squirrel (Sciurus carolinensis) and woodchuck (Marmota monax). Wild boar (Sus scrofa) are also hunted in some regions—these are feral animals descended from escaped domestic pigs.

Game birds
Various larger species of birds are hunted in North America for their meat and for sport. So-called upland game birds are hunted in terrestrial habitats and include ruffed grouse (Bonasa umbellus), willow ptarmigan (Lagopus lagopus), bobwhite quail (Colinus virginianus), wild turkey (Meleagris gallopavo), mourning dove (Zenaidura macroura), and woodcock (Philohela minor). Several introduced species of upland gamebirds are also commonly hunted, particularly ring-necked pheasant (Phasianus colchicus) and Hungarian or grey partridge (Perdix perdix). Much larger numbers of waterfowl are hunted in North America, including millions of ducks and geese. The most commonly harvested species of waterfowl are mallard (Anas platyrhynchos), wood duck (Aix sponsa), Canada goose (Branta canadensis), and snow and blue goose (Chen hyperborea), but another 35 or so species in the duck family are also hunted. Other hunted waterfowl include coots (Fulica americana) and moorhens (Gallinula chloropus). [Bill Freedman Ph.D.]
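The sustainability condition stated at the beginning of this entry—harvest must not outpace the population's capacity to replace itself—is often illustrated with a logistic growth model. The following Python sketch is illustrative only; the model and all parameter values are assumptions, not data from this entry:

    # Logistic model of a harvested population: dN/dt = r*N*(1 - N/K) - H.
    # A constant harvest H is sustainable only if it stays below the
    # maximum net recruitment, r*K/4 (the maximum sustainable yield).
    def simulate(r=0.3, K=10_000, N0=8_000, H=600, years=50):
        N = N0
        for _ in range(years):
            N = max(N + r * N * (1 - N / K) - H, 0)
        return N

    msy = 0.3 * 10_000 / 4   # 750 animals/year for these parameters
    print(f"Maximum sustainable harvest: {msy:.0f} animals/year")
    print(f"Population after 50 yr at H=600: {simulate(H=600):.0f}")  # persists
    print(f"Population after 50 yr at H=900: {simulate(H=900):.0f}")  # collapses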
RESOURCES BOOKS Halls, L. K., ed. White-tailed Deer: Ecology and Management. Harrisburg: Stackpole Books, 1984.
Novak, M., et al. Wild Furbearer Management and Conservation in North America. North Bay: Ontario Trappers Association, 1987. Phillips, P. C. The Fur Trade (2 vols.). Norman: University of Oklahoma, 1961. Robinson, W. L., and E. G. Bolen. Wildlife Ecology and Management. 3rd ed. New York: Macmillan, 1996.
Freedman, B. Environmental Ecology. 2nd ed. San Diego: Academic Press, 1995.
Hurricane
Hurricanes, called typhoons or tropical cyclones in the Far East, are intense cyclonic storms that form over warm tropical waters and generally remain active and strong only while over the oceans. Their intensity is marked by a distinct spiraling pattern of clouds, very low atmospheric pressure at the center, and extremely strong winds, blowing at speeds greater than 74 mph (120 kph) within the inner rings of clouds. Typically, when hurricanes strike land and move inland, they immediately start to disintegrate, though not before bringing widespread destruction of property and loss of life. The radius of such a storm can be 100 mi (160 km) or greater. Thunderstorms, hail, and tornadoes frequently are embedded in hurricanes. Hurricanes occur in every tropical ocean except the South Atlantic, and with greater frequency from August through October than at any other time of year. The center of a hurricane is called the eye, an area of relative calm, few clouds, and higher temperatures that marks the center of the low-pressure pattern. Hurricanes usually move from east to west near the tropics, but when they migrate poleward to the mid-latitudes they can get caught up in the general west-to-east flow pattern found in that region of the earth. See also Tornado and cyclone
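The 74-mph figure cited above is the conventional threshold separating a hurricane from a weaker tropical storm. A minimal Python sketch applying it (the function name is ours, for illustration):

    # Classify a storm by the sustained-wind threshold cited above:
    # winds greater than 74 mph (about 119 kph) mark a hurricane.
    MPH_TO_KPH = 1.609344

    def is_hurricane(sustained_wind_mph: float) -> bool:
        return sustained_wind_mph > 74.0

    for mph in (60, 75, 120):
        kph = mph * MPH_TO_KPH
        label = "hurricane" if is_hurricane(mph) else "below hurricane strength"
        print(f"{mph} mph ({kph:.0f} kph): {label}")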
George Evelyn Hutchinson (1903–1991)
American ecologist Born January 30, 1903, in Cambridge, England, Hutchinson was the son of Arthur Hutchinson, a professor of mineralogy at Cambridge University, and Evaline Demeny Shipley Hutchinson, an ardent feminist. He demonstrated an early interest in flora and fauna and a basic understanding of the scientific method. In 1918, at the age of 15, he wrote a letter to the Entomological Record and Journal of Variation about a grasshopper he had seen swimming in a pond. He described an experiment he performed on the insect and included it for taxonomic identification. 734
In 1924, Hutchinson earned his bachelor’s degree in zoology from Emmanuel College at Cambridge University, where he was a founding member of the Biological Tea Club. He then served as an international education fellow at the Stazione Zoologica in Naples from 1925 until 1926, when he was hired as a senior lecturer at the University of Witwatersrand in Johannesburg, South Africa. He was apparently fired from this position two years later by administrators who never imagined that in 1977 the university would honor the ecologist by establishing a research laboratory in his name. Hutchinson earned his master’s degree from Emmanuel College in absentia in 1928 and applied to Yale University for a fellowship so he could pursue a doctoral degree. He was instead appointed to the faculty as a zoology instructor. He was promoted to assistant professor in 1931 and became an associate professor in 1941, the year he obtained his United States citizenship. He was made a full professor of zoology in 1945, and between 1947 and 1965 he served as director of graduate studies in zoology. Hutchinson never did receive his doctoral degree, though he amassed an impressive collection of honorary degrees during his lifetime. Hutchinson was best known for his interest in limnology, the science of freshwater lakes and ponds. He spent most of his life writing the four-volume Treatise on Limnology, which he completed just months before his death. The research that led to the first volume—covering geography, physics, and chemistry—earned him a Guggenheim Fellowship in 1957. The second volume, published in 1967, covered biology and plankton. The third volume, on water plants, was published in 1975, and the fourth volume, about invertebrates, appeared posthumously in 1993. The Treatise on Limnology was among the nine books, nearly 150 research papers, and many opinion columns which Hutchinson penned. He was an influential writer whose scientific papers inspired many students to specialize in ecology. Hutchinson’s greatest contribution to the science of ecology was his broad approach, which became known as the “Hutchinson school.” His work encompassed disciplines as varied as biochemistry, geology, zoology, and botany. He pioneered the concept of biogeochemistry, which examines the exchange of chemicals between organisms and the environment. His studies in biogeochemistry focused on how phosphates and nitrates move from the earth to plants, then animals, and then back to the earth in a continuous cycle. His holistic approach influenced later environmentalists when they began to consider the global scope of environmental problems. In 1957, Hutchinson published an article entitled “Concluding Remarks,” considered his most inspiring and intriguing work, as part of the Cold Spring Harbor Symposia on Quantitative Biology. Here, he introduced and described
the ecological niche, a concept which has been the source of much research and debate ever since. The article was one of only three in the field of ecology chosen for the 1991 collection Classics in Theoretical Biology. Hutchinson won numerous major awards for his work in ecology. In 1950, he was elected to the National Academy of Science. Five years later, he earned the Leidy Medal from the Philadelphia Academy of Natural Sciences. He was awarded the Naumann Medal from the International Association of Theoretical and Applied Limnology in 1959. This is a global award, granted only once every three years, which Hutchinson earned for his contributions to the study of lakes in the first volume of his treatise. In 1962, the Ecological Society of America chose him for its Eminent Ecologist Award. Hutchinson’s research often took him out of the country. In 1932, he joined a Yale expedition to Tibet, where he amassed a vast collection of organisms from high-altitude lakes. He wrote many scientific articles about his work in North India, and the trip also inspired his 1936 travel book, The Clear Mirror. Other research projects drew Hutchinson to Italy, where, in the sediment of Lago di Monterosi, a lake north of Rome, he found evidence of the first case of artificial eutrophication, dating from around 180 B.C. Hutchinson was devoted to the arts and humanities, and he counted several musicians, artists, and writers among his friends. The most prominent of his artistic friends was English author Rebecca West. He served as her literary executor, compiling a bibliography of her work which was published in 1957. He was also the curator of a collection of her papers at Yale’s Beinecke Library. Hutchinson’s writing reflected his diverse interests. Along with his scientific works and his travel book, he also wrote an autobiography and three books of essays, The Itinerant Ivory Tower (1953), The Enchanted Voyage and Other Studies (1962), and The Ecological Theatre and the Evolutionary Play (1965). For 12 years, beginning in 1943, Hutchinson wrote a regular column titled “Marginalia” for the American Scientist. His thoughtful columns examined the impact on society of scientific issues of the day. Hutchinson’s skill at writing, as well as his literary interests, was recognized by Yale’s literary society, the Elizabethan Club, which twice elected him president. He was also a member of the Connecticut Academy of Arts and Sciences and served as its president in 1946. While Hutchinson built his reputation on his research and writing, he also was considered an excellent teacher. His teaching career began with a wide range of courses including beginning biology, entomology, and vertebrate embryology. He later added limnology and other graduate courses to his areas of expertise. He was personable as well as innovative, giving his students illustrated note sheets, for
example, so they could concentrate on his lectures without worrying about taking their own notes. Leading oceanographer Gordon Riley was among the students whose careers were changed by Hutchinson’s teaching. Riley enrolled in Yale’s doctoral program with the intention of becoming an experimental embryologist. But after one week in Hutchinson’s limnology class, he had decided to do his dissertation research on a pond. Hutchinson loved Yale. He particularly cherished his fellowship in the residential Saybrook College. He was also very active in several professional associations, including the American Academy of Arts and Sciences, the American Philosophical Society, and the National Academy of Sciences. He served as president of the American Society of Limnology and Oceanography in 1947, the American Society of Naturalists in 1958, and the International Association for Theoretical and Applied Limnology from 1962 until 1968. Hutchinson retired from Yale as professor emeritus in 1971, but continued his writing and research for 20 more years, until just months before his death. He produced several books during this time, including the third volume of his treatise, as well as a textbook titled An Introduction to Population Ecology (1978), and memoirs of his early years, The Kindly Fruits of the Earth (1979). He also occasionally returned to his musings on science and society, writing about several topical issues in 1983 for the American Scientist. Here, he examined the question of nuclear disarmament, speculating that “it may well be that total nuclear disarmament would remove a significant deterrent to all war.” In the same article, he also philosophized on differences in behavior between the sexes: “On the whole, it would seem that, in our present state of evolution, the less aggressive, more feminine traits are likely to be of greater value to us, though always endangered by more aggressive, less useful tendencies. Any such sexual difference, small as it may be, is something on which perhaps we can build.” Several of Hutchinson’s most prestigious honors, including the Tyler Award, came during his retirement. Hutchinson earned the $50,000 award, often called the Nobel Prize for conservation, in 1974. That same year, the National Academy of Sciences gave him the Frederick Garner Cottrell Award for Environmental Quality. He was awarded the Franklin Medal from the Franklin Institute in 1979, the Daniel Giraud Elliot Medal from the National Academy of Sciences in 1984, and the Kyoto Prize in Basic Science from Japan in 1986. Having once rejected a National Medal of Science because it would have been bestowed on him by President Richard Nixon, he was awarded the medal posthumously by President George Bush in 1991. Hutchinson’s first marriage, to Grace Evelyn Pickford, ended with a divorce in 1933. During the six weeks’ residence
the state of Nevada then required to grant divorces, he studied the lakes near Reno and wrote a major paper on freshwater ecology in arid climates. Later that year, Hutchinson married Margaret Seal, who died in 1983 from Alzheimer’s disease. Hutchinson cared for her at home during her illness. In 1985, he married Anne Twitty Goldsby, whose care enabled him to travel extensively and continue working in spite of his failing health. When she died unexpectedly in December 1990, the ailing widower returned to his British homeland. He died in London on May 17, 1991, and was buried in Cambridge. [Cynthia Washam]
RESOURCES BOOKS Hutchinson, George E. A Preliminary List of the Writings of Rebecca West. Yale University Library, 1957. ———. A Treatise on Limnology. Wiley, Vol. 1, 1957; Vol. 2, 1967; Vol. 3, 1979; Vol. 4, 1993. ———. An Introduction to Population Ecology. Yale University Press, 1978. ———. The Clear Mirror. Cambridge University Press, 1937. ———. The Ecological Theater and the Evolutionary Play. Yale University Press, 1965. ———. The Enchanted Voyage and Other Studies. Yale University Press, 1962. ———. The Itinerant Ivory Tower. Yale University Press, 1952. ———. The Kindly Fruits of the Earth. Yale University Press, 1979.
PERIODICALS
Edmondson, W. T. "Resolution of Respect." Bulletin of the Ecological Society of America 72 (1991): 212–216.
Edmondson, Y. H., ed. "G. Evelyn Hutchinson Celebratory Issue." Limnology and Oceanography 16 (1971): 167–477.
Hutchinson, George E. "A Swimming Grasshopper." Entomological Record and Journal of Variation 30 (1918): 138.
———. "Concluding Remarks." Bulletin of Mathematical Biology 53 (1991): 193–213.
———. "Ianula: An Account of the History and Development of the Lago di Monterosi, Latium, Italy." Transactions of the American Philosophical Society 64 (1970): part 4.
———. "Marginalia." American Scientist 31 (1943): 270.
———. "Marginalia." American Scientist (November–December 1983): 639–644.
Hybrid vehicles
The roughly 200 million automobiles and light trucks currently in use in the United States travel approximately 2.4 trillion miles every year and consume almost two-thirds of the U.S. oil supply. They also produce about two-thirds of the carbon monoxide, one-third of the lead and nitrogen oxides, and a quarter of all volatile organic compounds (VOCs) emitted in the country. More efficient transportation energy use could dramatically improve environmental quality as well as
save billions of dollars every year in payments to foreign governments. In response to high gasoline prices in the 1970s and early 1980s, automobile gas-mileage averages in the United States more than doubled, from 13 mpg in 1975 to 28.8 mpg in 1988. Unfortunately, cheap fuel and the popularity of sport utility vehicles (SUVs) and light trucks in the 1990s caused fuel efficiency to slide back below where it had been 25 years earlier. By 2002, the average mileage for U.S. cars and trucks was only 27.6 mpg. Amory B. Lovins of the Rocky Mountain Institute in Colorado estimated that raising the average fuel efficiency of the United States car and light truck fleet by one mile per gallon would cut oil consumption by about 295,000 barrels per day; in one year, this would equal the total amount the Interior Department hopes to extract from the Arctic National Wildlife Refuge (ANWR) in Alaska. (A rough check of this arithmetic appears in the sketch at the end of this entry.) It is not inevitable that we consume and pollute so much, and a number of alternative transportation options are already available. The lowest possible fossil-fuel-consumption option, of course, is to walk, skate, ride a bicycle, or use some other form of human-powered movement. Many people, however, want or need the comfort and speed of a motor vehicle. Several models of battery-powered electric automobiles have been built, but the batteries are heavy and expensive and require more frequent recharging than most customers will accept. Even though 90% of all daily commutes are less than 50 mi (80 km), most people want the capability to take a long road trip of several hundred miles without needing to stop for fuel or recharging. An alternative that appears to have much more customer appeal is the hybrid gas-electric vehicle. The first hybrid to be marketed in the United States was the two-seat Honda Insight. A 3-cylinder, 1.0-liter gasoline engine is the main power source for this sleek, lightweight vehicle; a 7-hp (horsepower) electric motor helps during acceleration and hill climbing. When the small battery pack begins to run down, it is recharged by the gas engine, so the vehicle never needs to be plugged in. More electricity is captured during "regenerative" braking, further increasing efficiency. With a streamlined, lightweight plastic and aluminum body, the Insight gets about 75 mpg (31.9 km/l) in highway driving and has emissions low enough to qualify as a "super low emission vehicle," meeting the strictest air quality standards anywhere in the United States. Quick acceleration and nimble handling make the Insight fun to drive. Current cost is about $20,000. Perhaps the biggest drawback of the Insight is its limited passenger and cargo capacity. Although the vast majority of all motor vehicle trips made in the United States involve only a single driver, most people want the ability to carry more than one passenger or several suitcases at least
occasionally. To meet this need, Honda introduced a hybrid-engine version of its popular Civic line in 2002, with four doors and ample space for four adults plus a reasonable amount of luggage. The 5-speed manual version of the Civic hybrid gets 48 mpg in both city and highway driving. With a history of durability and consumer satisfaction in other Honda models, and a 10-year warranty on its battery and drive train, the hybrid Civic appears to offer the security that consumers will want in adopting this new technology. Toyota also has introduced a hybrid vehicle, the Prius. Similar in size to the Honda Civic, the Prius comes in a four-door model with enough room for the average American family. During most city driving it depends only on its quiet, emission-free electric motor. The batteries needed to drive the 40-hp electric motor are stacked behind the back seat, leaving a surprisingly large trunk for luggage. The 70-hp, 1.5-liter gasoline engine kicks in to help accelerate or when the batteries need recharging. Getting about 52 mpg (22 km/l) in city driving, the Prius is one of the most efficient cars on the road and can travel more than 625 mi (1,000 km) without refueling. Some drivers are unnerved by the noiseless electric motor: sitting at a stoplight, the car makes no sound at all, as if the engine had died, but when the light changes it glides off silently and smoothly. Introduced in Japan in 1997, the Prius sells in the United States for about the same price as the Honda hybrids. The Sierra Club estimates that in 100,000 mi (160,000 km) of driving, a Prius will generate 27 tons of CO2, a Ford Taurus 64 tons, and the Ford Excursion SUV 134 tons. In 1999, the Sierra Club awarded both the Insight and the Prius an "excellence in engineering" award, the first time the organization had ever endorsed commercial products. Both Ford and General Motors (GM) have announced intentions to build hybrid engines for their popular sport utility vehicles and light trucks. This program may be more for public relations, however, than to save fuel or reduce pollution. The electrical generators coupled to the engines of these vehicles will produce only 12 volts of power, far less than the roughly 42 volts needed to help drive the wheels. Instead, the electricity generated by the gasoline-burning engine will be used only to power accessories such as video recorders, computers, on-board refrigerators, and the like. Having this electrical power available will probably increase fuel consumption rather than reduce it, but for uncritical consumers it provides a justification for continuing to drive huge, inefficient vehicles. In 2002, President G. W. Bush announced that he was abandoning the $1.5 billion government-subsidized project to develop high-mileage gasoline-fueled vehicles started with great fanfare eight years earlier by the Clinton/Gore administration. Instead, Bush threw his support behind a
plan to develop hydrogen-based fuel cells to power the automobiles of the future. A fuel cell uses a semi-permeable membrane, or electrolyte, that allows the passage of charged atoms (ions) but is impermeable to electrons, thereby generating an electrical current between an anode and a cathode. A fuel cell run on pure oxygen and hydrogen produces no waste products except drinkable water and radiant heat. Fossil fuels can be used as the source of the hydrogen, but some pollutants (most commonly carbon dioxide) are released in the process of hydrogen generation. Currently, the available fuel cells need to be quite large to provide enough energy for a vehicle. Fuel cell-powered buses and vans that have space for a large power system are being tested, but a practical family vehicle appears to be years away. While they agree that fuel cells offer a wonderful option for cars of the future, many environmentalists regard putting all our efforts into this one project as misguided at best. It probably will be at least a decade before a fuel-cell vehicle is commercially available. [William P. Cunningham Ph.D.]
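The oil-savings estimate cited earlier in this entry is easy to reproduce. The following Python sketch is an illustration only: the fleet mileage figure is the round number quoted above, and the assumed fleet-average fuel economy is a hypothetical input, not measured data.

# Back-of-the-envelope check of the fuel-economy savings estimate.
# Inputs are round, illustrative figures, not authoritative data.
GALLONS_PER_BARREL = 42.0
DAYS_PER_YEAR = 365.0

def barrels_per_day_saved(annual_miles, base_mpg, improved_mpg):
    """Oil saved, in barrels per day, when the fleet average improves."""
    gallons_saved = annual_miles / base_mpg - annual_miles / improved_mpg
    return gallons_saved / GALLONS_PER_BARREL / DAYS_PER_YEAR

fleet_miles = 2.4e12   # ~2.4 trillion vehicle-miles per year (from this entry)
base_mpg = 22.0        # assumed fleet-average fuel economy (illustrative)
saved = barrels_per_day_saved(fleet_miles, base_mpg, base_mpg + 1.0)
print(f"Saving from +1 mpg: {saved:,.0f} barrels/day")
# With these inputs the result is roughly 309,000 barrels/day, the same
# order of magnitude as the 295,000 barrels/day estimate quoted above.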
RESOURCES
BOOKS
Hodkinson, Ron, and John Fenton. Lightweight Electric/Hybrid Vehicle Design. Warrendale, PA: Society of Automotive Engineers, 2001.
Jurgen, Ronald K., ed. Electric and Hybrid-Electric Vehicles. Warrendale, PA: Society of Automotive Engineers, 2002.
Koppel, Tom. Powering the Future: The Ballard Fuel Cell and the Race to Change the World. New York: John Wiley & Sons, 1999.
PERIODICALS
"Dark Days for Detroit—The Big Three's Gravy Train in Recent Years—Fat Profits from Trucks—Is Being Derailed by a New Breed of Hybrid Vehicles from Europe and Japan." Business Week, January 28, 2002, 61.
Ehsani, M., K. M. Rahman, and H. A. Toliyat. "Propulsion System Design of Electric and Hybrid Vehicles." IEEE Transactions on Industrial Electronics 44 (1997): 19.
Hermance, David, and Shoichi Sasaki. "Special Report on Electric Vehicles—Hybrid Electric Vehicles Take to the Streets." IEEE Spectrum 35 (1998): 48.
Jones, M. "Hybrid Vehicles—The Best of Both Worlds?" Chemistry and Industry 15 (1995): 589.
Maggetto, G., and J. Van Mierlo. "Fuel Cells: Systems and Applications—Electric Vehicles, Hybrid Vehicles and Fuel Cell Electric Vehicles: State of the Art and Perspectives." Annales de chimie—Science des matériaux 26 (2001): 9.
Hydrocarbons
Strictly defined, hydrocarbons are compounds composed only of carbon and hydrogen; in environmental usage the term is often extended to derivatives that also contain chlorine, oxygen, nitrogen, and other atoms. Hydrocarbons are classified according to the arrangement of their carbon atoms and the types of chemical bonds. The major classes include the aromatic, or carbon-ring,
compounds; the alkanes (also called aliphatic or paraffin compounds), with straight or branched chains and single bonds; and the alkenes and alkynes, with double and triple bonds, respectively. Most hydrocarbon fuels are mixtures of many compounds. Gasoline, for example, includes several hundred hydrocarbon compounds, including paraffins, olefins, and aromatic compounds, and consequently exhibits a host of possible environmental effects. All of the fossil fuels, including crude oil and petroleum products, as well as many other compounds important to industry, are hydrocarbons. Hydrocarbons are environmentally important for several reasons. First, hydrocarbons give off greenhouse gases, especially carbon dioxide, when burned, and they are important contributors to smog. In addition, many aromatic hydrocarbons and halogenated hydrocarbons are toxic or carcinogenic.
Hydrochlorofluorocarbons
The term hydrochlorofluorocarbon (HCFC) refers to halogenated hydrocarbons that contain chlorine and/or fluorine in place of some of the hydrogen atoms in the molecule. They are chemical cousins of the chlorofluorocarbons (CFCs) but differ from them in containing less chlorine. A special subgroup of the HCFCs is the hydrofluorocarbons (HFCs), which contain no chlorine at all. A total of 53 HCFCs and HFCs are possible; representative examples include CHF3 (HFC-23), CHCl2CF3 (HCFC-123), CH2FCClF2 (HCFC-133b), and CH3CHClF (HCFC-151a). (The numbering scheme is decoded in the sketch at the end of this entry.) The HCFCs and HFCs have become commercially and environmentally important since the 1980s. Their growing significance has resulted from increasing concerns about the damage being done to stratospheric ozone by CFCs. Significant production of the CFCs began in the late 1930s. At first they were used almost exclusively as refrigerants; gradually other applications, especially as propellants and blowing agents, were developed. By 1970 the production of CFCs was growing by more than 10% per year, with worldwide production of well over 662 million lb (300 million kg) of one family member alone, CFC-11. Environmental studies began to show, however, that CFCs decompose in the upper atmosphere. Chlorine atoms produced in this reaction attack ozone molecules (O3), converting them to normal oxygen (O2). Since stratospheric ozone provides protection for humans against solar ultraviolet radiation, this finding was a source of great concern. By 1987, 31 nations had signed the Montreal Protocol, agreeing to cut back significantly on their production of CFCs. The question became how nations were to find substitutes for the CFCs. The problem was especially severe in developing nations, where CFCs are widely used in refrigeration and air-conditioning systems. Countries like China and India refused to take part in the CFC-reduction plan unless
developed nations helped them switch over to an equally satisfactory substitute. Scientists soon learned that the HCFCs were a more benign alternative to the CFCs: compounds with less chlorine than the traditional CFCs proved less stable and often decompose before they reach the stratosphere. By mid-1992, the United States Environmental Protection Agency (EPA) had selected 11 chemicals it considered possible replacements for CFCs; nine of those compounds are HFCs and two are HCFCs. The HCFC-HFC solution is not totally satisfactory, however. Computer models have shown that nearly all of the proposed substitutes will have at least some slight effect on the ozone layer and the greenhouse effect. In fact, the British government considered banning one possible substitute for CFCs, HCFC-22, almost as soon as the compound was developed. In addition, one of the most promising candidates, HCFC-123, was found to be carcinogenic in rats. Finally, the cost of replacing CFCs with HCFCs and HFCs is expected to be high. One consulting firm, Metroeconomica, has estimated that CFC substitutes may be six to 15 times as expensive as the CFCs themselves. See also Aerosol; Air pollution; Air pollution control; Air quality; Carcinogen; Ozone layer depletion; Pollution; Pollution control [David E. Newton]
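The HCFC/HFC designations quoted above follow the refrigerant industry's standard "rule of 90": adding 90 to the numeric part gives a three-digit number whose digits are, in order, the counts of carbon, hydrogen, and fluorine atoms; the remaining bonding positions on the saturated carbon skeleton are filled by chlorine. The short Python sketch below illustrates the decoding; it is offered only as an illustration of the naming rule, not as a chemical cataloguing tool.

# Decode halocarbon numbers ("rule of 90"): number + 90 = C, H, F counts.
# Remaining bonds on a saturated acyclic carbon skeleton are chlorine.

def decode(number: int) -> str:
    code = number + 90
    carbons = code // 100
    hydrogens = (code // 10) % 10
    fluorines = code % 10
    bonds = 2 * carbons + 2          # substituent sites on a CnH(2n+2) skeleton
    chlorines = bonds - hydrogens - fluorines
    return f"C{carbons} H{hydrogens} F{fluorines} Cl{chlorines}"

for name, num in [("HFC-23", 23), ("HCFC-123", 123),
                  ("HCFC-133b", 133), ("HCFC-151a", 151)]:
    # trailing letters (a, b, ...) denote isomers; they do not change counts
    print(name, "->", decode(num))
# HFC-23   -> C1 H1 F3 Cl0  (CHF3)
# HCFC-123 -> C2 H1 F3 Cl2  (CHCl2CF3)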
RESOURCES
PERIODICALS
Johnson, J. "CFC Substitutes Will Still Add to Global Warming." New Scientist 126 (April 14, 1990): 20.
MacKenzie, D. "Cheaper Alternatives for CFCs." New Scientist 126 (June 30, 1990): 39–40.
Pool, R. "Red Flag on CFC Substitute." Nature 352 (July 11, 1991): 352.
Stone, R. "Ozone Depletion: Warm Reception for Substitute Coolant." Science 256 (April 3, 1992): 22.
Hydrogen
The lightest of all chemical elements, hydrogen has a density about one-fourteenth that of air. It has a number of special chemical and physical properties; for example, hydrogen
has the second lowest boiling and freezing points of all the elements (only helium's are lower). The combustion of hydrogen produces large quantities of heat, with water as the only waste product. From an environmental standpoint, this makes hydrogen a highly desirable fuel, and many scientists foresee the day when hydrogen will replace fossil fuels as our most important source of energy.
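The clean-burning reaction behind hydrogen's appeal can be written out explicitly; the enthalpy below is the standard higher-heating-value figure found in general chemistry references, quoted here for illustration:

\[
2\,\mathrm{H_2(g)} + \mathrm{O_2(g)} \longrightarrow 2\,\mathrm{H_2O(l)},
\qquad \Delta H^{\circ} \approx -286\ \text{kJ per mole of } \mathrm{H_2}
\]

On a mass basis this is roughly 142 MJ per kilogram of hydrogen, about three times the energy content of a kilogram of gasoline, released with no carbon dioxide at the point of use.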
Hydrogeology
Sometimes called groundwater hydrology or geohydrology, this branch of hydrology is concerned with the relationship between subsurface water and geologic materials. Of primary interest is the saturated zone of subsurface water, called groundwater, which occurs in rock formations and in unconsolidated materials such as sands and gravels. Groundwater is studied in terms of its occurrence, amount, flow, and quality. Historically, much of the work in hydrogeology centered on finding sources of groundwater to supply water for drinking, irrigation, and municipal uses. More recently, groundwater contamination by pesticides, chemical fertilizers, toxic wastes, and petroleum and chemical spills has become a new area of concern for hydrogeologists.
Hydrologic cycle
The natural circulation of water on the earth is called the hydrologic cycle. Water cycles from bodies of water via evaporation to the atmosphere and eventually returns to the oceans as precipitation, runoff from streams and rivers, and groundwater flow. Water molecules are transformed from liquid to vapor and back to liquid within this cycle. On land, water evaporates from the soil or is taken up by plant roots and eventually transpired into the atmosphere through plant leaves; the sum of evaporation and transpiration is called evapotranspiration. Water is recycled continuously: the molecules of water in a glass used to quench your thirst today may at some point in time have dissolved minerals deep in the earth as groundwater flow, fallen as rain in a tropical typhoon, been transpired by a tropical plant, been temporarily stored in a mountain glacier, or quenched the thirst of people thousands of years ago. The hydrologic cycle has no real beginning or end but is a circulation of water that is sustained by solar energy and influenced by the force of gravity. Because the supply of water on the earth is fixed, there is no net gain or loss of water over time. On an average annual basis, global evaporation must equal global precipitation. Likewise, for any
body of land or water, changes in storage must equal the total inflow minus the total outflow of water. This is the hydrologic, or water, balance (a worked example appears at the end of this entry). At any point in time, water on the earth is either in active circulation or in storage. Water is stored in icecaps, soil, groundwater, the oceans, and other bodies of water, and much of this water is only temporarily stored. The residence time of water in the atmosphere is several days, and atmospheric moisture amounts to only about 0.04% of the total freshwater on the earth. For rivers and streams, residence time is weeks; for lakes and reservoirs, several years; for groundwater, hundreds to thousands of years; for oceans, thousands of years; and for icecaps, tens of thousands of years. As the driving force of the hydrologic cycle, solar radiation provides the energy necessary to evaporate water from the earth's surface, almost three-quarters of which is covered by water. Nearly 86% of global precipitation originates from ocean evaporation. Energy consumed by the conversion of liquid water to vapor cools the temperature of the evaporating surface. This same energy, the latent heat of vaporization, is released when water vapor changes back to liquid. In this way, the hydrologic cycle globally redistributes heat energy as well as water. Once in the atmosphere, water moves in response to weather circulation patterns and is often transported great distances from where it was evaporated. In this way, the hydrologic cycle governs the distribution of precipitation and, hence, the availability of fresh water over the earth's surface. About 10% of atmospheric water falls as precipitation each day and is simultaneously replaced by evaporation. This precipitation is unevenly distributed over the earth's surface and, to a large extent, determines the types of ecosystems that exist at any location on the earth; it likewise governs much of the human activity that occurs on the land. The earliest civilizations settled in close proximity to fresh water, and for centuries humans have been striving to correct, or cope with, this uneven distribution of water. Historically, we have extracted stored water or developed new storage in areas of excess, or during periods of excess precipitation, so that water could be available where and when it is most needed. Understanding the processes of the hydrologic cycle can help us develop solutions to water problems. For example, we know that precipitation occurs unevenly over the earth's surface because of many complex factors that trigger precipitation. For precipitation to occur, moisture must be available and the atmosphere must be cooled to the dew point, the temperature at which air becomes saturated with water vapor. This cooling of the atmosphere occurs along storm fronts or in areas where moist air masses move into mountain ranges and are pushed up into colder air. However, atmospheric particles must be present for the moisture to
condense upon, and water droplets must coalesce until they are large enough to fall to the earth under the influence of gravity. Recognition of the factors that cause precipitation has led to efforts to create conditions favorable for precipitation over land surfaces via cloud seeding. Limited success has been achieved by seeding clouds with particles, thus promoting the condensation-coalescence process. Precipitation has not always increased with cloud seeding, and the question of whether cloud seeding limits precipitation in other, downwind areas is of both economic and environmental concern. Parts of the world have abundant moisture in the atmosphere, but it occurs as fog because the mechanisms needed to transform this moisture into precipitation do not exist. In some dry coastal areas, for example, there is no measurable precipitation for years at a time, yet fog is prevalent. By placing huge sheets of plastic mesh along such coasts, fog can be intercepted; it condenses on the sheets and provides sufficient drinking water to supply small villages. Total rainfall alone does not necessarily indicate water abundance or scarcity. The magnitude of evapotranspiration compared to precipitation determines to some extent whether water is abundant or in short supply. On a continental basis, evapotranspiration represents from 56 to 80% of annual precipitation. For individual watersheds within continents, these percentages are more extreme and point to the importance of evapotranspiration in the hydrologic cycle. Weather circulation patterns responsible for water shortages in some parts of the world are also responsible for excessive precipitation, floods, and related catastrophes in other parts of the world. Precipitation that falls on land but is not stored, evaporated, or transpired becomes excess water. This excess water eventually reaches groundwater, streams, lakes, or the ocean by surface and subsurface flow. If the soil surface is impervious or compacted, water flows over the land surface and reaches stream channels quickly; when surface flow exceeds a channel's capacity, flash flooding is the result. Excessive precipitation can saturate soils and cause flooding no matter what the pathway of flow. For example, in 1988 catastrophic flooding and mudslides in Thailand left over 500 people dead or missing, injured nearly 700 more, destroyed 4,952 homes, and damaged or destroyed 221 roads and 69 bridges. A three-day rainfall of nearly 40 in (1,000 mm) caused hillslopes to become saturated, and the effects of the heavy rainfall were exacerbated by the removal of natural forest cover for conversion to rubber plantations and agricultural crops. Although floods and mudslides occur naturally, many of the pathways of water flow that contribute to such occurrences can be influenced by human activity. Any time vegetative
cover is severely reduced and soil is exposed to direct rainfall, surface water flow and soil erosion can degrade watershed systems and their aquatic ecosystems. The implications of global warming, or greenhouse effects, for the hydrologic cycle raise several questions. The possible changes in the frequency and occurrence of droughts and floods are of major concern, particularly given projections of population growth. Global warming may leave some areas drier while others experience higher precipitation. Globally, increased temperature will increase evaporation from the oceans and ultimately result in more precipitation; the pattern of precipitation changes over the earth's surface, however, cannot be predicted at the present time. The hydrologic cycle influences nutrient cycling in ecosystems, processes of soil erosion and sediment transport, and the transport of pollutants. Water is an excellent solvent; minerals, salts, and nutrients become dissolved and are transported by water flow. The hydrologic cycle is thus an important driving mechanism of nutrient cycling. As a transporting agent, water moves minerals and nutrients to plant roots; as plants die and decay, water leaches out nutrients and carries them downstream. The physical action of rainfall on soil surfaces and the force of running water can seriously erode soils and transport sediments downstream. Any minerals, nutrients, and pollutants within the soil are likewise transported by water flow into groundwater, streams, lakes, or estuaries. Atmospheric moisture transports and deposits atmospheric pollutants, including those responsible for acid rain. Sulfur and nitrogen oxides are added to the atmosphere by the burning of fossil fuels; because water is such a good solvent, they form acidic compounds in atmospheric moisture that are transported and deposited great distances from their original sites. Atmospheric pollutants and acid rain have damaged freshwater lakes in the Scandinavian countries and terrestrial vegetation in eastern Europe. In 1983, such pollution caused an estimated $1.2 billion loss of forests in the former West Germany alone. Once pollutants enter the atmosphere and become subject to the hydrologic cycle, problems of acid rain have little chance of resolution; programs that reduce atmospheric emissions in the first place, however, provide some hope. An improved understanding of the hydrologic cycle is needed to better manage water resources and our environment. Opportunities exist to improve our global environment, but better knowledge of human impacts on the hydrologic cycle is needed to avoid unwanted environmental effects. See also Estuary; Leaching [Kenneth N. Brooks]
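The water-balance bookkeeping described above can be made concrete with a few lines of arithmetic. In the Python sketch below, the annual figures are invented round numbers chosen only to illustrate the balance; they do not describe any real watershed.

# Simple annual water balance for a watershed (all values in mm of depth).
# Identity: change_in_storage = precipitation - evapotranspiration - runoff
# The numbers are illustrative assumptions, not measurements.

precipitation = 1000.0       # annual precipitation (mm)
evapotranspiration = 650.0   # annual ET, 65% of P (within the 56-80% range
                             # cited in this entry for continental averages)
runoff = 320.0               # annual streamflow leaving the watershed (mm)

change_in_storage = precipitation - evapotranspiration - runoff
print(f"Change in storage: {change_in_storage:+.0f} mm")      # +30 mm

# Solving the same identity for runoff when storage change is assumed zero:
runoff_if_steady = precipitation - evapotranspiration
print(f"Runoff under steady storage: {runoff_if_steady:.0f} mm")  # 350 mm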
The hydrologic or water cycle. (McGraw-Hill Inc. Reproduced by permission.)
RESOURCES
BOOKS
Committee on Opportunities in the Hydrologic Sciences, Water Science and Technology Board, National Research Council. Opportunities in the Hydrologic Sciences. Washington, DC: National Academy Press, 1991.
Lee, R. Forest Hydrology. New York: Columbia University Press, 1980.
Postel, S. "Air Pollution, Acid Rain, and the Future of Forests." Worldwatch Paper 58. Washington, DC: Worldwatch Institute, 1984.
Van der Leeden, F., F. L. Troise, and D. K. Todd. The Water Encyclopedia. 2nd ed. Chelsea, MI: Lewis Publishers, 1990.
PERIODICALS
Nash, N. C. "Chilean Engineers Find Water for Desert by Harvesting Fog in Nets." New York Times, July 14, 1992, B5.
OTHER
Rao, Y. S. "Flash Floods in Southern Thailand." Tiger Paper 15 (1988): 1–2. Bangkok: Regional Office for Asia and the Pacific (RAPA), Food and Agriculture Organization of the United Nations.

Hydrology
The science and study of water, including its physical and chemical properties and its occurrence on earth. Most commonly, hydrology encompasses the study of the amount, distribution, circulation, timing, and quality of water. It includes the study of rainfall, snow accumulation and melt, water movement over and through the soil, the flow of water in saturated, underground geologic materials (groundwater), the flow of water in channels (called streamflow), evaporation and transpiration, and the physical, chemical, and biological characteristics of water. Solving problems concerned with water excesses, flooding, water shortages, and water pollution lies in the domain of hydrologists. With increasing concern about water pollution and its effects on humans and on aquatic ecosystems, the practice of hydrology has expanded into the study and management of the chemical and biological characteristics of water.

Hydroponics
Hydroponics is the practice of growing plants in water as opposed to soil. It comes from the Greek hydro ("water") and ponos ("labor"), implying "water working." The essential
macronutrients and micronutrients (trace elements) needed by the plants are supplied in the water. Hydroponic methods have been used for more than 2,000 years, dating back to the Hanging Gardens of Babylon. More recently, hydroponics has been used by plant physiologists to discover which nutrients are essential for plant growth. Unlike soil, where nutrient levels are unknown and variable, precise amounts and kinds of minerals can be added to deionized water, and removed individually, to find out their role in plant growth and development. During World War II, hydroponics was used to grow vegetable crops by U.S. troops stationed on some Pacific islands. Today, hydroponics is becoming a more popular alternative to conventional agriculture in locations with low or inaccessible sources of water or where land available for farming is scarce. For example, islands and desert areas like the American Southwest and the Middle East are prime regions for hydroponics. Plants are typically grown in greenhouses to prevent water loss. Even in temperate areas where fresh water is readily available, hydroponics can be used to grow crops in greenhouses during the winter months. Two methods are traditionally used in hydroponics. The original technique is the water method, in which plants are supported from a wire mesh or similar framework so that the roots hang into troughs that receive continuous supplies of nutrients. A recent modification is the nutrient-film technique (NFT), also called the nutrient-flow method, in which the trough is lined with plastic. Water flows continuously over the roots, decreasing the stagnant boundary layer surrounding each root and thus enhancing nutrient uptake; this provides a versatile, lightweight, and inexpensive system. In the second method, plants are supported in a growing medium such as sterile sand, gravel, crushed volcanic rock, vermiculite, perlite, sawdust, peat moss, or rice hulls. The nutrient solution is supplied from overhead or underneath holding tanks, either continuously or semi-continuously using a drip method, and is usually not reused. On some Caribbean islands like St. Croix, hydroponics is being used in conjunction with intensive fish farms (e.g., for tilapia) that use recirculated water, a practice more recently known as aquaponics. This is a "win-win" situation because the water carrying nitrogenous wastes, which are toxic to the fish, is passed through large greenhouses with hydroponically grown plants like lettuce; the plants remove the nutrients, and the water is returned to the fish tanks. There is a sensitive balance between the stocking density of fish and lettuce production. Too high a ratio of lettuce plants to fish results in lower lettuce production due to nutrient limitation; too low a ratio also results in low vegetable production, but in this case because of the buildup of toxic chemicals. In one study, the optimum yield came from a ratio of 1.9 lettuce plants to 1 fish, and one pound (0.45 kg) of feed per day was appropriate to feed 33
lb (15 kg) of tilapia fingerlings, which sustained 189 lettuce plants and produced nearly 3,300 heads of lettuce annually (figures worked through in the sketch at the end of this entry). When integrated fish-hydroponic recirculating systems are compared with separate production systems, the results clearly favor the former: the combined costs and chemical requirements of the separate production systems were nearly two to three times greater than those of the recirculating system producing the same amounts of lettuce and fish. However, there are drawbacks that must be considered: disease outbreaks in plants and/or fish; the need to maintain proper nutrient (especially trace element), plant, and fish levels; uncertainties in fish and produce market prices; and the need for highly skilled labor. The integrated method can be adapted to grow other types of vegetables such as strawberries, ornamental plants such as roses, and other types of animals such as shellfish. Some teachers have even incorporated this technique into their classrooms to illustrate ecological as well as botanical and aquacultural principles. Some proponents of hydroponic gardening make fairly optimistic claims, stating that a sophisticated unit is no more expensive than an equivalent parcel of farmed land. They also argue that hydroponic units (commonly called "hydroponicums") require less attention than terrestrial agriculture. Some examples of different types of successful hydroponicums follow. A grower in the desert area of southern California has used the NFT system for over 18 years, growing plants without substrate in water contained in open cement troughs that cover 3 acres (1.2 ha). A hydroponicum in Orlando, Florida, utilizes the Japanese system of planting seedlings on styrofoam boards that float on the surface of a constantly aerated nutrient bath. An operation in Queens, New York, uses the Israeli Ein-Gedi system, which allows plant roots to hang free inside a tube that is sprayed regularly with a nutrient solution, yielding 150,000 lb (68,000 kg) of tomatoes, 100,000 lb (45,500 kg) of cucumbers, and one million heads of lettuce per acre (0.4 ha) each year. Finally, a farmer in Blooming Prairie, Minnesota, uses the NFT system in a greenhouse to grow Bibb and leafy lettuce year-round, selling the produce to area hospitals, some supermarkets, and a few produce warehouses. Most people involved in hydroponics agree that the main disadvantages are the high costs of labor, lighting, water, and energy; root fungal infections can also be spread easily. Advantages include the ability to grow crops in arid regions or where land is at a premium; more controlled conditions, such as the ability to grow plants indoors and thus minimize pests and weeds; greater planting densities; and a constant supply of nutrients. Hydroponic gardening is becoming more popular for home gardeners, and it may also be a viable option for growing crops in some developing countries. Overall, the future looks bright for hydroponics. [John Korstad]
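The aquaponic stocking figures quoted above can be cross-checked with simple arithmetic. The Python sketch below uses only the numbers reported in this entry; the derived quantities (harvest cycles per year, feed per head of lettuce) are illustrative inferences, not data from the original study.

# Cross-check of the aquaponic stocking figures quoted in this entry.
fish_mass_lb = 33.0      # tilapia fingerlings supported by 1 lb of feed per day
feed_lb_per_day = 1.0
lettuce_plants = 189     # plants sustained by that fish biomass
heads_per_year = 3300    # annual lettuce output reported

# Harvest cycles implied by the reported annual output:
cycles_per_year = heads_per_year / lettuce_plants
print(f"Implied harvests per year: {cycles_per_year:.1f}")    # ~17.5
print(f"Days per lettuce crop: {365 / cycles_per_year:.0f}")  # ~21 days

# Feed required per head of lettuce produced:
feed_per_head = feed_lb_per_day * 365 / heads_per_year
print(f"Feed per head of lettuce: {feed_per_head:.2f} lb")    # ~0.11 lb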
RESOURCES
BOOKS
Resh, H. M. Hydroponic Food Production: A Definitive Guidebook for the Advanced Home Gardener and Commercial Hydroponic Grower. 5th ed. Santa Barbara: Woodbridge Press, 1995.
Saffell, H. L. How to Start on a Shoestring and Make a Profit with Hydroponics. Franklin, TN: Mayhill Press, 1994.
PERIODICALS
Nicol, E. "Hydroponics and Aquaculture in the High School Classroom." The American Biology Teacher 52 (1990): 182–184.
Rakocy, J. E. "Hydroponic Lettuce Production in a Recirculating Fish Culture System." Island Perspectives 3 (1988–89): 5–10.
Hydropower see Aswan High Dam; Dams (environmental effects); Glen Canyon Dam; James Bay hydropower project; Low-head hydropower; Tellico Dam; Tennessee Valley Authority; Three Gorges Dam
Hydrothermal vents
Hydrothermal vents are hot springs located on the ocean floor. The vents spew out water heated by magma, the molten rock from below the earth's crust. Water temperatures higher than 660°F (about 350°C) have been recorded at some vents. Water flowing from the vents contains minerals such as iron, copper, and zinc; the minerals fall like rain and settle on the ocean floor, and over time the deposits build up and form a chimney around the vent. The first hydrothermal vents were discovered in 1977 by scientists aboard the submersible Alvin, who found them near the Galápagos Islands in the eastern Pacific Ocean. Other vents have since been discovered in the Pacific, Atlantic, and Indian oceans. In 2000, scientists discovered a field of hydrothermal vents in the Atlantic Ocean; the area, called the "Lost City," contained chimneys 180 ft (55 m) tall, the largest known. Hydrothermal vents are located at ocean depths of 8,200 to 10,000 ft (2,500 to 3,000 m). The area near a hydrothermal vent is home to unique animals that exist without sunlight and live amid mineral levels that would poison animals living on land. These unique animals include 10-ft (3-m) long tube worms, 1-ft (0.3-m) long clams, and shrimp. [Liz Swain]
Hypolimnion: Lakes see Great Lakes
I
IAEA see International Atomic Energy Agency
Ice age
An ice age is a period of extensive continental glaciation; the term usually refers to the Pleistocene epoch, the most recent such period. Beginning several million years ago with the glaciation of Antarctica, the Pleistocene is marked by at least four major advances and retreats of the ice (excluding Antarctica, which has remained glaciated). Ice ages occur during times when more snow falls in winter than is lost in summer by melting, evaporation, and the calving of ice into water. Alternating glacial and interglacial stages are best explained by a combination of Earth's orbital cycles and changes in carbon dioxide levels. These cycles operate on time scales of tens of millennia; by contrast, global warming projections involve decades, a far more imminent concern for humankind.
Ice age refugia
The series of ice ages that occurred between 2.4 million and 10,000 years ago had a dramatic effect on the climate and the life forms of the tropics. During each glacial period the tropics became both cooler and drier, turning some areas of tropical rain forest into dry seasonal forest or savanna. For reasons associated with local topography, geography, and climate, some areas of forest escaped the dry periods and acted as refuges (refugia) for forest biota. During subsequent interglacials, when humid conditions returned to the tropics, the forests expanded and were repopulated by plants and animals from the species-rich refugia. Ice age refugia today correspond to present-day areas of tropical forest that typically receive high rainfall and often contain unusually large numbers of species, including a high proportion of endemic species. These species-rich refugia are surrounded by relatively species-poor areas of
forest. Refugia are also centers of distribution for obligate forest species (such as the gorilla [Gorilla gorilla]) whose present-day narrow and disjunct distributions are best explained by invoking past episodes of deforestation and reforestation. The location and extent of the forest refugia have been mapped in both Africa and South America. In the African rain forests there are three main centers of species richness and endemism recognized for mammals, birds, reptiles, amphibians, butterflies, freshwater crabs, and flowering plants. These centers are in Upper Guinea, in Cameroon and Gabon, and along the eastern rim of the Zaire basin. In the Amazon basin, more than 20 refugia have been identified for different groups of animals and plants in Peru, Colombia, Venezuela, and Brazil. The precise effect of the ice ages on biodiversity in tropical rain forests is currently a matter of debate. Some have argued that the repeated fluctuations between humid and arid phases created opportunities for the rapid evolution of certain forest organisms. Others have argued the opposite: that the climatic fluctuations resulted in a net loss of species diversity through an increase in the extinction rate. It has also been suggested that refugia owe their species richness not to past climate changes but to other underlying causes, such as a favorable local climate or soil. The discovery of centers of high biodiversity and endemism within the tropical rain forest biome has profound implications for conservation biology. A "refuge rationale" has been proposed by conservationists, whereby ice age refugia are given high priority for preservation, since this would save the largest number of species (including many unnamed, threatened, and endangered species) from extinction. Because refugia survived the past dry-climate phases, they have traditionally supplied the plants and animals that restocked new-growth forests when wet conditions returned. Modern deforestation patterns, however, take no account of forest history or biodiversity, and forest refugia and more recent forests are being destroyed alike. For the first time in millions of years, future tropical
forests which survive the present mass deforestation episode could have no species-rich centers from which they can be restocked. See also Biotic community; Deciduous forest; Desertification; Ecosystem; Environment; Mass extinction
[Neil Cumberlidge Ph.D.]
In situ mining RESOURCES BOOKS Collins, Mark, ed. The Last Rain Forests. London: Mitchell Beazley Publishers, 1990. Kingdon, Jonathan. Island Africa: The Evolution of Africa’s Rare Animals and Plants. Princeton: Princeton University Press, 1989. Sayer, Jeffrey A., et al., eds. The Conservation Atlas of Tropical Forests. New York: Simon and Schuster, 1992. Whitmore, T. C. An Introduction to Tropical Rain Forests. Oxford, England: Clarenden Press, 1990. Wilson, E. O., ed. Biodiversity. Washington DC: National Academy Press, 1988.
see Bureau of Mines
Inbreeding
“Biological Diversification in the Tropics.” Proceedings of the Fifth International Symposium of the Association for Tropical Biology, at Caracas, Venezuela, February 8-13, 1979, edited by Ghillean T. Prance. New York: Columbia University Press, 1982.
Inbreeding occurs when closely related individuals mate with one another. Inbreeding may happen in a small population or result from other isolating factors; the consequence is that little new genetic information is added to the gene pool, so recessive, deleterious alleles become more plentiful and more evident in the population. The manifestations of inbreeding are known as inbreeding depression: a general loss of fitness that can cause high infant mortality and lower birth weights, fecundity, and longevity (the arithmetic of this effect is illustrated in the sketch below). Inbreeding depression is a major concern when attempting to protect small populations from extinction.
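The population-genetic effect described here is captured by Wright's inbreeding coefficient F: the frequency of homozygotes for a recessive allele rises from q² to q² + Fpq. The Python sketch below plugs in an illustrative allele frequency (an assumption, not data) to show how quickly a rare recessive allele becomes "evident."

# How inbreeding exposes a deleterious recessive allele.
# Genotype frequency with inbreeding coefficient F (Wright):
#   freq(aa) = q*q + F*p*q
# The allele frequency q below is an illustrative assumption.

def recessive_frequency(q: float, F: float) -> float:
    p = 1.0 - q
    return q * q + F * p * q

q = 0.01  # rare deleterious allele (assumed)
for F in (0.0, 0.0625, 0.25):  # unrelated; first cousins; full siblings
    print(f"F={F:.4f}: affected frequency = {recessive_frequency(q, F):.5f}")
# F=0.0000: 0.00010  -> about 1 in 10,000
# F=0.0625: 0.00072  -> about 7 in 10,000
# F=0.2500: 0.00258  -> about 26 in 10,000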
Impervious material
Incidental catch
OTHER
As used in hydrology, this term refers to rock and soil material at the earth's surface or within the subsurface that does not permit water to enter or move through it in any perceptible amount. Such materials normally have small pores, or pores that have become clogged (sealed), which severely restrict water entry and movement. At the ground surface, rock outcrops, road surfaces, and severely compacted soil surfaces are considered impervious; these areas shed rainfall easily, causing overland flow or surface runoff that picks up and transports soil particles and causes excessive soil erosion. Soils or geologic strata beneath the earth's surface are considered impervious, or impermeable, if their pores are small and/or not connected.
Improvement cutting
Removal of crooked, forked, or diseased trees from a forest in which tree diameters are 5 in (13 cm) or larger. In forests where the trees are smaller, the same process is called cleaning or weeding. Both have the objective of improving the species composition, stem quality, and/or growth rate of the forest: straight, healthy, vigorous trees of the desired species are favored. By discriminating against certain tree species and eliminating trees with cavities or insect problems, however, improvement cuts can reduce the variety of habitats and thereby diminish biodiversity. An improvement cut is the initial step in preparing a neglected or unmanaged stand for future harvest. See also Clear-cutting; Forest management; Selection cutting
see Bycatch
Incineration
As a method of waste management, incineration refers to the burning of waste. It helps reduce the volume of landfill material and can render toxic substances non-hazardous, provided certain strict guidelines are followed. There are two basic types of incineration: municipal and hazardous waste incineration.

Diagram of a municipal incinerator. (McGraw-Hill Inc. Reproduced by permission.)

Municipal waste incineration
The process of incineration involves the combination of organic compounds in solid wastes with oxygen at high temperature to convert them to ash and gaseous products. A municipal incinerator consists of a series of unit operations: a loading area kept under slightly negative pressure to avoid the escape of odors; a refuse bin loaded by a grappling bucket; a charging hopper leading to an inclined feeder; a furnace of varying type, usually with a horizontal burning grate; a combustion chamber equipped with a bottom-ash and clinker discharge; and a gas flue system leading to an expansion chamber. If byproduct steam is to be produced, either for heating or for power generation, the downstream flue system includes heat-exchanger tubing as well. After the heat has been exchanged, the flue gas proceeds to a series of gas cleanup
systems that neutralize the acid gases (sulfur dioxide and hydrochloric acid, the latter resulting from the burning of chlorinated plastic products), followed by gas scrubbers and then solid/gas separation systems such as baghouses, before discharge through tall stacks. The stack system contains a variety of sensing and control devices that enable the furnace to operate at maximum efficiency consistent with minimal particulate emissions. A continuous log from the monitoring systems is also required for compliance with county and state environmental quality regulations. A municipal incinerator system yields several products: items removed before combustion, such as large metal pieces; grate or bottom ash, which is usually water-sprayed after removal from the furnace for safe storage; fly ash (or top ash), which is removed from the flue system, generally mixed with products from the acid-neutralization process; and finally the flue gases, which are expelled to the environment. If the system is operating optimally, the flue gases will meet emission requirements, and the heavy metals from the wastes will be concentrated in the fly ash. (Typically these heavy metals, which originate from volatile metallic constituents, are lead and arsenic.) The fly ash typically is then stored
in a suitable landfill to avoid future problems of leaching of heavy metals. Some municipal systems blend the bottom ash with the top ash in the plant in order to reduce the measured level of heavy metals by dilution; this practice is undesirable from an environmental viewpoint. Municipal waste incineration has many advantages and disadvantages. Some of the advantages are as follows: 1) The waste volume is reduced to a small fraction of the original. 2) Reduction is rapid and does not require the semi-infinite residence times of a landfill. 3) For a large metropolitan area, waste can be incinerated on site, minimizing transportation costs. 4) The ash residue is generally sterile, although it may require special disposal methods. 5) With gas cleanup equipment, discharges of flue gases to the environment can meet stringent requirements and be readily monitored. 6) Incinerators are much more compact than landfills and can have minimal odor and vermin problems if properly designed. 7) Some of the costs of operation can be offset by heat-recovery techniques, such as the sale of steam to municipalities or the generation of electrical energy. There are disadvantages to municipal waste incineration as well. For example: 1) Generally, the capital cost is
high and is escalating as emission standards change. 2) Permits are becoming increasingly difficult to obtain. 3) Supplemental fuel may be required to burn municipal wastes, especially if yard waste is not removed prior to collection. 4) Certain items, such as mercury-containing batteries, can produce emissions of mercury that the gas cleanup system may not be designed to remove. 5) Continuous skilled operation and close maintenance of process control are required, especially since the stack monitoring equipment reports any failure of the equipment, which can result in a mandated shutdown. 6) Certain materials are not burnable and must be removed at the source. 7) Traffic to and from the incinerator can be a problem unless timing and routing are carefully managed. 8) The incinerator, like a landfill, has a limited life, although its lifetime can be extended by capital expenditures. 9) Incinerators still require landfills for the ash, which usually contains heavy metals and must be placed in a specially designed landfill to avoid leaching.

Hazardous waste incineration
For the incineration of hazardous waste, a greater degree of control, higher temperatures, and a more rigorous monitoring system are required. An incinerator burning hazardous waste must be designed, constructed, and maintained to meet Resource Conservation and Recovery Act (RCRA) standards, and it must achieve a destruction and removal efficiency of at least 99.99% for each principal organic hazardous constituent. For certain listed constituents, such as polychlorinated biphenyls (PCBs), a destruction and removal efficiency of at least 99.9999% is required (the calculation is illustrated in the sketch at the end of this entry). The Toxic Substances Control Act sets additional standards for the incineration of PCBs: for example, the flow of PCBs to the incinerator must stop automatically whenever the combustion temperature drops below the specified value; the stack must be monitored continuously for a list of emissions; and scrubbers must be used for hydrochloric acid control, among other requirements. Recently, medical wastes have been treated by steam sterilization followed by incineration, with treatment of the flue gases by activated carbon for maximum adsorption of organic constituents; such a system is being installed at the Mayo Clinic in Rochester, Minnesota, as a model medical disposal system. See also Fugitive emissions; Solid waste incineration; Solid waste volume reduction; Stack emissions [Malcolm T. Hepworth]
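Destruction and removal efficiency (DRE) is defined from the mass feed rate of a constituent entering the incinerator and its mass emission rate in the stack gas: DRE = (Win − Wout) / Win × 100%. The Python sketch below works one hypothetical example; the feed and emission rates are invented round numbers, not regulatory data.

# Destruction and removal efficiency (DRE) for one hazardous constituent.
#   DRE (%) = (W_in - W_out) / W_in * 100
# Inputs are hypothetical round numbers for illustration.

def dre_percent(w_in: float, w_out: float) -> float:
    """w_in: mass feed rate; w_out: stack emission rate (same units)."""
    return (w_in - w_out) / w_in * 100.0

w_in = 100.0    # kg/h of the constituent fed to the incinerator
w_out = 0.008   # kg/h escaping in the stack gas

print(f"DRE = {dre_percent(w_in, w_out):.4f}%")   # 99.9920%
# To meet the "four nines" (99.99%) standard, emissions here must not exceed
# 0.01 kg/h; the "six nines" PCB standard allows only 0.0001 kg/h.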
RESOURCES
BOOKS
Brunner, C. R. Handbook of Incineration Systems. New York: McGraw-Hill, 1991.
Edwards, B. H., et al. Emerging Technologies for the Control of Hazardous Wastes. Park Ridge, NJ: Noyes Data Corporation, 1983.
Hickman, H. L., Jr., et al. Thermal Conversion Systems for Municipal Solid Waste. Park Ridge, NJ: Noyes Publications, 1984.
Vesilind, P. A., and A. E. Rimer. Unit Operations in Resource Recovery Engineering. Englewood Cliffs, NJ: Prentice-Hall, 1981.
Wentz, C. A. Hazardous Waste Management. New York: McGraw-Hill, 1989.
Incineration, solid waste see Solid waste incineration
Indicator organism
Indicator organisms, sometimes called bioindicators, are plant or animal species known to be either particularly tolerant or particularly sensitive to pollution. Because the health of such an organism can often be associated with a specific type or intensity of pollution, its presence or condition can be used to indicate polluted conditions relative to unimpacted conditions. Tubificid worms are an example of organisms that can indicate pollution. Tubificid worms live in the bottom sediments of streams and lakes, and they are highly tolerant of sewage. In a river polluted by wastewater discharge from a sewage treatment plant, it is common to see a large increase in the numbers of tubificid worms in the stream sediments immediately downstream. Upstream of the discharge, the numbers of tubificid worms are often much lower, or the worms are nearly absent, reflecting cleaner conditions. The number of tubificid worms also decreases farther downstream, as the discharge is diluted. Pollution-intolerant organisms can also be used to indicate polluted conditions. The larvae of mayflies live in stream sediments and are known to be particularly sensitive to pollution. In a river receiving wastewater discharge, mayflies show the opposite pattern to tubificid worms: the larvae are normally present in large numbers above the discharge point, decrease or disappear at the discharge point, and reappear farther downstream as the effects of the discharge are diluted. Similar examples of indicator organisms can be found among plants, fish, and other biological groups. Giant reedgrass (Phragmites australis) is a common marsh plant that is typically indicative of disturbed conditions in wetlands. Among fish, disturbed conditions may be indicated by the disappearance of sensitive species like trout, which require clear, cold waters to thrive. The usefulness of indicator organisms is limited. While their presence or absence provides a reliable general picture of polluted conditions, they are often of little help in
identifying the exact sources of pollution. In the sediments of New York Harbor, for example, pollution-tolerant insect larvae are overwhelmingly dominant. However, it is impossible to attribute the large larval populations to just one of the sources of pollution there, which include ship traffic, sewage and industrial discharges, and storm runoff. The U.S. Environmental Protection Agency (EPA) is working to find reliable predictors of aquatic ecosystem health using indicator species, and it has developed standards for the usefulness of species as ecological indicators. A potential indicator species for use in evaluating watershed health must pass four phases of evaluation. First, a potential indicator organism should provide information that is relevant to societal concerns about the environment, not simply academically interesting information. Second, use of a potential indicator organism should be feasible: logistics, sampling costs, and the time frame for information gathering are legitimate considerations in deciding whether an organism is a potential indicator species. Third, enough must be known about a species before it may be used effectively as an indicator organism; sufficient knowledge of its natural variation in response to environmental flux should exist before the species is adopted as a true watershed indicator. Lastly, a useful indicator should provide information that is easily interpreted by policy makers and the public, in addition to scientists. In an effort to make indicator-species information more reliable, indicator-species indices are also being investigated. An index is a formula, or a ratio of one amount to another, that is used to measure relative change. The major advantage of developing an indicator-organism index that applies broadly across aquatic environments is that it can be tested statistically: mathematical methods can determine whether a significant change in an index value has occurred, and they provide a stated level of confidence that the measured values represent what is actually happening in nature. (A simple example of such an index calculation follows this entry.) For example, a study was conducted to evaluate the utility of diatoms (a kind of microscopic aquatic algae) as an index of aquatic system health. Diatoms meet all four criteria mentioned above, and various species are found in both fresh and salt water. An index was created from various measurable characteristics of diatoms and then evaluated statistically over time and among varying sites. The diatom index proved sensitive enough to reliably reflect three categories of aquatic ecosystem health, and it showed that values obtained from areas impacted by human activities had greater variability over time than indices obtained from less disturbed
locations. Many such indices are being developed, using single and multiple species, in an effort to create more reliable information from indicator organisms. As more is learned about the physiology and life history of indicator organisms and their individual responses to different types of pollution, it may become possible to draw more specific conclusions. See also Algal bloom; Nitrogen cycle; Water pollution [Terry Watkins]
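As a concrete illustration of the index idea discussed above, the Python sketch below scores sites on three invented metrics and compares the variability of impacted and reference sites. The metrics, scalings, and sample values are all hypothetical; real multimetric indices are calibrated against regional reference data.

# Toy multimetric site index: three hypothetical metrics, each scaled 0-1,
# averaged into a 0-100 score. All values are invented for illustration.
from statistics import mean, stdev

def site_index(tolerant_fraction, sensitive_taxa, taxa_richness):
    """Higher score = healthier site (hypothetical scaling)."""
    m1 = 1.0 - tolerant_fraction          # fewer tolerant organisms is better
    m2 = min(sensitive_taxa / 10.0, 1.0)  # sensitive taxa count, capped at 10
    m3 = min(taxa_richness / 40.0, 1.0)   # total taxa count, capped at 40
    return 100.0 * mean([m1, m2, m3])

# Repeated annual samples: (tolerant fraction, sensitive taxa, richness)
reference = [site_index(*s) for s in [(0.20, 8, 35), (0.25, 7, 33), (0.22, 8, 36)]]
impacted  = [site_index(*s) for s in [(0.70, 1, 15), (0.50, 3, 22), (0.80, 0, 10)]]

print(f"reference: mean={mean(reference):.1f}, sd={stdev(reference):.1f}")
print(f"impacted:  mean={mean(impacted):.1f}, sd={stdev(impacted):.1f}")
# The impacted site scores lower and varies more year to year, the same
# qualitative pattern reported for the diatom index in this entry.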
RESOURCES
BOOKS
Browder, J. A., ed. Aquatic Organisms as Indicators of Environmental Pollution. Bethesda, MD: American Water Resources Association, 1988.
Connell, D. W., and G. J. Miller. Chemistry and Ecotoxicology of Pollution. New York: Wiley-Interscience, 1984.
Indigenous peoples
Cultural or ethnic groups living in an area where their culture developed or where their people have existed for many generations. Most of the world's indigenous peoples live in remote forests, mountains, deserts, or arctic tundra, where modern technology, trade, and cultural influence are slow to penetrate. Many had much larger territories historically but have retreated to, or been forced into, small, remote areas by the advance of more powerful groups. Indigenous groups, also sometimes known as native or tribal peoples, are usually recognized in comparison to a country's dominant cultural group. In the United States, the dominant, non-indigenous cultural group speaks English, has historic roots in Europe, and maintains strong economic, technological, and communication ties with Europe, Asia, and other parts of the world. Indigenous groups in the United States, on the other hand, include scores of peoples, from the southern Seminole and Cherokee to the Inuit and Yupik peoples of the Arctic coast. These groups speak hundreds of different languages or dialects, some of which have been spoken on this continent for thousands of years. Their traditional economies were based mainly on small-scale subsistence gathering, hunting, fishing, and farming, and many indigenous peoples around the world continue to engage in these ancient economic practices. It is often difficult to distinguish who is and who is not indigenous. European-Americans and Asian-Americans are usually not considered indigenous even if their families have been in North America for many generations, because their cultural roots connect to other regions. On the other hand, a German residing in Germany is also not usually spoken of as indigenous, even though by a strict definition he or she is, because the term is customarily reserved for economic or political minorities: groups that are relatively powerless within the countries where they live.
Historically, indigenous peoples have suffered great losses in both population and territory to the spread of larger, more technologically advanced groups, especially (but not only) Europeans. Hundreds of indigenous cultures have disappeared entirely just in the past century. In recent decades, however, indigenous groups have begun to receive greater international recognition, and they have begun to learn effective means to defend their lands and interests, including attracting international media attention and suing their own governments in court. The main reason for this increased attention and success may be that scientists and economic development organizations have recently become interested in biological diversity and in the loss of world rain forests. The survival of indigenous peoples, of the world's forests, and of the world's gene pools is now understood to be deeply interdependent. Indigenous peoples, who know and depend on some of the world's most endangered and biologically diverse ecosystems, are increasingly looked on as a unique source of information, and their subsistence economies are beginning to look like admirable alternatives to large-scale logging, mining, and conversion of jungles to monocrop agriculture. There are probably between 4,000 and 5,000 different indigenous groups in the world; they can be found on every continent (except Antarctica) and in nearly every country. The total population of indigenous peoples amounts to between 200 million and 600 million (depending upon how groups are identified and their populations counted) out of a world population of just over 6.2 billion. Some groups number in the millions; others comprise only a few dozen people. Despite their worldwide distribution, indigenous groups are especially concentrated in a number of "cultural diversity hot spots," including Indonesia, India, Papua New Guinea, Australia, Mexico, Brazil, Zaire, Cameroon, and Nigeria. Each of these countries has scores, or even hundreds, of different language groups. Neighboring valleys in Papua New Guinea often contain distinct cultural groups with unrelated languages and religions. These regions are also recognized for their unusual biological diversity. Both indigenous cultures and rare species survive best in areas where modern technology does not easily penetrate. Advanced technological economies involved in international trade consume tremendous amounts of land, wood, water, and minerals. Indigenous groups tend to rely on intact ecosystems and on a tremendous variety of plant and animal species. Because their numbers are relatively small and their technology simple, they usually do little long-lasting damage to their environment despite their dependence on the resources around them. The remote areas where indigenous peoples and their natural environment survive, however, are also the richest remaining reserves of natural resources in most countries. Frequently state governments claim all timber, mineral,
water, and land rights in areas traditionally occupied by tribal groups. In Indonesia, Malaysia, Burma (Myanmar), China, Brazil, Zaire, Cameroon, and many other important cultural diversity regions, timber and mining concessions are frequently sold to large or international companies that can quickly and efficiently destroy an ecological area and its people. Because they lack political and economic clout, native peoples usually have no recourse when their homes are lost. Generally they are relocated, attempts are made to integrate them into mainstream culture, and they join the laboring classes of the general economy. Indigenous rights have begun to strengthen in recent years. As long as the international media continue to give indigenous groups the attention they need, especially in the form of international economic and political pressure on state governments, and as long as indigenous leaders are able to continue developing their own defense strategies and legal tactics, the survival rate of indigenous peoples and their environments may improve significantly. [Mary Ann Cunningham Ph.D.]
RESOURCES BOOKS Redford, K. H., and C. Padoch, eds. Conservation of Neotropical Forests: Working from Traditional Resource Use. New York: Columbia University Press, 1992.
OTHER Durning, A. T. “Guardians of the Land: Indigenous Peoples and the Health of the Earth.” Worldwatch Paper 112. Washington, DC: Worldwatch Institute, 1992.
Indonesian forest fires
For several months in 1997 and 1998, a thick pall of smoke covered much of Southeast Asia. Thousands of forest fires burning simultaneously on the Indonesian islands of Kalimantan (Borneo) and Sumatra are thought to have destroyed about 8,000 mi2 (20,000 km2) of primary forest, or an area about the size of New Jersey. The smoke generated by these fires spread over eight countries and 75 million people, covering an area larger than Europe. Hazy skies and the smell of burning forests could be detected in Hong Kong, nearly 2,000 mi (3,200 km) away. The air quality in Singapore and the city of Kuala Lumpur, Malaysia, just across the Strait of Malacca from Indonesia, was worse than in any industrial region in the world. In towns such as Palembang, Sumatra, and Banjarmasin, Kalimantan, in the heart of the fires, the air pollution index frequently passed 800, twice the level classified in the United States as an air quality emergency hazardous to human health. Automobiles had to drive with their headlights on, even at noon. People
groped along smoke-darkened streets, unable to see or breathe normally. At least 20 million people in Indonesia and Malaysia were treated for illnesses such as bronchitis, eye irritation, asthma, emphysema, and cardiovascular disease. It is thought that three times that many went uncounted because they could not afford medical care. The number of extra deaths from this months-long episode is unknown, but it seems likely to have been in the hundreds of thousands, mostly the elderly or very young children. Unable to see through the thick haze, several boats collided in the busy Strait of Malacca, and a plane crashed on Sumatra, killing 234 passengers. Cancelled airline flights, aborted tourist plans, lost workdays, medical bills, and ruined crops are estimated to have cost countries in the afflicted area several billion dollars. Wildlife suffered as well. In addition to the habitat destroyed by fires, breathing the noxious smoke was as hard on wild species as it was on people. At the Pangkalanbuun Conservation Reserve, weak and disoriented orangutans were found suffering from respiratory diseases much like those of humans. Geographical isolation on the 16,000 islands of the Indonesian archipelago has allowed evolution of the world's richest collection of biodiversity. Indonesia has the second largest expanse of tropical forest and the highest number of endemic species anywhere. This makes destruction of Indonesian plants, animals, and their habitat of special concern. The dry season in tropical Southeast Asia has probably always been a time of burning vegetation and smoky skies. Farmers practicing traditional slash and burn agriculture start fires each year to prepare for the next growing season. Because they generally burn only a hectare or two at a time, however, these shifting cultivators often help preserve plant and animal species by opening up space for early successional forest stages. Globalization and the advent of large, commercial plantations, however, have changed agricultural dynamics. There is now economic incentive for clearing huge tracts of forestland to plant oil palms, export foods such as pineapples and sugar cane, and fast-growing eucalyptus trees. Fire is viewed as the only practical way to remove biomass and convert wild forest into domesticated land. While it can cost the equivalent of $200 to clear a hectare of forest with chainsaws and bulldozers, dropping a lighted match into dry underbrush is essentially free. In 1997 and 1998, the Indonesian forest was unusually dry. A powerful El Niño/Southern Oscillation weather pattern caused the most severe droughts in 50 years. Forests that ordinarily stay green and moist even during the rainless season became tinder dry. Lightning strikes are thought to have started many forest fires, but many people took advantage of the drought for their own purposes. Although the government blamed traditional farmers for setting most of
the fires, environmental groups claimed that the biggest fires were caused by large agribusiness conglomerates with close ties to the government and military. Some of these fires were set to cover up evidence of illegal logging operations. Others were started to make way for huge oil-palm plantations and fast-growing pulpwood trees. Neil Byron of the Center for International Forestry Research was quoted as saying that "fire crews would go into an area and put out the fire, then come back four days later and find it burning again, and a guy standing there with a petrol can." According to the World Wide Fund for Nature, 37 plantations in Sumatra and Kalimantan were responsible for the vast majority of the forest burned on those islands. The plantation owners were politically connected to the ruling elite, however, and none of them was ever punished for violating national forest protection laws. Indonesia has some of the strongest land-use management laws of any country in the world, but these laws are rarely enforced. In theory, more than 80% of its land is in some form of protected status, either set aside as national parks or classified as selective logging reserves where only a few trees per hectare can be cut. The government claims to have an ambitious reforestation program that replants nearly 1.6 million acres (1 million hectares) of harvested forest annually, but when four times that amount burns in a single year, there is little to be done but turn the land over to plantation owners for use as agricultural land. Aquatic life is also damaged by these forest fires. Indonesia, Malaysia, and the Philippines have the richest coral reef complexes in the world. More than 150 species of coral live in this area, compared with only about 30 species in the Caribbean. The clear water and fantastic biodiversity of Indonesia's reefs have made the country a prime destination for scuba divers and snorkelers from around the world. Unfortunately, soil eroded from burned forests clouds coastal waters and smothers reefs. Perhaps one of the worst effects of large tropical forest fires is that they may tend to be self-reinforcing. Moist tropical forests store huge amounts of carbon in their standing biomass. When this carbon is converted into CO2 by fire and released to the atmosphere, it acts as a greenhouse gas to trap heat and cause global warming. All the effects of human-caused global climate change are still unknown, but stronger climatic events such as severe droughts may make further fires even more likely. Alarmed by the magnitude of the Southeast Asia fires and the potential they represent for biodiversity losses and global climate change, world leaders have proposed plans for international intervention to prevent a recurrence. Fears about imposing on national sovereignty, however, have made it difficult to come up with a plan for how to cope with this growing threat. [William P. Cunningham Ph.D.]
RESOURCES BOOKS Glover, David, and Timothy Jessup, eds. Indonesia’s Fires and Haze: The Cost of Catastrophe. Singapore: International Development Research Centre, 2002.
PERIODICALS Aditama, Tjandra Yoga. "Impact of Haze from Forest Fire to Respiratory Health: Indonesian Experience." Respirology (2000): 169–174. Chan, C. Y., et al. "Effects of 1997 Indonesian Forest Fires on Tropospheric Ozone Enhancement, Radiative Forcing, and Temperature Change over the Hong Kong Region." Journal of Geophysical Research–Atmospheres 106 (2001): 14875–14885. Davies, S. J., and L. Unam. "Smoke-Haze from the 1997 Indonesian Forest Fires: Effects on Pollution Levels, Local Climate, Atmospheric CO2 Concentrations, and Tree Photosynthesis." Forest Ecology & Management 124 (1999): 137–144. Murty, T. S., D. Scott, and W. Baird. "The 1997 El Niño, Indonesian Forest Fires and the Malaysian Smoke Problem: A Deadly Combination of Natural and Man-Made Hazard." Natural Hazards 21 (2000): 131–144. Tay, Simon. "Southeast Asian Fires: The Challenge Over Sustainable Environmental Law and Sustainable Development." Peace Research Abstracts 38 (2001): 603–751.
Indoor air quality
An assessment of air quality in buildings and homes based on physical and chemical monitoring of contaminants, physiological measurements, and/or psychosocial perceptions. Factors contributing to the quality of indoor air include lighting, ergonomics, thermal comfort, tobacco smoke, noise, ventilation, and psychosocial or work-organizational factors such as employee stress and satisfaction. "Sick building syndrome" (SBS) and "building-related illness" (BRI) are responses to indoor air pollution commonly described by office workers. Most symptoms are nonspecific; they progressively worsen during the week, occur more frequently in the afternoon, and disappear on the weekend. Poor indoor air quality (IAQ) in industrial settings such as factories, coal mines, and foundries has long been recognized as a health risk to workers and has been regulated by the U.S. Occupational Safety and Health Administration (OSHA). The contaminant levels in industrial settings can be hundreds or thousands of times higher than the levels found in homes and offices. Nonetheless, indoor air quality in homes and offices has become an environmental priority in many countries, and federal IAQ legislation has been introduced in the U.S. Congress for the past several years. However, none has yet passed, and currently the U.S. Environmental Protection Agency (EPA) has no enforcement authority in this area. Importance of IAQ The prominence of IAQ issues has risen in part due to well-publicized incidents involving outbreaks of Legionnaires' disease, Pontiac fever, sick building syndrome, multiple chemical sensitivity, and asbestos mitigation in public buildings such as schools. Legionnaires' disease, for example, caused twenty-nine deaths in 1976 in a Philadelphia hotel due to infestation of the building's air conditioning system by a bacterium called Legionella pneumophila. This microbe causes a severe pneumonia and can also affect the gastrointestinal tract, kidneys, and central nervous system; it likewise causes the non-fatal Pontiac fever. IAQ is important to the general public for several reasons. First, individuals typically spend the vast majority of their time (80–90%) indoors. Second, an emphasis on energy conservation measures, such as reducing air exchange rates in ventilation systems and using more energy efficient but synthetic materials, has increased levels of air contaminants in offices and homes. New "tight" buildings have few cracks and openings, so minimal fresh air enters such buildings. Low ventilation and exchange rates can increase indoor levels of carbon monoxide, nitrogen oxides, ozone, volatile organic compounds, bioaerosols, and pesticides, and maintain high levels of second-hand tobacco smoke generated inside the building. Thus, many contaminants are found indoors at levels that greatly exceed outdoor levels. Third, an increasing number of synthetic chemicals, found in building materials, furnishings, and cleaning and hygiene products, are used indoors. Fourth, studies show that exposure to indoor contaminants such as radon, asbestos, and tobacco smoke poses significant health risks. Fifth, poor IAQ is thought to adversely affect children's development and to lower productivity in the adult population. Demands for indoor air quality investigations of "sick" and problem buildings have increased rapidly in recent years, and a large fraction of buildings are known or suspected to have IAQ problems. Indoor contaminants Indoor air contains many contaminants at varying but generally low concentration levels. Common contaminants include radon and radon progeny from the entry of soil gas and groundwater and from concrete and other mineral-based building materials; tobacco smoke from cigarette and pipe smoking; formaldehyde from polyurethane foam insulation and building materials; volatile organic compounds (VOCs) emitted from binders and resins in carpets, furniture, or building materials, as well as VOCs used in dry cleaning processes and as propellants and constituents of personal use and cleaning products, like hair sprays and polishes; pesticides and insecticides; carbon monoxide, nitrogen oxides, and other combustion products from gas stoves, appliances, and vehicles; asbestos from high temperature insulation; and biological contaminants including viruses, bacteria, molds, pollen, dust mites, and indoor and outdoor biota. Many or most of these contaminants are present at low levels in all indoor environments.
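The relationship between air exchange rates and contaminant levels noted above is often approximated with a single-zone mass-balance model. The sketch below is illustrative only; the room volume, emission rate, and outdoor concentration are assumed values, not figures from this entry.

```python
# Single-zone (well-mixed) indoor air mass balance:
#   dC/dt = E/V + lam * (C_out - C)
# where C is the indoor concentration, E the indoor emission rate, V the
# room volume, and lam the air exchange rate. All parameter values assumed.

V = 300.0      # room volume, m^3 (assumed)
E = 10.0       # indoor emission rate, mg/h (assumed)
C_out = 0.02   # outdoor concentration, mg/m^3 (assumed)

for lam in (0.35, 1.0, 4.0):  # air changes per hour
    # Setting dC/dt = 0 gives the steady-state indoor concentration.
    C_ss = C_out + E / (lam * V)
    print(f"{lam:.2f} air changes/h -> steady-state C = {C_ss:.3f} mg/m^3")
```

Raising the air exchange rate dilutes the indoor source toward the outdoor background, which is the effect ventilation systems are designed to exploit.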
Some major indoor air pollutants. (Wadsworth Inc. Reproduced by permission.)
The quality of indoor air can change rapidly in time and from room to room. There are many diverse sources that emit various physical and chemical forms of contaminants. Some releases are slow and continuous, such as outgassing from building and furniture materials, while others are nearly instantaneous, like the use of cleaners and aerosols. Many building surfaces interact significantly with contaminants through sorption-desorption processes. Building-specific variation in air exchange rates, mixing, filtration, building and furniture surfaces, and other factors alters dispersion mechanisms and contaminant lifetimes. Most buildings employ filters that can remove particles and aerosols. Filtration systems do not, however, effectively remove very small particles, and they have no effect on gases, vapors, and odors. Ventilation and air exchange units built into the heating and cooling systems of buildings are designed to diminish levels of these contaminants by dilution. In most buildings, however, ventilation systems are turned off at night after working hours, leading to an increase in contaminants through the night. Though operation and maintenance issues are estimated to cause the bulk of indoor air quality problems, deficiencies in the design of the heating,
ventilating and air conditioning (HVAC) system can cause problems as well. For example, locating a building's fresh air intake near a truck loading dock will bring diesel fumes and other noxious contaminants into the building. Health impacts Exposures to indoor contaminants can cause a variety of health problems. Depending on the pollutant and exposure, health problems related to indoor air quality may include non-malignant respiratory effects, including mucous membrane irritation, allergic reactions, and asthma; cardiovascular effects; infectious diseases such as Legionnaires' disease; immunologic diseases such as hypersensitivity pneumonitis; skin irritations; malignancies; neuropsychiatric effects; and other non-specific systemic effects such as lethargy, headache, and nausea. In addition, indoor air contaminants such as radon, formaldehyde, asbestos, and other chemicals are suspected or known carcinogens. There is also growing concern over the possible effects of low-level exposures in suppressing reproductive and growth capabilities and in harming the immune, endocrine, and nervous systems.
Solving IAQ problems Acute indoor air quality problems can be largely eliminated by identifying, evaluating, and controlling the sources of contaminants. IAQ control strategies include the use of higher ventilation and air exchange rates, the use of lower-emission and more benign constituents in building and consumer products (including product use restriction regulations), air cleaning and filtering, and improved building practices in new construction. Radon may be reduced by inexpensive subslab ventilation systems. New buildings could implement a day of "bake-out," which heats the building to temperatures over 90°F (32°C) to drive out volatile organic compounds. Filters to remove ozone, organic compounds, and sulfur gases may be used to condition incoming and recirculated air. Copy machines and other emission sources should have special ventilation systems. Building designers, operators, contractors, maintenance personnel, and occupants are recognizing that healthy buildings result from combined and continued efforts to control emission sources, to provide adequate ventilation and air cleaning, and to maintain building systems well. Efforts in this direction will greatly enhance indoor air quality. [Stuart Batterman]
RESOURCES BOOKS Godish, T. Indoor Air Pollution Control. Chelsea, MI: Lewis, 1989. Kay, J. G., et al. Indoor Air Pollution: Radon, Bioaerosols and VOCs. Chelsea, MI: Lewis, 1991. Samet, J. M., and J. D. Spengler. Indoor Air Pollution: A Health Perspective. Baltimore: Johns Hopkins University Press, 1991.
PERIODICALS Kreiss, K. “The Epidemiology of Building-Related Complaints and Illnesses.” Occupational Medicine: State of the Art Reviews 4 (1989): 575–92.
Industrial waste treatment
Many different types of solid, liquid, and gaseous wastes are discharged by industries. Most industrial waste is recycled, treated and discharged, or placed in a landfill. There is no one means of managing industrial wastes because the nature of the wastes varies widely from one industry to another. One company might generate a waste that can be treated readily and discharged to the environment (direct discharge) or to a sewer, in which case final treatment might be accomplished at a publicly owned treatment works (POTW). Treatment at the company before discharge to a sewer is referred to as pretreatment. Another company might generate a waste that is regarded as hazardous and therefore requires special management procedures related to storage, transportation, and final disposal.
The pertinent legislation governing the extent to which wastewaters must be treated before discharge is the 1972 Clean Water Act (CWA); major amendments to the CWA were passed in 1977 and 1987. The Environmental Protection Agency (EPA) was also charged with responsibility for regulating priority pollutants under the CWA, which specifies that toxic and nonconventional pollutants are to be treated with the Best Available Technology (BAT). Gaseous pollutants are regulated under the Clean Air Act (CAA), promulgated in 1970 and amended in 1977 and 1990. An important part of the CAA consists of measures to attain and maintain National Ambient Air Quality Standards (NAAQS). Hazardous air pollutant (HAP) emissions are to be controlled through Maximum Achievable Control Technology (MACT), which can include process changes, material substitutions, and/or air pollution control equipment. The "cradle to grave" management of hazardous wastes is to be performed in accordance with the Resource Conservation and Recovery Act (RCRA) of 1976 and the Hazardous and Solid Waste Amendments (HSWA) of 1984. In 1990, the United States, through the Pollution Prevention Act, adopted a program designed to reduce the volume and toxicity of waste discharges. Pollution prevention (P2) strategies might involve changing process equipment or chemistry, developing new processes, eliminating products, minimizing wastes, recycling water or chemicals, or trading wastes with another company. In 1991, the EPA instituted the 33/50 program, which called for an overall 33% reduction in releases of 17 high-priority pollutants by 1992 and a 50% reduction by 1995. Both goals were surpassed. Not only has this program been successful, it also set an important precedent, because the participating companies volunteered. Additionally, P2 efforts have led industries to rigorously think through product life cycles. A Life Cycle Analysis (LCA) starts with consideration of acquiring raw materials, moves through the stages of processing, assembly, service, and reuse, and ends with retirement/disposal. The LCA therefore reveals to industry the costs and problems versus the benefits of every stage in the life of a product. In designing a waste management program for an industry, one must think first in terms of P2 opportunities, identify and characterize the various solid, liquid, and gaseous waste streams, consider relevant legislation, and then design an appropriate waste management system. Treatment systems that rely on physical (e.g., settling, flotation, screening, sorption, membrane technologies, air stripping) and chemical (e.g., coagulation, precipitation, chemical oxidation and reduction, pH adjustment) operations are referred to as physicochemical, whereas systems in which microbes are cultured to metabolize waste constituents are known as biological
processes (e.g., activated sludge, trickling filters, biotowers, aerated lagoons, anaerobic digestion, aerobic digestion, composting). Often, both physicochemical and biological systems are used to treat solid and liquid waste streams. Biological systems might be used to treat certain gas streams, but most waste gas streams are treated physicochemically (e.g., cyclones, electrostatic precipitators, scrubbers, bag filters, thermal methods). Solids and the sludges or residuals that result from treating the liquid and gaseous waste streams are also treated by means of physical, chemical, and biological methods. In many cases, the systems used to treat wastes from domestic sources are also used to treat industrial wastes. For example, municipal wastewaters often consist of both domestic and industrial waste, so the local POTW may be treating both types of wastes. To avoid potential problems caused by the input of industrial wastes, municipalities commonly have pretreatment programs which require that industrial wastes discharged to the sewer meet certain standards. The standards generally include limits for various toxic agents such as metals; organic matter measured in terms of biochemical oxygen demand (BOD) or chemical oxygen demand; nutrients such as nitrogen and phosphorus; pH; and other contaminants recognized as having the potential to affect the performance of the POTW. At the other end of the spectrum, there are wastes that need to be segregated and managed separately in special systems. For example, an industry might generate a hazardous waste that needs to be placed in barrels and transported to an EPA-approved treatment, storage, or disposal facility (TSDF). Thus, it is not possible to simply use one train of treatment operations for all industrial waste streams, but an effective, generic strategy has been developed in recent years for considering the waste management options available to an industry. The basis for the strategy is to look for P2 opportunities and to consider the life cycle of a product. An awareness of waste stream characteristics and the potential benefits of stream segregation is then melded with knowledge of regulatory compliance issues and treatment system capabilities and performance to minimize environmental risks and costs. [Gregory D. Boardman]
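The pretreatment screening described above amounts to comparing measured discharge characteristics against locally set limits. The sketch below illustrates the idea only; the limit and sample values are hypothetical, since actual limits are set by each municipality's pretreatment program under the Clean Water Act.

```python
# Hypothetical local pretreatment limits for discharge to a sewer; real
# limits vary by municipality and by the receiving POTW.
limits = {"BOD": 300.0, "zinc": 2.0}   # maxima, mg/L (assumed)
ph_range = (5.5, 9.5)                  # acceptable pH window (assumed)

sample = {"BOD": 410.0, "zinc": 1.2, "pH": 6.8}  # measured values (assumed)

violations = [p for p, limit in limits.items() if sample[p] > limit]
if not ph_range[0] <= sample["pH"] <= ph_range[1]:
    violations.append("pH")

# Any violation means further pretreatment is needed before sewer discharge.
print("violations:", violations or "none")   # prints: violations: ['BOD']
```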
RESOURCES BOOKS Freeman, H. M. Industrial Pollution Prevention Handbook. New York: McGraw-Hill, Inc., 1995. Haas, C. N., and R. J. Vamos. Hazardous and Industrial Waste Treatment. Englewood Cliffs, NJ: Prentice Hall, Inc., 1995. LaGrega, M. D., P. L. Buckingham, and J. C. Evans. Hazardous Waste Management. New York: McGraw-Hill, Inc., 1994. Metcalf and Eddy, Inc. Wastewater Engineering: Treatment, Disposal and Reuse. Revised by G. Tchobanoglous and F. Burton. New York: McGraw-Hill, Inc., 1991. Nemerow, N. L., and A. Dasgupta. Industrial and Hazardous Waste Treatment. New York: Van Nostrand Reinhold, 1991. Peavy, H. S., D. R. Rowe, and G. Tchobanoglous. Environmental Engineering. New York: McGraw-Hill, 1995. Tchobanoglous, G., et al. Integrated Solid Waste Management: Engineering Principles and Management Issues. New York: McGraw-Hill, Inc., 1993.
PERIODICALS Romanow, S., and T. E. Higgins. "Treatment of Contaminated Groundwater from Hazardous Waste Sites: Three Case Studies." Presented at the 60th Water Pollution Control Federation Conference, Philadelphia (October 5–8, 1987).
Inertia see Resistance (inertia)
Infiltration
In hydrology, infiltration may refer to the maximum rate at which a soil can absorb precipitation (its infiltration capacity, which depends in part on the soil's initial moisture content) or to the portion of precipitation that actually enters the soil. In soil science, the term refers to the process by which water enters the soil, generally by downward flow through all or part of the soil surface. The rate of entry relative to the amount of water being supplied by precipitation or other sources determines how much water enters the root zone and how much runs off the surface. See also Groundwater; Soil profile; Water table
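The partitioning described in the last sentence can be sketched with a simple comparison of rainfall rate and infiltration capacity. This is a deliberate simplification with assumed values; in reality infiltration capacity declines as a storm wets the soil.

```python
# Illustrative partitioning of rainfall into infiltration and runoff.
# All rates are assumed values in mm/h.

infiltration_capacity = 12.0   # maximum rate the soil can absorb (assumed)

for rainfall_rate in (5.0, 12.0, 30.0):
    infiltrated = min(rainfall_rate, infiltration_capacity)
    runoff = rainfall_rate - infiltrated   # excess that cannot soak in
    print(f"rain {rainfall_rate:4.1f} mm/h -> infiltration {infiltrated:4.1f}, runoff {runoff:4.1f}")
```

Once the supply rate exceeds the soil's capacity to absorb it, the excess becomes surface runoff rather than root-zone water.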
INFORM
INFORM was founded in 1973 by environmental research specialist Joanna Underwood and two colleagues. Seriously concerned about air pollution, the three scientists decided to establish an organization that would identify practical ways to protect the environment and public health. Since then, their concerns have widened to include hazardous waste, solid waste management, water pollution, and land, energy, and water conservation. The group's primary purpose is "to examine business practices which harm our air, water, and land resources" and pinpoint "specific ways in which practices can be improved." INFORM's research is recognized throughout the United States as instrumental in shaping environmental policies and programs, and legislators, conservation groups, and business leaders rely on it as an acknowledged basis for research and conferences. Source reduction has become one of INFORM's most important projects. A decrease in the amount and/or toxicity of waste entering the waste stream, source reduction includes any activity by an
individual, business, or government that lessens the amount of solid waste (garbage) that would otherwise have to be recycled or incinerated. Source reduction does not include recycling, municipal solid waste composting, household hazardous waste collection, or beverage container deposit and return systems. The first priority in source reduction strategies is elimination; the second, reuse. Public education is a crucial part of INFORM's program. To this end, INFORM has published Making Less Garbage: A Planning Guide for Communities. This book details ways to achieve source reduction, including buying reusable, as opposed to disposable, items; buying in bulk; and maintaining and repairing products to extend their lives. INFORM's outreach program goes well beyond its source reduction project. The staff of over 25 full-time scientists and researchers and 12 volunteers and interns makes presentations at national and international conferences and local workshops. INFORM representatives have also given briefings and testimony at Congressional hearings and produced television and radio advertisements to increase public awareness of environmental issues. The organization also publishes a quarterly newsletter, INFORM Reports. [Cathy M. Falk]
RESOURCES ORGANIZATIONS INFORM, Inc., 120 Wall Street, New York, NY USA 10005 (212) 361-2400, Fax: (212) 361-2412, Email: [email protected]

INFOTERRA (U.N. Environment Program)
INFOTERRA has its international headquarters in Kenya and is a global information network operated by the Earthwatch program of the United Nations Environment Program (UNEP). Under INFOTERRA, participating nations designate institutions to be national focal points, such as the Environmental Protection Agency (EPA) in the United States. Each national institution chosen as a focal point prepares a list of its national environmental experts and selects what it considers the best sources for inclusion in INFOTERRA's international directory of experts. INFOTERRA initially used its directory only to refer questioners to the nearest appropriate experts, but the organization has evolved into a central information agency. It consults sources, answers public queries for information, and analyzes the replies. INFOTERRA is used by governments, industries, and researchers in 177 countries. [Linda Rehkopf]

RESOURCES ORGANIZATIONS UNEP-Infoterra/USA, MC 3404 Ariel Rios Building, 1200 Pennsylvania Avenue, Washington, D.C. USA 20460 Fax: (202) 260-3923, Email: [email protected]

Injection well
Injection wells are used to dispose of waste into the subsurface zone. These wastes can include brine from oil and gas wells, liquid hazardous wastes, agricultural and urban runoff, municipal sewage, and return water from air-conditioning. Wells are also used to inject fluids that enhance oil recovery, to inject treated water for artificial aquifer recharge (recharge wells), or to enhance a pump-and-treat system. If the wells are poorly designed or constructed, or if the local geology is not sufficiently studied, injected liquids can enter an aquifer and cause groundwater contamination. Injection wells are regulated under the Underground Injection Control Program of the Safe Drinking Water Act. See also Aquifer restoration; Deep-well injection; Drinking-water supply; Groundwater monitoring; Groundwater pollution; Water table

Inoculate
To inoculate is to introduce microorganisms into a new environment. Originally the term referred to the insertion of a bud or shoot of one plant into the stem or trunk of another to develop new strains or hybrids. These hybrid plants would be resistant to botanic disease, or they would allow greater harvests or a wider range of climates. With the advent of vaccines to prevent human and animal disease, the term has also come to mean the injection of a serum to prevent or cure disease or to confer immunity. Inoculation is of prime importance in that the introduction of specific microorganism species into specific macroorganisms may establish a symbiotic relationship in which each organism benefits. For example, the introduction of mycorrhizal fungi to plants improves the plants' ability to absorb nutrients from the soil. See also Symbiosis
Insecticide see Pesticide
Integrated pest management
Integrated pest management (IPM) is a relatively new approach to pest control that aims to give the best possible results while minimizing damage to human health and the environment. IPM means either using fewer chemicals more effectively or finding ways, both new and old, that substitute for pesticide use. Technically, IPM is the selection, integration, and implementation of pest control based on predicted economic, ecological, and sociological consequences. IPM seeks maximum use of naturally occurring pest controls, including weather, disease agents, predators, and parasites. In addition, IPM utilizes various biological, physical, and chemical control and habitat modification techniques. Artificial controls are imposed only as required to keep a pest from surpassing intolerable population levels, which are predetermined from assessments of the pest's damage potential and the ecological, sociological, and economic costs of the control measures. Farmers have come to understand that the presence of a pest species does not necessarily justify action for its control. In fact, tolerable infestations may actually be desirable, providing food for important beneficial insects. Why this change in farming practices? The introduction of synthetic organic pesticides such as the insecticide DDT and the herbicide 2,4-D (half the formula in Agent Orange) after World War II began a new era in pest control. These products were followed by hundreds of synthetic organic fungicides, nematicides, rodenticides, and other chemical controls. These chemical materials were initially very effective and very cheap. Synthetic chemicals eventually became the primary means of pest control in productive agricultural regions, providing season-long crop protection against insects and weeds, and they were used in addition to fertilizers and other treatments. The success of modern pesticides led to widespread acceptance of and reliance upon them, particularly in the United States: of all the chemical pesticides applied worldwide in agriculture, forests, industry, and households, one-third to one-half were used in this country. Herbicides have been used increasingly to replace hand labor and machine cultivation for control of weeds in crops, in forests, on the rights-of-way of highways, utility lines, and railroads, and in cities. Agriculture consumes perhaps 65% of the total quantity of synthetic organic pesticides used in the United States each year. In addition, chemical companies export an increasingly large amount to Third World countries. Pesticides banned in the United States, such as DDT, EDB, and chlordane, are exported to countries where they are applied to crops imported by the United States for consumption. For more than a decade, problems with pesticides have become increasingly apparent. Significant groups of pests have evolved with genetic resistance to pesticides. The
increase in resistance among insect pests has been exponential, following extensive use of chemicals over the last forty years. Ticks, insects, and spider mites (nearly 400 species) are now especially resistant, and the creation of new insecticides to combat the problem is not keeping pace with the emergence of new strains of resistant insect pests. Despite the advances in modern chemical control and the dramatic increase in chemical pesticides used on U.S. cropland, annual crop losses from all pests appear to have remained constant or to have increased. Losses caused by weeds have declined slightly, but those caused by insects have nearly doubled. The price of synthetic organic pesticides has increased significantly in recent years, placing a heavy financial burden on those who use large quantities of the materials. As farmers and growers across the United States realize the limitations and human health consequences of using artificial chemical pesticides, interest in the alternative approach of integrated pest management grows. Integrated pest management aims at management rather than eradication of pest species. Since potentially harmful species will continue to exist at tolerable levels of abundance, the philosophy now is to manage rather than eradicate the pests. The ecosystem is the management unit (every crop is in itself a complex ecological system). Spraying pesticides too often, at the wrong time, or on the wrong part of the crop may destroy the pests' natural enemies ordinarily present in the ecosystem. Knowledge of the actions, reactions, and interactions of the components of the ecosystem is requisite to effective IPM programs. With this knowledge, the ecosystem is manipulated in order to hold pests at tolerable levels while avoiding disruptions of the system. The use of natural controls is maximized. IPM emphasizes the fullest practical utilization of the existing regulating and limiting factors, in the form of parasites, predators, and weather, which check the pests' population growth. IPM users understand, however, that control procedures may produce unexpected and undesirable consequences. It takes time to change over, and determination to keep up the commitment until the desired results are achieved. An interdisciplinary systems approach is essential. Effective IPM is an integral part of the overall management of a farm, a business, or a forest. For example, timing plays an important role. Certain pests are most prevalent at particular times of the year. By altering the date on which a crop is planted, serious pest damage can be avoided. Some farmers simultaneously plant and harvest, since the procedure prevents the pests from migrating to neighboring fields after the harvest. Others may plant several different crops in the same field, thereby reducing the number of pests. The variety of crops harbors greater numbers of natural enemies and makes it more difficult for the pests to locate and colonize their
host plants. In Thailand and China, farmers flood their fields for several weeks before planting to destroy pests. Other farmers turn the soil so that pests are brought to the surface and die in the sun's heat. The development of a specific IPM program depends on the pest complex, the resources to be protected, economic values, and the availability of personnel. It also depends upon adequate funding for research and for training farmers. Some of the techniques are complex, and expert advice is needed. However, while it is difficult to establish absolute guidelines, there are general guidelines that can apply to the management of any pest group. Growers must analyze the "pest" status of each of the reputedly injurious organisms and establish economic thresholds for the "real" pests. The economic threshold is defined as the density of a pest population below which the cost of applying control measures exceeds the losses caused by the pest (a simple numerical sketch of this decision rule appears at the end of this entry). Economic threshold values are based on assessments of the pest damage potential and the ecological, sociological, and economic costs associated with control measures. A given crop, forest area, backyard, building, or recreational area may be infested with dozens of potentially harmful species at any one time. For each situation, however, there are rarely more than a few pest species whose populations expand to intolerable levels at regular and fairly predictable intervals. Key pests recur regularly at population densities exceeding economic threshold levels and are the focal point for IPM programs. Farmers must also devise schemes for lowering the equilibrium positions of key pests. A key pest will vary in severity from year to year, but its average density, known as the equilibrium position, usually exceeds its economic threshold. IPM efforts manipulate the environment in order to reduce a pest's equilibrium position to a permanent level below the economic threshold. This reduction can be achieved by the deliberate introduction and establishment of natural enemies (parasites, predators, and diseases) in areas where they did not previously occur. Natural enemies may already occur in the crop in small numbers or can be introduced from elsewhere. Certain microorganisms, when eaten by a pest, will kill it. Newer chemicals show promise as alternatives to synthetic chemical pesticides; these include insect attractant chemicals, weed and insect disease agents, and insect growth regulators or hormones. A pathogen such as Bacillus thuringiensis (Bt) has proven commercially successful. Since certain crops have an inbuilt resistance to pests, pest-resistant or pest-free varieties of seed, crop plants, ornamental plants, orchard trees, and forest trees can be used. Growers can also modify the pest environment to increase the effectiveness of the pest's biological control agents, to destroy its breeding, feeding, or shelter habitat, or otherwise render it harmless.
This includes crop rotation, destruction of crop harvest residues, soil tillage, selective burning or mechanical removal of undesirable plant species, and pruning, especially for forest pests. While nearly permanent control of key insect and plant disease pests of agricultural crops has been achieved, emergencies will occur, and all IPM advocates acknowledge this. During those times, measures should be applied that create the least ecological destruction. Growers are urged to utilize the best combination of the three basic IPM components: natural enemies, resistant varieties, and environmental modification. However, there may be times when pesticides are the only recourse. In that case, it is important to coordinate the proper pesticide, the dosage, and the timing in order to minimize the hazards to nontarget organisms and the surrounding ecosystems. Pest management techniques have been known for many years and were used widely before World War II. They were deemphasized by insect and weed control scientists and by corporate pressures as the synthetic chemicals became commercially available after the war. Now there is a renewed interest in the early control techniques and in new chemistry. Reports detailing the success of IPM are emerging at a rapid rate as thousands of farmers yearly join the ranks of those who choose to eliminate chemical pesticides. Sustainable agricultural practice increases the richness of the soil by replenishing the soil's reserves of fertility. IPM does not produce secondary problems such as pest resistance or resurgence. It also diminishes soil erosion, increases crop yields, and saves money over the long haul. Organic foods are reported to have better cooking quality, better flavor, and greater longevity in storage. And with less pesticide residue, our food is clearly healthier to eat. See also Sustainable agriculture [Liane Clorfene Casten]
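As referenced above, the economic-threshold decision rule can be sketched numerically. All values in the following sketch are invented for illustration; actual thresholds come from field research on each crop-pest combination.

```python
# Back-of-the-envelope economic-threshold check (assumed values only).

control_cost = 40.0    # $/ha to apply the control measure (assumed)
loss_per_pest = 0.8    # $/ha of crop loss per unit of pest density (assumed)

# Economic threshold: the density at which expected loss equals control cost.
threshold = control_cost / loss_per_pest
print(f"economic threshold = {threshold:.1f} pests per sample unit")

for density in (30, 50, 80):  # scouted pest densities (assumed)
    expected_loss = density * loss_per_pest
    action = "treat" if expected_loss > control_cost else "rely on natural controls"
    print(f"density {density}: expected loss ${expected_loss:.0f}/ha -> {action}")
```

Below the threshold the control measure costs more than the damage it prevents, which is why a tolerable infestation is left alone under IPM.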
RESOURCES BOOKS Baker, R. R., and P. Dunn. New Directions in Biological Control: Alternatives for Suppressing Agricultural Pests and Diseases. New York: Wiley, 1990. Burn, A. J., et al. Integrated Pest Management. New York: Academic Press, 1988. DeBach, P., and D. Rosen. Biological Control by Natural Enemies. 2nd ed. Cambridge: Cambridge University Press, 1991. Pimentel, D. The Pesticide Question: Environment, Economics and Ethics. New York: Chapman & Hall, 1992.
PERIODICALS Bottrell, D. G., and R. F. Smith. “Integrated Pest Management.” Environmental Science & Technology 16 (May 1982): 282A–288A.
Intergenerational justice
One of the key features of an environmental ethic or perspective is its concern for the health and well-being of future generations. Questions about the rights of future people and the responsibilities of those presently living are central to environmental theory and practice and are often asked and analyzed under the term intergenerational justice. Most traditional accounts or theories of justice have focused on relations between contemporaries: What distribution of scarce goods is fairest or optimally just? Should such goods be distributed on the basis of merit or need? These and other questions have been asked by thinkers from Aristotle through John Rawls. Recently, however, some philosophers have begun to ask about just distributions over time and across generations. The subject of intergenerational justice is a key concern for environmentally-minded thinkers for at least two reasons. First, human beings now living have the power to permanently alter or destroy the planet (or portions thereof) in ways that will affect the health, happiness, and well-being of people living long after we are all dead. One need only think, for example, of the radioactive wastes generated by nuclear power plants, which will be intensely "hot" and dangerous for many thousands of years. No one yet knows how to safely store such material for a hundred, much less many thousands, of years. Considered from an intergenerational perspective, then, it would be unfair (that is, unjust) for the present generation to enjoy the benefits of nuclear power while passing on to distant posterity the burdens and dangers caused by our (in)action. Second, we not only have the power to affect future generations, but we know that we have it. And with such knowledge comes the moral responsibility to act in ways that will prevent harm to future people. For example, since we know about the health effects of radiation on human beings, that knowledge imposes upon us a moral obligation not to needlessly expose anyone, now or in the indefinite future, to the harms or hazards of radioactive wastes. Many other examples of intergenerational harm or hazard exist: global warming, topsoil erosion, disappearing tropical rain forests, and depletion and/or pollution of aquifers, among others. But whatever the example, the point of the intergenerational view is the same: the moral duty to treat people justly or fairly applies not only to people now living, but to those who will live long after we are gone. To the extent that our actions produce consequences that may prove harmful to people who have not harmed (and in the nature of the case cannot harm) us, we act, by any standard, unjustly. And yet it seems quite clear that we in the present generation are in many respects acting unjustly toward distant posterity. This is true not only for harms or hazards bequeathed to
future people, but the point applies also to deprivations of various kinds. Consider, for example, the present generation's profligate use of fossil fuels. Reserves of oil and natural gas are both finite and nonreplaceable; once burned (or turned into plastic or some other petroleum-based material), a gallon of oil is gone forever; every drop or barrel used now is therefore unavailable for future people. As Wendell Berry observed, the claim that fossil fuel energy is cheap rests on a simplistic and morally doubtful assumption about the rights of the present generation: "We were able to consider [fossil fuel energy] 'cheap' only by a kind of moral simplicity: the assumption that we had a 'right' to as much of it as we could use. This was a 'right' made solely by might. Because fossil fuels, however abundant they once were, were nevertheless limited in quantity and not renewable, they obviously did not 'belong' to one generation more than another. We ignored the claims of posterity simply because we could, the living being stronger than the unborn, and so worked the 'miracle' of industrial progress by the theft of energy from (among others) our children." And that, Berry adds, "is the real foundation of our progress and our affluence. The reason that we are a rich nation is not that we have earned so much wealth — you cannot, by any honest means, earn or deserve so much. The reason is simply that we have learned, and become willing, to market and use up in our own time the birthright and livelihood of posterity." These and other considerations have led some environmentally-minded philosophers to argue for limits on present-day consumption, so as to save a fair share of scarce resources for future generations. John Rawls, for instance, constructs a just savings principle according to which members of each generation may consume no more than their fair share of scarce resources. The main difficulty in arriving at and applying any such principle lies in determining what counts as a fair share. As the number of generations taken into account increases, the share available to any single generation becomes smaller; and as the number of generations approaches infinity, any one generation's share approaches zero (this arithmetic is restated compactly at the end of this entry). Other objections have been raised against the idea of intergenerational justice. These objections can be divided into two groups, which we can call conceptual and technological. One conceptual criticism is that the very idea of intergenerational justice is itself incoherent: the idea of justice is tied to that of reciprocity or exchange; relations of reciprocity can exist only between contemporaries; therefore the concept of justice is inapplicable to relations between existing people and distant posterity. Future people are in no position to reciprocate; therefore people now living cannot be morally obligated to do anything for them.
Another conceptual objection to the idea of intergenerational justice is concerned with rights. Briefly, the objection runs as follows: future people do not (yet) exist; only actually existing people have rights, including the right to be treated justly; therefore future people do not have rights which we in the present have a moral obligation to respect and protect. Critics of this view counter that it not only rests on a too-restrictive conception of rights and justice, but that it also paves the way for grievous intergenerational injustices. Several arguments can be constructed to counter the claim that justice rests on reciprocity (and therefore applies only to relations between contemporaries) and the claim that future people do not have rights, including the right to be treated justly by their predecessors. Regarding reciprocity: since we acknowledge in ethics and recognize in law that it is possible to treat an infant or a person with a severe mental disability justly or unjustly, even though they are in no position to reciprocate, it follows that the idea of justice is not necessarily connected with reciprocity. Regarding the claim that future people cannot be said to have rights that require our recognition and respect: one of the more ingenious arguments against this view consists of modifying John Rawls's imaginary veil of ignorance. Rawls argues that principles of justice must not be partisan or favor particular people but must be blind and impartial. To ensure impartiality in arriving at principles of justice, Rawls invites us to imagine an original position in which rational people are placed behind a veil of ignorance, unaware of their age, race, sex, social class, and economic status. Unaware of their own particular position in society, rational people would arrive at and agree upon impartial and universal principles of justice. To ensure that such impartiality extends across generations, one need only thicken the veil by adding the proviso that the choosers be unaware of the generation to which they belong. Rational people would not accept or agree to principles under which predecessors could harm or disadvantage successors. Some critics of intergenerational justice argue in technological terms. They contend that existing people need not restrict their consumption of scarce or nonrenewable resources in order to save some portion for future generations. For, they argue, substitutes for these resources will be discovered or devised through technological innovations and inventions. For example, as fossil fuels become scarcer and more expensive, new fuels, such as gasohol or fusion-derived nuclear fuel, will replace them. Thus we need never worry about depleting any particular resource, because every resource can be replaced by a substitute that is as cheap, clean, and accessible as the resource it replaces. Likewise, we need not worry about generating nuclear wastes that we do not yet know how to store safely; some solution is bound to be devised sometime in the future.
Environmentally-minded critics of this technological line of argument claim that it amounts to little more than wishful thinking. Like Charles Dickens’s fictional character Mr. Micawber, those who place their faith in technological solutions to all environmental problems optimistically expect that “something will turn up.” Just as Mr. Micawber’s faith was misplaced, so too, these critics contend, is the optimism of those who expect technology to solve all problems, present and future. Of course such solutions may be found, but that is a gamble and not a guarantee. To wager with the health and well-being of future people is, environmentalists argue, immoral. There are of course many other issues and concerns raised in connection with intergenerational justice. Discussions among and disagreements between philosophers, economists, environmentalists, and others are by no means purely abstract and academic. How these matters are resolved will have a profound effect on the fate of future generations. [Terence Ball]
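The fair-share arithmetic referenced earlier in this entry can be restated compactly; this is only a formal restatement of the entry's point, not an addition to it. For a fixed, nonrenewable stock R divided equally among n generations,

```latex
\[
  s(n) = \frac{R}{n}, \qquad \lim_{n \to \infty} s(n) = 0,
\]
% so as the number of generations counted grows without bound, each
% generation's equal share of the fixed stock R shrinks toward zero.
```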
RESOURCES BOOKS Auerbach, B. E. Unto the Thousandth Generation: Conceptualizing Intergenerational Justice. New York: Peter Lang, 1995. Ball, T. Transforming Political Discourse. Oxford, England: Blackwell, 1988. Barry, B., and R. I. Sikora, eds. Obligations to Future Generations. Philadelphia: Temple University Press, 1978. Barry, B. Theories of Justice. Berkeley: University of California Press, 1988. Berry, W. The Gift of Good Land. San Francisco: North Point Press, 1981. De-Shalit, A. Why Posterity Matters: Environmental Policies and Future Generations. London and New York: Routledge, 1995. Fishkin, J., and P. Laslett, eds. Justice Between Age Groups and Generations. New Haven, CT: Yale University Press, 1991. MacLean, D., and P. G. Brown, eds. Energy and the Future. Totowa, NJ: Rowman & Littlefield, 1983. Partridge, E., ed. Responsibilities to Future Generations. Buffalo, NY: Prometheus Books, 1981. Rawls, J. A Theory of Justice. Cambridge: Harvard University Press, 1971. Wenz, P. S. Environmental Justice. Albany: State University of New York Press, 1988.
Intergovernmental Panel on Climate Change (IPCC)
The Intergovernmental Panel on Climate Change (IPCC) was established in 1988 as a joint project of the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO). The primary mission of the IPCC is to bring together the world’s leading experts on the earth’s climate to gather, assess, and disseminate scientific information about climate change, with a view to informing international and national policy makers.
The IPCC has become the highest-profile and best-regarded international agency concerned with the climatic consequences of “greenhouse gases,” such as carbon dioxide and methane, that are a byproduct of the combustion of fossil fuels. The IPCC is an organization that has been, and continues to be, at the center of a great deal of controversy.

The IPCC was established partly in response to Nobel Laureate Mario Molina’s 1985 documentation of chemical processes which occur when human-made chemicals deplete the earth’s atmospheric ozone shield. Ozone depletion is likely to result in increased levels of ultraviolet radiation reaching the earth’s surface, producing a host of health, agricultural, and environmental problems. Molina’s work helped to persuade most of the industrialized nations to ban chlorofluorocarbons and several other ozone-depleting chemicals. It also established a context in which national and international authorities began to pay serious attention to the global environmental consequences of atmospheric changes resulting from industrialization and reliance on fossil fuels.

Continuing to operate under the auspices of the United Nations and headquartered in Geneva, Switzerland, the IPCC is organized into three working groups and a task force, and meets about once a year. The first group gathers scientific data and analyzes the functioning of the climate system, with special attention to the detection of potential changes resulting from human activity. The second group’s assignment is to assess the potential socioeconomic impacts and vulnerabilities associated with climate change; it is also charged with exploring options for humans to adapt to potential climate change. The third group focuses on ways to reduce greenhouse gas emissions and to stop or slow climate change. The task force is charged with maintaining inventories of greenhouse gas emissions for all countries.

The IPCC has published its major findings in “Full Assessment” reports, issued in 1990 and 1995. The Tenth Session of the IPCC (Nairobi, 1994) directed that future full assessments be prepared approximately every five years; the Third Assessment Report was entitled “Climate Change 2001.” Special reports and technical papers are also published as the panel identifies issues.

The IPCC has drawn a great deal of criticism virtually from its inception. Massive amounts of money are at stake in policy decisions which might seek to limit greenhouse gas emissions, and much of the criticism directed at the IPCC tends to come from lobbying and research groups funded largely by industries that either produce or use large quantities of fossil fuels. Thus, a lobbying group sponsored by energy, transportation, and manufacturing interests called the Global Climate Coalition attacked parts of the 1995 report as unscientific. At the core of the controversy was Chapter Eight of the report, “Detection of Climate
Change and Attribution of Causes.” Although the IPCC was careful to hedge its conclusions in various ways, acknowledging difficulties in measurement, disagreements over methodologies for interpreting data, and general uncertainty about its findings, it nevertheless suggested a connection between greenhouse gas emissions and global warming. Not satisfied with such caveats, the Global Climate Coalition charged that the IPCC’s conclusions had been presented as far less debatable than they actually were. This cast a cloud of uncertainty over the report, at least for some United States policymakers. However, other leaders took the report more seriously: the Second Assessment Report provided important input to the negotiations that led to the development of the Kyoto Protocol in 1997, a treaty aimed at reducing the global output of greenhouse gases.

In the summer of 1996, results of new studies of the upper atmosphere were published which provided a great deal of indirect support for the IPCC’s conclusions. Investigators found significant evidence of cooling in the upper atmosphere and warming in the lower atmosphere, with this effect being especially pronounced in the southern hemisphere. These findings confirmed the predictions of global warming models such as those employed by the IPCC. Perhaps emboldened by this confirmation, but still facing a great deal of political opposition, the IPCC released an unequivocal statement about global warming and its causes in November 1996, declaring that “the balance of evidence suggests that there is a discernible human influence on global climate.” The statement made clear that a preponderance of evidence and a majority of scientific experts indicated that observable climate change was a result of human activity. The IPCC urged that all nations limit their use of fossil fuels and develop more energy-efficient technologies.

These conclusions and recommendations provoked considerable criticism from less-developed countries. Leaders of the less-industrialized areas of the world tend to view potential restrictions on the use of fossil fuels as an unfair hindrance of their efforts to catch up with the United States and Western Europe in industry, transportation, economic infrastructure, and standards of living. The industrialized nations, they point out, were allowed to develop without any such restrictions and now account for the vast majority of the world’s energy consumption and greenhouse gas emissions. These industrialized nations therefore should bear the brunt of any efforts to protect the global climate, substantially exempting the developing world from restrictions on the use of fossil fuels.

The IPCC’s conclusions and recommendations have also drawn strong opposition from industry groups in the United States, such as the American Petroleum Institute,
and conservative Republican politicians. These critics charge that the IPCC’s evidence is little more than fashionable, warmed-over theory, and that no one has yet proven conclusively that climate change is indeed related to human influence. In view of the likely massive economic impact of any aggressive program aimed at the reduction of emissions, they argue, there is no warrant for following the IPCC’s dangerous and ill-considered advice. Under Republican leadership, Congress slashed funds for Environmental Protection Agency and Department of Energy programs concerned with global warming and its causes, as well as funds for researching alternative and cleaner sources of energy.

These funding cuts and the signals they sent created foreign relations problems for the Clinton Administration. The United States was unable to honor former President Bush’s 1992 pledge (made at the Rio de Janeiro Earth Summit) to reduce the country’s emission of carbon dioxide and methane to 1990 levels by the year 2000. Indeed, owing in part to low oil prices and a strong domestic economy, the United States was consuming more energy and emitting more greenhouse gases than ever before by 2000.

In the summer of 2001, the IPCC released its strongest statement to date on the problem of global warming, in its Third Assessment Report. The report, “Climate Change 2001,” provides further evidence for global warming and its cause: the wide-scale burning of fossil fuels by humans. The report projects that global mean surface temperatures on earth will increase by 2.5–10.4°F (1.4–5.8°C) by the year 2100 unless greenhouse gas emissions are reduced well below current levels. The report also notes that this warming trend would represent the fastest warming of the earth in 10,000 years, with possible dire consequences for human society and the environment.

In the early 2000s, the administration of President George W. Bush, a former oilman, was resistant to the ideas of global warming and reducing greenhouse gas emissions. The administration strongly opposed the Kyoto Protocol and domestic pollution reduction laws, claiming such measures would cost jobs and reduce the standard of living, and that the scientific evidence was inconclusive. In June 2001, a National Academy of Sciences (NAS) panel reported to President Bush that the IPCC’s studies on global warming were scientifically valid. In April 2002, under pressure from the oil industry, the Bush administration forced the removal of IPCC Chairman Robert Watson, an American atmospheric scientist who had been outspoken on the issue of climate change and the need for greenhouse gas reduction in industrialized countries. The IPCC elected Dr. Rajendra K. Pachauri as its next Chairman at its nineteenth session in Geneva. Dr. Pachauri, a citizen of India, is a well-known expert in economics and technology, with a strong commitment to the IPCC process and to scientific integrity.

[Lawrence J. Biskowski and Douglas Dupler]
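The Fahrenheit and Celsius figures quoted above describe the same projected range. As a quick arithmetic check (this sketch is illustrative and not part of the original entry), note that a temperature change converts between the two scales with the 9/5 factor alone; the +32 offset applies only to absolute temperatures:

```python
# Quick check of the Third Assessment Report's projected warming range.
# A temperature *difference* converts between Celsius and Fahrenheit
# with the 9/5 scale factor only; the +32 offset applies to absolute
# temperatures, not to changes.

def delta_c_to_f(delta_c):
    """Convert a temperature change in degrees C to degrees F."""
    return delta_c * 9 / 5

for delta_c in (1.4, 5.8):
    print(f"{delta_c} degC of warming = {delta_c_to_f(delta_c):.1f} degF")
# Output: 1.4 degC -> 2.5 degF and 5.8 degC -> 10.4 degF,
# matching the 2.5-10.4 degF range quoted above.
```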
RESOURCES
BOOKS
McKibbin, Warwick J., and Peter Wilcoxen. Climate Change Policy After Kyoto: Blueprint for a Realistic Approach. Washington, DC: The Brookings Institution Press, 2002.
PERIODICALS
McKibben, Bill. “Climate Change 2001: Third Assessment Report.” New York Review of Books, July 5, 2001, 35.
Trenberth, Kevin E. “Stronger Evidence of Human Influences on Climate: The 2001 IPCC Assessment.” Environment, May 2001, 8.
OTHER
Intergovernmental Panel on Climate Change Home Page. [cited July 2002].
Union of Concerned Scientists Global Warming Web Page. [cited July 2002].
World Meteorological Organization Home Page. [cited July 2002].
ORGANIZATIONS
IPCC Secretariat, c/o World Meteorological Organization, 7bis Avenue de la Paix, C.P. 2300, CH-1211 Geneva, Switzerland. Phone: 41-22-730-8208, Fax: 41-22-730-8025, Email: [email protected]
Internal costs see Internalizing costs
Internalizing costs
Private market activities create so-called externalities. A negative externality occurs when a producer does not bear all the costs of an activity in which he or she engages; air pollution is a classic example. Since external costs do not enter into the calculations producers make, producers will make few attempts to limit or eliminate pollution and other forms of environmental degradation. Negative externalities are a type of market defect that economists generally agree should be corrected. Milton Friedman refers to such externalities as “neighborhood effects” (although it must be kept in mind that some forms of pollution have effects that are anything but local). The classic neighborhood effect is pollution.

The premise of a free market is that when two people voluntarily make a deal, they both benefit. If society gives everyone the right to make deals, society as a whole will benefit; it becomes richer from the aggregation of the many mutually beneficial deals that are made. However, what happens if, in making mutually beneficial deals, there is a waste product that the parties release
into the environment and that society must either suffer from or clean up? The two parties to the deal are better off, but society as a whole has to pay the costs. Friedman points out that individual members of a society cannot appropriately charge the responsible parties for external costs or find other means of redress. Friedman’s answer to this dilemma is simple: society, through government, must charge the responsible parties the costs of the clean-up. Whatever damage they generate must be internalized in the price of the transaction.

Polluters can be forced to internalize environmental costs through pollution taxes and discharge fees, a method generally favored by economists (a numeric sketch of this mechanism appears at the end of this entry). When such taxes are imposed, the market defect (the price of pollution, which is not counted in the transaction) is corrected. The market price then reflects the true social costs of the deal, and the parties have to adjust accordingly. They will have an incentive to decrease harmful activities and develop less environmentally damaging technology. The drawback of such a system is that society will not have direct control over pollution levels, although it will receive monetary compensation for any losses it sustains. Moreover, to impose a tax or charge on the polluting parties, the government would have to place a monetary value on the damage. In practice, this is difficult to do. How much for a human life lost to pollution? How much for a vista destroyed? How much for a plant or animal species brought to extinction? Finally, the idea that pollution is all right as long as the polluter pays for it is unacceptable to many people.

In fact, the government has tried to control activities with associated externalities through regulation, rather than by supplementing the price system. It has set standards for specific industries and other social entities. The standards are designed to limit environmental degradation to acceptable levels and are enforced through the Environmental Protection Agency (EPA). They prohibit some harmful activities, limit others, and prescribe alternative behaviors. When market actors do not adhere to these standards, they are subject to penalties. In theory, potential polluters are given incentives to reduce and treat their waste, manufacture less harmful products, develop alternative technologies, and so on.

In practice, the system has not worked as well as was hoped in the 1960s and 1970s, when much of the environmental legislation presently in force was enacted. Enforcement has been fraught with political and legal difficulties. Extensions on deadlines are given to cities for not meeting clean air standards and to the automobile industry for not meeting standards on the fuel economy of new cars, for instance. It has been difficult to collect fines from industries found to have been in violation. Many cases are tied up in the courts through a lengthy appeals process. Some companies simply declare bankruptcy to evade fines. Others continue
polluting because they find it cheaper to pay fines than to develop alternative production processes.

Alternative strategies presently under debate include setting up a trade in pollution permits. Under such a scheme the government would not levy a tax on pollution but would issue a number of permits that together set a maximum acceptable pollution level. Buyers of permits can either use them to cover their own polluting activities or resell them to the highest bidder. Polluters are thereby forced to internalize the environmental costs of their activities, so they have an incentive to reduce pollution, and the price of pollution is determined by the market. The disadvantage of this system is that the government has no control over where pollution takes place; it is conceivable that certain regions will have high concentrations of industries using the permits, which may result in local pollution levels that are unacceptably high. Whether marketable pollution permits address present pollution problems more satisfactorily than does regulation alone remains to be seen. See also Environmental economics

[Alfred A. Marcus and Marijke Rijsberman]
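The tax mechanism described above can be made concrete with a small numeric sketch. The prices, costs, and function names below are illustrative assumptions, not figures from this entry; the point is only that a per-unit charge equal to the external damage lowers a producer’s profit-maximizing output, and hence its pollution.

```python
# Minimal sketch of a pollution tax "internalizing" an external cost.
# All prices, costs, and quantities here are hypothetical.

def optimal_output(price, marginal_cost, tax_per_unit=0.0, max_q=1000):
    """Find the output at which producing one more unit stops being
    profitable, given a per-unit price, rising marginal cost, and an
    optional per-unit pollution tax."""
    profit, best_q, best_profit = 0.0, 0, 0.0
    for q in range(1, max_q + 1):
        profit += price - marginal_cost(q) - tax_per_unit
        if profit >= best_profit:
            best_q, best_profit = q, profit
    return best_q

price = 10.0                        # revenue per unit sold
marginal_cost = lambda q: 0.02 * q  # private cost of producing the q-th unit
external_damage = 3.0               # pollution damage per unit (unpriced)

untaxed = optimal_output(price, marginal_cost)
taxed = optimal_output(price, marginal_cost, tax_per_unit=external_damage)
print(untaxed, taxed)  # 500 vs. 350: the tax cuts output, and pollution, by 30%
```

Once the damage per unit is charged to the producer, the producer stops expanding output at the point where price covers both private cost and social damage, which is exactly the correction the entry describes.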
RESOURCES
BOOKS
Friedman, M. Capitalism and Freedom. Chicago: University of Chicago Press, 1962.
Marcus, A. Business and Society: Ethics, Government, and the World Economy. Homewood, IL: Irwin Press, 1993.
International Atomic Energy Agency
The first decade of research on nuclear weapons and nuclear reactors was characterized by extreme secrecy, and the few nations that had the technology carefully guarded their information. In 1954, however, that philosophy changed, and the United States, in particular, became eager to help other nations use nuclear energy for peaceful purposes. A program called “Atoms for Peace” brought foreign students to the United States for the study of nuclear sciences and provided enriched uranium to countries wanting to build their own reactors, encouraging interest in nuclear energy throughout much of the world.

But this program created a problem: it increased the potential diversion of nuclear information and nuclear materials for the construction of weapons, and the threat of nuclear proliferation grew. The United Nations created the International Atomic Energy Agency (IAEA) in 1957 to address this problem. The agency had two primary objectives: to encourage and assist with the development of peaceful applications of nuclear power throughout the world, and to prevent the diversion of nuclear materials to weapons research and development.
The first decade of IAEA’s existence was not marked by much success. In fact, the United States was so dissatisfied with the agency’s work that it began signing bilateral nonproliferation treaties with a number of countries. Finally, the 1970 Nuclear Non-Proliferation Treaty more clearly designated the IAEA’s responsibilities for the monitoring of nuclear material. Today the agency is an active organization within the United Nations system, and its headquarters are in Vienna. The IAEA operates with a staff of more than 800 professional workers, about 1,200 general service workers, and a budget of about $150 million.

To accomplish its goal of extending and improving the peaceful use of nuclear energy, IAEA conducts regional and national workshops, seminars, training courses, and committee meetings. It publishes guidebooks and manuals on related topics and maintains the International Nuclear Information System, a bibliographic database on nuclear literature that includes more than 1.2 million records. The database is made available on magnetic tape to its 42 member states.

The IAEA also carries out a rigorous program of inspection. In 1987, for example, it made 2,133 inspections at 631 nuclear installations in 52 non-nuclear weapon nations and four nuclear weapon nations. In a typical year, IAEA activities include conducting safety reviews in a number of different countries, assisting in dealing with accidents at nuclear power plants, providing advice to nations interested in building their own nuclear facilities, advising countries on methods for dealing with radioactive wastes, teaching nations how to use radiation to preserve foods, helping universities introduce nuclear science into their curricula, and sponsoring research on the broader applications of nuclear science.

[David E. Newton]
RESOURCES
ORGANIZATIONS
International Atomic Energy Agency, P.O. Box 100, Wagramer Strasse 5, Vienna, Austria A-1400. Phone: (431) 2600-0, Fax: (431) 2600-7, Email: [email protected]

International Cleaner Production Cooperative
The International Cleaner Production Cooperative is an Internet resource that was implemented to provide the international community with access to globally relevant information about cleaner production and pollution prevention. The site is hosted by the U.S. Environmental Protection Agency and
gives access to a consortium of World Wide Web sites that provide information to businesses, professionals, and local, regional, national, and international agencies that are striving for cleaner production. The cooperative provides links to people and businesses involved with cleaner production and pollution prevention, and to sources of technical assistance and information on international policy. The United Nations Environment Programme (UNEP) is one of the primary members of the cooperative.

[Marie H. Bundy]
International Convention for the Regulation of Whaling (1946)
The International Whaling Commission (IWC) was established in 1949 following the inaugural International Convention for the Regulation of Whaling, which took place in Washington, D.C., in 1946. Many nations have membership in the IWC, which primarily sets whaling quotas. The purpose of these quotas is twofold: they are intended to protect whale species from extinction while allowing a limited whaling industry. In recent times, however, the IWC has come under attack. The vast majority of nations in the Commission have come to oppose whaling of any kind and object to the IWC’s practice of establishing quotas. Meanwhile, some nations, principally Iceland, Japan, and Norway, wish to protect their traditional whaling industries and are against the quotas set by the IWC. With two such divergent factions opposing the IWC, its future is as doubtful as that of the whales.

Since its inception, the Commission has had difficulty implementing its regulations and gaining approval for its recommendations; in the meantime, whale populations have continued to dwindle. In its original design, the IWC consisted of two sub-committees, one scientific and the other technical. Any recommendation that the scientific committee put forth was subject to the politicized technical committee before final approval. The technical committee evaluated each recommendation and changed it if it was not politically or economically viable; as a result, the scientific committee’s recommendations were often rendered powerless. Furthermore, any nation that decided an IWC recommendation was not in its best interest could dismiss it simply by registering an objection.

In the 1970s this gridlock and inaction attracted public scrutiny; people objected to the IWC’s failure to protect the world’s whales. Thus in 1972 the United Nations Conference on the Human Environment voted overwhelmingly to stop commercial whaling. Nevertheless, the IWC retained some control over the whaling industry. In 1974 the Commission attempted to bring scientific research to management strategies in its
“New Management Procedure.” The IWC assessed whale populations with finer resolution, scrutinizing each species to see whether it could be hunted without being driven to extinction. It classified whale stocks as “initial management stocks” (harvestable), “sustained management stocks” (harvestable), or “protection stocks” (unharvestable); a schematic sketch of this classification appears at the end of this entry. While these classifications were necessary for effective management, much was unknown about whale population ecology, and quota estimates contained high levels of uncertainty.

Since the 1970s, public pressure has caused many nations in the IWC to oppose whale hunting of any kind. At first, one or two nations proposed a whaling moratorium each year. Both pro- and anti-whaling countries began to encourage new IWC members to vote for their respective positions, thus dividing the Commission. In 1982, the IWC enacted a limited moratorium on commercial whaling, to be in effect from 1986 until 1992. During that time it would thoroughly assess whale stocks and afterward allow whaling to resume for selected species and areas. Norway and Japan, however, obtained special permits for whaling for scientific research: they continued to catch approximately 400 whales per year, and the meat was sold to restaurants.

Then in 1992, the year when whaling was supposed to have resumed, many nations voted to extend the moratorium. Iceland, Norway, and Japan objected strongly to what they saw as an infringement on their traditional industries and eating customs. Iceland subsequently left the IWC, and Japan and Norway have threatened to follow. These countries intend to resume their whaling programs. Members of the IWC are torn between accommodating these nations in some way and protecting the whales, and amid such controversy it is unlikely that the Commission can continue in its present mission.

Although the IWC has not been able to marshal its scientific advances or enforce its own regulations in managing whaling, it is broadening its original mission. The Commission may begin to govern the hunting of small cetaceans such as dolphins and porpoises, which are believed to suffer from overhunting.

[David A. Duffus and Andrea Gacki]
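The three-way stock classification described above lends itself to a simple illustration. The sketch below is a hypothetical rendering, not the IWC’s actual procedure: in particular, the cutoff fractions are placeholder values, since the real New Management Procedure tied classifications to estimates of maximum sustainable yield.

```python
# Hypothetical sketch of the IWC's three-way stock classification.
# The threshold fractions below are illustrative placeholders, not
# the values used by the actual New Management Procedure.
from enum import Enum

class StockClass(Enum):
    INITIAL_MANAGEMENT = "harvestable; stock near unexploited level"
    SUSTAINED_MANAGEMENT = "harvestable; stock at a sustainable level"
    PROTECTION = "unharvestable; stock depleted"

def classify(population, carrying_capacity,
             protect_below=0.5, initial_above=0.7):
    """Classify a whale stock by the fraction of its estimated
    unexploited (carrying-capacity) population that remains."""
    fraction = population / carrying_capacity
    if fraction < protect_below:
        return StockClass.PROTECTION        # too depleted to hunt
    if fraction > initial_above:
        return StockClass.INITIAL_MANAGEMENT
    return StockClass.SUSTAINED_MANAGEMENT

print(classify(8_000, 100_000))   # StockClass.PROTECTION
print(classify(90_000, 100_000))  # StockClass.INITIAL_MANAGEMENT
```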
RESOURCES
BOOKS
Burton, R. The Life and Death of Whales. London: Andre Deutsch Ltd., 1980.
Kellogg, R. The International Whaling Commission. International Technical Conference on Conservation of Living Resources of the Sea. New York: United Nations Publications, 1955.
PERIODICALS
Holt, S. J. “Let’s All Go Whaling.” The Ecologist 15 (1985): 113–124.
Pollack, A. “Commission to Save Whales Endangered, Too.” The New York Times, May 18, 1993, B8.
International Council for Bird Preservation see BirdLife International
International Geosphere-Biosphere Programme (U.N. Environment Programme)
Research scientists from all countries have always interacted with each other closely. But in recent decades, a new type of internationalism has begun to evolve, in which scientists from all over the world work together on very large projects concerning the planet. An example is research on global change. A number of scientists have come to believe that human activities, such as the use of fossil fuels and deforestation of tropical rain forests, may be altering the earth’s climate. To test that hypothesis, a huge amount of meteorological data must be collected from around the world, and no single institution can possibly obtain and analyze it all.

A major effort to organize research on important, worldwide scientific questions such as climate change was begun in the early 1980s. Largely through the efforts of scientists from two United States organizations, the National Aeronautics and Space Administration (NASA) and the National Research Council, a proposal was developed for the creation of an International Geosphere-Biosphere Programme (IGBP). The purpose of the IGBP was to help scientists from around the world focus on major issues about which there was still too little information. Activity funding comes from national governments, scientific societies, and private organizations.

IGBP was not designed to be a new organization, with new staff, new researchers, and new funding problems. Instead, it was conceived of as a coordinating program that would call on existing organizations to attack certain problems. The proposal was submitted in September 1986 to the General Assembly of the International Council of Scientific Unions (ICSU), where it received enthusiastic support. Within two years, more than 20 nations had agreed to cooperate with IGBP, forming national committees to work with the international office. A small office, administered by Harvard oceanographer James McCarthy, was installed at the Royal Swedish Academy of Sciences in Stockholm.

IGBP has moved forward rapidly. It identified existing programs that fit the Programme’s goals and developed new research efforts. Because many global processes are gradual, a number of IGBP projects are designed with time frames of ten to twenty years.
By the early 1990s, IGBP had defined a number of projects, including the Joint Global Ocean Flux Study; the Land-Ocean Interactions in the Coastal Zone study; the Biospheric Aspects of the Hydrological Cycle research; Past Global Changes; Global Analysis, Interpretation and Modeling; and the Global Change System for Analysis, Research and Training.

[David E. Newton]
RESOURCES
BOOKS
Kupchella, C. E. Environmental Science: Living within the System of Nature. Boston: Allyn and Bacon, Inc., 1986.
PERIODICALS
Edelson, E. “Laying the Foundation.” Mosaic (Fall/Winter 1988): 4–11.
Perry, J. S. “International Institutions for the Global Environment.” MTS Journal (Fall 1991): 27–28.
OTHER
International Geosphere-Biosphere Programme. [cited June 2002].
International Institute for Sustainable Development
The International Institute for Sustainable Development (IISD) is a nonprofit organization that serves as an information and resources clearinghouse for policy makers promoting sustainable development. IISD aims to promote sustainable development in decision making worldwide by assisting with policy analysis, providing information about practices, measuring sustainability, and building partnerships to further sustainability goals. It serves businesses, governments, communities, and individuals in both developing and industrialized nations. IISD’s stated aim is to “create networks designed to move sustainable development from concept to practice.” Founded in 1990 and based in Winnipeg, Canada, IISD is funded by foundations, governmental organizations, private sector sources, and revenue from publications and products.

IISD works in seven program areas. The Business Strategies program focuses on improving competitiveness, creating jobs, and protecting the environment through sustainability; projects include several publications and the EarthEnterprise program, which offers entrepreneurial and employment strategies. IISD’s Trade and Sustainable Development program works on building positive relationships between trade, the environment, and development; it examines how to make international accords, such as those made by the World Trade Organization, compatible with the goals of sustainable development. The Community Adaptation and Sustainable Livelihoods program identifies adaptive
strategies for drylands in Africa and India, and it examines the influences of policies and new technology on local ways of life. The Great Plains program works with community, farm, government, and industry groups to assist communities in the Great Plains region of North America with sustainable development; it focuses on government policies in agriculture, such as the Western Grain Transportation Act, as well as loss of transportation subsidies, the North American Free Trade Agreement (NAFTA), soil salination and loss of wetlands, job loss, and technological advances. Measurement and Indicators aims to set measurable goals and progress indicators for sustainable development; as part of this, IISD offers information about the successful uses of taxes and subsidies to encourage sustainability worldwide. Common Security focuses on initiatives of peace and consensus building.

IISD’s Information and Communications program offers several publications and Internet sites featuring information on sustainable development issues, terms, events, and media coverage. This includes Earth Negotiations Bulletin, which provides on-line coverage of major environmental and development negotiations (especially United Nations conferences), and IISDnet, with information about sustainable development worldwide. IISD also publishes more than 50 books, monographs, and discussion papers, including Sourcebook on Sustainable Development, which lists organizations, databases, conferences, and other resources.

IISD produces five journals. These include Developing Ideas, published bimonthly both in print and electronically, which features articles on sustainable development terms, issues, resources, and recent media coverage, and Earth Negotiations Bulletin, which reports on conferences and negotiations meetings, especially United Nations conferences. IISD’s Internet journal, /linkages/journal/, is a bimonthly electronic multimedia subscription magazine focusing on global negotiations. Its reporting service, Sustainable Developments, reports on environmental and development negotiations for meetings and symposia via the Internet. IISD also operates IISDnet (http://iisd1.iisd.ca/), an Internet information site featuring research, new trends, global activities, contacts, and information on IISD’s activities and projects, including United Nations negotiations on environment and development, corporate environmental reporting, and information on trade issues.

IISD is the umbrella organization for Earth Council, an international nongovernmental organization (NGO) created in 1992 as a result of the United Nations Earth Summit. The organization creates measurements for achieving sustainable development and assesses practices and economic measures for their effects on sustainable development. Earth Council coordinated the Rio+5 Forum in Rio de Janeiro in March 1997, which assessed progress towards
sustainable development since the Earth Summit in 1992. IISD has produced two publications, Trade and Sustainable Development and Guidelines for the Practical Assessment of Progress Toward Sustainable Development.

[Carol Steinfeld]
RESOURCES
ORGANIZATIONS
International Institute for Sustainable Development, 161 Portage Avenue East, 6th Floor, Winnipeg, Manitoba, Canada R3B 0Y4. Phone: (204) 958-7700, Fax: (204) 958-7710, Email: [email protected]
International Joint Commission
The International Joint Commission (IJC) is a permanent, independent organization of the United States and Canada formed to resolve trans-boundary ecological concerns. Founded in 1912 as a result of provisions under the Boundary Waters Treaty of 1909, the IJC was patterned after an earlier organization, the Joint Commission, which was formed by the United States and Britain. The IJC consists of six commissioners, three appointed by the President of the United States and three by the Governor-in-Council of Canada, plus support personnel. The commissioners and their organizations generally operate free from direct influence or instruction from their national governments. The IJC is frequently cited as an excellent model for international dispute resolution because of its history of successfully and objectively dealing with natural resource and environmental disputes between friendly countries.

The major activities of the IJC have dealt with apportioning, developing, conserving, and protecting the binational water resources of the United States and Canada. Some other issues, including transboundary air pollution, have also been addressed by the Commission. The power of the IJC comes from its authority to initiate scientific and socio-economic investigations, conduct quasi-judicial inquiries, and arbitrate disputes.

Of special concern to the IJC have been issues related to the Great Lakes. Since the early 1970s, IJC activities have been substantially guided by provisions under the 1972 and 1978 Great Lakes Water Quality Agreements plus updated protocols. For example, it is widely acknowledged, and well documented, that environmental quality and ecosystem health have been substantially degraded in the Great Lakes. In 1985, the Water Quality Board of the IJC recommended that states and provinces with Great Lakes boundaries make a collective commitment to address this communal problem, especially with respect to pollution. These governments agreed to develop and implement remedial
action plans (RAPs) towards the restoration of environmental health within their political jurisdictions. Forty-three areas of concern have been identified on the basis of environmental pollution, and each of these will be the focus of a remedial action plan. An important aspect of the design and intent of the overall program, and of the individual RAPs, will be developing a process of integrated ecosystem management.

Ecosystem management involves systematic, comprehensive approaches toward the restoration and protection of environmental quality. The ecosystem approach involves consideration of interrelationships among land, air, and water, as well as those between the inorganic environment and the biota, including humans. The ecosystem approach would replace the separate, more linear approaches that have traditionally been used to manage environmental problems. These conventional attempts have included directed programs to deal with particular resources such as fisheries, migratory birds, land use, or point sources and area sources of toxic emissions. Although these non-integrated methods have been useful, they have been limited because they have failed to account for important inter-relationships among environmental management programs and among components of the ecosystem.

[Bill Freedman Ph.D.]
RESOURCES
ORGANIZATIONS
International Joint Commission, 1250 23rd Street, NW, Suite 100, Washington, D.C., USA 20440. Phone: (202) 736-9000, Fax: (202) 735-9015
International Primate Protection League
Founded in 1974 by Shirley McGreal, the International Primate Protection League (IPPL) is a global conservation organization that works to protect nonhuman primates, especially monkeys and apes (chimpanzees, orangutans, gibbons, and gorillas). IPPL has 30,000 members, branches in the United Kingdom, Germany, and Australia, and field representatives in 31 countries. Its advisory board consists of scientists, conservationists, and experts on primates, including the world-renowned primatologist Jane Goodall, whose famous studies and books are considered the authoritative texts on chimpanzees. Her studies have also heightened public interest in and sympathy for chimpanzees and other nonhuman primates.
IPPL runs a sanctuary and rehabilitation center at its Summerville, South Carolina, headquarters, which houses two dozen gibbons and other abandoned, injured, or traumatized primates who are refugees from medical laboratories or abusive pet owners.

IPPL concentrates on investigating and fighting the multi-million dollar commercial trafficking in primates for medical laboratories, the pet trade, and zoos, much of which is illegal trade and smuggling of endangered species protected by international law. IPPL is considered the most active and effective group working to stem the cruel and often lethal trade in primates, and its work has helped to save the lives of literally tens of thousands of monkeys and apes, many of which are threatened or endangered species. For example, the group was instrumental in persuading the governments of India and Thailand to ban or restrict the export of monkeys, which were being shipped by the thousands to research laboratories and pet stores across the world. The trade in primates is especially cruel and wasteful, since a common way of capturing them is by shooting the mother, which then enables poachers to capture the infant. Many captured monkeys and apes die en route to their destinations, often transported in sacks or crates, or hidden in other containers.

IPPL often undertakes actions and projects that are dangerous and require a good deal of skill. In 1992, its investigations led to the conviction of a Miami, Florida, animal dealer for conspiring to help smuggle six baby orangutans captured in the jungles of Borneo. The endangered orangutan is protected by the Convention on International Trade in Endangered Species of Fauna and Flora (CITES), as well as by the United States Endangered Species Act. In retaliation, the dealer unsuccessfully sued McGreal, as did a multi-national corporation she once criticized for its plan to capture chimpanzees and use them for hepatitis research in Sierra Leone.

A more recent victory for IPPL occurred in April 2002. In 1997, Chicago O’Hare airport had received two shipments from Indonesia, each of which contained more than 250 illegally imported monkeys, including dozens of unweaned babies. After several years of pursuing the issue, the U.S. Fish and Wildlife Service and U.S. federal prosecutors charged the LABS Company (a United States-based breeder of monkeys for research) and several of its employees, including its former president, with eight felonies and four misdemeanors.

IPPL publishes IPPL News several times a year and sends out periodic letters alerting members to events and issues that affect primates.

[Lewis G. Regenstein]
RESOURCES
ORGANIZATIONS
International Primate Protection League, P.O. Box 766, Summerville, SC USA 29484. Phone: (843) 871-2280, Fax: (843) 871-7988, Email: [email protected]
International Register of Potentially Toxic Chemicals (U.N. Environment Programme)
The International Register of Potentially Toxic Chemicals is published by the United Nations Environment Programme (UNEP). Part of UNEP’s three-pronged Earthwatch program, the register is an international inventory of chemicals that threaten the environment. Along with the Global Environment Monitoring System and INFOTERRA, the register monitors and measures environmental problems worldwide.

Information from the register is routinely shared with agencies in developing countries. Third World countries have long been the toxic dumping grounds for the world, and they still use many chemicals that have been banned elsewhere. Environmental groups regularly send information from the register to toxic chemical users in developing countries as part of their effort to stop the export of toxic pollution.

RESOURCES
ORGANIZATIONS
International Register of Potentially Toxic Chemicals, Chemin des Anémones 15, Genève, Switzerland CH-1219. Phone: +41-22-979 91 11, Fax: +41-22-979 91 70, Email:
[email protected]

International Society for Environmental Ethics
The International Society for Environmental Ethics (ISEE) is an organization that seeks to educate people about environmental ethics and philosophy concerning nature. An environmental ethic holds that humans have a moral duty to sustain the natural environment; the field attempts to answer how humans should treat other species (plant and animal), use Earth’s natural resources, and value the aesthetic experiences of nature. The society is an auxiliary organization of the American Philosophical Association, with about 700 members in over 20 countries. Many of ISEE’s current members are philosophers, teachers, or environmentalists. The ISEE officers include president Mark Sagoff (Institute for Philosophy and Public Policy, University of Maryland) and vice president John Baird Callicott (professor of philosophy at the University of North Texas). Two other key members are the editors
of the ISEE newsletter, Jack Weir and Holmes Rolston, III (professor of philosophy, Colorado State University). All have contributed to the ongoing ISEE Master Environmental Ethics Bibliography.

ISEE publishes a quarterly newsletter, available to members in print form, and maintains an Internet site of back issues. Of special note is the ISEE Bibliography, an ongoing project that contains over 5,000 records from journals such as Environmental Ethics, Environmental Values, and the Journal of Agricultural and Environmental Ethics. Another work in progress, the ISEE Syllabus Project, continues to be developed by Callicott and Robert Hood, a doctoral candidate at Bowling Green State University. They maintain a database of course offerings in environmental philosophy and ethics, based on information from two-year community colleges, four-year state universities, private institutions, and master’s- and doctorate-granting universities. ISEE also supports the enviroethics program, which has spurred many Internet discussion groups and is constantly expanding into new areas of communication.

[Nicole Beatty]
RESOURCES
ORGANIZATIONS
International Society for Environmental Ethics, Environmental Philosophy Inc., Department of Philosophy, University of North Texas, P.O. Box 310980, Denton, TX USA 76203-0980
International trade in toxic waste
Just as VCRs, cars, and laundry soap are traded across borders, so too is the waste that accompanies their production. In the United States alone, industrial production accounts for at least 500 million lb (230 million kg) of hazardous waste a year. The industries of other developed nations also produce waste. While some of it is disposed of within national borders, a portion is sent to other countries where costs are cheaper and regulations less stringent than in the waste’s country of origin.

Unlike consumer products, internationally traded hazardous waste has begun to meet local opposition. In some recent high-profile cases, barges filled with waste have traveled the world looking for final resting places. In at least one case, a ship may have dumped about ten tons of toxic municipal incinerator ash in the ocean after being turned away from dozens of ports. In recent years national and international bodies have begun to voice official opposition to this dangerous trade through bans and regulations.

The international trade in toxic wastes is, at bottom, waste disposal with a foreign-relations twist. Typically a
manufacturing facility generates waste during the production process. The facility manager pays a waste-hauling firm to dispose of the waste. If the landfills in the country of origin cost too much, or if there are no landfills that will take the waste, the disposal firm will find a cheaper option, perhaps a landfill in another country. In the United States, the shipper must then notify the Environmental Protection Agency (EPA), which in turn notifies the State Department. After ascertaining that the destination country will indeed accept the waste, American regulators approve the sale. (A schematic sketch of this consent chain appears at the end of this entry.)

Disposing of the waste overseas in a landfill is only the most obvious example of this international trade. Waste haulers also sell their cargo as raw materials for recycling. For example, used lead-acid batteries discarded by American consumers are sent to Brazil, where factory workers extract and resmelt the lead. Though the lead acid alone would be classified as hazardous, whole batteries are not, so waste haulers can ship these batteries overseas without notification to Mexico, Japan, and Canada, among other countries. In other cases, waste haulers sell products, like DDT, that have been banned in one country to buyers in another country that has no ban.

Whatever the strategy for disposal, waste haulers are most commonly small, independent operators who provide a service to waste producers in industrialized countries. These haulers bring waste to other countries to take advantage of cheaper disposal options and less stringent regulatory climates. Some countries forbid the disposal of certain kinds of waste; countries without such prohibitions will import more waste. Cheap landfills depend on cheap labor and land, so countries with an abundance of both can become attractive destinations. Entrepreneurs or government officials in countries, like Haiti, or regions within countries, such as Wales, that lack a strong manufacturing base, view waste disposal as a viable, inexpensive business. Inhabitants may view it as the best way to make money and create jobs. Simply by storing hazardous waste, the country of Guinea-Bissau could have made $120 million, more money than its annual budget.

Though the less developed countries (LDCs) predictably receive large amounts of toxic waste, the bulk of the international trade occurs between industrialized nations. Canada and the United Kingdom in particular import large volumes of toxic waste. Canada imports almost 85% of the waste sent abroad by American firms, approximately 150,000 lb (70,000 kg) per year. The bulk of this waste ends up at an incinerator in Ontario or a landfill in Quebec. Because Canada’s disposal regulations are less strict than United States laws, the operators of the landfill and incinerator can charge lower fees than similar disposal sites in the United States.

A waste hauler’s life becomes complicated when the receiving country’s government or local activists discover that
the waste may endanger health and the environment. Local regulators may step in and forbid the sale. This happened many times in the case of the Khian Sea, a ship that had contracted to dispose of Philadelphia’s incinerator ash. The ship was turned away from Haiti, from Guinea-Bissau, from Panama, and from Sri Lanka. For two years, beginning in 1986, the ship carried the toxic ash from port to port looking for a home for its cargo before finally, mysteriously, losing the ash somewhere in the Indian Ocean.

This early resistance to toxic-waste dumping has since led to the negotiation of international treaties forbidding or regulating the trade in toxic waste. In 1989, the African, Caribbean, and Pacific countries (ACP) and the countries belonging to the European Economic Community (EEC) negotiated the Lomé IV Convention, which bans shipments of nuclear and hazardous waste from the EEC to the ACP countries. ACP countries further agreed not to import such waste from non-EEC countries. Environmentalists have encouraged the EEC to broaden its commitment to limiting the waste trade. In the same year, under the auspices of the United Nations Environment Programme (UNEP), the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal was negotiated. It requires shippers to obtain government permission from the destination country before sending waste to foreign landfills or incinerators. Critics contend that Basel merely formalizes the trade.

In 1991, the nations of the Organization of African Unity negotiated another treaty restricting the international waste trade. The Bamako Convention on the Ban of the Import into Africa and the Control of Transboundary Movement and Management of Hazardous Wastes within Africa criminalized the import of all hazardous waste. Bamako further forbade waste traders from importing into Africa materials that had been banned in their country of origin. Bamako also radically redefined the assessment of what constitutes a health hazard: under the treaty, all chemicals are considered hazardous until proven otherwise.

These international strategies find their echoes in national law. Less developed countries have tended to follow the Lomé and Bamako examples; at least eighty-three African, Latin-Caribbean, and Asian-Pacific countries have banned hazardous waste imports. And the United States, in a policy similar to the Basel Convention, requires hazardous waste shipments to be authorized by the importing country’s government.

The efforts to restrict the toxic waste trade reflect, in part, a desire to curb environmental inequity. When waste flows from a richer country to a poorer country or region, the inhabitants living near the incinerator, landfill, or
recycling facility are exposed to the dangers of toxic compounds. For example, tests of workers in the Brazilian lead resmelting operation found blood-lead levels several times the United States standard; lead was also found in the water supply of a nearby farm after five cows died. The loose regulations that keep prices low and attract waste haulers mean that there are fewer safeguards for local health and the environment. For example, leachate from unlined landfills can contaminate local groundwater. Jobs in the disposal industry tend to be lower paying than jobs in manufacturing. The inhabitants of the receiving country receive the wastes of industrialization without the benefits.

Stopping the waste trade is a way to force manufacturers to change production processes. As long as cheap disposal options exist, there is little incentive to change. A waste-trade ban makes hazardous waste expensive to discard and will force business to search for ways to reduce this cost. Companies that want to reduce their hazardous waste may opt for source reduction, which limits the hazardous components in the production process. This can both reduce production costs and increase output. A Monsanto facility in Ohio saved more than $3 million a year while eliminating more than 17 million lb (8 million kg) of waste; according to officials at the plant, average yield increased by 8%. Measures forced by a lack of disposal options can therefore benefit the corporate bottom line while reducing risks to health and the environment.

See also Environmental law; Environmental policy; Groundwater pollution; Hazardous waste siting; Incineration; Industrial waste treatment; Leaching; Ocean dumping; Radioactive waste; Radioactive waste management; Smelter; Solid waste; Solid waste incineration; Solid waste recycling and recovery; Solid waste volume reduction; Storage and transport of hazardous materials; Toxic substance; Waste management; Waste reduction

[Alair MacLean]
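The notification sequence described earlier in this entry (shipper to EPA, EPA to State Department, then consent from the destination government) can be summarized schematically. The sketch below is purely illustrative; the names, fields, and flow are assumptions made for exposition, not an actual EPA or Basel Convention interface.

```python
# Illustrative sketch of the prior-informed-consent chain for U.S.
# hazardous waste exports described above. All names and fields are
# hypothetical; this is not an actual regulatory system.

from dataclasses import dataclass

@dataclass
class ExportNotice:
    shipper: str
    waste_type: str
    destination: str

def approve_export(notice: ExportNotice, destination_consents: bool) -> bool:
    """Walk the notification chain; the sale is approved only after the
    destination country confirms it will accept the waste."""
    print(f"Shipper {notice.shipper} notifies EPA of {notice.waste_type}")
    print(f"EPA notifies the State Department; {notice.destination} is asked to consent")
    if not destination_consents:
        print(f"{notice.destination} refuses the shipment; export not approved")
        return False
    print(f"{notice.destination} consents; regulators approve the sale")
    return True

approve_export(ExportNotice("Acme Disposal", "incinerator ash", "Canada"), True)
```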
RESOURCES
BOOKS
Dorfman, M., W. Muir, and C. Miller. Environmental Dividends: Cutting More Chemical Waste. New York: INFORM, 1992.
Moyers, B. D. Global Dumping Ground: The International Traffic in Hazardous Waste. Cabin John, MD: Seven Locks Press, 1990.
Vallette, J., and H. Spalding. The International Trade in Wastes: A Greenpeace Inventory. Washington, DC: Greenpeace, 1990.
PERIODICALS
Chepesiuk, R. “From Ash to Cash: The International Trade in Toxic Waste.” E Magazine 2 (July-August 1991): 30–37.
International Union for the Conservation of Nature and Natural Resources see IUCN—The World Conservation Union
International Voluntary Standards
International Voluntary Standards are industry guidelines or agreements that provide technical specifications so that products, processes, and services can be used worldwide. The need for a set of international standards to be followed and used consistently for environmental management systems was recognized in response to an increased desire by the global community to improve environmental management practices.

In the early 1990s, the International Organization for Standardization (ISO), which is located in Geneva, Switzerland, began development of a strategic plan to promote a common international approach to environmental management. ISO 14000 is the title of a series of voluntary international environmental standards under development by ISO and its 142 member nations, including the United States. Some of the standards developed by ISO include standardized sampling, testing, and analytical methods for use in the monitoring of environmental variables such as the quality of air, water, and soil.

[Marie H. Bundy]
International Whaling Commission see International Convention for the Regulation of Whaling (1946)
International Wildlife Coalition
The International Wildlife Coalition (IWC) was established in 1984 by a small group of individuals who came from a variety of environmental and animal rights organizations. Like many NGOs (nongovernmental organizations) that arose in the 1970s and 1980s, the IWC began with work to protect whales, raising money for conservation programs focused on endangered Atlantic humpback whale populations. The humpback was one of the first species in which researchers identified individual animals through tail photographs, and using this technique the IWC developed what is now a common tool: a whale adoption program based on individual animals with human names.

From that basis, the fledgling group established itself in an advocacy role with three principles in its mandate: to prevent cruelty to wildlife, to prevent killing of wildlife, and to prevent destruction of wildlife habitat. In light of
those principles, the IWC can be characterized as an extended animal rights organization. It maintains the “prevention of cruelty” aspect common to humane societies, perhaps the oldest progenitor of animal rights groups. In standing by an ethic of preventing killing, it stands with animal rights groups; but by protecting habitat it takes a more significant step, acting in a broad way to achieve its first two principles.

The program thus works at both ends of the spectrum, undertaking wildlife rehabilitation and other programs dealing with individual animals, as well as lobbying and promoting letter-writing campaigns to improve wildlife legislation. For example, the IWC has used its Brazilian office to create pressure to combat the international trade in exotic pets, and its Canadian office to oppose the harp seal hunt and the weakening of Canada’s impending endangered species legislation. Its United States-based operation has built a reputation in the research field, working with government agencies to ensure that whale-watching on the eastern seaboard does not harm the whales. Offices in the United Kingdom are a focus for IWC concern over European Community policies, such as lifting the ban on importing fur from animals killed in leg-hold traps.

It has become evident that the diversity within the varied groups that constitute the environmental community is a positive force; however, most conservation NGOs do not cross the gulf between animal rights and habitat conservation. A clear distinction exists between single-animal approaches and broader conservation ideals, as they appeal to different protection strategies and potentially different donors. Although the emotional appeal of releasing porpoises alive from fishing nets outranks backroom lobbying for changes in fishing regulations, the lobbying effort protects more porpoises. The IWC may be deemed more successful for exploiting a range of targets, or less successful than a dedicated advocacy group applying all its focus to one issue. It can point to growth from a modest 3,000 supporters at its founding to over 100,000 people supporting the International Wildlife Coalition today.

[David Duffus]
RESOURCES ORGANIZATIONS International Wildlife Coalition, 70 East Falmouth Highway, East Falmouth, MA USA 02536 (508) 548-8328, Fax: (508) 548-8542, Email:
[email protected],
Intrinsic value Saying that an object has intrinsic value means that, even though it has no specific use, market, or monetary value, it nevertheless can be valuable in and of itself and for its own sake. The Northern spotted owl (Strix occidentalis caurina), for example, has no instrumental or market value; it is not a means to any human end, nor is it sold or traded in any market. But, environmentalists argue, utility and price are not the only measures of worth. Indeed, they say, some of the things humans value most—truth, love, respect—are not for sale at any price, and to try to put a price on them would only tend to cheapen them. Such things have “intrinsic value.” Similarly, environmentalists say, the natural environment and its myriad life-forms are valuable in their own right. Wilderness, for instance, has intrinsic value and is worth protecting for its own sake. To say that something has intrinsic value is not necessarily to deny that it may also have instrumental value for humans and non-human animals alike. Deer, for example, have intrinsic value; but they also have instrumental value as a food source for wolves and other predator species. See also Shadow pricing
Introduced species Introduced species (also called invasive species) are those that have been released by humans into an area to which they are not native. These releases can occur accidentally, from places such as the cargo holds of ships. They can also occur intentionally; species have been introduced for a range of ornamental and recreational uses, as well as for agricultural, medicinal, and pest control purposes. Introduced species can have dramatic and unpredictable effects on the environment and native species. Such effects can include overabundance of the introduced species, competitive displacement, and disease-caused mortality of the native species. Numerous examples of adverse consequences associated with the accidental release of species or the long-term effects of deliberately introduced species exist in the United States and around the world. Introduced species can be beneficial as long as they are carefully regulated. Almost all the major varieties of grain and vegetables used in the United States originated in other parts of the world. This includes corn, rice, wheat, tomatoes, and potatoes. The kudzu vine, which is native to Japan, was deliberately introduced into the southern United States for erosion control and to shade and feed livestock. It is, however, an extremely aggressive and fast-growing species, and it can form continuous blankets of foliage that cover forested hillsides, resulting in malformed and dead trees. Other species introduced as ornamentals have spread into the wild, displacing or outcompeting native species. Several varieties of cultivated roses, such as the multiflora rose, are serious pests and nuisance shrubs in fields and pastures. The purple loosestrife, with its beautiful purple flowers, was originally
brought from Europe as a garden ornamental. It has spread rapidly in freshwater wetlands in the northern United States, displacing other plants such as cattails. This is viewed with concern by ecologists and wildlife biologists, since the food value of loosestrife is minimal, while the roots and starchy tubers of cattails are an important food source for muskrats. Common ragweed was accidentally introduced to North America, and it is now a major health irritant for many people. Introduced species are sometimes so successful because human activity has changed the conditions of a particular environment. The Pine Barrens of southern New Jersey form an ecosystem that is naturally acidic and low in nutrients. Bogs in this area support a number of slow-growing plant species that are adapted to these conditions, including peat moss, sundews, and pitcher plants. But urban runoff, which contains fertilizers, and wastewater effluent, which is high in both nitrogen and phosphorus, have enriched the bogs; the waters there have become less acidic and shown a gradual elevation in the concentration of nutrients. These changes in aquatic chemistry have resulted in changes in plant species, and the acidophilous mosses and herbs are being replaced by fast-growing plants that are not native to the Pine Barrens. Zebra mussels were transported by accident from Europe to the United States, and they are causing severe problems in the Great Lakes. They proliferate at a prodigious rate, crowding out native species and clogging industrial and municipal water-intake pipes. Many ecologists fear that shipping traffic will transport the zebra mussel to harbors all over the country. Scattered observations of this tiny mollusk have already been made in the lower Hudson River in New York. Although introduced species are usually regarded with concern, they can occasionally be used to some benefit. The water hyacinth is an aquatic plant of tropical origin that has become a serious clogging nuisance in lakes, streams, and waterways in the southern United States. Numerous methods of physical and chemical removal have been attempted to eradicate or control it, but research has also established that the plant can improve water quality. The water hyacinth has proved useful in the withdrawal of nutrients from sewage and other wastewater. Many constructed wetlands, polishing ponds, and waste lagoons in waste treatment plants now take advantage of this fact by routing wastewater through floating beds of water hyacinth. The reintroduction of native species is extremely difficult, and it is an endeavor that has had low rates of success. Efforts by the Fish and Wildlife Service to reintroduce the endangered whooping crane into native habitat in the southwestern United States were initially unsuccessful because of the fragility of the eggs, as well as the poor parenting skills of birds raised in captivity. The service then devised a strategy of allowing the more common sandhill crane to incubate the eggs of captive whooping cranes in wilderness nests, and the fledglings were then taught survival skills by their surrogate parents. Such projects, however, are extremely time and labor intensive; they are also costly and difficult to implement for large numbers of most species. Because of the difficulty and expense of protecting native species and eradicating introduced species once established, there are now many international laws and policies that seek to prevent these problems before they begin. Thus, customs agents at ports and airports routinely check luggage and cargo for live plant and animal materials to prevent the accidental or deliberate transport of non-native species. Quarantine policies are also designed to reduce the probability of spreading introduced species, particularly diseases, from one country to another. There are similar concerns about genetically engineered organisms, and many have argued that their creation and release could have the same devastating environmental consequences as some introduced species. For this reason, the use of bioengineered organisms is highly regulated; both the Food and Drug Administration and the Environmental Protection Agency (EPA) impose strict controls on the field testing of bioengineered products, as well as on their cultivation and use. Conservation policies for the protection of native species are now focused on habitats and ecosystems rather than single species. It is easier to prevent the encroachment of introduced species by protecting an entire ecosystem from disturbance, and this is increasingly well recognized both inside and outside the conservation community. See also Bioremediation; Endangered species; Fire ants; Gypsy moth; Rabbits in Australia; Wildlife management [Usha Vedagiri and Douglas Smith]
RESOURCES BOOKS Common Weeds of the United States. United States Department of Agriculture. New York: Dover Publications, 1971. Forman, R. T. T., ed. Pine Barrens: Ecology and Landscape. New York: Academic Press, 1979.
Inversion see Atmospheric inversion
Iodine 131 A radioactive isotope of the element iodine. During the 1950s and early 1960s, iodine-131 was considered a major health hazard to humans. Along with cesium-137 and strontium-90, it was one of the three most abundant isotopes found in the fallout from the atmospheric testing of nuclear weapons. These three isotopes settled to the earth’s surface and were ingested by cows, ultimately affecting humans by way of dairy products. In the human body, iodine-131, like all forms of that element, tends to concentrate in the thyroid, where it may cause cancer and other health disorders. The Chernobyl nuclear reactor explosion is known to have released large quantities of iodine-131 into the atmosphere. See also Radioactivity
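A short calculation shows why the iodine-131 hazard fades within months while cesium-137 and strontium-90 persist; the decay law below is standard physics rather than material from this entry, and the half-lives cited are well-established values:

\[ \frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}} \]

With a half-life of about 8 days for iodine-131, ten half-lives (roughly 80 days) leave \((1/2)^{10} \approx 0.001\), about 0.1% of the original activity. Cesium-137 and strontium-90, with half-lives near 30 years, remain in the environment for decades.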
Ion Forms of ordinary chemical elements that have gained or lost electrons from their orbit around the atomic nucleus and, thus, have become electrically charged. Positive ions (those that have lost electrons) are called cations because when charged electrodes are placed in a solution containing ions the positive ions migrate to the cathode (negative electrode). Negative ions (those that have gained extra electrons) are called anions because they migrate toward the anode (positive electrode). Environmentally important cations include the hydrogen ion (H+) and dissolved metals. Important anions include the hydroxyl ion (OH-) as well as many of the dissolved ions of nonmetallic elements. See also Ion exchange; Ionizing radiation
Ion exchange The process of replacing one ion that is attached to a charged surface with another. A very important type of ion exchange is the exchange of cations bound to soil particles. Soil clay minerals and organic matter both have negative surface charges that bind cations. In a fertile soil the predominant exchangeable cations are Ca2+, Mg2+ and K+. In acid soils Al3+ and H+ are also important exchangeable ions. When materials containing cations are added to soil, cations leaching through the soil are retarded by cation exchange.
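The capacity of a soil for this process is quantified as its cation exchange capacity (CEC), expressed in centimoles of positive charge per kilogram of soil. As a rough worked example (the CEC value is a representative figure chosen for illustration, not one taken from this entry), a fertile soil with a CEC of 20 cmol(+)/kg holds 0.20 mol of exchangeable positive charge per kilogram. If Ca2+, which carries two charges per ion, occupied every exchange site:

\[ \frac{0.20\ \text{mol}(+)}{2} = 0.10\ \text{mol Ca}^{2+}, \qquad 0.10\ \text{mol} \times 40\ \text{g/mol} \approx 4\ \text{g} \]

or about 4 g of exchangeable calcium per kilogram of soil.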
Ionizing radiation High-energy radiation with penetrating ability, such as x rays and gamma rays, which induces ionization in living material. Molecules are bound together with covalent bonds, and generally an even number of electrons binds the atoms together. However, high-energy penetrating radiation can
fragment molecules resulting in atoms with unpaired electrons known as “free radicals.” The ionized “free radicals” are exceptionally reactive, and their interaction with the macromolecules (DNA, RNA, and proteins) of living cells can, with high dosage, lead to cell death. Cell damage (or death) is a function of penetration ability, the kind of cell exposed, the length of exposure, and the total dose of ionizing radiation. Cells that are mitotically active and have a high oxygen content are most vulnerable to ionizing radiation. See also Radiation exposure; Radiation sickness; Radioactivity
Iron minerals The oxides and hydroxides of ferric iron (Fe(III)) are very important minerals in many soils, and are important suspended solids in some freshwater systems. Important oxides and hydroxides of iron include goethite, hematite, lepidocrocite, and ferrihydrite. These minerals tend to be very finely divided and can be found in the clay-sized fraction of soils, and like other clay-sized minerals, are important adsorbers of ions. At high pH they adsorb hydroxide (OH-) ions, creating negatively charged surfaces that contribute to cation exchange capacity. At low pH they adsorb hydrogen (H+) ions, creating anion exchange surfaces. In the pH range between 8 and 9 the surfaces have little or no charge. Iron hydroxide and oxide surfaces strongly adsorb some environmentally important anions, such as phosphate, arsenate, and selenite, and cations such as copper, lead, manganese, and chromium. These ions are not exchangeable, and in environments where iron oxides and hydroxides are abundant, surface adsorption can control the mobility of these strongly adsorbed ions. The hydroxides and oxides of iron are found in the greatest abundance in older, highly weathered landscapes. These minerals are very insoluble, and during soil weathering they form from the iron that is released from the structure of the soil-forming minerals. Thus, iron oxide and hydroxide minerals tend to be most abundant in old landscapes that have not been affected by glaciation, and in landscapes where the rainfall is high and the rate of soil mineral weathering is high. These minerals give the characteristic red (hematite or ferrihydrite) or yellow-brown (goethite) colors to soils that are common in the tropics and subtropics. See also Arsenic; Erosion; Ion exchange; Phosphorus; Soil profile; Soil texture
Irradiation of food see Food irradiation
Irrigation Irrigation is the method of supplying water to land to support plant growth. This technology has had a powerful role in the history of civilization. In arid regions sunshine is plentiful and soil is usually fertile, so irrigation supplies the critical missing factor for plant growth. Yields have been high, but not without costs. Historic problems include salinization and waterlogging; contemporary difficulties include immense costs, the spread of water-borne diseases, and degraded aquatic environments. One geographer described California’s Sierra Nevada as the “mother nurse of the San Joaquin Valley.” Its heavy winter snowpack provides abundant and extended runoff for the rich valley soils below. Numerous irrigation districts, formed to build diversion and storage dams, supply water through gravity-fed canals. The snowmelt is low in dissolved salts, so salinization problems are minimal. Wealth from the lush fruit orchards has enriched the state. By contrast, the Colorado River, like the Nile, flows mainly through arid lands. Deeply incised in places, the river is also limited for irrigation by the high salt content of desert tributaries. Still, demand for water exceeds supply. Water crossing the border into Mexico is so saline that the federal government has built a desalinization plant at Yuma, Arizona. Colorado River water is vital to the Imperial Valley, which specializes in winter produce grown in rich delta soils. To reduce salinization problems, one-fifth of the water used must be drained off into the growing Salton Sea. Salinization and waterlogging have long plagued the Tigris, Euphrates, and Indus River flood plains. Once-fertile areas of Iraq and Pakistan are covered with salt crystals. Half of the irrigated land in the western states is threatened by salt buildup. Some of the worst problems are degraded aquatic environments. The Aswan High Dam in Egypt has greatly amplified surface evaporation, reduced the nutrients reaching the land and the delta fisheries, and contributed to the spread of schistosomiasis via water snails in irrigation ditches. Diversion of the rivers that feed the Aral Sea for cotton irrigation has severely lowered its water level and threatens this water body with ecological disaster. Spray irrigation in the High Plains is lowering the Ogallala Aquifer’s water table, raising pumping costs. Kesterson Marsh in the San Joaquin Valley has become a hazard to wildlife because of selenium poisoning from irrigation drainage. The federal Bureau of Reclamation has invested huge sums in dams and reservoirs in western states. Some question the wisdom of such investments, given the past century of farm surpluses, and argue that water users are not paying the true cost.
A farm irrigation system. (U.S. Geological Survey. Reproduced by permission.)
Irrigation still offers great potential, but only if used with wisdom and understanding. New technologies may yet contribute to the world’s ever-increasing need for food. See also Climate; Commercial fishing; Reclamation [Nathan H. Meleen]
RESOURCES BOOKS Huffman, R. E. Irrigation Development and Public Water Policy. New York: Ronald Press, 1953. Powell, J. W. “The Reclamation Idea.” In American Environmentalism: Readings in Conservation History. 3rd ed., edited by R. F. Nash. New York: McGraw-Hill, 1990. Wittfogel, K. A. “The Hydraulic Civilizations.” In Man’s Role in Changing the Face of the Earth, edited by W. L. Thomas Jr. Chicago: University of Chicago Press, 1956. Zimmerman, J. D. Irrigation. New York: Wiley, 1966.
OTHER U.S. Department of Agriculture. Water: 1955 Yearbook of Agriculture. Washington, DC: U.S. Government Printing Office, 1955.
Island biogeography Island biogeography is the study of past and present animal and plant distribution patterns on islands and the processes
that created those distribution patterns. Historically, island biogeographers mainly studied geographic islands—continental islands close to shore in shallow water and oceanic islands of the deep sea. In the last several decades, however, the study and principles of island biogeography have been extended to ecological islands such as forest and prairie fragments isolated by human development. Biogeographic “islands” may also include ecosystems isolated on mountaintops and landlocked bodies of water such as Lake Malawi in the African Rift Valley. Geographic islands, however, remain the main laboratories for developing and testing the theories and methods of island biogeography. Equilibrium theory Until the 1960s, biogeographers thought of islands as living museums—relict (persistent remnant of an otherwise extinct species of plant or animal) scraps of mainland ecosystems in which little changed—or closed systems mainly driven by evolution. That view began to change radically in 1967 when Robert H. MacArthur and Edward O. Wilson published The Theory of Island Biogeography. In their book, MacArthur and Wilson detail the equilibrium theory of island biogeography—a theory that became the new paradigm of the field. The authors proposed that island ecosystems exist in dynamic equilibrium, with a steady turnover of species. Larger islands—as well as islands closest to a source of immigrants—accommodate the most species in the equilibrium condition, according to their theory. MacArthur and Wilson also worked out mathematical models to demonstrate and predict how island area and isolation dictate the number of species that exist in equilibrium. Dispersion The driving force behind species distribution is dispersion—the means by which plants and animals actively leave or are passively transported from their source area. An island ecosystem can have more than one source of colonization, but nearer sources dominate. How readily plants or animals disperse is one of the main reasons equilibrium will vary from species to species. Birds and bats are obvious candidates for anemochory (dispersal by air), but some species normally not associated with flight are also thought to reach islands during storms or even normal wind currents. Orchids, for example, have hollow seeds that remain airborne for hundreds of kilometers. Some small spiders, along with insects like bark lice, aphids, and ants (collectively known as aerial plankton), often are among the first pioneers of newly formed islands. Whether by actively swimming or passively floating on logs or other debris, dispersal by sea is called thalassochory. Crocodiles have been found on Pacific islands 600 miles (950 km) from their source areas, but most amphibians, larger terrestrial reptiles, and, in particular, mammals, have difficulty crossing even narrow bodies of water. Thus, thalassochory is the medium of dispersal primarily for fish, plants, and insects. Only small vertebrates such as lizards and snakes are thought to arrive at islands by sea on a regular basis. Zoochory is transport either on or inside an animal. This method is primarily a means of plant dispersal, mostly by birds. Seeds ride along stuck to feathers, or survive passage through a bird’s digestive tract and are deposited in new territory. Anthropochory is dispersal by human beings. Although humans intentionally introduce domestic animals to islands, they also bring unintended invaders, such as rats. Getting to islands is just the first step, however. Plants and animals often arrive to find harsh and alien conditions. They may not find suitable habitats. Food chains they depend on might be missing. Even if they manage to gain a foothold, their limited numbers make them more susceptible to extinction. Chances of success are better for highly adaptable species and those that are widely distributed beyond the island. Wide distribution increases the likelihood that a species on the verge of extinction may be saved by the rescue effect, the replenishing of a declining population by another wave of immigration. Challenging established theories Many biogeographers point out that isolated ecosystems are more than just collections of species that can make it to islands and survive the conditions they encounter there. Several other contemporary theories of island biogeography build on MacArthur and Wilson’s theory; other theories contradict it. Equilibrium theory suggests that species turnover is constant and regular. Evidence collected so far indicates MacArthur and Wilson’s model works well in describing communities of rapid dispersers that have a regular turnover, such as insects, birds, and fish. However, this model may not apply to species that disperse more slowly. Proponents of historical legacy models argue that communities of larger animals and plants (forest trees, for example) take so long to colonize islands that changes in their populations probably reflect sudden climatic or geological upheaval rather than a steady turnover. Other theories suggest that equilibrium may not be dynamic, that there is little or no turnover. Through competition, established species keep out new colonists; the newcomers might occupy the same ecological niches as their predecessors. Established species may also evolve and adapt to close off those niches. Island resources and habitats may also be distinct enough to limit immigration to only a few well-adapted species. Thus, in these latter models, dispersal and colonization are not nearly as random as in MacArthur and Wilson’s model. These less random, more deterministic theories of island ecosystems conform to specific assembly rules—a
complex list of factors accounting for the species present in the source areas, the niches available on islands, and competition between species. Some biogeographers suggest that every island—and perhaps every habitat on an island—may require its own unique model. Human disruption of island ecosystems further clouds the theoretical picture. Not only are habitats permanently altered or lost by human intrusion, but anthropochory also reduces an island’s isolation. Thus, finding relatively undisturbed islands to test different theories can be difficult. Since the time of naturalists Charles Darwin and his colleague, Alfred Wallace, islands have been ideal “natural laboratories” for studying evolution. Patterns of evolution stand out on islands for two reasons: island ecosystems tend to be simpler than other geographical regions, and they contain greater numbers of endemic species (plant and animal species occurring only in a particular location). Many island endemics are the result of adaptive radiation—the evolution of new species from a single lineage to fill unoccupied ecological niches. Many species from mainland source areas simply never make it to islands, so species that can immigrate find empty ecological niches where once they faced competition. For example, monitor lizards immigrating to several small islands in Indonesia found the niche for large predators empty. Monitors on these islands evolved into Komodo dragons, filling the niche. Conservation of biodiversity Theories of island biogeography also have potential applications in the field of conservation. Many conservationists argue that as human activity such as logging and ranching encroaches on wild lands, remaining parks and reserves begin to resemble small, isolated islands. According to equilibrium theory, as those patches of wild land grow smaller, they support fewer species of plants and animals. Some conservationists fear that plant and animal populations in those parks and reserves will sink below minimum viable population levels—the smallest number of individuals necessary to allow the species to continue reproducing. These conservationists suggest that one way to bolster populations is to set aside larger areas and to limit species isolation by connecting parks and preserves with wildlife corridors. Islands with the greatest variety of habitats support the most species; diverse habitats promote successful dispersal, survival, and reproduction. Thus, in attempting to preserve island biodiversity, conservationists focus on several factors: size (the larger the island, the more habitats it contains), climate, geology (soil that promotes or restricts habitats), and age of the island (sparse or rich habitats). All of these factors must be addressed to ensure island biodiversity. [Darrin Gunkel]
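A compact way to express the area effect at the heart of the equilibrium theory is the classical species-area relationship, on which MacArthur and Wilson built; the exponent value used below is a commonly cited typical figure, not one drawn from this article:

\[ S = cA^{z} \]

Here S is the number of species, A the island area, c a constant that depends on the taxon and region, and z an exponent often near 0.25 for true islands. With z = 0.25, a tenfold increase in area multiplies the expected species count by about \(10^{0.25} \approx 1.8\), while halving the area of an island or habitat fragment removes roughly 16% of its species, since \(0.5^{0.25} \approx 0.84\).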
RESOURCES BOOKS Harris, Larry D. The Fragmented Forest: Island Biogeography Theory and the Preservation of Biotic Diversity. Chicago: University of Chicago Press, 1984. MacArthur, Robert H., and Edward O. Wilson. The Theory of Island Biogeography. Princeton: Princeton University Press, 1967. Quammen, David. The Song of the Dodo: Island Biogeography in an Age of Extinction. New York: Scribner, 1996. Whittaker, Robert J. Island Biogeography: Ecology, Evolution and Conservation. London: Oxford University Press, 1999.
PERIODICALS Grant, P. R. “Competition Exposed by Knight?” Nature 396: 216–217.
OTHER “Island Biogeography.” University of Oxford School of Geography and the Environment. August 7, 2000 [cited June 26, 2002].
ORGANIZATIONS Environmental Protection Agency (EPA), 1200 Pennsylvania Avenue, NW, Washington, DC USA 20460 (202) 260-2090, Email:
[email protected],
ISO 14000: International Environmental Management Standards ISO 14000 refers to a series of environmental management standards that were adopted by the International Organization for Standardization (ISO) in 1996 and are beginning to be implemented by businesses across the world. Any person or organization interested in the environment and in the goal of improving the environmental performance of businesses should be interested in ISO 14000. Such interested parties include businesses themselves, their legal representatives, environmental organizations and their members, government officials, and others. What is the ISO and what are ISO standards? The International Organization for Standardization (ISO) is a private (nongovernmental) worldwide organization whose purpose is to promote uniform standards in international trade. Its members are elected representatives from national standards organizations in 111 countries. The ISO covers any field involving goods, services, or products in which a Member Body suggests that standardization is desirable, with the exception of electrical and electronic engineering, which are covered by a different organization called the International Electrotechnical Commission (IEC). However, the ISO and the IEC work closely together. Since the ISO began operations in 1947, its Central Secretariat has been located in Geneva, Switzerland. Between 1951 (when it published its first standard) and 1997, the ISO issued over 8,800 standards. Standards are documents containing technical specifications, rules, guidelines, and definitions to ensure equipment, products, and services perform as specified. Among the best known standards published by the ISO are those that comprise the ISO 9000 series, which was developed between 1979 and 1986 and published in 1987. Because ISO 9000 is a forerunner to ISO 14000, it is important to understand the basic structure and function of ISO 9000. The ISO 9000 series is a set of standards for quality management and quality assurance. The standards apply to processes and systems that produce products; they do not apply to the products themselves. Further, the standards provide a general framework for any industry; they are not industry-specific. A company that has become registered under ISO 9000 has demonstrated that it has a documented system for quality that is in place and consistently applied. ISO 9000 standards apply to all kinds of companies, whether large or small, in services or manufacturing. The latest major set of standards published by the ISO is the ISO 14000 series. The impetus for that series came from the United Nations Conference on Environment and Development (UNCED), which was held in Rio de Janeiro in 1992 and attended by representatives of over one hundred nations. One of the documents resulting from that conference was the Global Environmental Initiative, which prompted the ISO to develop its ISO 14000 series of international environmental standards. The ISO’s goal is to ensure that businesses adopt common internal procedures for environmental controls including, but not limited to, audits. It is important to note that the standards are process standards, not performance standards. The goal is to ensure that businesses are in compliance with their own national and local applicable environmental laws and regulations. The initial standards in the series include numbers 14001, 14004, and 14010–14012, all of them adopted by the ISO in 1996. Provisions of ISO 14000 Series standards ISO 14000 sets up criteria pursuant to which a company may become registered or certified as to its environmental management practices. Central to the process of registration pursuant to ISO 14000 is a company’s Environmental Management System (EMS). The EMS is a set of procedures for assessing compliance with environmental laws and company procedures for environmental protection, identifying and resolving problems, and engaging the company’s workforce in a commitment to improved environmental performance by the company. The ISO 14000 series can be divided into two groups: guidance documents and specification documents. The series sets out standards against which a company’s EMS will be evaluated. For example, it must include an accurate summary of the legal standards with which the company must comply,
such as permit stipulations, and relevant provisions of statutes and regulations, and even provisions of administrative or court-certified consent judgments. To become certified, the EMS must: (1) include an environmental policy; (2) establish plans to meet environmental goals and comply with legal requirements; (3) provide for implementation of the policy and operation under it, including training for personnel, communication, and document control; (4) set up monitoring and measurement devices and an audit procedure to ensure continuing improvement; and (5) provide for management review. The EMS must be certified by a registrar who has been qualified under ISO 13012, a standard that predates the ISO 14000 series. ISO 14004 is a guidance document that gives advice that may be followed but is not required. It includes five principles, each of which corresponds to one of the five areas of ISO 14001 listed above. ISO 14010, 14011, and 14012 are auditing standards. For example, 14010 covers general principles of environmental auditing, and 14011 provides guidelines for auditing of an Environmental Management System (EMS). ISO 14012 provides guidelines for establishing qualifications for environmental auditors, whether those auditors are internal or external. Plans for additional standards within the ISO 14000 Series standards The ISO is considering proposals for standards on training and certifying independent auditors (called registrars) who will certify that ISO 14000-certified businesses have established and adhere to stringent internal systems to monitor and improve their own environmental protection actions. Later the ISO may also establish standards for assessing a company’s environmental performance. Standards may also be adopted for eco-labeling and life cycle assessment of goods involved in international trade. Benefits and consequences of ISO 14000 Series standards A company contemplating ISO 14000 registration must evaluate its potential advantages as well as its costs to the company. ISO 14000 certification may bring various rewards to companies. For example, many firms are hoping that, in return for obtaining ISO 14000 certification (and the actions required to do so), regulatory agencies such as the U.S. Environmental Protection Agency (EPA) will give them more favorable treatment. For example, leniency might be shown in less stringent filing or monitoring requirements or even less severe sanctions for past or present violations of environmental statutes and regulations. Further, compliance may simply be good public relations, leading consumers to view the certified company as a good corporate citizen that works to protect the environment. There is public pressure on companies to demonstrate their environmental stewardship and accountability; obtaining ISO 14000 certification is one way to do so. The costs to the company will depend on the scope of the Environmental Management System (EMS). For example, the EMS might be international, national, or limited to individual plants operated by the company. That decision will affect the costs of the environmental audit considerably. National and international systems may prove to be costly. On the other hand, a company may realize cost savings. For example, an insurance company may give reduced rates on insurance to cover accidental pollution releases to a company that has a proven environmental management system in place. Internally, by implementing an EMS, a company may realize cost savings as a result of waste reduction, use of fewer toxic chemicals, less energy use, and recycling. A major legal concern raised by lawyers studying the ISO 14000 standards relates to the question of confidentiality. There are serious questions as to whether a governmental regulatory agency can require disclosure of information discovered during a self-audit by a company. The use of a third-party auditor during preparation of the EMS process may weaken a company’s argument that information discovered is privileged. ISO 14000 has potential consequences with respect to international law as well as international trade. ISO 14000 is intended to promote a series of universally accepted EMS practices and lead to consistency in environmental standards between and among trading partners. Some developing countries such as Mexico are reviewing ISO 14000 standards and considering incorporating their provisions within their own environmental laws and regulations. On the other hand, some developing countries have suggested that environmental standards created by ISO 14000 may constitute nontariff barriers to trade in that the costs of ISO 14000 registration may be prohibitively high for small- to medium-size companies. Companies that have implemented ISO 9000 have learned to view their operations through a “quality of management” lens, and implementation of ISO 14000 may lead them to adopt an “environmental quality” lens. ISO 14000 has the potential to lead to two kinds of cultural changes. First, within the corporation, it has the potential to lead to consideration of environmental issues throughout the company and its business decisions, ranging from hiring of employees to marketing. Second, ISO 14000 has the potential to become part of a global culture as the public comes to view ISO 14000 certification as a benchmark connoting good environmental stewardship by a company. [Paulette L. Stenzel]
RESOURCES BOOKS Tibor, T., and I. Feldman. ISO 14000: A Guide to the New Environmental Management Standards. Irwin Publishing Company, 1996. von Zharen, W. M. ISO 14000: Understanding the Environmental Standards. Government Institutes, 1996.
PERIODICALS Kass, S. L. “The Lawyer’s Role in Implementing ISO 14000.” Natural Resources & Environment 3, no. 5 (Spring 1997).
Isotope Different forms of atoms of the same element. Atoms consist of a nucleus, containing positively-charged particles (protons) and neutral particles (neutrons), surrounded by negatively-charged particles (electrons). Isotopes of an element differ only in the number of neutrons in the nucleus and hence in atomic weight. The nuclei of some isotopes are unstable and undergo radioactive decay. An element can have several stable and radioactive isotopes, but most elements have only two or three isotopes that are of any importance. Also, for most elements the radioactive isotopes are only of concern in material exposed to certain types of radiation sources. Carbon has three important isotopes with atomic weights of 12, 13, and 14. C-12 is stable and represents 98.9% of natural carbon. C-13 is also stable and represents 1.1% of natural carbon. C-14 represents an insignificant fraction of naturally-occurring carbon, but it is radioactive and important because its radioactive decay is valuable in the dating of fossils and ancient artifacts. It is also useful in tracing the reactions of carbon compounds in research. See also Nuclear fission; Nuclear power; Radioactivity; Radiocarbon dating
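The arithmetic behind radiocarbon dating follows directly from the half-life law; the 5,730-year half-life of C-14 is a well-established value, while the sample fraction below is hypothetical:

\[ t = t_{1/2}\,\frac{\ln(N_0/N)}{\ln 2} \]

where N_0 is the original amount of C-14 and N the amount remaining. A sample retaining one quarter of its original C-14 has passed through two half-lives, giving an age of about 2 × 5,730 ≈ 11,460 years.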
Itai-itai disease The symptoms of Itai-Itai disease were first observed in 1913 and characterized between 1947 and 1955; it was 1968, however, before the Japanese Ministry of Health and Welfare officially declared that the disease was caused by chronic cadmium poisoning in conjunction with other factors such as the stresses of pregnancy and lactation, aging, and dietary deficiencies of vitamin D and calcium. The name arose from the cries of pain, “itai-itai” (ouch-ouch) by the most seriously stricken victims, older Japanese farm women. Although men, young women, and children were also exposed, 95% of the victims were post-menopausal women over 50 years of age. They usually had given birth to several children and had lived more than 30 years within 2 mi (3 km) of the lower stream of the Jinzu River near Toyama.
The disease started with symptoms similar to rheumatism, neuralgia, or neuritis. Then came bone lesions, osteomalacia, and osteoporosis, along with renal dysfunction and proteinuria. As it escalated, pain in the pelvic region caused the victims to walk with a duck-like gait. Next, they were incapable of rising from their beds because even a slight strain caused bone fractures. The suffering could last many years before it finally ended with death. Overall, an estimated 199 victims have been identified, of which 162 had died by December 1992. The number of victims increased during and after World War II as production expanded at the Kamioka Mine owned by the Mitsui Mining and Smelting Company. As 3,000 tons of zinc-lead ore per day were mined and smelted, cadmium was discharged in the wastewater. Downstream, farmers drawing water from the Jinzu River for drinking and crop irrigation also withdrew the fine particles of flotation tailings suspended in it. As rice plants were damaged near the irrigation inlets, farmers dug small sedimentation pools that were ineffective against the nearly invisible poison. Both the numbers of Itai-Itai disease patients and the damage to the rice crops rapidly decreased after the mining company built a large settling basin to purify the wastewater in 1955. However, even after the discharge into the Jinzu River was halted, the cadmium already in the rice paddy soils was augmented by airborne exhausts. Mining operations in several other Japanese prefectures also produced cadmium-contaminated rice, but afflicted individuals were not certified as Itai-Itai patients. That designation was applied only to those who lived in the Jinzu River area. In 1972 the survivors and their families became the first pollution victims in Japan to win a lawsuit against a major company. They won because in 1939 Article 109 of the Mining Act had imposed strict liability upon mining facilities for damages caused by their activities. The plaintiffs had only to prove that cadmium discharged from the mine caused their disease, not that the company was negligent. As epidemiological proof of causation sufficed as legal proof in this case, it set a precedent for other pollution litigation as well. Despite legal success and compensation, the problem of contaminated rice continues. In 1969 the government initially set a maximum allowable standard of 0.4 parts per million (ppm) cadmium in unpolished rice. However, because much of the contaminated farmland produced grain in excess of that level, in 1970 under the Foodstuffs Hygiene Law this was raised to 1 ppm cadmium for unpolished rice and 0.9 ppm cadmium for polished rice. To restore contaminated farmland, Japanese authorities instituted a program in which, each year, the most highly contaminated soils in a small area are exchanged for uncontaminated soils. Less contaminated soils are rehabilitated through the addition of lime, phosphate, and a cadmium-sequestering agent, EDTA. By 1990 about 10,720 acres (4,340 ha), or 66.7% of the approximately 16,080 acres (6,510 ha) of the most highly cadmium-contaminated farmland, had been restored. In the remaining contaminated areas where farm families continue to eat homegrown rice, the symptoms are alleviated by treatment with massive doses of vitamins B1, B12, and D, along with calcium and various hormones. New methods have also been devised to cause the cadmium to be excreted more rapidly. In addition, the high costs of compensation and restoration are leading to the conclusion that prevention is not only better but cheaper. This is perhaps the most encouraging factor of all. See also Bioaccumulation; Environmental law; Heavy metals and heavy metal poisoning; Mine spoil waste; Smelter; Water pollution [Frank M. D’Itri]
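To put the rice standards in perspective, a ppm concentration converts directly to a dose; the daily consumption figure below is an assumed round number for illustration, not a value from this entry:

\[ 1\ \text{ppm} = 1\ \text{mg Cd per kg rice}; \qquad 0.3\ \text{kg rice/day} \times 1\ \text{mg/kg} = 0.3\ \text{mg Cd per day} \]

Thus a diet of 300 g per day of rice at the 1 ppm limit would deliver about 0.3 mg of cadmium daily.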
RESOURCES BOOKS Kobayashi, J. “Pollution by Cadmium and the Itai-Itai Disease in Japan.” In Toxicity of Heavy Metals in the Environment, Part 1, edited by F. W. Oehme. New York: Marcel Dekker, 1978. Kogawa, K. “Itai-Itai Disease and Follow-Up Studies.” In Cadmium in the Environment, Part II, edited by J. O. Nriagu. New York: Wiley, 1981. Tsuchiya, K., ed. Cadmium Studies in Japan: A Review. Tokyo, Japan, and Amsterdam, Netherlands: Kodansha and Elsevier/North-Holland Biomedical Press, 1978.
IUCN—The World Conservation Union Founded in 1948 as the International Union for the Conservation of Nature and Natural Resources (IUCN), IUCN works with governments, conservation organizations, and industry groups to conserve wildlife and approach the world’s environmental problems using “sound scientific insight and the best available information.” Its membership, currently over 980, comes from 140 countries and includes 56 sovereign states, as well as government agencies and nongovernmental organizations. IUCN exists to serve its members, representing their views and providing them with the support necessary to achieve their goals. Above all, IUCN works with its members “to achieve development that is sustainable and that provides a lasting improvement in the quality of life for people all over the world.” IUCN’s three basic conservation objectives are: (1) to secure the conservation of nature, and especially of biological diversity, as an essential foundation for the future; (2) to ensure that where the earth’s natural resources are used this is done in a wise, equitable, and sustainable way; (3) to guide the development of human communities toward ways of life that are both of good quality and in enduring harmony with other components of the biosphere. IUCN is one of the few organizations to include both governmental agencies and nongovernmental organizations. It is in a unique position to provide a neutral forum where these organizations can meet, exchange ideas, and build partnerships to carry out conservation projects. IUCN is also unusual in that it both develops environmental policies and then implements them through the projects it sponsors. Because the IUCN works closely with, and its membership includes, many government scientists and officials, the organization often takes a conservative, pro-management, as opposed to a “preservationist,” approach to wildlife issues. It may encourage or endorse limited hunting and commercial exploitation of wildlife if it believes this can be carried out on a sustainable basis. IUCN maintains a global network of over 5,000 scientists and wildlife professionals who are organized into six standing commissions that deal with various aspects of the union’s work. There are commissions on Ecology, Education, Environmental Planning, Environmental Law, National Parks and Protected Areas, and Species Survival. These commissions create action plans, develop policies, advise on projects and programs, and contribute to IUCN publications, all on an unpaid, voluntary basis. IUCN publishes an authoritative series of “Red Data Books,” describing the status of rare and endangered wildlife. Each volume provides information on the population, distribution, habitat and ecology, threats, and protective measures in effect for listed species. The “Red Data Books” concept was originated in the mid-1960s by the famous British conservationist Sir Peter Scott, and the series now includes a variety of publications on regions and species. Other titles in the series of “Red Data Books” include Dolphins, Porpoises, and Whales of the World; Lemurs of Madagascar and the Comoros; Threatened Primates of Africa; Threatened Swallowtail Butterflies of the World; Threatened Birds of the Americas; and books on plants and other species of wildlife, including a series of conservation action plans for threatened species. Other notable IUCN works include World Conservation Strategy: Living Resources Conservation for Sustainable Development and its successor document Caring for the Earth—A Strategy for Sustainable Living; and the United Nations List of Parks and Protected Areas. IUCN also publishes books and papers on regional conservation, habitat preservation, environmental law and policy, ocean ecology and management, and conservation and development strategies. [Lewis G. Regenstein]
RESOURCES ORGANIZATIONS IUCN—The World Conservation Union Headquarters, Rue Mauverney 28, Gland, Switzerland CH-1196 +41 (22) 999-0000, Fax: +41 (22) 999-0002, Email:
[email protected],
Ivory-billed woodpecker The ivory-billed woodpecker (Campephilus principalis) is one of the rarest birds in the world and is considered by most authorities to be extinct in the United States. The last confirmed sighting of ivory-bills was in Cuba in 1987 or 1988. Though never common, the ivory-billed woodpecker was rarely seen in the United States after the first years of the twentieth century. Some were seen in Louisiana in 1942, and since then, occasional sightings have been unverified. Interest in the bird rekindled in 1999, when a student at Louisiana State University claimed to have seen a pair of ivory-billed woodpeckers in a wilderness preserve. Teams of scientists searched the area for two years. No ivory-billed woodpecker was sighted, though some evidence made it plausible the bird was in the vicinity. By mid-2002, the ivory-billed woodpecker’s return from the brink of extinction remained a tantalizing possibility, but not an established fact. The ivory-billed woodpecker was a huge bird, averaging 19–20 in (48–50 cm) long, with a wingspan of over 30 in (76 cm). The ivory-colored bills of these birds were prized as decorations by Native Americans. The naturalist John James Audubon found ivory-billed woodpeckers in swampy forest edges in Texas in the 1830s. But by the end of the nineteenth century, the majority of the bird’s prime habitat had been destroyed by logging. Ivory-billed woodpeckers required large tracts of land in the bottomland cypress, oak, and black gum forests of the Southeast, where they fed off insect larvae in mature trees. This species was the largest woodpecker in North America, and it preferred the largest of these trees, the same ones targeted by timber companies as the most profitable to harvest. The territory for a breeding pair of ivory-billed woodpeckers consisted of about three square miles of undisturbed, swampy forest, and there was little prime habitat left for them after 1900, for most of these areas had been heavily logged. By the 1930s, one of the only virgin cypress swamps left was the Singer Tract in Louisiana, an 80,000-acre (32,375-ha) swathe of land owned by the Singer Sewing Machine Company. In 1935 a team of ornithologists descended on it to locate, study, and record some of the last ivory-billed woodpeckers in existence. They found the birds and were able to film and photograph them, as well as make the only sound recordings of them in existence. The Audubon Society, the state of Louisiana, and the U.S. Fish and Wildlife Service tried to buy the land from Singer to make it a refuge for the rare birds. But Singer
had already sold timber rights to the land. During World War II, when demand for lumber was particularly high, the Singer Tract was leveled. One of the giant cypress trees that was felled contained the nest and eggs of an ivory-billed woodpecker. Land that had been virgin forest then became soybean fields. Few sightings of these woodpeckers were made in the 1940s, and none exist for the 1950s. But in the early 1960s ivory-billed woodpeckers were reported seen in South Carolina, Texas, and Louisiana. Intense searches, however, left scientists with little hope by the end of that decade, as only six birds were reported to exist. Subsequent decades yielded a few individual sightings in the United States, but none were confirmed. In 1985 and 1986, there was a search for the Cuban subspecies of the ivory-billed woodpecker. The first expedition yielded no birds, but trees were found that had apparently been worked by the birds. The second expedition found at least one pair of ivory-billed woodpeckers. Most of the land formerly occupied by the Cuban subspecies was cut over for sugar cane plantations by the 1920s, and surveys in 1956 indicated that this population had declined to about a dozen birds. The last reported sightings of the species occurred in the Sierra de Moa area of Cuba. They are still considered to exist there, but the health of any remaining individuals must be in question, given the inbreeding that must occur with such a low population level and the fact that so little suitable habitat remains. In 1999, a student at Louisiana State University (LSU) claimed to have seen a pair of ivory-billed woodpeckers while he was hunting for turkey in the Pearl River Wildlife Management Area near the Louisiana-Mississippi border. The student, David Kulivan, was a credible witness, and he soon convinced ornithologists at LSU to search for the birds. News of the sighting attracted thousands of amateur and professional birders over the next two years. Scientists from LSU, Cornell University, and the Louisiana Department of Wildlife and Fisheries organized an expedition that included posting of high-tech listening devices. Over more than two years, no one else saw the birds, though scientists found trunks stripped of bark, characteristic of the way the ivory-billed woodpecker feeds, and two groups heard the distinct double rapping sound the ivory-billed woodpecker makes when it knocks on a trunk. No one heard the call of the ivory-billed woodpecker, though this sound would have been considered definitive evidence of the bird’s existence. Hope for confirmation of Kulivan’s sighting rested on deciphering the tapes made by a dozen recording devices. This was being done at Cornell University, and was expected to take years.
Ivory-billed woodpecker (Campephilus). (Photograph by James Tanner. Photo Researchers Inc. Reproduced by permission.)
By mid-2002, the search for the ivory-billed woodpecker in Louisiana had wound down, disappointingly inconclusive. While some scientists remained skeptical about the sighting, others believed that the forest in the area may have regrown enough to support an ivory-billed woodpecker population. See also Deforestation; Endangered species; Extinction; International Council for Bird Preservation; Wildlife management [Eugene C. Beckham]
RESOURCES BOOKS Collar, N. J., et al. Threatened Birds of the Americas: The ICBP/IUCN Red Data Book. Washington, DC: Smithsonian Institution Press, 1992. Ehrlich, P. R., D. S. Dobkin, and D. Wheye. The Birder’s Handbook. New York: Simon & Schuster, 1988. Ehrlich, P. R., D. S. Dobkin, and D. Wheye. Birds in Jeopardy: The Imperiled and Extinct Birds of the United States and Canada, Including Hawaii and Puerto Rico. Stanford: Stanford University Press, 1992.
PERIODICALS Gorman, James. “Listening for the Call of a Vanished Bird” New York Times, March 5, 2002, F1. Graham, Frank Jr. “Is the Ivorybill Back?” Audubon (May/June 2000): 14.
Pianin, Eric. “Scientists Give Up Search for Woodpecker; Some Signs Noted of Ivory-Billed Bird Not Seen Since ’40s” Washington Post, February 21, 2002, A2. Tomkins, Shannon. “Dead or Alive?” Houston Chronicle, April 14, 2002, 8.
Izaak Walton League In 1922, 54 sportsmen and sportswomen—all concerned with the apparent destruction of American fishing waterways—established the Izaak Walton League of America (IWLA). They looked upon Izaak Walton, a seventeenth-century English fisherman and author of The Compleat Angler, as inspiration in protecting the waters of America. The Izaak Walton League has since widened its focus: as a major force in the American conservation movement, IWLA now pledges in its slogan “to defend the nation’s soil, air, woods, water, and wildlife.” When sportsmen and sportswomen formed IWLA approximately 70 years ago, they worried that American industry would ruin fishing streams. Raw sewage, soil erosion, and rampant pollution threatened water and wildlife.
Initially the League concentrated on preserving lakes, streams, and rivers. In 1927, at the request of President Calvin Coolidge, IWLA organized the first national water pollution inventory. Izaak Walton League members (called “Ikes”) subsequently helped pass the first national water pollution control act in the 1940s. In 1969 IWLA instituted the Save Our Streams program, and this group mobilized forces to pass the groundbreaking Clean Water Act of 1972. The League did not concentrate only on the preservation of American waters, however. From its 1926 campaign to protect the black bass, to the purchase of a helicopter in 1987 to help game law officers protect waterfowl from poachers in the Gulf of Mexico, IWLA has also been instrumental in the preservation of wildlife. In addition, the League has fought to protect public lands such as the National Elk Refuge in Wyoming, the Everglades National Park, and the Isle Royale National Park. IWLA currently sponsors several environmental programs designed to conserve natural resources and educate the public. The aforementioned Save Our Streams (SOS) program is a grassroots organization designed to monitor water quality in streams and rivers. Through 200 chapters nationwide, SOS promotes “stream rehabilitation” through stream adoption kits and water pollution law training. Another program, Wetlands Watch, allows local groups to purchase, adopt, and protect nearby wetlands. Similarly, the Izaak Walton League Endowment buys land to save it from unwanted development. IWLA’s Uncle Ike Youth Education program aims to educate children and convince them of the necessity of preserving the environment. A last major program from the League is its internationally acclaimed Outdoor Ethics program. Outdoor Ethics works to stop poaching and other illegal and unsportsmanlike outdoor activities by educating hunters, anglers, and others. The League also sponsors and operates regional conservation efforts. Its Midwest Office, based in Minnesota,
concentrates on preservation of the Upper Mississippi River region. The Chesapeake Bay Program is a major regional focus. Almost 25% of the “Ikes” live in the region of this estuary, and public education, awards, and local conservation projects help protect Chesapeake Bay. In addition, the Soil Conservation Program focuses on combating soil erosion and groundwater pollution, and the Public Lands Restoration Task Force works out of its headquarters in Portland, Oregon, to strike a balance between preserving western forests and the demand for their natural resources. IWLA makes its causes known through a variety of publications. Splash, a product of SOS, informs the public about how to protect America’s streams. Outdoor Ethics, a newsletter from the program of the same name, educates recreationists in responsible practices of hunting, boating, and other outdoor activities. The League also publishes a membership magazine, Outdoor America, and the League Leader, a vehicle of information for IWLA’s 2,000 chapter and division officers. IWLA has also produced the longest-running weekly environmental program on television. Entitled Make Peace with Nature, the program has aired on PBS for almost 20 years and presents stories of environmental interest. Having expanded its scope from water to the general environment, IWLA has become a vital force in the national conservation movement. Through its many and varied programs, the League continues to promote constructive and active involvement in environmental problems. [Andrea Gacki]
RESOURCES ORGANIZATIONS Izaak Walton League of America, 707 Conservation Lane, Gaithersburg, MD USA 20878 (301) 548-0150, Fax: (301) 548-0146, Toll Free: (800) IKE-LINE, Email:
[email protected],
J
Wes Jackson (1936 – ) American environmentalist and writer Wes Jackson is a plant geneticist, writer, and co-founder, with his wife Dana Jackson, of the Land Institute in Salina, Kansas. He is one of the leading critics of conventional agricultural practices, which in his view are depleting topsoil, reducing genetic diversity, and destroying small family farms and rural communities. Jackson is also critical of the culture that provides the pretext and the context within which this destruction occurs and is justified as “necessary,” “efficient,” and “economical.” He contrasts a culture or mind-set that emphasizes humanity’s mastery or dominion over nature with an alternative vision that takes “nature as the measure” of human activity. The former viewpoint can produce temporary triumphs but not long-lasting or sustainable livelihood; only the latter holds out the hope that humans can live with nature, on nature’s terms. Jackson was born in 1936 in Topeka, Kansas, the son of a farmer. Young and restless, Jackson held various jobs—welder, farm hand, ranch hand, teacher—before devoting his time to the study of agricultural practices in the United States and abroad. He attended Kansas Wesleyan University, the University of Kansas, and North Carolina State University, where he earned his doctorate in plant genetics in 1967. According to Jackson, agriculture as we know it is unnatural, artificial, and, by geological time-scales, of relatively recent origin. It requires plowing, which leads to loss of topsoil, which in turn reduces and finally destroys fertility. Large-scale “industrial” agriculture also requires large investments, complex and expensive machinery, fertilizers, pesticides, and herbicides, and leads to a loss of genetic diversity, to soil erosion and compaction, and other negative consequences. It is also predicated on the planting and harvesting of annual crops—corn, wheat, soybeans—that leave fields uncovered for long periods and thus leave precious topsoil unprotected and vulnerable to erosion by wind and water. For every bushel of corn harvested, a bushel of topsoil is lost. Jackson estimates that America has lost between one-
third and one-half of its topsoil since the arrival of the first European settlers. At the Land Institute, Jackson and his associates are attempting to re-think and revise agricultural practices so as to “make nature the measure” and enable farmers to “meet the expectations of the land,” rather than the other way around. In particular, they are returning to, and attempting to learn from, the native prairie plants and the ecosystems that sustain them. They are also exploring the feasibility of alternative farming methods that might minimize or even eliminate entirely the planting and harvesting of annual crops, favoring instead the use of perennials that protect and bind topsoil. Jackson’s emphasis is not exclusively scientific or technical. Like his long-time friend Wendell Berry, Jackson emphasizes the culture in agriculture. Why humans grow food is not at all mysterious or problematic: we must eat in order to live. But how we choose to plant, grow, harvest, distribute, and consume food is clearly a cultural and moral matter having to do with our attitudes and beliefs. Our contemporary consumer culture is out of kilter, Jackson contends, in various ways. For one, the economic emphasis on minimizing costs and maximizing yields ignores longer-term environmental costs that come with the depletion of topsoil, the diminution of genetic diversity, and the depopulation of rural communities. For another, most Americans have lost (and many have never had) a sense of connectedness with the land and the natural environment; Jackson contends that they are unaware of the mysteries and wonder of birth, death and rebirth, and of cycles and seasons, that are mainstays of a meaningful human life. To restore this sense of mystery and meaning requires what Jackson calls homecoming and “the resettlement of America,” and “becoming native to this place.” More Americans need to return to the land, to repopulate rural communities, and to re-learn the wealth of skills that we have lost or forgotten or never acquired. Such skills are more than matters of method or technique; they also have to do with ways of relating to nature and to each other. Jackson has received several awards, such as the Pew
Conservation Scholars award (1990) and a MacArthur Fellowship (1992). Wes Jackson has been called, by critics and admirers alike, a radical and a visionary. Both labels appear to apply. For Jackson’s vision is indeed radical, in the original sense of the term (from the Latin radix, or root). It is a vision not only of “new roots for agriculture” but of new and deeper roots for human relationships and communities that, like protected prairie topsoil, will not easily erode. [Terence Ball]
RESOURCES BOOKS Berry, W. “New Roots for Agricultural Research.” In The Gift of Good Land. San Francisco: North Point Press, 1981. Jackson, Wes. New Roots for Agriculture. San Francisco: Friends of the Earth, 1980. ———. Altars of Unhewn Stone. San Francisco: North Point Press, 1987. ———. Becoming Native to this Place. Lexington, KY: University Press of Kentucky, 1994. ———, W. Berry, and B. Coleman, eds. Meeting the Expectations of the Land. San Francisco: North Point Press, 1984.
PERIODICALS Eisenberg, E. “Back to Eden.” The Atlantic (October 1989): 57–89.
James Bay hydropower project James Bay forms the southern tip of the much larger Hudson Bay in Quebec, Canada. To the east lies the Quebec-Labrador peninsula, an undeveloped area with vast expanses of pristine wilderness. The region is similar to Siberia, covered in tundra and sparse forests of black spruce and other evergreens. It is home to roughly 100 species of birds, twenty species of fish, and dozens of mammals, including muskrat, lynx, black bear, red fox, and the world’s largest herd of caribou. The area has also been home to the Cree and other native peoples for centuries. Seven rivers drain the wet, rocky region, the largest being the La Grande. In the 1970s, the government-owned Hydro-Quebec electric utility began to divert these rivers, flooding 3,861 square miles (10,000 km2) of land. They built a series of reservoirs, dams, and dikes on La Grande that generated 10,300 megawatts of power for homes and businesses in Quebec, New York, and New England. With its $16 billion price tag, the project is one of the world’s largest energy projects. The complex generates a total of 15,000 megawatts. A second phase of the project added two more hydroelectric complexes, supplying another 12,000 megawatts of power—the equivalent of more than thirty-five nuclear power plants.
But the project has had many opponents. The Cree and Inuit joined forces with American environmentalists to protest the project. Its environmental impact has had scant analysis; in fact, damage has been severe. Ten thousand caribou drowned in 1984 while crossing one of the newly dammed rivers on their migration route. When the utility flooded land, it destroyed habitat for countless plants and animals. The graves of Cree Indians, who for millennia had hunted, traveled, and lived along the rivers, were inundated. The project also altered the ecology of the James and Hudson bays, disrupting spawning cycles, nutrient systems, and other important maritime resources. Naturally occurring mercury in rocks and soil is released as the land is flooded and accumulates as it passes through the food chain from microscopic organisms, to fish, to humans. A majority of the native people in villages where fish are a main part of the diet show symptoms of mercury poisoning. Despite these problems, Hydro-Quebec pursued the project, partly because of Quebec’s long-standing struggle for independence from Canada. The power is sold to corporate customers, providing income for the province and attracting industry to Quebec. The Cree and environmentalists, joined by New York congressmen, took their fight to court. On Earth Day 1993, they filed suit against the New York Power Authority in United States District Court in New York, challenging the legality of the authority’s power-purchase agreement with Hydro-Quebec, which was to go into effect in 1999. Their claim was based on the United States Constitution and the 1916 Migratory Bird Treaty with Canada. In February 2002, nearly 70 percent of Quebec’s James Bay Cree Indians endorsed a 2.25 billion dollar deal with the Quebec government for hydropower development on their land. Approval for the deal ranged from a low of 50 percent to a high of 83 percent among the nine communities involved. Some Cree spokespersons considered the agreement a vindication of the long campaign, waged since 1975, to have Cree rights respected. Under the deal, the James Bay Cree would receive $16 million in 2002, $30.7 million in 2003, then $46.5 million a year for 48 years. In return, the Cree would drop environmental lawsuits totaling $2.4 billion. The Cree also agreed to hydroelectric plants along the Eastmain River and Rupert River, subject to environmental approval. The deal guarantees the Cree jobs with the hydroelectric authority and gives them more control over logging and other areas of their economy. See also Environmental law; Hetch Hetchy Reservoir; Nuclear energy [Bill Asenjo Ph.D.]
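The payment figures reported above can be checked with simple arithmetic. The sketch below sums the stream of payments quoted in this entry; treating them as a flat nominal sum, with no indexing or discounting, is an assumption made only for illustration.

    # Sum the payment schedule quoted above for the 2002 Cree-Quebec deal.
    # Figures are in millions of dollars; a flat nominal sum is assumed.
    payments = [16.0, 30.7] + [46.5] * 48   # 2002, 2003, then 48 years
    total = sum(payments)
    print(f"Nominal total: ${total:,.1f} million")   # about $2,278.7 million

The nominal total comes to roughly $2.28 billion, consistent with the approximately $2.25 billion value announced for the agreement.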
Environmental Encyclopedia 3 RESOURCES BOOKS McCutcheon, S. Electric Rivers: The Story of the James Bay Project. Montreal: Black Rose Books, 1991.
PERIODICALS Associated Press. “James Bay Cree Approve Deal with Quebec on Hydropower Development.” February 05, 2002. Picard, A. “James Bay II.” Amicus Journal 12 (Fall 1990): 10–16.
Japanese logging In recent decades the timber industry has intensified efforts to harvest logs from tropical, temperate, and boreal forests worldwide to meet an increasing demand for wood and wood products. Japanese companies have been particularly active in logging and importing timber from around the world. Because of wasteful and destructive logging practices that result from efforts to maximize corporate financial gains, those interested in reducing deforestation have raised many concerns about Japan’s logging industry. The world’s forests, especially tropical rain forests, are rich in species, including plants, insects, birds, reptiles, and mammals. Many of these species exist only in very limited areas where conditions are suitable for their existence. These endangered forest dwellers provide unique and irreplaceable genetic material that can contribute to the betterment of domestic plants and animals. The forests are a valuable resource for medically useful drugs. Healthy forests stabilize watersheds by absorbing rainfall and retarding runoff. Root mats help control soil erosion, preventing the silting of waterways and damage to reefs, fisheries, and spawning grounds. The United Nations Food and Agriculture Organization reports tropical deforestation rates of 42 million acres (over 17 million ha) per year. More than half of the Earth’s primary tropical forest area has vanished, and more than half of the remaining forest has been degraded. While Brazil contains about a third of the world’s remaining tropical rain forest, Southeast Asia is now a major supplier of tropical woods. Burma, Thailand, Laos, Vietnam, Kampuchea, Malaysia, Indonesia, Borneo, New Guinea, and the Philippines contain 20% of the world’s remaining tropical forests. At current rates of deforestation, it is estimated that almost all of Southeast Asia’s primary forests will be gone by the year 2010. While a number of countries make use of rain forest wood, Japan is the number one importer of tropical timber. Japan’s imports account for about 30% of the world trade in tropical lumber. Japan also imports large amounts of timber from temperate and boreal forests in Canada, Russia, and the United States. These three countries contain most of the remaining
boreal forests, and they supply more than half of the world’s industrial wood. As demand for timber continues to climb, previously undisturbed virgin forests are increasingly being used. To speed harvesting, logging roads are built to provide access, and heavy equipment is brought in to hasten work. In the process, soil is compacted, making plant re-growth difficult or impossible. Although these practices are not limited to one country, Japanese firms have been cited by environmentalists as particularly insensitive to the environmental impact of logging. The globe’s forests are sometimes referred to as the “lungs of the planet,” exchanging carbon dioxide for oxygen. Critics claim that the wood harvesting industry is destroying this vital natural resource, and in the process this industry is endangering the planet’s ability to nurture and sustain life. Widespread destruction of the world’s forests is a growing concern. Large Japanese companies, and companies affiliated with Japanese firms, have logged old growth forests in several parts of the globe to supply timber for the Japanese forest products industry. Clear-cutting of trees over large areas in tropical rain forests has made preservation of the original flora and fauna impossible. Many species are becoming extinct. Replanting may in time restore the trees, but it will not restore the array of organisms that were present in the original forest. Large-scale logging activities have had a largely negative impact on the local economies in exporting regions because whole logs are shipped to Japan for further processing. Developed countries such as the United States and Canada, which in the past harvested timber and processed it into lumber and other products, have lost jobs to Japan. Indigenous cultures that have thrived in harmony with their forest homelands for eons are displaced and destroyed. Provision has not been made for the survival of local flora and fauna, and provision for forest re-establishment has thus far proven inadequate. As resentment has grown in impacted areas, and among environmentalists, efforts have emerged to limit or stop large-scale timber harvesting and exporting. Although concern has been voiced over all large-scale logging operations, special concern has been raised over harvesting of tropical timber from previously undisturbed primary forest areas. Tropical rain forests are uniquely valuable natural resources for many reasons, including the density and variety of species within their borders. The exploitation of these unique ecosystems will result in the extinction of many potentially valuable species of plants and animals that exist nowhere else. Many of these forms of life have not yet been named or scientifically studied. In addition, over-harvesting of tropical rain forests has a negative effect on weather patterns, especially by reducing rainfall.
Japan is a major importer of tropical timber from Malaysia, New Guinea, and the Solomon Islands. Although the number of imported logs has declined in recent years, this has been matched by an increase in imported tropical plywood manufactured in Indonesia and Malaysia. As a result, the total amount of timber removed has remained fairly constant. An environmentalist group called the Rainforest Action Network (RAN) has issued an alarm concerning recent expansion of logging activity by firms affiliated with Japanese importers. The RAN alleges: “After laying waste to the rain forests of Asia and the Pacific islands, giant Malaysian logging companies are setting their sights on the Amazon. This past year, some of Southeast Asia’s biggest forestry conglomerates have moved into Brazil, and are buying controlling interests in area logging companies, and purchasing rights to cut down vast rain forest territories for as little as $3 U.S. dollars per acre. In the last few months of 1996 these companies quadrupled their South American interests, and now threaten 15% of the Amazon with immediate logging. According to The Wall Street Journal, up to 30 million acres (12.3 million ha) are at stake. Major players include the WTK Group, Samling, Mingo, and Rimbunan Hijau.” The RAN claims that “the same timber companies in Sarawak, Malaysia, worked with such rapacious speed that they devastated the region’s forest within a decade, displacing traditional peoples and leaving the landscape marred with silted rivers and eroded soil.” One large Japanese firm, the Mitsubishi Corporation, has been targeted for criticism and boycott by the RAN, as one of the world’s largest importers of timber. The boycott is an effort to encourage environmentally-conscious consumers to stop buying products marketed by companies affiliated with the huge conglomerate, including automobiles, cameras, beer, cell phones, and consumer electronics equipment. Through its subsidiaries, Mitsubishi has logged or imported timber from the Philippines, Malaysia, Papua New Guinea, Bolivia, Indonesia, Brazil, Chile, Canada (British Columbia and Alberta), Siberia, and the United States (Alaska, Oregon, Washington, and Texas). The RAN charges that “Mitsubishi Corporation is one of the most voracious destroyers of the world’s rain forests. Its timber purchases have laid waste to forests in the Philippines, Malaysia, Papua New Guinea, Indonesia, Brazil, Bolivia, Australia, New Zealand, Siberia, Canada, and even the United States.” The Mitsubishi Corporation itself does not sell consumer products, but it consists of 190 interlinked companies and hundreds of associated firms that do market to consumers. This conglomerate forms one of the world’s largest industrial and financial powers. The Mitsubishi umbrella includes Mitsubishi Bank, Mitsubishi Heavy Industries, Mitsubishi Electronics, Mitsubishi Motors, and other major components. To force Mit-
subishi and other corporations involved with timber harvesting to operate in a more environmentally responsible way and to end “their destructive logging and trading practices,” an international boycott was organized in 1990 by the World Rainforest Movement (tropical forests) and the Taiga Rescue Network (boreal forests). The Mitsubishi Corporation has countered criticism by launching a program “to promote the regeneration of rain forests...in Malaysia that plants seedlings and monitors their development.” In 1990, the corporation formed an Environmental Affairs Department, one of the first of its kind in Japan, to draft environmental guidelines, and coordinate corporate environmental activities. In the words of the Mitsubishi Corporation Chairman, “A business cannot continue to exist without the trust and respect of society for its environmental performance.” Mitsubishi Corporation reports that they have launched a program to support experimental reforestation projects in Malaysia, Brazil, and Chile. In Malaysia, the company is working with a local agricultural university, under the guidance of a professor from Japan. About 300,000 seedlings were planted on a barren site in 1991. Within five years, the trees were over 33 feet (10 m) in height and the corporation claimed that they were “well on the way to establishing techniques for regenerating tropical forest on burnt or barren land using indigenous species.” Similar projects are underway in Brazil and Chile. The company is also conducting research on sustainable management of the Amazon rain forests. In Canada, Mitsubishi Corporation has participated in a pulp project called Al-Pac to start a mill “which will supply customers in North America, Europe, and Asia,” meeting “the strictest environmental standards by employing advanced, environmentally safe technology. Al-Pac harvests around 0.25% of its total area annually and all harvested areas will be reforested.” [Bill Asenjo Ph.D.]
RESOURCES BOOKS Marx, M. J. The Mitsubishi Campaign: First Year Report. San Francisco: Rainforest Action Network, 1993. Mitsubishi Corporation Annual Report 1996. Tokyo: Mitsubishi Corporation, 1996. Wakker, E. “Mitsubishi’s Unsustainable Timber Trade: Sarawak.” In Restoration of Tropical Forest Ecosystems, edited by H. Lieth and M. Lohmann. Netherlands: Kluwer Academic Publishers, 1993.
PERIODICALS Marshall, G. “The Political Economy of Logging: The Barnett Inquiry into Corruption in the Papua New Guinea Timber Industry.” The Ecologist 20, no. 5 (1990). Neff, R., and W. J. Holstein. “Mitsubishi is on the Move.” Business Week, September 24, 1990. World Rainforest Report XII, no. 4 (October–December 1995). San Francisco: Rainforest Action Network.
K
Kaibab Plateau The Kaibab Plateau, a wildlife refuge on the northern rim of the Grand Canyon, has come to symbolize wildlife management gone awry, a classic case of misguided human intervention intended to help wildlife that ended up damaging the animals and the environment. The Kaibab lies just north of the Colorado River in northern Arizona, and is bounded by steep cliffs dropping down to the Kanab Canyon to the west, and the Grand and Marble canyons to the south and southeast. Because of its inaccessibility, according to naturalist James B. Trefethen, the Plateau was considered a “biological island,” and its deer population “evolved in almost complete genetic isolation.” The lush grass meadows of the Kaibab Plateau supported a resident population of 3,000 mule deer (Odocoileus hemionus), which were known and renowned for their massive size and the huge antlers of the old bucks. Before the advent of Europeans, Paiute and Navajo Indians hunted on the Kaibab in the fall, stocking up on meat and skins for the winter. In the early 1900s, in an effort to protect and enhance the magnificent deer population of the Kaibab, the federal government prohibited all killing of deer, and even eliminated the predator population in the area. As a result, the deer population exploded, causing massive overbrowsing, starvation, and a drastic decline in the health and population of the herd. In 1893, when the Kaibab and surrounding lands were designated the Grand Canyon National Forest Reserve, hundreds of thousands of sheep, cattle, and horses were grazing on the Plateau, resulting in overgrazing, erosion, and large-scale damage to the land. On November 28, 1906, President Theodore Roosevelt established the one million-acre (400,000-ha) Grand Canyon National Game Preserve, which provided complete protection of the Kaibab’s deer population. By then, however, overgrazing by livestock had destroyed much of the native vegetation and changed the Kaibab considerably for the worse. Contin-
ued pasturing of over 16,000 horses and cattle degraded the Kaibab even further. The Forest Service carried out President Roosevelt’s directive to emphasize “the propagation and breeding” of the mule deer by banning not only hunting but natural predators as well. From 1906 to 1931, federal agents poisoned, shot, or trapped 4,889 coyotes (Canis latrans), 781 mountain lions (Puma concolor), 554 bobcats (Felis rufus), and 20 wolves (Canis lupus). Without predators to remove the old, the sick, the unwary, and other biologically unfit animals, and keep the size of the herd in check, the deer herd began to grow out of control, and to lose those qualities that made its members such unique and magnificent animals. After 1906, the deer population doubled within 10 breeding seasons, and by 1918, just two years later, it had doubled again. By 1923, the herd had mushroomed to at least 30,000 deer, and perhaps as many as 100,000 according to some estimates. Unable to support the overpopulation of deer, range grasses and land greatly deteriorated, and by 1925, 10,000–15,000 deer were reported to have died from starvation and malnutrition. Finally, after relocation efforts mostly failed to move a significant number of deer off the Kaibab, hunting was reinstated, and livestock grazing was strictly controlled. By 1931, hunting, disease, and starvation had reduced the herd to under 20,000. The range grasses and other vegetation returned, and the Kaibab began to recover. In 1975 James Trefethen wrote, “the Kaibab today again produces some of the largest and heaviest antlered mule deer in North America.” In the fields of wildlife management and biology, the lessons of the Kaibab Plateau are often cited (as in the writings of naturalist Aldo Leopold) to demonstrate the valuable role of predators in maintaining the balance of nature (such as between herbivores and the plants they consume) and survival of the fittest. The experience of the Kaibab shows that in the absence of natural predators, prey populations (especially ungulates) tend to increase beyond the carrying capacity of the land, and eventually the
results are overpopulation and malnutrition. See also Predator control; Predator-prey interactions [Lewis G. Regenstein]
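The boom-and-crash dynamic described in this entry can be illustrated with a toy population model. The sketch below is not fitted to the historical Kaibab counts; every parameter is hypothetical. It shows only the qualitative pattern: without predation, a herd overshoots a carrying capacity that its own overbrowsing is simultaneously degrading.

    # Toy model of an ungulate irruption after predator removal.
    # All numbers are illustrative, not estimates from the historical record.
    herd = 4_000        # starting herd size (hypothetical)
    capacity = 30_000   # deer the range can support (hypothetical)

    for year in range(1906, 1932):
        if herd <= capacity:
            herd *= 1.20            # unchecked 20% annual growth
        else:
            herd *= 0.60            # starvation once the range is exceeded
            capacity *= 0.85        # overbrowsing degrades the range itself
        if year % 5 == 1:
            print(f"{year}: ~{herd:,.0f} deer, capacity ~{capacity:,.0f}")

Run with these assumed rates, the herd climbs past the carrying capacity within about a decade and then collapses, echoing the overshoot-and-starvation sequence recorded on the Kaibab.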
RESOURCES BOOKS Leopold, A. A Sand County Almanac. New York: Oxford University Press, 1949. Trefethen, J. B. An American Crusade for Wildlife. New York: Winchester Press, 1975.
PERIODICALS Rasmussen, D. I. “Biotic Communities of Kaibab Plateau, Arizona.” Ecological Monographs 11 (1941): 229–275.
Robert Francis Kennedy Jr. (1954 – ) American environmental lawyer Robert “Bobby” Kennedy Jr. had a very controversial youth. Kennedy entered a drug rehabilitation program at the age of 28, after being found guilty of drug possession following a 1983 arrest in South Dakota. He was sentenced to two years’ probation and community service. Clearly the incident was a turning point in Bobby’s life. “Let’s just say, I had a tumultuous adolescence that lasted until I was 29,” he told a reporter for New York magazine, which ran a long profile of Kennedy in 1995, entitled “Nature Boy.” The title refers to the passion that has enabled Kennedy to emerge from his bleak years as a strong and vital participant in environmental causes. A Harvard graduate and published author, Kennedy serves as chief prosecuting attorney for a group called the Hudson Riverkeeper (named after the famed New York river) and senior attorney for the Natural Resources Defense Council. Kennedy, who earlier in his career served as assistant district attorney in New York City after passing the bar, is also a clinical professor and supervising attorney at the Environmental Litigation Clinic at Pace University School of Law in New York. While Kennedy appeared to be following in the family’s political footsteps, working, for example, on several political campaigns and serving as a state coordinator for his Uncle Ted’s 1980 presidential campaign, it is in environmental issues that Bobby Jr. has found his calling. He has worked on environmental issues across the Americas and has assisted several indigenous tribes in Latin America and Canada in successfully negotiating treaties protecting traditional homelands. He is also credited with leading the fight to protect New York City’s water supply, a battle which resulted in the New York City Watershed Agreement, regarded as an
international model for combining development and environmental concerns. Opportunity was always around the corner for a young, confident, and intelligent Kennedy. After Harvard, Bobby Jr. earned a law degree at the University of Virginia. In 1978, the subject of Kennedy’s Harvard thesis—a prominent Alabama judge—was named head of the FBI. A publisher offered Bobby money to expand his previous research into a book, published in 1978, called Judge Frank M. Johnson, Jr.: A Biography. Bobby did a publicity tour which included TV appearances, but the reviews were mixed. In 1982, Bobby married Emily Black, a Protestant who later converted to Catholicism. Two children followed: Robert Francis Kennedy III and Kathleen Alexandra, named for Bobby’s Aunt Kathleen, who died in a plane crash in 1948. The marriage, however, coincided with Bobby’s descent into drug addiction. In 1992, Bobby and Emily separated, and a divorce was obtained in the Dominican Republic. In 1994, Bobby married Mary Richardson, an architect, with whom he would have two more children. During this time Bobby emerged as a leading environmental activist and litigator. Kennedy is quoted as saying: “To me...this is a struggle of good and evil between short-term greed and a long-term vision of building communities that are dignified and enriching and that meet the obligations of future generations. There are two visions of America. One is that this is just a place where you make a pile for yourself and keep moving. And the other is that you put down roots and build communities that are examples to the rest of humanity.” Kennedy goes on: “The environment cannot be separated from the economy, housing, civil rights. How we distribute the goods of the earth is the best measure of our democracy. It’s not about advocating for fishes and birds. It’s about human rights.”
RESOURCES BOOKS Young Kennedys: The New Generation. Avon, 1998.
OTHER Biography Resource Center Online. Biography Resource Center. Farmington Hills, MI: The Gale Group. 2002.
ORGANIZATIONS Riverkeeper, Inc., 25 Wing & Wing, Garrison, NY USA 10524-0130 (845) 424-4149, Fax: (845) 424-4150, Email:
[email protected],
Kepone Kepone (C10Cl10O) is an organochlorine pesticide that was manufactured by the Allied Chemical Corporation in Vir-
ginia from the late 1940s to the 1970s. Kepone was responsible for human health problems and extensive contamination of the James River and its estuary in the Chesapeake Bay. It is a milestone in the development of a public environmental consciousness, and its history is considered by many to be a classic example of negligent corporate behavior and inadequate oversight by state and federal agencies. Kepone is an insecticide and fungicide that is closely related to other chlorinated pesticides such as DDT and aldrin. As with all such pesticides, Kepone causes lethal damage to the nervous systems of its target organisms. A poorly water-soluble substance, it can be absorbed through the skin, and it bioaccumulates in fatty tissues from which it is later released into the bloodstream. It is also a contact poison; when inhaled, absorbed, or ingested by humans, it can damage the central nervous system as well as the liver and kidneys. It can also cause neurological symptoms such as tremors and muscle spasms, as well as sterility and cancer. Although the manufacture and use of Kepone is now banned by the Environmental Protection Agency (EPA), organochlorines have long half-lives, and these compounds, along with their residues and degradation products, can persist in the environment over many decades. Allied Chemical first opened a plant to manufacture nitrogen-based fertilizers in 1928 in the town of Hopewell, on the banks of the James River in Virginia. This plant began producing Kepone in 1949, and commercial production continued even though a battery of toxicity tests indicated that Kepone was both toxic and carcinogenic and that it caused damage to the functioning of the nervous, muscular, and reproductive systems in fish, birds, and mammals. It was patented by Allied in 1952 and registered with federal agencies in 1957. The demand for the pesticide grew after 1958, and Allied expanded production by entering into a variety of subcontracting agreements with a number of smaller companies, including the Life Science Products Company. In 1970, a series of new environmental regulations came into effect which should have changed the way wastes from the manufacture of Kepone were discharged. The Refuse Act Permit Program and the National Pollutant Discharge Elimination System (NPDES) of the Clean Water Act required all dischargers of effluents into United States waters to register their discharges and obtain permits from federal agencies. At the time these regulations went into effect, Allied Chemical had three pipes discharging Kepone and plastic wastes into the Gravelly Run, a tributary of the James River, about 75 mi (120 km) north of Chesapeake Bay. A regional sewage treatment plant which would accept industrial wastes was then under construction but not scheduled for completion until 1975. Rather than installing expensive pollution control equipment for the interim pe-
riod, Allied chose to delay. They adopted a strategy of misinformation, reporting the releases as temporary and unmonitored discharges, and they did not disclose the presence of untreated Kepone and other process wastes in the effluents. The Life Science Products Company also avoided the new federal permit requirements by discharging their wastes directly into the local Hopewell sewer system. These discharges caused problems with the functioning of the biological treatment systems at the sewage plant; the company was required to reduce concentrations of Kepone in sewage, but it continued its discharges at high concentrations, violating these standards with the apparent knowledge of plant treatment officials. During this same period, an employee of Life Science Products visited a local Hopewell physician, complaining of tremors, weight loss, and general aches and pains. The physician discovered impaired liver and nervous functions, and a blood test revealed an astronomically high level of Kepone—7.5 parts per million. Federal and state officials were contacted, and the epidemiologist for the state of Virginia toured the manufacturing facility at Life Science Products. This official reported that “Kepone was everywhere in the plant,” and that workers wore no protective equipment and were “virtually swimming in the stuff.” Another investigation discovered 75 cases of Kepone poisoning among the workers; some members of their families were also found to have elevated concentrations of the chemical in their blood. Further investigations revealed that the environment around the plant was also heavily contaminated. The soil contained 10,000–20,000 ppm of Kepone. Sediments in the James River, as well as local landfills and trenches around the Allied facilities, were just as badly contaminated. Government agencies were forced to close 100 mi (161 km) of the James River and its tributaries to commercial and recreational fishing and shellfishing. In the middle of 1975, Life Science Products finally closed its manufacturing facility. It has been estimated that since 1966, it and Allied together produced 3.2 million lb (1.5 million kg) of Kepone and were responsible for releasing 100,000–200,000 lb (45,360–90,700 kg) into the environment. In 1976, federal prosecutors in the Eastern District of Virginia filed criminal charges against Allied, Life Science Products, the city of Hopewell, and six individuals on 1,097 counts relating to the production and disposal of Kepone. The indictments were based on violations of the permit regulations, unlawful discharge into the sewer systems, and conspiracy related to that discharge. The case went to trial without a jury. The corporations and the individuals named in the charges negotiated lighter fines and sentences by entering pleas of “no contest.” Allied ultimately paid a fine of 13.3 million dollars, although its annual sales reached three billion dollars. Life Science Products
was fined four million dollars, which it could not pay due to lack of assets. Company officers were fined 25,000 dollars each, and the town of Hopewell was fined 10,000 dollars. No one was sentenced to a jail term. Civil suits brought against Allied and the other defendants resulted in a settlement of 5.25 million dollars to pay for cleanup expenses and to repair the damage that had been done to the sewage treatment plant. Allied paid another three million dollars to settle civil suits brought by workers for damage to their health. Environmentalists and many others considered the results of legal action against the manufacturers of Kepone unsatisfactory. Some have argued that these results are typical of environmental litigation. It is difficult to establish criminal intent beyond a reasonable doubt in such cases, and even when guilt is determined, sentencing is relatively light. Corporations are rarely fined in amounts that affect their financial strength, and individual officers are almost never sent to jail. Corporate fines are generally passed along as costs to the consumer, and public bodies are treated even more lightly, since it is recognized that the fines levied on public agencies are paid by taxpayers. Today, the James River has been reopened to fishing for those species that are not prone to the bioaccumulation of Kepone. Nevertheless, sediments in the river and its estuary contain large amounts of deposited Kepone which is released during periods of turbulence. Scientists have published studies which document that Kepone is still moving through the food chain/web and the ecosystem in this area, and Kepone toxicity has been demonstrated in a variety of invertebrate test species. There are still deposits of Kepone in the local sewer pipes in Hopewell; these continue to release the chemical, endangering treatment plant operations and polluting receiving waters. [Usha Vedagiri and Douglas Smith]
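The food-chain behavior described in this entry can be made concrete with a simple biomagnification calculation. The trophic steps and multiplication factors below are hypothetical round numbers, chosen only to show how a fat-soluble, persistent compound multiplies up a food chain rather than diluting.

    # Illustrative biomagnification of a persistent, fat-soluble compound.
    # The water concentration and the step factors are assumptions.
    concentration_ppm = 0.0001    # dissolved in water (hypothetical)
    steps = [("plankton", 100), ("small fish", 10), ("predatory fish", 10)]

    for organism, factor in steps:
        concentration_ppm *= factor
        print(f"{organism}: {concentration_ppm:.3f} ppm")

With these assumed factors, a water concentration of 0.0001 ppm becomes 1 ppm in a top predator, the same order of magnitude as the 7.5 ppm blood level reported in the Hopewell worker.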
RESOURCES BOOKS Goldfarb, W. Kepone: A Case Study. New Brunswick, NJ: Rutgers University, 1977. Sax, N. I. Dangerous Properties of Industrial Materials. 6th ed. New York: Van Nostrand Reinhold, 1984.
Kesterson National Wildlife Refuge One of a dwindling number of freshwater marshes in California’s San Joaquin Valley, Kesterson National Wildlife Refuge achieved national notoriety in 1983 when refuge managers discovered that agricultural runoff was poisoning the area’s birds. Among other elements and agricultural chemicals reaching toxic concentrations in the wetlands,
Breeding populations of American coots were affected by selenium poisoning at Kesterson National Wildlife Refuge. (Photograph by Leonard Lee Rue III. Visuals Unlimited. Reproduced by permission.)
the naturally occurring element selenium was identified as the cause of falling fertility and severe birth defects in the refuge’s breeding populations of stilts, grebes, shovellers, coots, and other aquatic birds. Selenium, lead, boron, chromium, molybdenum, and numerous other contaminants were accumulating in refuge waters because the refuge had become an evaporation pond for tainted water draining from the region’s fields. The soils of the arid San Joaquin valley are the source of Kesterson’s problems. The flat valley floor is composed of ancient sea bed sediments that contain high levels of trace elements, heavy metals, and salts. But with generous applications of water, this sun-baked soil provides an excellent medium for food production. Perforated pipes buried in the fields drain away excess water—and with it dissolved salts and trace elements—after flood irrigation. An extensive system of underground piping, known as tile drainage, carries wastewater into a network of canals that lead to Kesterson Refuge, an artificial basin constructed by the Bureau of Reclamation to store irrigation runoff from central California’s heavily-watered agriculture. Originally a final drainage canal from Kesterson to San Francisco Bay was planned, but because an outfall point was never agreed upon, contaminated drainage water remained trapped in Kester-
Environmental Encyclopedia 3 son’s 12 shallow ponds. In small doses, selenium and other trace elements are not harmful and can even be dietary necessities. But steady evaporation in the refuge gradually concentrated these contaminants to dangerous levels. Wetlands in California’s San Joaquin valley were once numerous, supporting huge populations of breeding and migrating birds. In the past half century drainage and the development of agricultural fields have nearly depleted the area’s marshes. The new ponds and cattail marshes at Kesterson presented a rare opportunity to extend breeding habitat, and the area was declared a national wildlife refuge in 1972, one year after the basins were constructed. Eleven years later, in the spring of 1983, observers discovered that a shocking 60% of Kesterson’s nestlings were grotesquely deformed. High concentrations of selenium were found in their tissues, an inheritance from parent birds who ate algae, plants, and insects—all tainted with selenium—in the marsh. Following extensive public outcry the local water management district agreed to try to protect the birds. Alternate drainage routes were established, and by 1987 much of the most contaminated drainage had been diverted from the wildlife refuge. The California Water Resource Control Board ordered the Bureau of Reclamation to drain the ponds and clean out contaminated sediments, at a cost of well over $50 million. However, these contaminants, especially in such large volumes and high concentrations, are difficult to contain, and similar problems could quickly emerge again. Furthermore, these problems are widespread. Selenium poisoning from irrigation runoff has been discovered in least nine other national wildlife refuges, all in the arid west, since it appeared at Kesterson. Researchers continue to work on affordable and effective responses to such contamination in wetlands, an increasingly rare habitat in this country.
RESOURCES BOOKS Harris, T. Death in the Marsh. Washington, DC: Island Press, 1991.
PERIODICALS Claus, K. E. “Kesterson: An Unsolvable Problem?” Environment 29 (1987): 4–5. Harris, T. “The Kesterson Syndrome.” Amicus Journal 11 (Fall 1989): 4–9. Marshall, E. “Selenium in Western Wildlife Refuges.” Science 231 (1986): 111–12. Tanji, K., A. Läuchli, and J. Meyer. “Selenium in the San Joaquin Valley.” Environment 28 (1986): 6–11.
ORGANIZATIONS Kesterson National Wildlife Refuge, c/o San Luis NWR Complex, 340 I Street, P.O. Box 2176, Los Banos, CA USA 93635 (209) 826-3508,
Ketones Ketones belong to a class of organic compounds known as carbonyls. They contain a carbon atom linked to an oxygen atom with a double bond (C=O). Acetone (dimethyl ketone) is a ketone commonly used in industrial applications. Other ketones include methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), methyl amyl ketone (MAK), isophorone, and diacetone alcohol. As solvents, ketones have the ability to dissolve other materials or substances, particularly polymers and adhesives. They are ingredients in lacquers, epoxies, polyurethane, nail polish remover, degreasers, and cleaning solvents. Ketones are also used in industry for the manufacture of plastics and composites and in pharmaceutical and photographic film manufacturing. Because they have high evaporation rates and dry quickly, they are sometimes employed in drying applications. Some types of ketones used in industry, such as methyl isobutyl ketone and methyl ethyl ketone, are considered both hazardous air pollutants (HAP) and volatile organic compounds (VOC) by the EPA. As such, the Clean Air Act regulates their use. In addition to these industrial sources, ketones are released into the atmosphere in cigarette smoke and car and truck exhaust. More “natural” environmental sources such as forest fires and volcanoes also emit ketones. Acetone, in particular, is readily produced in the atmosphere during the oxidation of organic pollutants or natural emissions. Ketones (in the form of acetone, beta-hydroxybutyric acid, and acetoacetic acid) also occur in the human body as a byproduct of the metabolism, or breakdown, of fat. [Paula Anne Ford-Martin]
RESOURCES PERIODICALS
Wood, Andrew. “Cleaner Ketone Oxidation.” Chemical Week (Aug 1, 2001).
OTHER U.S. National Library of Medicine. Hazardous Substances Data Bank. [cited May 2002]. .
ORGANIZATIONS American Chemical Society, 1155 Sixteenth St. NW, Washington, D.C. USA 20036 (202) 872-4600, Fax: (202) 872-4615, Toll Free: (800) 2275558, Email:
[email protected],
Keystone species Keystone species have a major influence on the structure of their ecological community. The profound influence of keystone species occurs because of their position and activity
within the food chain/web. In the sense meant here, a “major influence” means that removal of a keystone species would result in a large change in the abundance, and even the local extirpation, of one or more species in the community. This would fundamentally change the structure of the overall community in terms of species composition, productivity, and other characteristics. Such changes would have substantial effects on all of the species that are present, and could allow new species to invade the community. The original use of the word “keystone” was in architecture. An architectural keystone is a wedge-shaped stone that is strategically located at the summit of an arch. The keystone serves to lock all other elements of the arch together, and it thereby gives the entire structure mechanical integrity. Keystone species play an analogous role in giving structure to the “architecture” of their ecological community. The concept of keystone species was first applied to the role of certain predators (i.e., keystone predators) in their community. More recently, however, the term has been extended to refer to other so-called “strong interactors.” This has been particularly true of keystone herbivores that have a relatively great influence on the species composition and relative abundance of plants in their community. Keystone species directly exert their influence on the populations of species that they feed upon, but they also have indirect effects on species lower in the food web. Consider, for example, a hypothetical case of a keystone predator that regulates the population of a herbivore. This effect will also, of course, indirectly influence the abundance of plant species that the herbivore feeds upon. Moreover, by affecting the competitive relationships among the various species of plants in the community, the abundance of plants that the herbivore does not eat will also be indirectly affected by the keystone predator. Although keystone species exert their greatest influence on species with which they are most closely linked through feeding relationships, their influences can ramify throughout the food web. Ecologists have documented the presence of keystone species in many types of communities. The phenomenon does not, however, appear to be universal, in that keystone species have not been identified in many ecosystems.
Predators as keystone species
The term “keystone species” was originally used by the American ecologist Robert Paine to refer to the critical influence of certain predators. His original usage of the concept was in reference to rocky intertidal communities of western North America, in which the predatory starfish Pisaster ochraceus prevents the mussel Mytilus californianus from monopolizing the available space on rocky habitats and thereby eliminating other, less-competitive herbivores and even seaweeds from the community. By feeding on mussels, which are the dominant competitor among the herbivores
in the community, the starfish prevents these shellfish from achieving the dominance that would otherwise be possible. This permits the development of a community that is much richer in species than would occur in the absence of the predatory starfish. Paine demonstrated the keystone role of the starfish by conducting experiments in which the predator was excluded from small areas using cages. When this was done, the mussels quickly became strongly dominant in the community and eliminated virtually all other species of herbivores. Paine also showed that once mussels reached a certain size they were safe from predation by the starfish. This prevented the predator from eliminating the mussel from the community. Sea otters (Enhydra lutris) of the west coast of North America are another example of a keystone predator. This species feeds heavily on sea urchins when these invertebrates are available. By greatly reducing the abundance of sea urchins, the sea otters prevent these herbivores from overgrazing kelps and other seaweeds in subtidal habitats. Therefore, when sea otters are abundant, urchins are not, and this allows luxurious kelp “forests” to develop. In the absence of otters, the high urchin populations can keep the kelp populations low, and the habitat then may develop as a rocky “barren ground.” Because sea otters were trapped very intensively for their fur during the eighteenth and nineteenth centuries, they were extirpated over much of their natural range. In fact, the species had been considered extinct until the 1930s, when small populations were “discovered” off the coast of California and in the Aleutian Islands of Alaska. Thanks to effective protection from trapping, and deliberate reintroductions to some areas, populations of sea otters have now recovered over much of their original range. This has resulted in a natural depletion of urchin populations and a widespread increase in the area of kelp forests.
Herbivores as keystone species
Some herbivorous animals have also been demonstrated to have a strong influence on the structure and productivity of their ecological community. One such example is the spruce budworm (Choristoneura fumiferana), a moth that occasionally irrupts in abundance and becomes an important pest of conifer forests in the northeastern United States and eastern Canada. The habitat of spruce budworm is mature forests dominated by balsam fir (Abies balsamea), white spruce (Picea glauca), and red spruce (P. rubens). This native species of moth is always present in at least small populations, but it sometimes reaches very high populations, which are known as irruptions. When budworm populations are high, many species of forest birds and small mammals occur in relatively large populations that subsist by feeding heavily on larvae of the moth. However, during irruptions of budworm most of the fir and spruce foliage is eaten by the abundant larvae, and after this happens for several years
many of the trees die. Because of the damage caused to mature trees in the forest, the budworm epidemic collapses, and then a successional recovery begins. The plant communities of early succession contain many species of plants that are uncommon in mature conifer forests. Eventually, however, another mature conifer forest redevelops, and the cycle is primed for the occurrence of another irruption of the budworm. Clearly, spruce budworm is a good example of a keystone herbivore, because it has such a great influence on the populations of plant species in its habitat, and also on the many animal species that are predators of the budworm. Another example of a keystone herbivore concerns snow geese (Chen caerulescens) in salt marshes of western Hudson Bay. In the absence of grazing by flocks of snow geese this ecosystem would become extensively dominated by several competitively superior species, such as the salt-marsh grass Puccinellia phryganodes and the sedge Carex subspathacea. However, vigorous feeding by the geese creates bare patches of up to several square meters in area, which can then be colonized by other species of plants. The patchy disturbance regime associated with goose grazing results in the development of a relatively complex community, which supports more species of plants than would otherwise be possible. In addition, by manuring the community with their droppings, the geese help to maintain higher rates of plant productivity than might otherwise occur. In recent years, however, large populations of snow geese have caused severe damage to the salt-marsh habitat by over-grazing. This has resulted in the development of salt-marsh “barrens” in some places, which may take years to recover.
Plants as keystone species
Some ecologists have also extended the idea of keystone species to refer to plant species that are extremely influential in their community. For example, sugar maple (Acer saccharum) is a competitively superior species that often strongly dominates stands of forest in eastern North America. Under these conditions most of the community-level productivity is contributed by sugar-maple trees. In addition, most of the seedlings and saplings are of sugar maple. This is because few seedlings of other species of trees are able to tolerate the stressful conditions beneath a closed sugar-maple canopy. Other ecologists prefer not to use the idea of keystone species to refer to plants that, because of their competitive abilities, are strongly dominant in their community. Instead, these are sometimes referred to as “foundation-stone species.” This term reflects the facts that strongly dominant plants contribute the great bulk of the biomass and productivity of their community, and that they support almost all herbivores, predators, and detritivores that are present. [Bill Freedman Ph.D.]
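The logic of Paine's cage experiments can be caricatured in a few lines of code. The sketch below is a toy competition-for-space model with invented coefficients, not Paine's data: a dominant space competitor excludes a weaker one unless a predator preferentially crops the dominant, in which case both persist.

    # Toy model of keystone predation: two competitors for space, with the
    # predator cropping only the dominant one. All coefficients are invented.
    def run(predation, years=200):
        mussel, other = 0.1, 0.1                  # fractions of space occupied
        for _ in range(years):
            free = max(0.0, 1.0 - mussel - other)  # unoccupied space
            mussel += 0.5 * mussel * free - predation * mussel
            other += 0.3 * other * free - 0.1 * other * mussel
            mussel, other = max(mussel, 0.0), max(other, 0.0)
        return mussel, other

    for p in (0.0, 0.1):
        mussel, other = run(p)
        print(f"predation {p}: mussel {mussel:.2f}, other {other:.2f}")

With no predation the dominant competitor approaches full occupancy and the weaker species is driven out; with moderate predation on the dominant, both coexist, which is the qualitative pattern Paine observed when the starfish was present.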
RESOURCES BOOKS Begon, M., J. L. Harper, and C. R. Townsend. Ecology. Individuals, Populations and Communities. 3rd ed. London: Blackwell Sci. Pub., 1996. Krebs, C. J. Ecology. The Experimental Analysis of Distribution and Abundance. San Francisco: Harper and Row, 1985. Ricklefs, R. E. Ecology. New York: W. H. Freeman and Co., 1990.
PERIODICALS Paine, R. T. “Intertidal Community Structure: Experimental Studies of the Relationship Between A Dominant Competitor and Its Principal Predator.” Oecologia 15 (1974): 93–120.
Killer bees see Africanized bees
Kirtland’s warbler Kirtland’s warbler (Dendroica kirtlandii) is an endangered species and one of the rarest members of the North American wood warbler family. Its entire breeding range is limited to a seven-county area of north-central Michigan. The restricted distribution of the Kirtland’s warbler and its specific niche requirements have probably contributed to low population levels throughout its existence, but human activity has had a large impact on its numbers over the past hundred years. The first specimen of Kirtland’s warbler was taken by Samuel Cabot in October 1841, aboard a ship in the West Indies during an expedition to the Yucatan. But this specimen went unnoticed until 1865, long after the species had been formally described. Charles Pease is credited with discovering Kirtland’s warbler. He collected a specimen on May 13, 1851, near Cleveland, Ohio, and gave it to his father-in-law, Dr. Jared P. Kirtland, a renowned naturalist. Kirtland sent the specimen to his friend, ornithologist Spencer Fullerton Baird, who described the new species the following year and named it in honor of the naturalist. The wintering grounds of Kirtland’s warbler are in the Bahamas, a fact which was well established by the turn of the century, but its nesting grounds went undiscovered until 1903, when Norman Wood found the first nest in Oscoda County, Michigan. Every nest found since then has been within a 60-mile (95-km) radius of this spot. In 1951 the first exhaustive census of singing males was undertaken in an effort to establish the range of Kirtland’s warbler as well as its population level. Assuming that numbers of males and females are approximately equal and that a singing male is defending an active nesting site, the total of 432 in this census indicated a population of 864 birds. Ten years later another census counted 502 singing males, indicating the population was over 1,000 birds. In 1971,
Environmental Encyclopedia 3
Krakatoa
annual counts began, but for the next 20 years these counts revealed that the population had dropped significantly, reaching lows of 167 singing males in 1974 and 1987. In the early 1990s, conservation efforts on behalf of the species began to bear fruit and the population began to recover. By 2001 the annual census counted 1,085 singing males, or a total population of over 2,000 birds.

The first problem facing this endangered species centers on its specialized nesting and habitat requirements. The Kirtland's warbler nests on the ground, and its reproductive success is tied closely to its selection of young jack pine trees as nesting sites. When the jack pines are 5–20 ft (1.5–6 m) tall, at an age of 8–20 years, their lower branches are at ground level and provide the cover this warbler needs. The life cycle of the pine, however, is dependent on forest fires, as intense heat is needed to open the cones for seed release. The advent of fire protection in forest management reduced the number of young trees the warblers needed, and the population suffered. Once this relationship was fully understood, jack pine stands were managed for Kirtland's warbler, as well as for commercial harvest, by instituting controlled burns on a 50-year rotational basis.

The second problem is the population pressure brought to bear by a nest parasite, the brown-headed cowbird (Molothrus ater), which lays its eggs in the nests of other songbirds. Originally a bird of the open plains, it did not threaten Kirtland's warbler until Michigan was heavily deforested, thus providing it with appropriate habitat. Once established in the warbler's range, it has increasingly pressured the Kirtland's population. Cowbird chicks hatch earlier than those of other birds, and they compete successfully with the other nestlings for nourishment. Efforts to trap and destroy this nest parasite in the warbler's range have resulted in improved reproductive success for Kirtland's warbler. See also Deforestation; Endangered Species Act; International Council on Bird Preservation; Rare species; Wildlife management [Eugene C. Beckham]
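The census arithmetic used in this entry is simple enough to state explicitly. The following is a minimal Python sketch, resting on the same assumptions given above (a 1:1 sex ratio and one active nest per singing male); the function name is an illustrative choice, not part of the census methodology itself:

```python
# Estimate total population from a census of singing males, assuming
# (as stated above) a 1:1 sex ratio and that each singing male is
# defending an active nesting site.
def estimate_population(singing_males: int) -> int:
    return 2 * singing_males

# Census figures cited in this entry.
for year, males in [(1951, 432), (1961, 502), (1974, 167), (2001, 1085)]:
    print(year, males, "singing males ->", estimate_population(males), "birds")
```

Run on the cited counts, this reproduces the figures in the text: 864 birds in 1951, just over 1,000 in 1961, and over 2,000 in 2001.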
RESOURCES BOOKS Ehrlich, P. R., D. S. Dobkin, and D. Wheye. Birds in Jeopardy: The Imperiled and Extinct Birds of the United States and Canada, Including Hawaii and Puerto Rico. Stanford: Stanford University Press, 1992.
PERIODICALS Weinrich, J. A. “Status of Kirtland’s Warbler, 1988.” Jack-Pine Warbler 67 (1989): 69–72.
OTHER U.S. Fish and Wildlife Service. Kirtland's Warbler (Dendroica kirtlandii). April 2, 2002 [cited June 19, 2002].
Krakatoa
The explosion of this triad of volcanoes on August 27, 1883, the culmination of a three-month eruptive phase, astonished the world because of its global impact. Perhaps one of the most influential factors, however, was its timing. It happened during a time of major growth in science, technology, and communications, and the world received current news accompanied by the correspondents' personal observations. The explosion was heard some 3,000 mi (4,828 km) away, on the island of Rodrigues in the Indian Ocean. The glow of sunsets was so vivid three months later that fire engines were called out in New York City and nearby towns.

Krakatoa (or Krakatau), located in the Sunda Strait between Java and Sumatra, is part of the Indonesian volcanic system, which was formed by the subduction of the Indian Ocean plate under the Asian plate. A similar explosion occurred in A.D. 416, and another major eruption was recorded in 1680. Now a new volcano is growing out of the caldera, likely building toward some future cataclysm. Studies revealed that this caldera, like that of Crater Lake, Oregon, resulted from Krakatoa's collapse into the now-empty magma chamber.

This immense natural event, perhaps twice as powerful as the largest hydrogen bomb, had an extraordinary impact on the solid earth, the oceans, and the atmosphere, and demonstrated their interdependence. It also made possible the creation of a wildlife refuge and tropical rain forest preserve on the Ujung Kulon Peninsula of southwestern Java.

The explosion produced a 131-ft (40-m) high tsunami, or tidal wave, which carried a steamship nearly 2 mi (3.2 km) inland and caused most of the fatalities resulting from the eruption. Tidal gauges as far away as San Francisco Bay and the English Channel recorded fluctuations.

The explosion also provided substantial benefits to the young science of meteorology. Every barometer on Earth recorded the blast wave as it raced toward its antipodal position in Colombia, and then reverberated back and forth in six more recorded waves. The distribution of ash in the stratosphere gave the first solid evidence of rapidly flowing westerly winds, as debris encircled the equator over the next 13 days. Global temperatures were lowered about 0.9°F (0.5°C), and did not return to normal until five years later.

An ironic development is that the Ujung Kulon Peninsula was never resettled after the tsunami killed most of its people. Without Krakatoa's explosion, the population would most likely have grown significantly and much of the habitat there would likely have been altered by agriculture. Instead, the area is now a national park that supports a variety of species, including the Javan rhino (Rhinoceros sondaicus), one of Earth's rarest and most endangered species. This park has provided a laboratory for scientists to study nature's
healing process after such devastation. See also Mount Pinatubo, Philippines; Mount Saint Helens, Washington; Volcano [Nathan H. Meleen]
RESOURCES BOOKS Nardo, D. Krakatoa. World Disasters Series. San Diego: Lucent Books, 1990. Simkin, T., and R. Fiske. Krakatau 1883: The Volcanic Eruption and Its Effects. Washington, DC: Smithsonian Books, 1983.
PERIODICALS Ball, R. "The Explosion of Krakatoa." National Geographic 13 (June 1902): 200–203. Plage, D., and M. Plage. "Return of Java's Wildlife." National Geographic 167 (June 1985): 750–71.
Krill
Marine crustaceans in the order Euphausiacea. Krill are zooplankton, and most feed on microalgae by filtering them from the water. In high latitudes, krill may account for a large proportion of the total zooplankton. Krill often occur in large swarms; in a few species these swarms may reach several hundred square meters in size, with densities of over 60,000 individuals per cubic meter. This swarming behavior makes them valuable food sources for many species of whales and seabirds. Humans have also begun to harvest krill for use as a dietary protein supplement.
Joseph Wood Krutch (1893–1970) American literary critic and naturalist
Through much of his career, Krutch was a teacher of criticism at Columbia University and a drama critic for The Nation. Respiratory problems, however, led him to early retirement in the desert near Tucson, Arizona. He loved the desert, and there he turned to biology and geology, which he applied to the consistent, major theme found in all of his writings: the relation of humans and the universe. Krutch subsequently became an accomplished naturalist.

Readers can find the theme of man and universe in Krutch's early work, for example The Modern Temper (1929), and in his later writings on human-human and human-nature relationships, including natural history—what Rene Jules Dubos described as "the social philosopher protesting against the follies committed in the name of technological progress, and the humanist searching for permanent values in man's relationship to nature."

Assuming a pessimistic stance in his early writings, Krutch despaired about lost connections, arguing that for humans to reconnect, they must conceive of themselves, nature, "and the universe in a significant reciprocal relationship." Krutch's later writings repudiated much of his earlier despair. He argued against the dehumanizing and alienating forces of modern society and advocated systematically reassembling—by reconnecting to nature—"a world man can live in." In The Voice of the Desert (1954), for instance, he claimed that "we must be part not only of the human community, but of the whole community." In such books as The Twelve Seasons (1949) and The Great Chain of Life (1956), he demonstrated that humans "are a part of Nature...whatever we discover about her we are discovering also about ourselves." This view was based on a solidly anti-deterministic approach that opposed mechanistic and behavioristic theories of evolution and biology.

His view of modern technology as out of control was epitomized by the automobile. Driving fast prevented people from reflecting or thinking or doing anything except controlling the monster: "I'm afraid this is the metaphor of our society as a whole," he commented. Krutch also disliked the proliferation of suburbs, which he labeled "affluent slums." He argued in Human Nature and the Human Condition (1959) that "modern man should be concerned with achieving the good life, not with raising the [material] standard of living."

An editorial ran in The New York Times a week after Krutch's death: today's younger generation, it read, "unfamiliar with Joseph Wood Krutch but concerned about the environment and contemptuous of materialism," should "turn to a reading of his books with delight to themselves and profit to the world." [Gerald L. Young Ph.D.]
RESOURCES BOOKS Krutch, J. W. The Desert Year. New York: Viking, 1951. Margolis, J. D. Joseph Wood Krutch: A Writer’s Life. Knoxville: The University of Tennessee Press, 1980. Pavich, P. N. Joseph Wood Krutch. Western Writers Series, no. 89. Boise: Boise State University, 1989.
PERIODICALS Gorman, J. “Joseph Wood Krutch: A Cactus Walden.” MELUS: The Journal of the Society for the Study of the Multi-Ethnic Literature of the United States 11 (Winter 1984): 93–101. Holtz, W. “Homage to Joseph Wood Krutch: Tragedy and the Ecological Imperative.” The American Scholar 43 (Spring 1974): 267–279. Lehman, A. L. “Joseph Wood Krutch: A Selected Bibliography of Primary Sources.” Bulletin of Bibliography 41 (June 1984): 74–80.
Kudzu
Pueraria lobata, or kudzu, also jokingly referred to as "foot-a-night" and "the vine that ate the South," is a highly aggressive and persistent semi-woody vine introduced to the United States in the late nineteenth century. It has since become a symbol of the problems that the introduction of exotic species can cause for native ecosystems.

Kudzu's best known characteristic is its extraordinary capacity for rapid growth, managing as much as 12 in (30.5 cm) a day and 60–100 ft (18–30 m) a season under ideal conditions. When young, kudzu has thin, flexible, downy stems that grow outward as well as upward, eventually covering virtually everything in its path with a thick mat of leaves and tendrils. This lateral growth creates the dramatic effect, common in southeastern states such as Georgia, of telephone poles, buildings, neglected vehicles, and whole areas of woodland being enshrouded in blankets of kudzu. Kudzu's tendency toward aggressive and overwhelming colonization has many detrimental effects, killing stands of trees by robbing them of sunlight and pulling down or shorting out utility cables. Where stem nodes touch the ground, new roots develop which can extend 10 ft (3 m) or more underground and eventually weigh several hundred pounds. In the nearly ideal climate of the Southeast, the prolific vine easily overwhelms virtually all native competitors and also infests cropland and yards.

A member of the pea family, kudzu is itself native to China and Japan. Introduced to the United States at the Japanese garden pavilion during the 1876 Philadelphia Centennial Exhibition, kudzu's broad leaves and richly fragrant reddish-purple blooms made it seem highly desirable as an ornamental plant in American gardens. It now ranges along the eastern seaboard from Florida to Pennsylvania, and westward to Texas. Although hardy, kudzu does not tolerate cold weather and prefers acidic, well-drained soils and bright sunlight. It rarely flowers or sets seed in the northern part of its range and loses its leaves at first frost.

For centuries, the Japanese have cultivated kudzu for its edible roots, medicinal qualities, and fibrous leaves and stems, which are suitable for paper production. After its initial introduction as an ornamental, kudzu also was touted as a forage crop and as a cure for erosion in the United States. Kudzu is nutritionally comparable to alfalfa, and its tremendous durability and speed of growth were thought to outweigh the disadvantages its rope-like vines caused for cutting and baling. But its effectiveness as a ground cover, particularly on steeply sloped terrain, is responsible for kudzu's spectacular spread. By the 1930s, the United States Soil Conservation Service was enthusiastically advocating kudzu as a remedy for erosion, subsidizing farmers as well as highway departments and railroads with as much as $8 an acre to use kudzu for soil retention. The Depression-era Civilian Conservation Corps also facilitated the spread of kudzu, planting millions of seedlings as part of an extensive erosion control project. Kudzu also has had its unofficial champions, the best known of whom is Channing Cope of Covington, Georgia. As a journalist for Atlanta newspapers and a popular radio broadcaster, Cope frequently extolled the virtues of kudzu, dubbing it the "miracle vine" and declaring that it had replaced cotton as "King" of the South.

The spread of the vine was precipitous. In the early 1950s, the federal government began to question the wisdom of its support for kudzu. By 1953, the Department of Agriculture had stopped recommending the use of kudzu for either fodder or ground cover. In 1982, kudzu was officially declared a weed. Funding is now directed more at finding ways to eradicate kudzu or at least to contain its spread. Continuous overgrazing by livestock will eventually eradicate a field of kudzu, as will repeated applications of defoliant herbicides. Even so, stubborn patches may take five or more years to be completely removed. Controlled burning is usually ineffective and attempting to dig up the massive root system is generally an exercise in futility, but kudzu can be kept off lawns and fences (as an ongoing project) by repeated mowing and enthusiastic pruning.

A variety of new uses are being found for kudzu, and some very old uses are being rediscovered. Kudzu root can be processed into flour and baked into breads and cakes; as a starchy sweetener, it also may be used to flavor soft drinks. Medical researchers investigating the scientific bases of traditional herbal remedies have suggested that isoflavones found in kudzu root may significantly reduce the craving for alcohol in alcoholics. Eventually, derivatives of kudzu may also prove to be useful for treatment of high blood pressure. Methane and gasohol have been successfully produced from kudzu, and kudzu's stems may prove to be an economically viable
source of fiber for paper production and other purposes. The prolific vine has also become something of a humorous cultural icon, with regional picture postcards throughout the South portraying spectacular and only somewhat exaggerated images of kudzu's explosive growth. Fairs, festivals, restaurants, bars, rock groups, and road races have all borrowed their names and drawn some measure of inspiration from kudzu; poems have been written about it, and kudzu cookbooks and guides to kudzu crafts are readily available in bookstores. [Lawrence J. Biskowski]
RESOURCES PERIODICALS Dolby, V. "Kudzu Grows Beyond Erosion Control to Help Control Alcoholism." Better Nutrition 58, no. 11 (November 1996): 32. Hipps, C. "Kudzu." Horticulture 72, no. 6 (June 1994): 36–9. Tenenbaum, D. "Weeds from Hell." Technology Review 99, no. 6 (August 1996): 32–40.
Kwashiorkor
One of many severe protein-energy malnutrition disorders that are a widespread problem among children in developing countries. The word's origin is in Ghana, where it means a deposed child, or a child that is no longer suckled. The disease usually affects infants between one and four years of age who have been weaned from breast milk to a high-starch, low-protein diet. The disease is characterized by lethargy, apathy, or irritability. Over time the individual will experience retarded growth processes, both physically and mentally. Approximately 25% of affected children suffer from recurrent relapses of kwashiorkor, interfering with their normal growth.

Kwashiorkor results in amino acid deficiencies which inhibit protein synthesis in all tissues. The lack of sufficient plasma proteins, specifically albumin, results in systemic pressure changes, ultimately causing generalized edema. The liver swells with stored fat because no hepatic proteins are being produced for the digestion of fats. Kwashiorkor additionally results in reduced bone density and impaired renal function. If treated early in its development, the disease can be reversed with proper dietary therapy and treatment of associated infections. If the condition is not reversed in its early stages, the prognosis is poor and physical and mental growth will be severely retarded. See also Sahel; Third World
Kyoto Protocol/Treaty
In the mid-1980s, a growing body of scientific evidence linked man-made greenhouse gas emissions to global warming. In 1990, the United Nations General Assembly issued a report that confirmed this link. The Rio Accord of 1992 resulted from this report. Formally called the United Nations Framework Convention on Climate Change (UNFCCC), the accord was signed by various nations in Rio de Janeiro, Brazil, and committed industrialized nations to stabilizing their emissions at 1990 levels by 2000.

In December 1997, representatives of 160 nations met in Kyoto, Japan, in an attempt to produce a new and improved treaty on climate change. Major differences arose between industrialized and still-developing countries, with the United States perceived, particularly by representatives of the European Union (EU), as not doing its share to reduce emissions, especially those of carbon dioxide. The outcome of this meeting, the Kyoto Protocol to the United Nations Framework Convention on Climate Change (UNFCCC), required industrialized nations to reduce their emissions of carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride below 1990 levels by 2012. The requirements would be different for each country and would have to begin by 2008 and be met by 2012. There would be no requirements for the developing nations. Whether or not to sign and ratify the treaty was left up to the discretion of each individual country.

Global warming
The organization that provided the research for the Kyoto Protocol was the Intergovernmental Panel on Climate Change (IPCC), set up in 1988 as a joint project of the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO). In 2001 the IPCC released a report, "Climate Change 2001." Using the latest climatic and atmospheric scientific research available, the report predicted that global mean surface temperatures on earth would increase by 2.5–10.4°F (1.4–5.8°C) by the year 2100, unless greenhouse gas emissions were reduced well below current levels. This warming trend was seen as rapidly accelerating, with possible dire consequences for human society and the environment. These accelerating temperature changes were expected to lead to rising sea levels, melting glaciers and polar ice packs, heat waves, droughts and wildfires, and a profound and deleterious effect on human health and well-being.

Some of the effects of these temperature changes may already be occurring. Most of the United States has already experienced increases in mean annual temperature of up to 4°F (2.3°C). Sea ice is melting in both the Antarctic and the Arctic. Ninety-eight percent of the world's glaciers are
shrinking. The sea level is rising at three times its historic rate. Florida is already feeling the early effects of global warming, with shorelines suffering from erosion, dying coral reefs, saltwater polluting freshwater sources, an increase in wildfires, and higher air and water temperatures. In Canada, forest fires have more than doubled since 1970, water wells are going dry, lake levels are down, and there is less rainfall.

Controversy
Since its inception, the Kyoto Protocol has generated a great deal of controversy. Richer nations have argued that the poorer, less developed nations are getting off easy. The developing nations, on the other hand, have argued that they will never be able to catch up with the richer nations unless they are allowed to develop with the same degree of pollution as that which let the industrial nations become rich in the first place. Another controversy rages between environmentalists and big business. Environmentalists have argued that the Kyoto Protocol doesn't go far enough, while petroleum and industry spokespersons have argued that it would be impossible to implement without economic disaster.

In the United States, the controversy has run especially high. The Kyoto Protocol was signed under the administration of President Bill Clinton, but was never ratified by the Republican-dominated U.S. Senate. Then in 2001, President George W. Bush, a former Texas oilman, backed out of the treaty, saying it would cost the U.S. economy 400 billion dollars and 4.9 million jobs. Bush unveiled an alternative proposal to the Kyoto accord that he said would reduce greenhouse gases, curb pollution, and promote energy efficiency. But critics of his plan have argued that by the year 2012 it would actually increase the 1990 levels of greenhouse gas emissions by more than 30%.

Soon after the Kyoto Protocol was rejected by the Bush administration, the European Union criticized the action. In particular, Germany was unable to understand why the Kyoto restrictions would adversely affect the American economy, noting that Germany had been able to reduce its emissions without serious economic problems. The Germans also suggested that President Bush's program to induce voluntary reductions was politically motivated and was designed to prevent a drop in the unreasonably high level of greenhouse gas emissions in the United States, a drop that would be politically damaging for the Bush administration.

In rejecting the Kyoto Protocol, President Bush claimed that it would place an unfair burden on the United States. He argued that it was unfair that developing countries such as India and China should be exempt. But China had already taken major steps to address climate change. According to a June report by the World Resources Institute, a Washington, D.C.-based environmental think tank, China voluntarily
cut its carbon dioxide emissions by 19% between 1997 and 1999. Contrary to Bush's fears that cutting carbon dioxide output would inevitably damage the United States economy, China's economy grew by 15% during this same two-year period.

Politics has always been at the forefront of this debate. The IPCC has provided assessments of climate change that have helped shape international treaties, including the Kyoto Protocol. However, the Bush administration, acting at the request of ExxonMobil, the world's largest oil company, and attempting to cast doubts upon the scientific integrity of the IPCC, was behind the ouster in 2002 of IPCC chairperson Robert Watson, an atmospheric scientist who supported implementing actions against global warming.

The ability of trees and plants to fix carbon through the process of photosynthesis, a process called carbon or C sequestration, results in a large amount of carbon stored in biomass around the world. In the framework of the Kyoto Protocol, C sequestration to mitigate the greenhouse effect in the terrestrial ecosystem has been an important topic of discussion in numerous recent international meetings and reports. To increase C sequestration in soils in dryland and tropical areas, as a contribution to global reductions of atmospheric CO2, the United States has promoted new strategies and new practices in agriculture, pasture use, and forestry, including conservation agriculture and agroforestry. Such practices should be facilitated particularly by the application of article 3.4 of the Kyoto Protocol, covering the additional activities in agriculture and forestry in the developing countries, and by appropriate policies.

Into the future
In June 2002, the 15 member nations of the European Union formally ratified the Kyoto Protocol. The ratification by the 15 EU countries was a major step toward making the 1997 treaty effective. Soon after, Japan ratified the treaty, and Russia was expected to follow suit. To take effect, the Kyoto Protocol must be ratified by 55 countries, and these ratifications have to include industrialized nations responsible for at least 55% of the 1990 levels of greenhouse gases. As of 2002, over 70 countries had already ratified, exceeding the minimum number of countries needed. If Russia ratifies the treaty, nations responsible for over 55% of the 1990 levels of greenhouse gas pollution will have done so, and the Kyoto Protocol will take effect. Before the EU ratified the protocol, the vast majority of countries that had ratified were developing countries. With the withdrawal of the United States, responsible for 36.1% of greenhouse gas emissions in 1990, ratification by industrialized nations was crucial. For example, environmentalists hope that Canada will ratify the treaty, as it has already committed to compliance.
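The entry-into-force rule just described is a simple double threshold. The following minimal Python sketch expresses it; the function name and data layout are illustrative assumptions, while the two 55% thresholds and the 36.1% U.S. share come from the text above:

```python
# Kyoto Protocol entry-into-force test, as described above: at least 55
# ratifying countries, including industrialized (Annex I) nations that
# together accounted for at least 55% of 1990 greenhouse gas emissions.
def kyoto_enters_into_force(ratifiers):
    """ratifiers: list of (country, is_annex_i, percent_of_1990_emissions)."""
    enough_countries = len(ratifiers) >= 55
    annex_i_share = sum(pct for _, is_annex_i, pct in ratifiers if is_annex_i)
    return enough_countries and annex_i_share >= 55.0

# With the United States (36.1% of 1990 emissions) outside the treaty,
# the remaining industrialized ratifiers could only narrowly clear the
# 55% bar, which is why Russia's decision was pivotal.
```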
Although the Bush administration opposed the Kyoto Protocol, saying that its own plan of voluntary restrictions would work as well without the loss of billions of dollars and without driving millions of Americans out of work, the EPA, under its administrator Christine Todd Whitman, in 2002 sent a climate report to the United Nations detailing specific, far-reaching, and disastrous effects of global warming upon the American environment and its people. The EPA report also admitted that global warming is occurring because of man-made carbon dioxide and other greenhouse gases. However, it offered no major changes in administration policies, instead recommending adapting to the inevitable and catastrophic changes.

Although the United States was still resisting the Kyoto Protocol in mid-2002, and the treaty's implications for radical and effective action, various states and communities decided to go it alone. Massachusetts and New Hampshire enacted legislation to cut carbon emissions. California was considering legislation limiting emissions from cars and small trucks. Over 100 U.S. cities had already opted to cut carbon emissions. Even the U.S. business community, because of its many overseas operations, was beginning to voluntarily cut back on its greenhouse emissions. [Douglas Dupler]
RESOURCES BOOKS Brown, Paige. Climate, Biodiversity and Forests: Issues and Opportunities Emerging from the Kyoto Protocol. Washington, DC: World Resources Institute, 1998.
Gelbspan, Ross. The Heat Is On: The Climate Crisis, the Cover-up, the Prescription. New York: Perseus Books, 1998. McKibbin, Warwick J., and Peter Wilcoxen. Climate Change Policy After Kyoto: Blueprint for a Realistic Approach. Washington, DC: The Brookings Institution Press, 2002. Victor, David G. Collapse of the Kyoto Protocol and the Struggle to Slow Global Warming. Princeton: Princeton University Press, 2002.
PERIODICALS Benedick, Richard E. “Striking a New Deal on Climate Change.” Issues in Science and Technology, Fall 2001, 71. Gelbspan, Ross. “A Modest Proposal to Stop Global Warming.” Sierra, May/June 2001, 63. McKibben, Bill. “Climate Change 2001: Third Assessment Report.” New York Review of Books, July 5, 2001, 35. Rennie, John. “The Skeptical Environmentalist Replies.” Scientific American, May 2002, 14.
OTHER "Guide to the Kyoto Protocol." Greenpeace International Web Site. 1998 [cited July 2002]. Intergovernmental Panel on Climate Change Web Site. [cited July 2002]. "Kyoto Protocol." United Nations Framework Convention on Climate Change. 1997 [cited July 2002]. Union of Concerned Scientists Global Warming Web Site. [cited July 2002].
ORGANIZATIONS IPCC Secretariat, c/o World Meteorological Organization, 7bis Avenue de la Paix, C.P. 2300, CH-1211 Geneva, Switzerland, 41-22-730-8208, Fax: 41-22-730-8025, Email: [email protected]. UNIDO—Climate Change/Kyoto Protocol Activities, UNIDO New York Office, New York, NY USA 10017, (212) 963-6890, Email: [email protected].
L

La Niña
La Niña, Spanish for "the little girl," is also called a cold episode, "El Viejo" (The Old Man), or anti-El Niño. It is one of two major changes in Pacific Ocean surface temperature that affect global weather patterns. La Niña and El Niño ("the little boy") are the extreme phases of the El Niño/Southern Oscillation, a climate cycle that occurs naturally in the eastern tropical Pacific Ocean. The effects of both phases are usually strongest from December to March. In some ways, La Niña is the opposite of El Niño. For example, La Niña usually brings more rain to Australia and Indonesia, areas that are susceptible to drought during El Niño.

La Niña is characterized by unusually cold ocean surface temperatures in the equatorial region. Ordinarily, the sea surface temperature off the western coast of South America ranges from 60–70°F (15–21°C). According to the National Oceanic and Atmospheric Administration (NOAA), the temperature dropped by up to 7°F (4°C) below normal during the 1988–1989 La Niña. In the United States, La Niña usually brings cooler and wetter than normal conditions to the Pacific Northwest and warmer and drier conditions to the Southeast. In contrast, during El Niño, surface water temperatures in the tropical Pacific are unusually warm. Because water temperatures increase around Christmas, people in South America called the condition "El Niño" to honor the Christ Child.

The two weather phenomena are caused by the interaction of the ocean surface and the atmosphere in the tropical Pacific. Changes in the ocean affect the atmosphere and climate patterns around the world, with changes in the atmosphere in turn affecting the ocean temperature and currents. Before the onset of La Niña, there is usually a build-up of cooler than normal subsurface water in the tropical Pacific. The cold water is brought to the surface by atmospheric waves and ocean waves. Winds and currents push warm water towards Asia. In addition, the system can drive the polar jet stream (a stream of winds at high altitude) to the north; this affects weather in the United States.
The effects of La Niña and El Niño are generally seen in the United States during the winter. The two conditions usually occur every three to five years. However, the period between episodes may be from two to seven years. The conditions generally last from nine to 12 months, but episodes can last as long as two years. Since 1975, El Niños have occurred twice as frequently as La Niñas. While both conditions are cyclical, a La Niña episode does not always follow an El Niño episode. La Niñas in the twentieth century occurred in 1904, 1908, 1910, 1916, 1924, 1928, 1938, 1950, 1955, 1964, 1970, 1973, 1975, 1988, 1995, and 1998.

Effects of the 1998 La Niña included flooding in Mozambique in 2000 and a record warm winter in the United States. Nationwide temperatures averaged 38.4°F (3.6°C) from December 1999 through February 2000, according to the NOAA. In addition, that three-month period was the sixteenth driest winter in the 105 years that records have been kept by the National Climatic Data Center. The 1998 La Niña was diminishing by November 2000; another La Niña had not been projected as of May 2002.

Scientists from the NOAA and other agencies use various tools to monitor La Niña and El Niño. These tools include satellites and data buoys that are used to monitor sea surface temperatures. Tracking the two weather phenomena can help nations to prepare for potential disasters such as floods. In addition, knowledge of the systems can help businesses plan for the future. In a March 1999 article in Nation's Restaurant News, writer John T. Barone described the impact that La Niña could have on food and beverage prices. Barone projected that drought conditions in Brazil could bring an increase in the price of coffee. [Liz Swain]
RESOURCES BOOKS Caviedes, Cesar. El Niño in History: Storming Through the Ages. Gainesville, FL: University Press of Florida, 2001.
Glantz, Michael. Currents of Change: Impacts of El Niño and La Niña on Climate and Society. New York: Cambridge University Press, 2001.
PERIODICALS Barone, John T. "La Niña to Put a Chill in Prices this Winter." Nation's Restaurant News (December 6, 1999): 78. Le Comte, Douglas. "Weather Around the World." Weatherwise (March 2001): 23.
ORGANIZATIONS National Oceanic and Atmospheric Administration, 14th Street & Constitution Avenue, NW, Room 6013, Washington, DC USA 20230, (202) 482-6090, Fax: (202) 482-3154, Email: [email protected].
La Paz Agreement
The 1983 La Paz Agreement between the United States and Mexico is a pact to protect, conserve, and improve the environment of the border region of both countries. The agreement defined the region as the area extending 62 mi (100 km) to the north and south of the international border. This area includes maritime (sea) boundaries and land in four American states and six Mexican border states. Representatives from the two countries signed the agreement on Aug. 14, 1983, in La Paz, Mexico.

The agreement took effect on Feb. 16, 1984. It established six workgroups, each concentrating on an environmental concern. Representatives from both countries serve on the workgroups, which focus on water, air, hazardous and solid waste, pollution prevention, contingency planning and emergency response, and cooperative enforcement and compliance. In February 1992, environmental officials from the two countries released the Integrated Environmental Plan for the Mexican-U.S. Border Area. The Border XXI Program created nine additional workgroups. These groups focus on environmental information resources, natural resources, and environmental health. Border XXI involves federal, state, and local governments on both sides of the border. Residents also participate through activities such as public hearings. [Liz Swain]
Lagoon
A lagoon is a shallow body of water separated from a larger, open body of water. It is typically associated with the ocean, such as coastal lagoons and coral reef lagoons. Lagoon also can be used to describe shallow areas of liquid waste material, as in sewage lagoons. Oceanic lagoons can be formed in several ways. Coastal lagoons are typically found along coastlines where there are sand bars or barrier islands that separate the open ocean from the near-shore body of water. Coral reef lagoons can form in two ways. The first type is found in barrier reefs such as those in Australia and Belize, where there is a body of water (lagoon) which is separated from the open ocean by the reef formed many miles off shore. Another type of lagoon is that formed in the center of atolls, which are circular or horseshoe-shaped bodies of water in the middle of partially sunken volcanic islands with coral reefs growing around their periphery. Some of these atoll lagoons are more than 30 mi (50 km) across and have breathtaking visibility, thus providing superb sites for SCUBA diving.
Lake Baikal
The Great Lakes are a prominent feature of the North American landscape, but Russia holds the distinction of having the "World's Great Lake." Called the "Pearl of Siberia" or the "Sacred Sea" by locals, Lake Baikal is the world's deepest lake and its largest by volume. It has a surface area of 12,162 sq miles (31,500 sq km), a maximum depth of 5,370 ft (1,637 m), or slightly more than one mile, an average depth of 2,428 ft (740 m), and a volume of about 5,500 cu mi (23,000 cu km). It thus contains more water than the combined volume of all of the North American Great Lakes—20 percent of the world's fresh water (and 80 percent of the fresh water of the former Soviet Union).

Lake Baikal is located in Russia, in south-central Siberia near the northern border of Mongolia. Scientists estimate that the lake was formed 25 million years ago by tectonic (earthquake) displacement, creating a crescent-shaped, steep-walled basin 395 miles (635 km) long by 50 miles (80 km) wide and nearly 5.6 miles (9 km) deep. In contrast, the Great Lakes were formed by glacial scouring a mere 10,000 years ago. Although sedimentation has filled in 80 percent of the basin over the years, the lake is believed to be widening and deepening ever so slightly with time because of recurring crustal movements. The area surrounding Lake Baikal is underlain by at least three crustal plates, causing frequent earthquakes. Fortunately, most are too weak to feel.

Like the similarly ancient Lake Tanganyika in Africa, the waters of Lake Baikal host a great number of unique species. Of the 1,200 known animal and 600 known plant species, more than 80 percent are endemic to this lake. These include many species of fish and shrimp, unique freshwater sponges, and the world's only exclusively freshwater seals. Called nerpa or nerpy by the natives, these seals (Phoca sibirica) are silvery-gray in color and can grow to 5 ft (1.5 m) long and weigh up to 286 lb (130 kg). Their diet consists almost exclusively of a strange-looking relict fish called golomyanka (Comephorus baicalensis), rendered
translucent by its fat-filled body. Unlike other fish, golomyanka lack scales and swim bladders and give birth to live larvae rather than laying eggs. The seal population is estimated at 60,000. Commercial hunters are permitted to kill 6,000 each year.

Although the waters of Lake Baikal are pristine by the standards of other large lakes, increased pollution threatens its future. Towns along its shores and along the stretches of the Selenga River, the major tributary flowing into Baikal, add human and industrial wastes, some of which are nonbiodegradable and some highly toxic. A hydroelectric dam on the Angara River, the lake's only outlet, raised the water level and placed the spawning areas of some fish below the optimum depth. Most controversial to the people who depend on this lake for their livelihood and pleasure, however, was the construction of a large cellulose plant at the southern end, near the city of Baikalsk, in 1957. Built originally to manufacture high-quality aircraft tires (ironically, synthetic tires proved superior), today it produces clothing from bleached cellulose and employs 3,500 people. Uncharacteristic public outcry over the years has resulted in the addition
of advanced sewage treatment facilities to the plant. Although some people would like to see it shut down, the local (and national) economy has taken precedence. In 1987 the Soviet government passed legislation protecting Lake Baikal from further destruction. Logging was prohibited anywhere close to the shoreline and nature reserves and national parks were designated. However, with the recent political turmoil and crippling financial situation in the former Soviet Union, these changes have not been enforced and the lake continues to receive pollutants. Much more needs to be done to assure the future of this magnificent lake. See also Endemic species [John Korstad]
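The volume figures cited above can be checked with a quick unit conversion. A minimal Python sketch follows; the combined Great Lakes volume used here is an approximate published figure, assumed for illustration rather than taken from this entry:

```python
# Convert Lake Baikal's volume to cubic miles and compare it with the
# combined volume of the North American Great Lakes.
KM3_TO_MI3 = 0.621371 ** 3   # one cubic kilometer in cubic miles (~0.2399)

baikal_km3 = 23_000          # volume cited in this entry
great_lakes_km3 = 22_700     # assumed approximate combined Great Lakes volume

print(f"Lake Baikal: about {baikal_km3 * KM3_TO_MI3:,.0f} cubic miles")
print(f"Great Lakes combined: about {great_lakes_km3 * KM3_TO_MI3:,.0f} cubic miles")
print("Baikal holds more water:", baikal_km3 > great_lakes_km3)
```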
RESOURCES BOOKS Feshbach, M., and A. Friendly, Jr. Ecocide in the USSR. New York: Basic Books, 1992. Matthiessen, P. Baikal: Sacred Sea of Siberia. San Francisco: Sierra Club Books, 1992.
PERIODICALS Belt, D. “Russia’s Lake Baikal, the World’s Great Lake.” National Geographic 181 (June 1992): 2–39.
Lake Erie
Lake Erie is the most productive of the Great Lakes. Located along the southern fringe of the Precambrian Shield of North America, Lake Erie has been ecologically degraded by a variety of anthropogenic stressors, including nutrient loading; extensive deforestation of its watershed, which caused severe siltation and other effects; vigorous commercial fishing; and pollution by toxic chemicals.

The watershed of Lake Erie is much more agricultural and urban in character than are those of the other Great Lakes. Consequently, the dominant sources of phosphorus (the most important nutrient causing eutrophication) to Lake Erie are agricultural runoff and municipal point sources. The total input of phosphorus to Lake Erie (standardized to watershed area) is about 1.3 times larger than to Lake Ontario and more than five times larger than to the other Great Lakes. Because of its large loading rates and concentrations of nutrients, Lake Erie is more productive and has a larger standing crop of phytoplankton, fish, and other biota than the other Great Lakes. During the late 1960s and early 1970s, the eutrophic western basin of Lake Erie had a summer chlorophyll concentration averaging twice as large as that in Lake Ontario and 11 times larger than that in oligotrophic Lake Superior. However, since that time the eutrophication of Lake Erie has been alleviated somewhat, in direct response to decreased phosphorus inputs from sewage and detergents.

A consequence of the eutrophic state of Lake Erie was the development of anoxia (lack of oxygen) in its deeper waters during summer stratification. In the summer of 1953, this condition caused a collapse of the population of benthic mayfly larvae (Hexagenia spp.), a phenomenon that was interpreted in the popular press as the "death" of Lake Erie.

Large changes have also taken place in the fish community of Lake Erie, mostly because of its fishery, the damming of streams required for spawning by anadromous fishes (fish that ascend rivers or streams to spawn), and the sedimentation of shallow-water habitat by silt eroded from deforested parts of the watershed. Lake Erie has always had the most productive fishery on the Great Lakes, with fish landings that typically exceed the combined totals of all the other Great Lakes. The peak years of the commercial fishery in Lake Erie were 1935 and 1956 (62 million lb/28 million kg), while the minima were in 1929 and 1941 (24 million lb/11 million kg). Overall, the total catch by the commercial fishery has been remarkably stable over time, despite large changes
in species, effort, eutrophication, toxic pollution, and other changes in habitat. The historical pattern of development of the Lake Erie fishery was characterized by an initial exploitation of the most desirable and valuable species. As the populations of these species collapsed because of unsustainable fishing pressure, coupled with habitat deterioration, the fishery diverted to a progression of less-desirable species. The initial fishery focused on lake whitefish (Coregonus clupeaformis), lake trout (Salvelinus namaycush), and lake herring (Leucichthys artedi), all of which rapidly declined to scarcity or extinction. The next target was "second-choice" species, such as blue pike (Stizostedion vitreum glaucum) and walleye (S. v. vitreum), which are now extinct or rare. Today's fishery is dominated by species of much smaller economic value, such as yellow perch (Perca flavescens), rainbow smelt (Osmerus mordax), and carp (Cyprinus carpio).

In 1989 an invasive species—the zebra mussel (Dreissena polymorpha)—reached Lake Erie and began to have a significant ecological impact on the lake. Zebra mussels are filter feeders, and each adult mussel can filter a liter of water per day, removing every microscopic plant (phytoplankton or algae) and animal (zooplankton) in the process. Zebra mussel densities in Lake Erie have reached such a level that the entire volume of the lake's western basin is filtered each week. This has increased water clarity up to 600 percent and reduced some forms of phytoplankton in the lake's food web by as much as 80 percent. In addition, the increased clarity of the water allows light to penetrate deeper into the water, thus facilitating the growth of rooted aquatic plants and increasing populations of some bottom-dwelling algae and tiny animals. Zebra mussels also concentrate 10 times more toxins than do native mussels, and these contaminants are passed up the food chain to the fish and birds that eat zebra mussels. Since bioaccumulation of toxins has already led to advisories against eating some species of Great Lakes fish, the contribution of zebra mussels to contaminant cycling in lake species is a serious concern. See also Cultural eutrophication; Water pollution [Bill Freedman Ph.D.]
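The filtering claim above implies a straightforward back-of-envelope calculation. A minimal Python sketch follows; the basin volume is a rough assumed figure, since the entry gives only the per-mussel filtering rate and the one-week result:

```python
# Rough check of the zebra mussel filtering claim for Lake Erie's
# shallow western basin. The basin volume is an assumed round figure.
basin_volume_liters = 2.5e13     # assumed: roughly 25 cubic kilometers
liters_per_mussel_per_day = 1.0  # filtering rate stated in this entry
days_per_week = 7

# Mussel population required to filter the basin's volume once per week:
mussels_needed = basin_volume_liters / (liters_per_mussel_per_day * days_per_week)
print(f"about {mussels_needed:.1e} adult mussels")  # on the order of trillions
```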
RESOURCES BOOKS Ashworth, W. The Late, Great Lakes: An Environmental History. New York: Knopf, 1986. Freedman, B. Environmental Ecology. 2nd ed. San Diego: Academic Press, 1995.
PERIODICALS Regier, H. A., and W. L. Hartman. “Lake Erie’s Fish Community: 150 Years of Cultural Stresses.” Science 180 (1973): 1248–55.
OTHER "Zebra Mussels and Other Nonindigenous Species." Sea Grant Great Lakes Network. August 15, 2001 [cited June 19, 2002].
Lake Tahoe
A beautiful lake 6,200 ft (1,891 m) high in the Sierra Nevada, straddling the California-Nevada state line, Lake Tahoe is a jewel to both nature-lovers and developers. It is the tenth deepest lake in the world, with a maximum depth of 1,600 ft (488 m) and a total volume of 37 trillion gallons. At the south end of the lake sits a dam that controls up to six feet of Lake Tahoe's depth, supplying flow to its outlet, the Truckee River. The U.S. Bureau of Reclamation controls water diversion into the Truckee, which is used for irrigation, power, and recreational purposes throughout Nevada. Tahoe and Crater Lake are the only two large alpine lakes remaining in the United States.

Visitors have expressed their awe of the lake's beauty since it was discovered by General John Frémont in 1844. Mark Twain wrote that it was "the fairest sight the whole Earth affords." The arrival of Europeans in the Tahoe area was quickly followed by environmental devastation. Between 1870 and 1900, forests around the lake were heavily logged to provide timber for the mine shafts of the Comstock Lode. While this logging dramatically altered the area's appearance for years, the natural environment eventually recovered, and no long-term logging-related damage to the lake can now be detected.

The same cannot be said for a later assault on the lake's environment. Shortly after World War II, people began moving into the area to take advantage of the region's natural wonders—the lake itself and superb snow skiing—as well as the young casino business on the Nevada side of the lake. The 1960 Winter Olympics, held at Squaw Valley, placed Tahoe's recreational assets in the international spotlight. Lakeside population grew from about 20,000 in 1960 to more than 65,000 today, with an estimated tourist population of 22 million annually.

The impact of this rapid population growth soon became apparent in the lake itself. Early records showed that the lake was once clear enough to allow visibility to a depth of about 130 ft (40 m). By the late 1960s, that figure had dropped to about 100 ft (30 m). Tahoe is now undergoing eutrophication at a fairly rapid rate. Algal growth is being encouraged by sewage and fertilizer produced by human activities. Much of the area's natural pollution controls, such as trees and plants, have been removed to make room for residential and commercial development. The lack of significant flow into and out of
the lake also contributes to a favorable environment for algal growth.

Efforts to protect the pristine beauty of Lake Tahoe go back at least to 1912. Three efforts were made during that decade to have the lake declared a national park, but all failed. By 1958, concerned conservationists had formed the Lake Tahoe Area Council to "promote the preservation and long-range development of the Lake Tahoe basin." The Council was followed by other organizations with similar objectives, the League to Save Lake Tahoe among them.

An important step in resolving the conflict between preservationists and developers occurred in 1969 with the creation of the Tahoe Regional Planning Agency (TRPA). The agency was the first and only land use commission with authority in more than one state. It consisted of fourteen members, seven appointed by each of the governors of the two states involved, California and Nevada. For more than a decade, the agency attempted to write a land-use plan that would be acceptable to both sides of the dispute. The conflict became more complex when the California Attorney General, John Van de Kamp, filed suit in 1985 to prevent TRPA from granting any further permits for development. Developers were outraged but lost all of their court appeals.

By 2000, the strain of tourism, development, and nonpoint automobile pollution was having a visible impact on Lake Tahoe's legendary deep blue surface. A study released by the University of California-Davis and the University of Nevada-Reno reported that visibility in the lake had decreased to 70 ft (21 m), an average decline of a foot a year since the 1960s. As part of a renewed effort to reverse Tahoe's environmental decline, President Clinton signed the Lake Tahoe Restoration Act into law in late 2000, authorizing $300 million toward the restoration of water quality in Lake Tahoe over a period of 10 years. See also Algal bloom; Cultural eutrophication; Environmental degradation; Fish kills; Sierra Club; Water pollution [David E. Newton and Paula Anne Ford-Martin]
RESOURCES BOOKS Strong, Douglas. Tahoe: From Timber Barons to Ecologists. Lincoln, NE: Bison Books, 1999.
OTHER United States Department of Agriculture (USDA) Forest Service. Lake Tahoe Basin Management Unit. [cited July 8, 2002]. University of California-Davis. Tahoe Research Group. [cited July 8, 2002]. United States Geological Survey (USGS). Lake Tahoe Data Clearinghouse. [cited July 8, 2002].
Lake Washington
One of the great messages to come out of the environmental movement of the 1960s and 1970s is that, while humans can cause pollution, they can also clean it up. Few success stories illustrate this point as clearly as that of Lake Washington.

Lake Washington lies near the state of Washington's west coast, bordering the city of Seattle. It is 24 miles (39 km) from north to south, and its width varies from 2–4 miles (3–6 km). For the first half of the twentieth century, Lake Washington was clear and pristine, a beautiful example of the Northwest's spectacular natural scenery. Its shores were occupied by extensive wooded areas and a few small towns with populations of no more than 10,000. The lake's purity was not threatened by Seattle, which dumped most of its wastes into Elliot Bay, an arm of Puget Sound.

This situation changed rapidly during and after World War II. In 1940, the spectacular Lake Washington Bridge was built across the lake, joining its two facing shores with each other and with Seattle. Population along the lake began to boom, reaching more than 50,000 by 1950. The consequences of these changes for the lake are easy to imagine. Many of the growing communities dumped their raw sewage directly into the lake or, at best, passed their wastes through only preliminary treatment stages. By one estimate, 20 million gallons (76 million liters) of wastes were being dumped into the lake each day. On average these wastes still contained about half of their pollutants when they reached the lake.

In less than a decade, the effects of these practices on lake water quality were easy to observe. Water clarity was reduced from at least 15 ft (4.6 m) to 2.5 ft (0.8 m), and levels of dissolved oxygen were so low that some species of fish disappeared. In 1956, W. T. Edmondson, a zoologist and pollution authority, and two colleagues reported their studies of the lake. They found that eutrophication of the lake was taking place very rapidly as a result of the dumping of domestic wastes into its water.

Solving this problem was especially difficult because water pollution is a regional issue over which each individual community had relatively little control. The solution appeared to be the creation of a new governmental body that would encompass all of the Lake Washington communities, including Seattle. In 1958, a ballot measure establishing such an agency, known as Metro, was passed in Seattle but defeated in its suburbs. Six months later, the Metro concept was redefined to include the issue of sewage disposal only. This time it passed in all communities.

Metro's approach to the Lake Washington problem was to construct a network of sewer lines and sewage treatment plants that directed all sewage away from the lake and delivered it instead to Puget Sound. The lake's
pollution problems were solved within a few years. By 1975 the lake was back to normal: water clarity had returned to 15 ft (4.6 m), and levels of phosphorus and nitrogen in the lake had decreased by more than 60 percent. Lake Washington's biological oxygen demand (BOD), a critical measure of water purity, decreased by 90 percent, and fish species that had disappeared were once again found in the lake. See also Aquatic chemistry; Cultural eutrophication; Waste management; Water quality standards [David E. Newton]
RESOURCES BOOKS Edmondson, W. T. "Lake Washington." In Environmental Quality and Water Development, edited by C. R. Goldman, et al. San Francisco: W. H. Freeman, 1973. ———. The Uses of Ecology: Lake Washington and Beyond. Seattle: University of Washington Press, 1991.
OTHER Li, Kevin. "The Lake Washington Story." King County Web Site. May 2, 2001 [cited June 19, 2002].
Lakes see Experimental Lakes Area; Lake Baikal; Lake Erie; Lake Tahoe; Lake Washington; Mono Lake; National lakeshore
Land degradation see Desertification
Land ethic
Land ethic refers to an approach to issues of land use that emphasizes conservation and respect for our natural environment. Rejecting the belief that all natural resources should be available for unchecked human exploitation, a land ethic advocates land use without undue disturbance of the complex, delicately balanced ecological systems of which humans are a part. Land ethic, environmental ethics, and ecological ethics are sometimes used interchangeably.

Discussions of land ethic, especially in the United States, usually begin with a reference of some kind to Aldo Leopold. Many participants in the debate over land and resource use admire Leopold's prescient and pioneering quest and date the beginnings of a land ethic to his A Sand County Almanac, published in 1949. However, Leopold's earliest
Environmental Encyclopedia 3 formulation of his position may be found in “A Conservation Ethic,” a benchmark essay on ethics published in 1933. Even recognizing Leopold’s remarkable early contribution, it is still necessary to place his pioneer work in a larger context. Land ethic is not a radically new invention of the twentieth century but has many ancient and modern antecedents in the Western philosophical tradition. The Greek philosopher Plato, for example, wrote that morality is “the effective harmony of the whole"—not a bad statement of an ecological ethic. Reckless exploitation has at times been justified as enjoying divine sanction in the Judeo-Christian tradition (man was made master of the creation, authorized to do with it as he saw fit). However, most Christian thought through the ages has interpreted the proper human role as one of careful husbandry of resources that do not, in fact, belong to humans. In the nineteenth century, the Huxleys, Thomas and Julian, worked on relating evolution and ethics. The mathematician and philosopher Bertrand Russell wrote that “man is not a solitary animal, and so long as social life survives, self-realization cannot be the supreme principle of ethics.” Albert Schweitzer became famous—at about the same time that Leopold formulated a land ethic—for teaching reverence for life, and not just human life. Many nonwestern traditions also emphasize harmony and a respect for all living things. Such a context implies that a land ethic cannot easily be separated from age-old thinking on ethics in general. See also Land stewardship [Gerald L. Young and Marijke Rijsberman]
RESOURCES BOOKS Bormann, F. H., and S. R. Kellert, eds. Ecology, Economics, Ethics: The Broken Circle. New Haven, CT: Yale University Press, 1991. Kealey, D. A. Revisioning Environmental Ethics. Albany: State University of New York Press, 1989. Leopold, A. A Sand County Almanac. New York: Oxford University Press, 1949. Nash, R. F. The Rights of Nature: A History of Environmental Ethics. Madison: University of Wisconsin Press, 1989. Rolston, H. Environmental Ethics. Philadelphia: Temple University Press, 1988. Turner, F. “A New Ecological Ethics.” In Rebirth of Value. Albany: State University of New York Press, 1991.
OTHER Callicott, J. Baird. "The Land Ethic: Key Philosophical and Scientific Challenges." October 15, 1998 [June 19, 2002].
Land Institute Founded in 1976 by Wes and Dana Jackson, the Land Institute is both an independent agricultural research station
and a school devoted to exploring and developing alternative agricultural practices. Located on the Smoky Hill River near Salina, Kansas, the Institute attempts—in Wes Jackson's words—to "make nature the measure" of human activities so that humans "meet the expectations of the land," rather than abusing the land for human needs. This requires a radical rethinking of traditional and modern farming methods. The aim of the Land Institute is to find "new roots for agriculture" by reexamining its traditional assumptions. In traditional tillage farming, furrows are dug into the topsoil and seeds planted. This leaves precious topsoil exposed to erosion by wind and water. Topsoil loss can be minimized but not eliminated by contour plowing, the use of windbreaks, and other means. Although critical of traditional tillage agriculture, Jackson is even more critical of the methods and machinery of modern industrial agriculture, which in effect trades topsoil for high crop yields (roughly one bushel of topsoil is lost for every bushel of corn harvested). It also relies on plant monocultures—genetically uniform strains of corn, wheat, soybeans, and other crops. These crops are especially susceptible to disease and insect infestations and require extensive use of pesticides and herbicides which, in turn, kill useful creatures (for example, worms and birds), pollute streams and groundwater, and produce other destructive side effects. Although spectacularly successful in the short run, such an agriculture is both nonsustainable and self-defeating. Its supposed strengths—its productivity, its efficiency, its economies of scale—are also its weaknesses. Short-term gains in production do not, Jackson argues, justify the longer-term depletion of topsoil, the diminution of genetic diversity, and such social side effects as the disappearance of small family farms and the abandonment of rural communities. If these trends are to be questioned—much less slowed or reversed—a practical, productive, and feasible alternative agriculture must be developed. To develop such a workable alternative is the aim of the Land Institute. The Jacksons and their associates are attempting to devise an alternative vision of agricultural possibilities. This begins with the important but oft-neglected truism that agriculture is not self-contained but is intertwined with and dependent on nature. The Institute explores the feasibility of alternative farming methods that might minimize or even eliminate the planting and harvesting of annual crops, turning instead to "herbaceous perennial seed-producing polycultures" that protect and bind topsoil. Food grains would be grown in pasturelike fields and intermingled with other plants that would replenish lost nitrogen and other nutrients, without relying on chemical fertilizers. Covered by a rooted living net of diverse plant life, the soil would at no time be exposed to erosion and would be aerated and rejuvenated by natural
means. And the farmer, in symbiotic partnership, would take nature as the measure of his methods and results. The experiments at the Land Institute are intended to make this vision into a workable reality. It is as yet too early to tell exactly what these continuing experiments might yield. But the re-visioning of agriculture has already begun and continues at the Land Institute. [Terence Ball]
RESOURCES ORGANIZATIONS The Land Institute, 2440 E. Water Well Road, Salina, KS USA 67401, (785) 823-5376, Fax: (785) 823-8728, Email: thelandweb@landinstitute.org
Land reform Land reform is a social and political restructuring of agricultural systems through redistribution of land. Successful land reform policies take into account the political, social, and economic structure of the area. In agrarian societies, large landowners typically control the wealth and the distribution of food. Land reform policies in such societies allocate land to small landowners, to farm workers who own no land, to collective farm operations, or to state farm organizations. The exact nature of the allocation depends on the motivation of those initiating the changes. In areas where absentee ownership of farmland is common, land reform has become a popular method for returning the land to local ownership. Land reforms generally favor the family-farm concept, rather than absentee landholding. Land reform is often undertaken as a means of achieving greater social equality, but it can also increase agricultural productivity and benefit the environment. A tenant farmer may have a more emotional and protective relation to the land he works, and he may be more likely to make agricultural decisions that benefit the ecosystem. Such a farmer might, for instance, opt for natural pest control. An absentee owner often does not have the same interest in land stewardship. Land reform does have negative connotations and is often associated with the state collective farms under communism. Most proponents of land reform, however, do not consider these collective farms good examples, and they argue that successful land reform balances the factors of production so that the full agricultural capabilities of the land can be realized. Reforms should always be designed to increase the efficiency and economic viability of farming. Land reform is usually more successful if it is enacted with agrarian reforms, which may include the use of agricultural extension agents, agricultural cooperatives, favorable labor legislation, and increased public services for farmers, such as health care and education. Without these measures, land reform usually falls short of redistributing wealth and power, or fails to maintain or increase production. See also Agricultural pollution; Sustainable agriculture; Sustainable development [Linda Rehkopf]
RESOURCES BOOKS Mengisteab, K. Ethiopia: Failure of Land Reform and Agricultural Crisis. Westport, CT: Greenwood Publishing Group, 1990.
PERIODICALS Perney, L. "Unquiet on the Brazilian Front." Audubon 94 (January-February 1992): 26–9.
Land stewardship Little has been written explicitly on the subject of land stewardship. Much of the literature that does exist is limited to a biblical or theological treatment of stewardship. However, literature on the related ideas of sustainability and the land ethic has expanded dramatically in recent years, and these concepts are at the heart of land stewardship. Webster’s and the Oxford English Dictionary both define a “steward” as an official in charge of a household, church, estate, or governmental unit, or one who makes social arrangements for various kinds of events; a manager or administrator. Similarly, stewardship is defined as doing the job of a steward or, in ecclesiastical terms, as “the responsible use of resources,” meaning especially money, time and talents, “in the service of God.” Intrinsic in those restricted definitions is the idea of responsible caretakers, of persons who take good care of the resources in their charge, including natural resources. “Caretaking” universally includes caring for the material resources on which people depend, and by extension, the land or environment from which those resources are extracted. Any concept of steward or stewardship must include the notion of ensuring the essentials of life, all of which derive from the land. While there are few works written specifically on land stewardship, the concept is embedded implicitly and explicitly in the writings of many articulate environmentalists. For example, Wendell Berry, a poet and essayist, is one of the foremost contemporary spokespersons for stewardship of the land. In his books, Farming: A Handbook (1970), The Unsettling of America (1977), The Gift of Good Land (1981), and Home Economics (1987), Berry shares his wisdom on caring for the land and the necessity of stewardship. He finds a mandate for good stewardship in religious traditions, includ-
ing Judaism and Christianity: "The divine mandate to use the world justly and charitably, then, defines every person's moral predicament as that of a steward." Berry, however, does not leave stewardship to divine intervention. He describes stewardship as "hopeless and meaningless unless it involves long-term courage, perseverance, devotion, and skill" on the part of individuals, and not just farmers. He suggests that when we lost the skill to use the land properly, we lost stewardship. However, Berry does not limit his notion of stewardship to a biblical or religious one. He lays down seven rules of land stewardship—rules of "living right." These are:
• using the land will lead to ruin of the land unless it "is properly cared for";
• if people do not know the land intimately, they cannot care for it properly;
• motivation to care for the land cannot be provided by "general principles or by incentives that are merely economic";
• motivation to care for the land, to live with it, stems from an interest in that land that "is direct, dependable, and permanent";
• motivation to care for the land stems from an expectation that people will spend their entire lives on the land, and even more so if they expect their children and grandchildren to also spend their entire lives on that same land;
• the ability to live carefully on the land is limited; owning too much acreage, for example, decreases the quality of attention needed to care for the land;
• a nation will destroy its land and therefore itself if it does not foster rural households and communities that maintain people on the land as outlined in the first six rules.
Stewardship implies, at the very least then, an attempt to reconnect to a piece of land. Reconnecting means getting to know that land as intimately as possible. This does not necessarily imply ownership, although enlightened ownership is at the heart of land stewardship. People who own land have some control of it, and effective stewardship requires control, if only in the sense of enough power to prevent abuse. But ownership obviously does not guarantee stewardship—great and widespread abuses of land are perpetrated by owners. Absentee ownership, for example, often means a lack of connection, a lack of knowledge, and a lack of caring. And public ownership too often means non-ownership, leading to the "Tragedy of the Commons." Land ownership patterns are critical to stewardship, but no one type of ownership guarantees good stewardship. Berry argues that true land stewardship usually begins with one small piece of land, used or controlled or owned by an individual who lives on that land. Stewardship, however, extends beyond any one particular piece of land. It implies
knowledge of, and caring for, the entire system of which that land is a part, a knowledge of a land's context as well as its content. It also requires understanding the connections between landowners or land users and the larger communities of which they are a part. This means that stewardship depends on interconnected systems of ecology and economics, of politics and science, of sociology and planning. The web of life that exists interdependent with a piece of land mandates attention to a complex matrix of connections. Stewardship means keeping the web intact and functional, or at least doing so on enough land over a long-enough period of time to sustain the populations dependent on that land. Berry and many other critics of contemporary land-use patterns and policies claim that little attention is being paid to maintaining the complex communities on which sustenance, human and otherwise, depends. Until holistic, ecological knowledge becomes more of a basis for economic and political decision-making, they assert, stewardship of the critical land-base will not become the norm. See also Environmental ethics; Holistic approach; Land use; Sustainable agriculture; Sustainable biosphere; Sustainable development [Gerald L. Young Ph.D.]
RESOURCES BOOKS Byron, W. J. Toward Stewardship: An Interim Ethic of Poverty, Power and Pollution. New York: Paulist Press, 1975. de Jouvenel, B. “The Stewardship of the Earth.” In The Fitness of Man’s Environment. New York: Harper & Row, 1968. Knight, Richard L., and Peter B. Landres, eds. Stewardship Across Boundaries. Washington, DC: Island Press, 1998. Paddock, J., N. Paddock, and C. Bly. Soil and Survival: Land Stewardship and the Future of American Agriculture. San Francisco: Sierra Club Books, 1986.
Land Stewardship Project The Land Stewardship Project (LSP) is a nonprofit organization based in Minnesota and committed to promoting an ethic of environmental and agricultural stewardship. The group believes that the natural environment is not an exploitable resource but a gift given to each generation for safekeeping. To preserve and pass on this gift to future generations, for the LSP, is both a moral and a practical imperative. Founded in 1982, the LSP is an alliance of farmers and city-dwellers dedicated both to preserving the small family farm and practicing sustainable agriculture. Like Wendell Berry and Wes Jackson (with whom they are affiliated), the LSP is critical of conventional agricultural 811
practices that emphasize plant monocultures, large acreage, intensive tillage, extensive use of herbicides and pesticides, and the economies of scale that these practices make possible. The group believes that agriculture conducted on such an industrial scale is bound to be destructive not only of the natural environment but of family farms and rural communities as well. The LSP accordingly advocates the sort of smaller scale agriculture that, in Berry’s words, “depletes neither soil, nor people, nor communities.” The LSP sponsors legislative initiatives to save farmland and wildlife habitat, to limit urban sprawl and protect family farms, and to promote sustainable agricultural practices. It supports educational and outreach programs to inform farmers, consumers, and citizens about agricultural and environmental issues. The LSP also publishes a quarterly Land Stewardship Letter and distributes video tapes about sustainable agriculture and other environmental concerns. [Terence Ball]
RESOURCES ORGANIZATIONS The Land Stewardship Project, 2200 4th Street, White Bear Lake, MN USA 55110, (651) 653-0618, Fax: (651) 653-0589, Email: [email protected]
Land trusts A land trust is a private, legally incorporated, nonprofit organization that works with property owners to protect open land through direct, voluntary land transactions. Land trusts come in many varieties, but their intent is consistent. Land trusts are developed for the purpose of holding land against a development plan until the public interest can be ascertained and served. Some land trusts hold land open until public entities can purchase it. Some land trusts purchase land and manage it for the common good. In some cases land trusts buy development rights to preserve the land area for future generations while leaving the current use in the hands of private interests with written documentation as to how the land can be used. This same technique can be used to adjust land use so that some part of a parcel is preserved while another part of the same parcel can be developed, all based on land sensitivity. There is a hierarchy of land trusts. Some trusts protect areas as small as neighborhoods, forming to address one land use issue after which they disband. More often, land trusts are local in nature but have a global perspective with regard to their goals for future land protection. The big national trusts are names that we all recognize such as the Conservation Fund, The Nature Conservancy, the Ameri812
can Farmland Trust, and the Trust for Public Land. The Land Trust Alliance coordinates the activities of many land trusts. Currently, there are over 1,200 local and regional land trusts in the United States. Some of these trusts form as a direct response to citizen concerns about the loss of open space. Most land trusts evolve out of citizens' concerns over the future of their state, town, and neighborhood. Many are preceded by failures on the part of local governments to respond to stewardship mandates by the voters. Land trusts work because they are built by concerned citizens and funded by private donations with the express purpose of securing the sustainability of an acceptable quality of life. Also, land trusts are effective because they purchase land (or development rights) from local people for local needs. Transactions are often carried out over a kitchen table with neighbors discussing priorities. In some cases the trust's board of directors might be engaged in helping a citizen to draw up a will leaving farmland or potential recreation land to the community. This home rule concept is the backbone of the land trust movement. Additionally, land trusts gain strength from public/private partnerships that emerge as a result of shared objectives with governmental agencies. If the work of the land trust is successful, part of the outcome is an enhanced ability to cooperate with local government agencies. Agencies learn to trust the land trust staff and begin to rely on the special expertise that grows within a land trust organization. In some cases the land trust gains both opportunities and resources as a result of its partnership with governmental agencies. This public/private partnership benefits citizens as projects come together and land use options are retained for current and future generations. Flexibility is an essential quality of a land trust that enables it to be creative. Land trusts can have revolving accounts, or lines of credit, from banks that allow them to move quickly to acquire land. Compensation to landowners who agree to work with the trust may come in the form of extended land use for the ex-owner, land trades, tax compensation, and other compensation packages. Often some mix of protection and compensation packages will be created that a governmental agency simply does not have the ability to implement. A land trust's flexibility is its most important attribute. Where a land trust can negotiate land acquisition based on a discussion among the board members, a governmental agency would go through months or even years of red tape before an offer to buy land for the public domain could be made. This quality of land trusts is one reason why many governmental agencies have built relationships with land trusts in order to protect land that the agency deems sensitive and important. There are some limiting factors constraining what land trusts can do. For the more localized trusts, limited volunteer staff and extremely limited budgets cause fund raising to
become a time-consuming activity. Staff turnover can be frequent, so that a knowledge base is difficult to maintain. In some circumstances influential volunteers can capture a land trust organization and follow their own agenda rather than letting the agenda be set by affected stakeholders. Training is needed for those committed to working within the legal structure of land trusts. The national Trust for Public Land has established training opportunities to better prepare local land trust staff for the complex negotiations that are needed to protect public lands. Staff who work with local citizenry to protect local needs must be aware of the costs and benefits of land preservation mechanisms. Lease-purchase agreements, limited partnerships, and fee-simple transactions all require knowledge of real estate law. Operating within enterprise zones and working with economic development corporations requires knowledge of state and federal programs that provide money for projects on the urban fringe. In some cases urban renewal work reveals open space within the urban core that can be preserved for community gardens or parks if that land can be secured using HUD funds or other government financing mechanisms. A relatively new source of funding for land acquisition is mitigation funds. These funds are usually generated as a result of settlements with industry or governmental agencies as compensation for negative land impacts. Distinguishing among financing mechanisms requires specialized knowledge that land trust staff need to have available within their ranks in order to move quickly to preserve open space and enhance the quality of life for urban dwellers. On the other hand, some land trusts in rural areas are interested in conserving farmlands using preserves that allow farmers to continue to farm while protecting the rural character of the countryside. Like their urban counterparts, these farmland preserve programs are complex, and if they are to be effective the trust needs to employ its solid knowledge of economic trends and resources. The work that land trusts do is varied. In some cases a land trust incorporates as a result of a local threat, such as a pipeline or railway coming through an area. In some cases a trust forms to counter an undesirable land use such as a landfill or a low-level radioactive waste storage facility. In other instances, a land trust comes together to take advantage of a unique opportunity, such as a family wanting to sell some pristine forest close to town or an industry deciding to relocate, leaving a lovely waterfront location with promise as a riverfront recreation area. It is rare that a land trust forms without a focused need. However, after the initial project is completed, its success breeds self-confidence in those who worked on the project, and new opportunities or challenges may sustain the goals of the fledgling organization. There are many examples of land trusts, and the few highlighted here may help to enhance understanding of the
value of land trust activities and to offer guidance to local groups wanting to preserve land. One outstanding example of land trust activity is the Rails to Trails program in Michigan. Under this program, abandoned railroad rights-of-way are preserved to create green belts for recreation use through agreements with the railroad companies. The Trust for Public Land (TPL) has assisted many local land trusts to implement a wide variety of land acquisition projects. One such complex agreement took place in Tucson, Arizona. In this case, the Tucson city government wanted to acquire seven parcels of land that totaled 40 acres (16 ha). For financial reasons the city was not able to acquire the land. At that point the Trust for Public Land was asked to become a private nonprofit partner and to work with the city to acquire the land. TPL used its creative expertise to help each of the landowners make mutually beneficial arrangements with the city so that a large urban park could become a reality. In some cases the TPL offered a life tenancy to the current owners in exchange for a reduced land price. In another case they offered a five-year tenancy and a job as caretaker, in exchange for a reduced purchase price. As the community worked on the future of the park, another landowner who owned a contiguous parcel stepped forward with an offer to sell. Each of these transactions was successful because the land trust was flexible, considerate of the landowners and up front about the goals of their work, and responsive to their public partner, the city government. Our current land trust effort in the United States has affected the way we protect our sensitive lands, reclaim damaged lands, and respond to local needs. Land trusts conserve land, guide future planning, educate local citizens and government agencies to a new way of doing business, and do it all with a minimum amount of confrontation and legal interaction. These private, nonprofit organizations have stepped in and filled a niche in the environmental conservation movement started in the 1970s and have gotten results through a system of cooperative and well-informed action. [Cynthia Fridgen]
RESOURCES BOOKS Diamond, H. L., and P. F. Noonan. Land Use in America: The Report of the Sustainable Use of Land Project. Lincoln Institute of Land Policy, Washington, DC: Island Press, 1996. Endicott, E., ed. Land Conservation Through Public/Private Partnerships. Lincoln Institute of Land Policy, Washington DC: Island Press, 1993. Platt, R. H. Land Use and Society: Geography, Law, and Public Policy. Washington DC: Island Press, 1996.
OTHER Land Trust Alliance. 2002 [June 20, 2002]. Trust for Public Land. 2002 [June 20, 2002].
Land use Land is any part of the earth's surface that can be owned as property. Land comprises a particular segment of the earth's crust and can be defined in specific terms. The location of the land is extremely important in determining land use and land value. Land is limited in supply, and, as our population increases, we have less land to support each person. Land nurtures the plants and animals that provide our food and shelter. It is the watershed or reservoir for our water supply. Land provides the minerals we utilize, the space on which we build our homes, and the site of many recreational activities. Land is also the depository for much of the waste created by modern society. The growth of human population provides only a partial explanation for the increased pressure on land resources. Economic development and a rise in the standard of living have brought about more demands for the products of the land. This demand now threatens to erode the land resource. We are terrestrial in our activities, and as our needs have diversified, so has land use. Conflicts among the competing land uses have created the need for land-use planning. Previous generations have used and misused the land as though the supply was inexhaustible. Today, goals and decisions about land use must take into account and link information from the physical and biological sciences with current social values and political realities. Land characteristics and ownership provide a basis for the many uses of land. Some land uses are classified as irreversible, for example, when the application of a particular land use changes the original character of the land to such an extent that reversal to its former use is impracticable. Reversible land uses do not change the soil cover or landform, and the land manager has many options when overseeing reversible land uses. A framework for land-use planning requires the recognition that plans, policies, and programs must consider physical and biological, economic, and institutional factors. The physical framework of land focuses on the inanimate resources of soil, rocks and geological features, water, air, sunlight, and climate. The biological framework involves living things such as plants and animals. A key feature of the physical and biological framework is the need to maintain healthy ecological relationships. The land can support many human activities, but there are limits. Once the resources are brought to these limits, they can be destroyed, and replacing them will be difficult. The economic framework for land use requires that operators of land be provided sufficient returns to cover the cost of production. Surpluses of returns above costs must be realized by those who make the production decisions and by those who bear the production costs.
World land use. (McGraw-Hill Inc. Reproduced by permission.)
The economic framework provides the incentive to use the land in a way that is economically feasible. The institutional framework requires that programs and plans be acceptable within the working rules of society. Plans must also have the support of current governments. A basic concept of land use is the question of land rights—who has the right to decide the use of a given tract of land. Legal decisions have provided the framework for land resource protection. Attitudes play an important role in influencing land use decisions, and changes in attitudes will often bring changes in our institutional framework. Recent trends in land use in the United States show that substantial areas have shifted to urban and transportation uses, state and national parks, and wildlife refuges since 1950. The use of land has become one of our most serious environmental concerns. Today's land use decisions will determine the quality of our future lifestyles and environment. The land use planning process is one of the most complex and least understood domestic concerns facing the nation. Additional changes in the institutional framework governing land use are necessary to allow society to protect the most limited resource on the planet—the land we live on. [Terence H. Cooper]
RESOURCES BOOKS Beatty, M. T. Planning the Uses and Management of Land. Series in Agronomy, no. 21. Madison, WI: American Society of Agronomy, 1979. Davis, K. P. Land Use. New York: McGraw-Hill, 1976. Fabos, J. G. Land-Use Planning: From Global to Local Challenge. New York: Chapman and Hall, 1985. Lyle, John T., and Joan Woodward. Design for Human Ecosystems: Landscape, Land Use and Natural Resources. Washington, DC: Island Press, 1999.
McHarg, I. L. Design With Nature. New York: John Wiley and Sons, 1995. Silber, Jane, and Chris Maser. Land-Use Planning for Sustainable Development. Boca Raton: CRC Press, 2000.
Landfill Surface water, oceans and landfills are traditionally the main repositories for society’s solid and hazardous waste. Landfills are located in excavated areas such as sand and gravel pits or in valleys that are near waste generators. They have been cited as sources of surface and groundwater contamination and are believed to pose a significant health risk to humans, domestic animals, and wildlife. Despite these adverse effects and the attendant publicity, landfills are likely to remain a major waste disposal option for the immediate future. Among the reasons that landfills remain a popular alternative are their simplicity and versatility. For example, they are not sensitive to the shape, size, or weight of a particular waste material. Since they are constructed of soil, they are rarely affected by the chemical composition of a particular waste component or by any collective incompatibility of co-mingled wastes. By comparison, composting and incineration require uniformity in the form and chemical properties of the waste for efficient operation. Landfills also have been a relatively inexpensive disposal option, but this situation is rapidly changing. Shipping costs, rising land prices, and new landfill construction and maintenance requirements contribute to increasing costs. About 57% of the solid waste generated in the United States still is dumped in landfills. In a sanitary landfill, refuse is compacted each day and covered with a layer of dirt. This procedure minimizes odor and litter, and discourages insect and rodent populations that may spread disease. Although this method does help control some of the pollution generated by the landfill, the fill dirt also occupies up to 20 percent of the landfill space, reducing its waste-holding capacity. Sanitary landfills traditionally have not been enclosed in a waterproof lining to prevent leaching of chemicals into groundwater, and many cases of groundwater pollution have been traced to landfills. Historically landfills were placed in a particular location more for convenience of access than for any environmental or geological reason. Now more care is taken in the siting of new landfills. For example, sites located on faulted or highly permeable rock are passed over in favor of sites with a less-permeable foundation. Rivers, lakes, floodplains, and groundwater recharge zones are also avoided. It is believed that the care taken in the initial siting of a landfill will reduce the necessity for future clean-up and site rehabilitation. Due to these and other factors, it is becoming increasingly difficult to find suitable locations for new landfills.
A secure landfill. (McGraw-Hill Inc. Reproduced by permission.)
Easily accessible open space is becoming scarce and many communities are unwilling to accept the siting of a landfill within their boundaries. Many major cities have already exhausted their landfill capacity and must export their trash, at significant expense, to other communities or even to other states and countries. Although a number of significant environmental issues are associated with the disposal of solid waste in landfills, the disposal of hazardous waste in landfills raises even greater environmental concerns. A number of urban areas contain hazardous waste landfills. Love Canal is, perhaps, the most notorious example of the hazards associated with these landfills. This Niagara Falls, New York neighborhood was built over a dump containing 20,000 metric tons of toxic chemical waste. Increased levels of cancer, miscarriages, and birth defects among those living in Love Canal led to the eventual evacuation of many residents. The events at Love Canal were also a major impetus behind the passage of the Comprehensive Environmental Response, Compensation and Liability Act in 1980, designed to clean up such sites. The U. S. Environmental Protection Agency estimates that there may be as many as 2,000 hazardous waste disposal sites in this country that pose a significant threat to human health or the environment. Love Canal is only one example of the environmental consequences that can result from disposing of hazardous waste in landfills. However, techniques now exist to create secure landfills that are an acceptable disposal option for hazardous waste in many cases. The bottom and sides of a secure landfill contain a cushion of recompacted clay that is flexible and resistant to cracking if the ground shifts. This clay layer is impermeable to groundwater and safely contains the waste. A layer of gravel containing a grid of perforated drain pipes is laid over the clay. These pipes collect any
seepage that escapes from the waste stored in the landfill.
Over the gravel bed a thick polyethylene liner is positioned. A layer of soil or sand covers and cushions this plastic liner, and the wastes, packed in drums, are placed on top of this layer. When the secure landfill reaches capacity, it is capped by a cover of clay, plastic, and soil, much like the bottom layers. Vegetation is planted to stabilize the surface and make the site more attractive. Sump pumps collect any fluids that filter through the landfill, either from rainwater or from waste leakage. This liquid is purified before it is released. Monitoring wells around the site ensure that the groundwater does not become contaminated. In some areas where the water table is particularly high, above-ground storage may be constructed using similar techniques. Although such facilities are more conspicuous, they have the advantage of being easier to monitor for leakage. Although technical solutions have been found to many of the problems associated with secure landfills, several nontechnical issues remain. One of these issues concerns the transportation of hazardous waste to the site. Some states do not allow hazardous waste to be shipped across their territory because they are worried about the possibility of accidental spills. If hazardous waste disposal is concentrated in only a few sites, then a few major transportation routes will carry large volumes of this material. Citizen opposition to hazardous waste landfills is another issue. Given the past record of corporate and governmental irresponsibility in dealing with hazardous waste, it is not surprising that community residents greet proposals for new landfills with the NIMBY (Not In My BackYard) response. However, the waste must go somewhere. These and other issues must be resolved if secure landfills are to be a viable long-term solution to hazardous waste disposal. See also Groundwater monitoring; International trade in toxic waste; Storage and transportation of hazardous materials [George M. Fell and Christine B. Jeryan]
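The layered construction described above can be summarized schematically. The following Python sketch simply restates the entry's sequence of layers from bottom to top; the names and data structure are illustrative, not drawn from any engineering specification.

# Layers of a secure landfill, bottom to top, as described in this entry.
# Purely illustrative; the names and structure are ours.
SECURE_LANDFILL_LAYERS = [
    ("recompacted clay", "flexible, crack-resistant base and sides; impermeable to groundwater"),
    ("gravel with perforated drain pipes", "collects any seepage escaping the stored waste"),
    ("thick polyethylene liner", "second impermeable barrier over the gravel bed"),
    ("soil or sand cushion", "protects the plastic liner"),
    ("drummed wastes", "packed in drums and placed on the cushion layer"),
    ("cap of clay, plastic, and soil", "added once the landfill reaches capacity"),
    ("vegetation", "stabilizes the surface and improves appearance"),
]

for position, (layer, purpose) in enumerate(SECURE_LANDFILL_LAYERS, start=1):
    print(f"{position}. {layer}: {purpose}")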
RESOURCES BOOKS Bagchi, A. Design, Construction and Monitoring of Landfills. 2nd ed. New York: Wiley, 1994. Neal, H. A. Solid Waste Management and the Environment: The Mounting Garbage and Trash Crisis. Englewood Cliffs, NJ: Prentice-Hall, 1987. Noble, G. Siting Landfills and Other LULUs. Lancaster, PA: Technomic Publishing, 1992. Requirements for Hazardous Waste Landfill Design, Construction and Closure. Cincinnati: U.S. Environmental Protection Agency, 1989.
PERIODICALS “Experimental Landfills Offer Safe Disposal Options.” Journal of Environmental Health 51 (March-April 1989): 217–18.
Loupe, D. E. "To Rot or Not; Landfill Designers Argue the Benefits of Burying Garbage Wet vs. Dry." Science News 138 (October 6, 1990): 218–19+. Wingerter, E. J., et al. "Are Landfills and Incinerators Part of the Answer? Three Viewpoints." EPA Journal 15 (March-April 1989): 22–26.
Landscape ecology Landscape ecology is an interdisciplinary field that emerged from several intellectual traditions in Europe and North America. An identifiable landscape ecology started in central Europe in the 1960s and in North America in the late 1970s and early 1980s. It became more visible with the establishment, in 1982, of the International Association of Landscape Ecology, with the publication of a major text in the field, Landscape Ecology, by Richard Forman and Michel Godron in 1986, and with the publication of the first issue of the association's journal, Landscape Ecology, in 1987. The phrase "landscape ecology" was first used in 1939 by the German geographer Carl Troll. He suggested that the "concept of landscape ecology is born from a marriage of two scientific outlooks, the one geographical (landscape), the other biological (ecology)." Troll coined the term landscape ecology to denote "the analysis of a physico-biological complex of interrelations, which govern the different area units of a region." He believed that "landscape ecology...must not be confined to the large scale analysis of natural regions. Ecological factors are also involved in problems of population, society, rural settlement, land use, transport, etc." Landscape has long been a unit of analysis and a conceptual centerpiece of geography, with scholars such as Carl Sauer and J. B. Jackson adept at "reading the landscape," including both the natural landscape of landforms and vegetation, and the cultural landscape as marked by human actions and as perceived by human minds. Zev Naveh has been working on his own version of landscape ecology in Israel since the early 1970s. Like Troll, Naveh includes humans in his conception; in fact, he enlarges landscape ecology to a global human ecosystem science, sort of a "bio-cybernetic systems approach to the landscape and the study of its use by [humans]." He sees landscape ecology first as a holistic approach to biosystems theory, the centerpiece being "recognition of the total human ecosystem as the highest level of integration," and, second, as playing a central role in cultural evolution and as a "basis for interdisciplinary, task-oriented, environmental education." Landscape architecture is also to some extent landscape ecology, since landscape architects design complete vistas, from their beginnings and at various scales. This concern with designing and creating complete landscapes from bare ground can certainly be considered ecological, as it includes creating or adapting local land forms, planting appropriate
vegetation, and designing and building various kinds of "furniture" and other artifacts on site. The British Landscape Institute and the British Ecological Society held a joint meeting in 1983, recognizing "that the time for ecology to be harnessed for the service of landscape design has arrived." The meeting produced the twenty-fourth symposium of the British Ecological Society, titled Ecology and Design in Landscape. Landscape planning can also to some degree be considered landscape ecology, especially in the ecological approach to landscape planning developed by Ian McHarg and his students and colleagues, and the LANDEP, or Landscape Ecological Planning, approach designed by Ladislav Miklos and Milan Ruzicka. Both of these ecological planning approaches are complex syntheses of spatial patterns, ecological processes, and human needs and wants. Building on all of these traditions, yet slowly finding its own identity, landscape ecology is considered by some as a sub-domain of biological ecology and by others as a discipline in its own right. In Europe, landscape ecology continues to be an extension of the geographical tradition that is preoccupied with human-landscape interactions. In North America, landscape ecology has emerged as a branch of biological ecology, more concerned with landscapes as clusters of interrelated natural ecosystems. The European form of landscape ecology is applied to land and resource conservation, while in North America it focuses on fundamental questions of spatial pattern and exchange. Both traditions can address major environmental problems, especially the extinction of species and the maintenance of biological diversity. The term landscape, despite the varied traditions and emerging disciplines described above, remains somewhat indeterminate, depending on the criteria set by individual researchers to establish boundaries. Some consensus exists on its general definition in the new landscape ecology, as described in the composite form attempted here: a terrestrial landscape is miles- or kilometers-wide in area; it contains a cluster of interacting ecosystems repeated in somewhat similar form; and it is a heterogeneous mosaic of interconnected land forms, vegetation types, and land uses. As Risser and his colleagues emphasize, this interdisciplinary area focuses explicitly on spatial patterns: "Specifically, landscape ecology considers the development and dynamics of spatial heterogeneity, spatial and temporal interactions and exchanges across heterogeneous landscapes, influences of spatial heterogeneity on biotic and abiotic processes, and management of spatial heterogeneity." Instead of trying to identify homogeneous ecosystems, landscape ecology focuses particularly on the heterogeneous patches and mosaics created by human disruption of natural systems, by the intermixing of cultural and natural landscape patterns. The real rationale for a land-
scape ecology perhaps should be this acknowledgment of the heterogeneity of contemporary landscape patterns, and the need to deal with the patchwork mosaics and intricate matrices that result from long-term human disturbance, modification, and utilization of natural systems. Typical questions asked by landscape ecologists include these formulated by Risser and his colleagues: "What formative processes, both historical and present, are responsible for the existing pattern in a landscape?" "How are fluxes of organisms, of material, and of energy related to landscape heterogeneity?" "How does landscape heterogeneity affect the spread of disturbances?" While the first question is similar to ones long asked in geography, the other two are questions traditional to ecology, but distinguished here by the focus on heterogeneity. Richard Forman, a prominent figure in the evolving field of landscape ecology, thinks the field has matured enough for general principles to have emerged; not ecological laws as such, but principles backed by enough evidence and examples to be true for 95 percent of landscape analyses. His 12 principles are organized by four categories: landscapes and regions; patches and corridors; mosaics; and applications. The principles outline expected or desirable spatial patterns and relationships, and how those patterns and relationships affect system functions and flows, organismic movements and extinctions, resource protection, and optimal environmental conditions. Forman claims the principles "should be applicable for any environmental or societal land-use objective," and that they are useful in more effectively "growing wood, protecting species, locating houses, protecting soil, enhancing game, protecting water resources, providing recreation, locating roads, and creating sustainable environments." Perhaps Andre Corboz provided the best description when he wrote of "the land as palimpsest": landscape ecology recognizes that humans have written large on the land, and that behind the current writing visible to the eye, there is earlier writing as well, which also tells us about the patterns we see. Landscape ecology also deals with gaps in the text and tries to write a more complete accounting of the landscapes in which we live and on which we all depend. [Gerald L. Young Ph.D.]
RESOURCES BOOKS
Farina, Almo. Landscape Ecology in Action. New York: Kluwer, 2000.
Forman, R. T. T., and M. Godron. Landscape Ecology. New York: Wiley, 1986.
Risser, P. G., J. R. Karr, and R. T. T. Forman. Landscape Ecology: Directions and Approaches. Champaign: Illinois Natural History Survey, 1983.
Tjallingii, S. P., and A. A. de Veer, eds. Perspectives in Landscape Ecology: Contributions to Research, Planning and Management of Our Environment. Wageningen, The Netherlands: Pudoc, 1982. (Proceedings of the International Congress Organized by the Netherlands Society for Landscape Ecology, Veldhoven, The Netherlands, 6-11 April, 1981).
Troll, C. Landscape Ecology. Delft, The Netherlands: The ITC-UNESCO Centre for Integrated Surveys, 1966.
Turner, Monica, R. H. Gardner, and R. V. O'Neill. Landscape Ecology in Theory and Practice: Patterns and Processes. New York: Springer-Verlag, 2001.
Zonneveld, I. S., and R. T. T. Forman, eds. Changing Landscapes: An Ecological Perspective. New York: Springer-Verlag, 1990.
PERIODICALS Forman, R. T. T. “Some General Principles of Landscape and Regional Ecology.” Landscape Ecology (June 1995): 133–142. Golley, F.B. “Introducing Landscape Ecology.” Landscape Ecology 1, no. 1 (1987): 1–3. Naveh, Z. “Landscape Ecology as an Emerging Branch of Human Ecosystem Science.” Advances in Ecological Research 12 (1982): 189–237.
Landslide A general term for the discrete downslope movement of rock and soil masses under gravitational influence along a failure zone. The term "landslide" can refer to the resulting landform as well as to the process of movement. Many types of landslides occur, and they are classified by several schemes, according to a variety of criteria. Landslides are categorized most commonly on the basis of geometric form, but also by size, shape, rate of movement, and water content or fluidity. Translational, or planar, failures, such as debris avalanches and earth flows, slide along a fairly straight failure surface that runs approximately parallel to the ground surface. Rotational failures, such as rotational slumps, slide along a spoon-shaped failure surface, leaving a hummocky appearance on the landscape. Rotational slumps commonly transform into earthflows as they continue downslope. Landslides are usually triggered by heavy rain or melting snow, but major earthquakes can also cause landslides.
Land-use control Land-use control is a relatively new concept. For most of human history, it was assumed that people could do whatever they wished with their own property. However, societies have usually recognized that the way an individual uses private property can sometimes have harmful effects on neighbors. Land-use planning has reached a new level of sophistication in developed countries over the last century. One of the first restrictions on land use in the United States, for example, was a 1916 New York City law limiting the size of skyscrapers because of the shadows they might cast on adjacent property. Within a decade, the federal government began to act aggressively on land control measures. It passed the Mineral Leasing Act of 1920 in an attempt to control the exploitation of oil, natural gas, phosphate, and potash.
It adopted the Standard State Zoning Act of 1922 and the Standard City Planning Enabling Act of 1928 to promote the concept of zoning at state and local levels. Since the 1920s, every state and most cities have adopted zoning laws modeled on these two federal acts. Often detailed, exhaustive, and complex, zoning regulations now control the way land is used in nearly every governmental unit. They specify, for example, whether land can be used for single-dwelling construction, multiple-dwelling construction, farming, industrial (heavy or light) development, commercial use, recreation, or some other purpose. Requests to use land for purposes other than that for which it is zoned require a variance or conditional use permit, a process that is often long, tedious, and confrontational. Many types of land require special types of zoning. For example, coastal areas are environmentally vulnerable to storms, high tides, flooding, and strong winds. The federal government passed laws in 1972 and 1980, the National Coastal Zone Management Acts, to help states deal with the special problem of protecting coastal areas. Although initially slow to make use of these laws, states are becoming more aggressive about restricting the kinds of construction permitted along seashore areas. Areas with special scenic, historic, or recreational value have long been protected in the United States. The nation's first national park, Yellowstone National Park, was created in 1872. Not until 44 years later, however, was the National Park Service created to administer Yellowstone and other parks established since 1872. Today, the National Park Service and other governmental agencies are responsible for a wide variety of national areas such as forests, wild and scenic rivers, historic monuments, trails, battlefields, memorials, seashores and lakeshores, parkways, recreational areas, and other areas of special value. Land-use control does not necessarily restrict usage. Individuals and organizations can be encouraged to use land in certain desirable ways. An enterprise zone, for example, is a specifically designated area in which certain types of business activities are encouraged. The tax rate might be reduced for businesses locating in the area, or the government might relax certain regulations there. Successful land-use control can result in new towns or planned communities, designed and built from the ground up to meet certain pre-determined land-use objectives. One of the most famous examples of a planned community is Brasilia, the capital of Brazil. The site for a new capital—an undeveloped region of the country—was selected, and a totally new city was built in the 1950s. The federal government moved to the new city in 1960, and it now has a population of more than 1.5 million. See also Bureau of Land Management; Riparian rights [David E. Newton]
RESOURCES BOOKS Becker, Barbara, Eric D. Kelly, and Frank So. Community Planning: An Introduction to the Comprehensive Plan. Washington, DC: Milldale Press, 2000. Newton, D. E. Land Use, A–Z. Hillside, NJ: Enslow Press, 1991. Platt, Rutherford H. Land Use and Society: Geography, Law, and Public Policy. Washington, DC: Island Press, 1996.
Latency Latency refers to the period of time it takes for a disease to manifest itself within the human body. It is the state of seeming inactivity that occurs between the instant of stimulation or initiating event and the beginning of response. The latency period differs dramatically for each stimulation, and as a result each disease has its own characteristic time period before symptoms occur. When pathogens gain entry into a potential host, the body may fail to maintain adequate immunity, thus permitting progressive viral or bacterial multiplication. This time lapse is also known as the incubation period. Each disease has definite, characteristic limits for a given host. During the incubation period, dissemination of the pathogen takes place and leads to the inoculation of a preferred or target organ. Proliferation of the pathogen, either in a target organ or throughout the body, then creates an infectious disease. Botulism, tetanus, gonorrhea, diphtheria, staphylococcal and streptococcal disease, pneumonia, and tuberculosis are among the diseases that take varied periods of time before the symptoms are evident. In the case of the childhood diseases—measles, mumps, and chicken pox—the incubation period is 14–21 days. In the case of cancer, the latency period for a small group of transformed cells to result in a tumor large enough to be detected is usually 10–20 years. One theory postulates that every cancer begins with a single cell or small group of cells. The cells are transformed and begin to divide. Twenty years of cell division ultimately results in a detectable tumor. It is theorized that very low doses of a carcinogen could be sufficient to transform one cell into a cancerous tumor. In the case of AIDS, an eight- to eleven-year latency period passes before the symptoms appear in adults. The length of this latency period depends upon the strength of the person's immune system. If a person suspects he or she has been infected, early blood tests showing HIV antibodies or antigens can indicate the infection within three months of the stimulation. The three-month period before the appearance of HIV antibodies or antigens is called the "window period." In many cases, doctors may fail to diagnose the disease at first, since AIDS symptoms are so general they may be confused with the symptoms of other, similar diseases. Childhood AIDS symptoms appear more quickly since young children have immune systems that are less fully developed. [Liane Clorfene Casten]
Lawn treatment in the form of pesticides and inorganic fertilizers poses a substantial threat to the environment. Homeowners in the United States use approximately three times more pesticides per acre than the average farmer, adding up to some 136 million pounds (61.7 million kg) annually. Home lawns occupy more acreage in the United States than any agricultural crop, and a majority of the wildlife pesticide poisonings tracked by the Environmental Protection Agency (EPA) annually are attributed to chemicals used in lawn care. The use of grass fertilizer is also problematic when it runs off into nearby waterways. Lawn grass in almost all climates in the United States requires watering in the summer, accounting for some 40 to 60 percent of the average homeowner's water use annually. Much of the water sprinkled on lawns is lost as runoff. When this runoff carries fertilizer, it can cause excess growth of algae in downstream waterways, clogging the surface of the water and depleting the water of oxygen for other plants and animals. Herbicides and pesticides are also carried into downstream water, and some of these are toxic to fish, birds, and other wildlife. Turf grass lawns are ubiquitous in all parts of the United States, regardless of the local climate. From Alaska to Arizona to Maine, homeowners surround their houses with grassy lawns, ideally clipped short, brilliantly green, and free of weeds. In almost all cases, the grass used is a hybrid of several species of grass from Northern Europe. These grasses thrive in cool, moist summers. In general, the United States experiences hotter, drier summers than Northern Europe. Moving from east to west across the country, the climate becomes less and less like that in which the common turf grass evolved. The ideal American lawn is based primarily on English landscaping principles, and it does not look like an English lawn unless it is heavily supported with water. The prevalence of lawns is a relatively recent phenomenon in the United States, dating to the late nineteenth century. When European settlers first came to this country, they found indigenous grasses that were not as nutritious for livestock and died under the trampling feet of sheep and cows. Settlers replaced native grasses with English and European grasses as fodder for grazing animals. In the late eighteenth century, American landowners began surrounding their estates with lawn grass, a style made popular
earlier in England. The English lawn fad was fueled by the eighteenth-century landscaper Lancelot "Capability" Brown, who removed whole villages and stands of mature trees and used sunken fences to achieve uninterrupted sweeps of green parkland. Both in England and the United States, such lawns and parks were mowed by hand, requiring many laborers, or they were kept cropped by sheep or even deer. Small landowners, meanwhile, used the land in front of their houses differently. The yard might be of stamped earth, which could be kept neatly swept, or it might be devoted to a small garden, usually enclosed behind a fence. The trend for houses set back from the street behind a stretch of unfenced lawn took hold in the mid-nineteenth century with the growth of suburbs. Frederick Law Olmsted, the designer of New York City's Central Park, was a notable suburban planner, and he fueled the vision of the English manor for the suburban home. The unfenced lawns were supposed to flow from house to house, creating a common park for the suburb's residents. These lawns became easier to maintain with the invention of the lawn mower. This machine debuted in England as early as 1830 but became popular in the United States after the Civil War. The first patent for a lawn sprinkler was granted in the United States in 1871. These developments made it possible for middle-class homeowners to maintain lush lawns themselves. Chemicals for lawn treatment came into common use after World War II. Herbicides such as 2,4-D were used against broadleaf weeds, and the now-banned DDT was used against insect pests. Homeowners had previously fertilized their lawns with commercially available organic formulations like dried manure, but after World War II inorganic, chemical-based fertilizers became popular both for agriculture and for lawns and gardens. Lawn care companies such as Chemlawn and Lawn Doctor originated in the 1960s, an era when homeowners were confronted with a bewildering array of chemicals deemed essential to a healthy lawn. Rachel Carson's 1962 book Silent Spring raised an alarm about the prevalence of lawn chemicals and their environmental costs. Carson explained how the insecticide DDT builds up in the food chain, passing from insects and worms to the fish and small birds that feed on them, ultimately endangering large predators like the eagle. DDT was banned in 1972, and some lawn care chemicals were restricted. Nevertheless, the lawn care industry continued to prosper, offering services such as combined seeding, herbicide, and fertilizer applications at several intervals throughout the growing season. Lawn care had grown to a $25 billion industry in the United States by the 1990s. Even as the perils of particular lawn chemicals became clearer, it was difficult for homeowners to give them up. Statistics from the United States National Cancer Institute show that the incidence of childhood leukemia is 6.5% greater in families that use lawn pesticides than in those who
do not. In addition, 32 of the 34 most widely used lawn care pesticides have not been tested for health and environmental effects. Because some species of lawn grass grow poorly in some areas of the United States, they do not thrive without extra water and fertilizer. Lawns are vulnerable to insect pests, which can be controlled with pesticides, and if a weed-free lawn is the aim, herbicides are less labor-intensive than digging out dandelions one by one. Some common pesticides used on lawns are acephate, bendiocarb, and diazinon. Acephate is an organophosphate insecticide that works by damaging the insect's nervous system. Bendiocarb, sold under several brand names, is a carbamate insecticide that works in the same way. Both were first developed in the 1940s. These chemicals will kill many insects, not only pests such as leafminers, thrips, and chinch bugs, but also beneficial insects, such as bees. Bendiocarb is also toxic to earthworms, a major food source for some birds. Birds, too, can die from direct exposure to bendiocarb, as can fish. Both of these chemicals can persist in the soil for weeks. Diazinon is another common pesticide used by homeowners on lawns and gardens. It is toxic to humans, birds, and other wildlife, and it has been banned for use on golf courses and turf farms. Nevertheless, homeowners may use it to kill pest insects such as fire ants. Harmful levels of diazinon were found in metropolitan storm water systems in California in the early 1990s, leached there from orchard run-off. Diazinon is responsible for about half of all reported wildlife poisonings involving lawn and garden chemicals. Common lawn and garden herbicides appear to be much less toxic to humans and animals than pesticides. The herbicide 2,4-D, one of the earliest herbicides used in this country, can cause skin and eye irritation in people who apply it, and it is somewhat toxic to birds; in some formulations it can be toxic to fish. Although contamination with 2,4-D has been found in some urban waterways, it has appeared only in trace amounts not thought to be harmful to humans. Glyphosate is another common herbicide, sold under several brand names, including the well-known Roundup. It is considered non-toxic to humans and other animals. Unlike 2,4-D, which kills broadleaf plants, glyphosate is a broad-spectrum herbicide used to control a great variety of annual, biennial, and perennial grasses, sedges, broadleaf weeds, and woody shrubs. Common lawn and garden fertilizers are generally not toxic unless ingested in substantial doses, yet they can have serious environmental effects. Run-off from lawns can carry fertilizer into nearby waterways. The nitrogen and phosphorus in the fertilizer stimulate plant growth, principally of algae and other microscopic plants. These tiny plants bloom, die, and decay. Bacteria that feed on the decaying plants then also undergo a surge in population. The overabundant bacteria
consume oxygen, leading to oxygen-depleted water, a condition called hypoxia. In some areas, fertilized run-off from lawns is as big a problem as run-off from agricultural fields. Lawn fertilizer is thought to be a major culprit in the pollution of the Everglades in Florida. In 2001 the Minnesota legislature debated a bill to limit homeowners' use of phosphorus in fertilizers because of problems with algae blooms on the state's lakes. There are several viable alternatives to the use of chemicals for lawn care. Lawn care companies often recommend multiple applications of pesticides, herbicides, and fertilizers, but an individual lawn may need such treatments only on a much reduced schedule. Some insects, such as thrips and mites, are susceptible to insecticidal soaps and oils, which are not long-lasting in the environment; these could be used in place of diazinon, acephate, and other pesticides. Weeds can be pulled by hand, or simply left alone. Homeowners can have their lawn evaluated and their soil tested to determine how much fertilizer is actually needed. Slow-release fertilizers and organic fertilizers such as compost or seaweed emulsion do not give off a large concentration of nutrients at once, so they are gentler on the environment. Another way to cut back on the excess water and chemicals used on lawns is to reduce the size of the lawn. The lawn can be bordered with shrubbery and perennial plants, leaving just enough open grass for recreation. Another alternative is to replace non-native turf grass with a native grass. Some native grasses stay green all summer, can be mown short, and look very much like a typical lawn. Native buffalo grass (Buchloe dactyloides) has been used successfully for lawns in the South and Southwest, and other native grass species are adapted to other regions. Another example is blue grama grass (Bouteloua gracilis), native to the Great Plains; this grass tolerates extreme temperatures and very little rainfall. Some native grasses are best left unmowed, and in some regions homeowners have replaced their lawns with native grass prairies or meadows. In some cases, homeowners have done away with their lawns altogether, using stone or bark mulch in their place, or planting a groundcover plant such as ivy or wild ginger. These plants might grow between trees, shrubs, and perennials, creating a very different look from the traditional green carpet. For areas with water shortages, or for those concerned about conserving natural resources, xeriscape landscaping should be considered. Xeriscape comes from the Greek word xeros, meaning dry. Xeriscaping relies on plants that thrive in dry conditions, such as cacti and grasses like Mexican feather grass and blue oat grass, and it can also include rock gardening as part of the overall landscape plan.
[Angela Woodward]
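The eutrophication chain described above, in which fertilizer nutrients feed an algal bloom, dead algae feed bacteria, and bacterial respiration depletes dissolved oxygen, can be illustrated with a toy simulation. This is only a sketch: every quantity, rate constant, and unit below is an invented, illustrative assumption, not a measured value.

```python
def simulate(days=60, dt=0.1):
    """Toy Euler integration of the runoff -> bloom -> decay -> hypoxia
    chain. Units are arbitrary; oxygen starts at a nominal saturation of 8."""
    nutrients, algae, detritus, oxygen = 10.0, 0.1, 0.0, 8.0
    growth, death, decay = 0.8, 0.3, 0.5        # assumed rates, per day
    o2_per_decay, reaeration = 0.6, 0.2         # assumed rates
    min_oxygen = oxygen
    for _ in range(int(days / dt)):
        uptake = growth * algae * nutrients / (nutrients + 1.0)  # saturating uptake
        dying = death * algae
        decaying = decay * detritus             # bacterial decomposition
        nutrients += dt * -uptake
        algae += dt * (uptake - dying)
        detritus += dt * (dying - decaying)
        # bacteria consume oxygen while decomposing dead algae;
        # reaeration slowly pulls oxygen back toward saturation
        oxygen += dt * (reaeration * (8.0 - oxygen) - o2_per_decay * decaying)
        oxygen = max(oxygen, 0.0)
        min_oxygen = min(min_oxygen, oxygen)
    return min_oxygen

print(f"minimum dissolved oxygen during the bloom: {simulate():.1f} of 8.0")
```

Even with these made-up numbers, the qualitative behavior matches the entry: oxygen falls not when the fertilizer arrives but later, when the bloom dies and decomposition peaks.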
RESOURCES
BOOKS
Bormann, F. Herbert, Diana Balmori, and Gordon T. Geballe. Redesigning the American Lawn. New Haven and London: Yale University Press, 1993.
Jenkins, Virginia Scott. The Lawn: A History of an American Obsession. Washington and London: Smithsonian Institution Press, 1994.
Stein, Sara. Planting Noah's Garden: Further Adventures in Backyard Ecology. Boston: Houghton Mifflin Co., 1997.
Wasowski, Andy, and Sally Wasowski. The Landscaping Revolution. Chicago: Contemporary Books, 2000.
PERIODICALS
Bourne, Joel. "The Killer in Your Yard." Audubon (May-June 2000): 108.
"Easy Lawns." Brooklyn Botanic Garden Handbook 160 (Fall 1999).
Simpson, Sarah. "Shrinking the Dead Zone." Scientific American (July 2001): 18.
Stewart, Doug. "Our Love Affair with Lawns." Smithsonian (April 1999): 94.
OTHER
Xeriscaping Tips Page. 2002 [cited June 18, 2002].
LDC see Less developed countries
LD50
LD50 is the dose of a chemical that is lethal to 50 percent of a test population. It is therefore the measure of a particular median response, which in this case is death. The term is most frequently used to characterize the response of animals such as rats and mice in acute toxicity tests. The term is generally not used in connection with aquatic or inhalation toxicity tests, because it is difficult, if not impossible, to determine the dose an animal actually receives in such tests; results are instead most commonly expressed as lethal concentrations (LC), which refer to the concentration of the substance in the air or water surrounding an animal. In LD testing, doses are generally administered by means of injection, food, water, or forced feeding. Injections are used when an animal is to receive only one or a few doses; greater numbers of injections would disturb the animal and perhaps generate false-positive responses. Food or water may serve as a good medium for administering a chemical, but the amount of food or water wasted must be carefully noted. Developing a healthy diet for an animal that is compatible with the chemical to be tested can be as much art as science: the chemical may interact with the food and become more or less toxic, or it may be objectionable to the animal because of its taste or odor. Rats are often used in toxicity tests because they do not have the ability to vomit; the investigator therefore has the option of gavage, force-feeding with a stomach tube or other device, when a chemical smells or tastes bad.
[Figure: a chart illustrating LD50 dose-response. (McGraw-Hill Inc. Reproduced by permission.)]
Toxicity and LD50 are inversely proportional, which means that high toxicity is indicated by a low LD50, and vice versa. LD50 is a particular type of effective dose (ED), the dose producing a given effect in 50 percent of a population (ED50). The midpoint, the effect on half of the population, is generally used because some individuals in a population may be highly resistant to a particular toxicant, making the dose at which all individuals respond a misleading data point. Effects other than death, such as headaches or dizziness, might be examined in some tests, in which case EDs would be reported instead of LDs. One might also wish to report the response of some other percentage of the test population, such as the 20 percent response (LD20 or ED20) or the 80 percent response (LD80 or ED80). The LD is expressed in terms of the mass of test chemical per unit mass of the test animals. In this way, dose is normalized so that the results of tests can be analyzed consistently and perhaps extrapolated to predict the response of animals that are heavier or lighter. Extrapolation of such data is always questionable, especially when extrapolating from animal response to human response, but the system appears to serve well. It is important to note, however, that better dose-response relations and extrapolations can sometimes be derived by normalizing doses to body surface area or to the weight of target organs. See also Bioassay; Dose response; Ecotoxicology; Hazardous material; Toxic substance
[Gregory D. Boardman]
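As a sketch of how an LD50 might be read off acute-test data, here is a minimal log-dose interpolation in Python. The dose and mortality values are invented for illustration, doses are normalized to mg per kg of body mass as the entry describes, and real studies use probit or similar regression rather than this crude interpolation.

```python
import math

# Hypothetical acute-toxicity results: dose in mg per kg body mass
# versus the fraction of test animals killed (invented data).
doses     = [10.0, 50.0, 100.0, 500.0, 1000.0]
mortality = [0.0,  0.10, 0.35,  0.80,  1.00]

def ld50(doses, mortality):
    """Estimate LD50 by linear interpolation on a log-dose scale,
    a crude stand-in for the probit regression used in practice."""
    points = list(zip(doses, mortality))
    for (d0, m0), (d1, m1) in zip(points, points[1:]):
        if m0 <= 0.5 <= m1:  # the 50% response is bracketed here
            frac = (0.5 - m0) / (m1 - m0)
            return 10 ** (math.log10(d0) + frac * (math.log10(d1) - math.log10(d0)))
    raise ValueError("50% response not bracketed by the data")

print(f"estimated LD50 ~ {ld50(doses, mortality):.0f} mg/kg")
# A lower LD50 means higher toxicity, as the entry notes; LD20 or LD80
# could be estimated the same way by replacing 0.5 with 0.2 or 0.8.
```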
RESOURCES
BOOKS
Casarett, L. J., J. Doull, and C. D. Klaassen, eds. Casarett and Doull's Toxicology: The Basic Science of Poisons. 6th ed. New York: McGraw-Hill, 2001.
Hodgson, E., R. B. Mailman, and J. E. Chambers. Dictionary of Toxicology. 2nd ed. New York: John Wiley and Sons, 1997.
Lu, F. C. Basic Toxicology: Fundamentals, Target Organs, and Risk Assessment. 3rd ed. Hebron, KY: Taylor & Francis, 1996.
Rand, G. M., ed. Fundamentals of Aquatic Toxicology: Effects, Environmental Fate, and Risk Assessment. 2nd ed. Hebron, KY: Taylor & Francis, 1995.
Leachate see Contaminated soil; Landfill
Leaching
The process by which soluble substances are dissolved out of a material. When rain falls on farmland, for example, it dissolves weatherable minerals, pesticides, and fertilizers as it soaks into the ground. If enough water is added to the soil to fill all the pores, water carrying these dissolved materials moves down to the groundwater, and the soil becomes leached. In soil chemistry, leaching refers to the process by which nutrients in the upper layers of soil are dissolved out and carried into lower layers, where they can be valuable nutrients for plant roots. Leaching also has a number of environmental implications. For example, toxic chemicals and radioactive materials stored in sealed containers underground may leach out if the containers break open over time. See also Landfill; Leaking underground storage tank
Lead
Lead is one of the oldest metals known to humans; Egyptians used lead compounds to glaze pottery as far back as 7000 B.C. The toxic effects of lead have also been known for many centuries. In fact, the Romans limited the amount of time slaves could work in lead mines because of the element's harmful effects. Among the consequences of lead poisoning are anemia, headaches, convulsions, and damage to the kidneys and central nervous system. The widespread use of lead in plumbing, gasoline, and lead-acid batteries, for example, has made it a serious environmental health problem. Bans on the use of lead in motor fuels and paints attempt to deal
with this problem. See also Heavy metals and heavy metal poisoning; Lead shot
Lead management
Lead, a naturally occurring bluish gray metal, is extensively
used throughout the world in the manufacture of storage batteries, chemicals including paint and gasoline, and various metal products including sheet lead, solder, pipes, and ammunition. Because of its widespread use, large amounts of lead exist in the environment, and substantial quantities of lead continue to be deposited into air, land, and water. Lead is a poison with many adverse effects, and children are especially susceptible. At present, the production, use, and disposal of lead are regulated with demonstrably effective results. However, because of its previous widespread use and its persistence in the environment, lead exposure is a pervasive problem that affects many populations. Effective management of lead requires an understanding of its effects, blood action levels, sources of exposure, and policy responses, topics reviewed here in that order.
Effects of Lead
Lead is a strong toxicant that adversely affects many systems in the body. Severe lead exposures can cause brain and kidney damage in adults and children, coma, convulsions, and death. Lower levels, e.g., lead concentrations in blood (PbB) below 50 µg/dL, may impair hemoglobin synthesis, alter the central and peripheral nervous systems, cause hypertension, affect male and female reproductive systems, and damage the developing fetus. These effects depend on the level and duration of exposure and on the distribution and kinetics of lead in the body. Most lead is deposited in bone, and some of this stored lead may be released long after exposure by a serious illness, pregnancy, or other physiological event. Lead has not been shown to cause cancer in humans; however, tumors have developed in rats and mice given large doses of lead, and several United States agencies therefore treat lead acetate and lead phosphate as human carcinogens. Children are particularly susceptible to lead poisoning. PbB levels as low as 10 µg/dL are associated with decreased intelligence and slowed neurological development. Low PbB levels have also been associated with deficits in growth, vitamin metabolism, and hearing. The neurological effects of lead on children are profound and are likely persistent. Unfortunately, childhood exposures to chronic but low lead levels may not produce clinical symptoms, and many cases go undiagnosed and untreated. In recent years, the number of children with elevated blood lead levels has declined substantially. For example, the average PbB level has decreased from over 15 µg/dL in the
1970s to about 5 µg/dL in the 1990s. As described later, these decreases can be attributed to the reduction or elimination of lead in gasoline, food-can and plumbing solder, and residential paint. Still, childhood lead poisoning remains the most widespread and preventable childhood health problem associated with environmental exposures, and childhood lead exposure remains a public health concern, since blood levels approach or exceed levels believed to cause effects. Though widely perceived as a problem of inner-city minority children, lead poisoning affects children from all areas and from all socioeconomic groups. The definition of a PbB level of concern for lead in children continues to be an important issue in the United States. The childhood PbB concentration of concern has been steadily lowered by the Centers for Disease Control (CDC), from 40 µg/dL in 1970 to 10 µg/dL in 1991. The Environmental Protection Agency lowered the level of concern to 10 µg/dL ("10-15 and possibly lower") in 1986, and the Agency for Toxic Substances and Disease Registry (ATSDR) also identified 10 µg/dL in its 1988 Report to Congress on childhood lead poisoning. In the workplace, the medical-removal PbB concentration is 50 µg/dL for three consecutive checks and 60 µg/dL for any single check. Blood level monitoring is triggered by an air lead concentration above 30 µg/m3. A worker is permitted to return to work when his or her blood lead level falls below 40 µg/dL. In 1991, the National Institute for Occupational Safety and Health (NIOSH) set a goal of eliminating occupational exposures that result in workers having PbB levels greater than 25 µg/dL.
Exposure and Sources
Lead is a persistent and ubiquitous pollutant. Because it is an elemental pollutant, it does not dissipate, biodegrade, or decay. Thus, the total amount of lead pollution resulting from human activity increases over time, no matter how little additional lead is added to the environment. Lead is a multi-media pollutant, i.e., many sources contribute to the overall problem, and exposures from air, water, soil, dust, and food pathways may all be important. For children, an important source of lead exposure is swallowing nonfood items such as chips of lead-containing paint, an activity known as pica (an abnormal appetite for nonfood items) that is most prevalent in two- and three-year-olds. Children who put toys or other items in their mouths may also swallow lead if lead-containing dust and dirt are on these items. Touching dust and dirt containing lead is commonplace, but relatively little lead passes through the skin. The most important source of high-level lead exposure in the United States is household dust derived from deteriorated lead-based paint. Numerous homes contain lead-based paint and continue to be occupied by families with small children, including 21 million pre-1940 homes and rental units which,
over time, are rented to different families. Thus a single house with deteriorated lead-based paint can be the source of exposure for many children. In addition to lead-based paint in houses, other important sources of lead exposure include (1) contaminated soil and dust from deteriorated paints originally applied to buildings, bridges, and water tanks; (2) drinking water into which lead has leached from lead, bronze, or brass pipes and fixtures (including lead-soldered pipe joints) in houses, schools, and public buildings; (3) occupational exposures in smelting and refining industries, steel welding and cutting operations, battery manufacturing plants, gasoline stations, and radiator repair shops; (4) airborne lead from smelters and other point sources of air pollution, including vehicles burning leaded fuels; (5) hazardous waste sites that contaminate soil and water; (6) food cans made with lead-containing solder and pottery made with lead-containing glaze; and (7) food consumption, if crops are grown using fertilizers that contain sewage sludge or if much lead-containing dust is deposited onto crops. In the atmosphere, the use of leaded gasoline was the single largest source of lead (90%) from the 1920s onward, although the use of leaded fuel has been greatly curtailed and gasoline's contribution is now much smaller (35%). As discussed below, leaded fuel and many other sources have been greatly reduced in the United States, although drinking water and other sources remain important in some areas. A number of other countries, however, continue to use leaded fuel and other lead-containing products.
Government Responses
Many agencies are concerned with lead management. Lead agencies in the United States include the Environmental Protection Agency, the Centers for Disease Control, the U.S. Department of Health and Human Services, the Department of Housing and Urban Development, the Food and Drug Administration, the Consumer Product Safety Commission, the National Institute for Occupational Safety and Health, and the Occupational Safety and Health Administration. These agencies have taken many actions to reduce lead exposures, several of which have been very successful. General types of actions include: (1) restrictions or bans on the use of many products containing lead where the risks are high and where substitute products are available, e.g., interior paints, gasoline fuels, and solder; (2) recycling and safer ultimate-disposal strategies for products where risks are lower, or for which technically and economically feasible substitutes are not available, e.g., lead-acid automotive batteries, lead-containing wastes, pigments, and used oil; (3) emission controls for lead smelters, primary metal industries, and other industrial point sources, including the use of the best practicable control technology (BPCT) for new lead smelting and processing facilities and
reasonably available control technologies (RACT) for existing facilities; and (4) education and abatement programs where exposure stems from past uses of lead. The current goals of the Environmental Protection Agency (EPA) strategy are to reduce lead exposures to the fullest extent practicable, to significantly reduce the incidence of PbB levels above 10 µg/dL in children, and to reduce lead exposures that are anticipated to pose risks to children, the general public, or the environment. Several specific actions of this and other agencies are discussed below. The Residential Lead-Based Paint Hazard Reduction Act of 1992 (Title X) provides the framework for reducing hazards from lead-based paint exposure, primarily in housing. It establishes a national infrastructure of trained workers, training programs, and proficient laboratories, as well as a public education program to reduce hazards from lead paint in the nation's housing stock. Earlier, to help protect small children who might swallow chips of paint, the Consumer Product Safety Commission (CPSC) restricted the amount of lead in most paints to 0.06 percent by weight. CDC further suggests that interior and exterior paint used in buildings where people live be tested for lead; if the level of lead is high, the paint should be removed and replaced with a paint that contains an allowable level of lead. CPSC published a consumer safety alert and brochure on lead paint in the home in 1990 and has evaluated lead test kits for safety, efficacy, and consumer-friendliness. These kits are potential screening devices that consumers may use to detect lead in paint and other materials. Title X also requires EPA to promulgate regulations ensuring that personnel engaged in abatement activities are trained, to certify training programs, to establish standards for abatement activities, to promulgate model state programs, to establish a laboratory accreditation program, to establish an information clearinghouse, and to require disclosure of lead hazards at property transfer. The Department of Housing and Urban Development (HUD) has begun activities that include updating regulations dealing with lead-based paint in HUD programs and federal property; providing support for local screening programs; increasing public education; supporting research to reduce the cost and improve the reliability of testing and abatement; increasing state and local support; and providing more money to support abatement in low- and moderate-income households. HUD estimated that the total cost of testing and abatement in high-priority hazard homes would be $8 to $10 billion annually over 10 years, although costs could be substantially lowered by integrating abatement with other renovation activities. CPSC, EPA, and the states are required by the Lead Contamination Control Act of 1988 to test drinking water in schools for lead and to remove lead if levels are too high.
Drinking water coolers must also be lead-free, and any that contain lead must be removed. EPA regulations limit lead in drinking water to 0.015 mg/L. To manage environmental exposures resulting from inhalation, EPA regulations limit lead to 0.1 and 0.05 g/gal (about 0.026 and 0.013 g/L) in leaded and unleaded gasoline, respectively. Also, the National Ambient Air Quality Standards set a maximum lead concentration of 1.5 µg/m3, averaged over three months, although typical levels are far lower, 0.1 or 0.2 µg/m3. To identify and mitigate sources of lead in the diet, the Food and Drug Administration (FDA) has undertaken efforts that include the voluntary discontinuation of lead solder in food cans by the domestic food industry and the elimination of lead in glazing on ceramic ware. Regulatory measures are being introduced for wine, dietary supplements, crystal ware, food additives, and bottled water. For workers in lead-using industries, the Occupational Safety and Health Administration (OSHA) has established environmental and biological standards that include maximum air and blood lead levels. This monitoring must be conducted by the employer, and elevated PbB levels may require the removal of an individual from the workplace (at the levels discussed previously). The Permissible Exposure Limit (PEL) restricts air concentrations of lead to 50 µg/m3, and if 30 µg/m3 is exceeded, employers must implement a program that includes medical surveillance, exposure monitoring, training, regulated areas, respiratory protection, protective work clothing and equipment, housekeeping, hygiene facilities and practices, signs and labels, and record keeping. In the construction industry, the PEL is 200 µg/m3. The National Institute for Occupational Safety and Health (NIOSH) recommends that workers not be exposed to levels of more than 100 µg/m3 for up to 10 hours, and NIOSH has issued a health alert to construction workers regarding possible adverse health effects from long-term, low-level exposure. NIOSH has also published alerts and recommendations for preventing lead poisoning during blasting, sanding, cutting, burning, or welding of bridges and other steel structures coated with lead paint. Finally, lead screening for children has recently increased. The CDC recommends that screening (testing) for lead poisoning be included in health care programs for children under 72 months of age, especially those under 36 months of age. For a community with a significant number of children having PbB levels between 10 and 14 µg/dL, community-wide lead poisoning prevention activities should be initiated. For individual children with PbB levels between 15 and 19 µg/dL, nutritional and educational interventions are recommended. PbB levels exceeding 20 µg/dL should trigger investigation of the affected child's environment and medical evaluation. The highest levels, above 45 µg/dL,
require both medical and environmental interventions, including chelation therapy. CDC also conducts studies to determine the impact of interventions on children's blood lead levels. These regulatory activities have resulted in significant reductions in average levels of lead exposure. Nevertheless, lead management remains an important public health problem.
[Stuart Batterman]
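The CDC screening tiers the entry lists amount to a simple threshold table, and that structure is easy to see written as decision logic. A minimal sketch in Python follows; the function name is my own, and the tier wording paraphrases the entry's description of early-1990s guidance rather than quoting any official text.

```python
def cdc_followup(pbb_ug_per_dl):
    """Map a child's blood lead level (PbB, in ug/dL) to the follow-up
    tiers described in the entry. Note that the 10-14 ug/dL tier is a
    community-level response, not an individual prescription."""
    if pbb_ug_per_dl < 10:
        return "below the level of concern"
    if pbb_ug_per_dl < 15:
        return "community-wide lead poisoning prevention activities"
    if pbb_ug_per_dl < 20:
        return "nutritional and educational interventions"
    if pbb_ug_per_dl < 45:
        return "environmental investigation and medical evaluation"
    return "medical and environmental intervention, including chelation therapy"

for level in (8, 12, 17, 30, 50):
    print(f"PbB {level} ug/dL -> {cdc_followup(level)}")
```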
RESOURCES
BOOKS
Breen, J. J., and C. R. Stroup, eds. Lead Poisoning: Exposure, Abatement, Regulation. Lewis Publishers, 1995.
Kessel, I., J. T. O'Connor, and J. W. Graef. Getting the Lead Out: The Complete Resource for Preventing and Coping with Lead Poisoning. Rev. ed. Cambridge, MA: Fisher Books, 2001.
Pueschel, S. M., J. G. Linakis, and A. C. Anderson. Lead Poisoning in Childhood. Baltimore: Paul H. Brookes Publishing Co., 1996.
OTHER
Farley, Dixie. "Dangers of Lead Still Linger." FDA Consumer, January-February 1998 [cited July 2002].
Lead shot
Lead shot refers to the small pellets that are fired by shotguns while hunting waterfowl or upland fowl, or while skeet
shooting. Most lead shot misses its target and is dispersed into the environment. Because the shot is within the particle-size range favored by medium-sized birds as grit, it is often ingested and retained in the gizzard to aid in the mechanical abrasion of plant seeds, the first step in avian digestion. The shot also abrades during this process, however, releasing toxic lead that can poison the bird. It has been estimated that as much as 2–3 percent of the North American waterfowl population, or several million birds, may die from shot-caused lead poisoning each year. This problem should decrease in intensity, however, because lead shot is now being substantially replaced by steel shot in North America. See also Heavy metals and heavy metal poisoning
Leafy spurge
Leafy spurge (Euphorbia esula L.), a perennial plant from Europe and Asia, was introduced to North America through imported grain products by 1827. It grows 12 in (30.5 cm) to 3 ft (1 m) in height. Its stems, leaves, and roots contain a milky white latex carrying toxic cardiac glycosides; the latex is distasteful to cattle, which will not eat the plant. Considered a noxious, or destructive, weed in southern Canada and the northern
Great Plains of the United States, it crowds out native rangeland grasses, reducing the number of cattle that can graze the land. It is responsible for losses of approximately 35–45 million dollars per year to the United States cattle and hay industries. Its aggressive root system makes its spread difficult to control. Roots extend vertically to 15 ft (5 m), with up to 300 root buds, and horizontally to nearly 30 ft (9 m), and the plant regenerates from small portions of root. Tilling, burning, and herbicides are ineffective control methods, since they do not damage the roots and may prevent immediate regrowth of the desired species. The introduction of specific herbivores of leafy spurge from its native range, including certain species of beetles and moths, may be an effective means of control, as may certain pathogenic fungi. Studies also indicate that sheep and Angora goats will eat it. To control this plant's rampant spread in North America, a combination of methods seems most effective.
[Monica Anderson]
League of Conservation Voters
In 1970 Marion Edey, a House committee staffer, founded the League of Conservation Voters (LCV) as the non-partisan political action arm of the United States environmental movement. LCV works to establish a pro-environment, or "green," majority in Congress and to elect environmentally conscious candidates throughout the country. Through campaign donations, volunteers, endorsements, pro-environment advertisements, and annual publications such as the National Environmental Scorecard, the League raises voter awareness of the environmental positions of candidates and elected officials. Technically it has no formal membership, but the League's supporters, who make donations and purchase its publications, number 100,000. The board of directors is composed of 24 prominent environmentalists associated with such organizations as the Sierra Club, the Environmental Defense Fund, and Friends of the Earth. Because these organizations would endanger their charitable tax status if they participated directly in the electoral process, environmentalists developed the League. Since 1970 LCV has influenced many elections. From its first effort in 1970, in which LCV successfully prevented Rep. Wayne Aspinall of Colorado from obtaining the Democratic nomination, the League has grown into a significant force in American politics. In the 1989–90 elections LCV supported 120 pro-environment candidates and spent approximately $250,000 on their campaigns. In 1990 the League developed new endorsement tactics. First
it invented the term "greenscam" to identify candidates who only appear green. Next, LCV produced two generic television advertisements for candidates. One advertisement, entitled "Greenscam," attacked such candidates; the other, entitled "Decisions," was an award-winning positive advertisement in support of pro-environment candidates. By the 2000 campaign the League had attained an unprecedented degree of influence in the electoral process. That year LCV raised and donated $4.1 million in support of both Democratic and Republican candidates in a variety of ways. In endorsing a candidate, the League no longer simply contributes money to a campaign. It provides "in-kind" assistance: for example, it places a trained field organizer on a staff, creates radio and television advertisements, or develops grassroots outreach programs and campaign literature. In addition to supporting specific candidates, LCV holds all elected officials accountable for their track records on environmental issues. The League's annual publication National Environmental Scorecard lists the voting records of House and Senate members on environmental legislation. Likewise, the Presidential Scorecard identifies the positions that presidential candidates have taken. Through these publications and direct endorsement strategies, the League continues to apply pressure in the political process and elicit support for the environment.
[Andrea Gacki]
RESOURCES
ORGANIZATIONS
League of Conservation Voters, 1920 L Street, NW, Suite 800, Washington, D.C. USA 20036. (202) 785-8683, Fax: (202) 835-0491.
Louis Seymour Bazett Leakey (1903 – 1972)
African-born English paleontologist and anthropologist
Louis Seymour Bazett Leakey was born on August 7, 1903, in Kabete, Kenya. His parents, Mary Bazett (d. 1948) and Harry Leakey (1868–1940), were Church of England missionaries at the Church Missionary Society, Kabete, Kenya. Louis spent his childhood at the mission, where he learned the Kikuyu language and customs (he later compiled a Kikuyu grammar book). As a child, while pursuing his interest in ornithology—the study of birds—he often found stone tools washed out of the soil by the heavy rains, tools Leakey believed were of prehistoric origin. Stone tools were primary evidence of the presence of humans at a particular site, as toolmaking was believed at the time to be practiced only by
humans and was, along with an erect posture, one of the chief characteristics used to differentiate humans from nonhumans. Scientists at the time, however, did not consider East Africa a likely site for finding evidence of early humans; the discovery of Pithecanthropus in Java in 1894 (the so-called Java Man, now considered to be an example of Homo erectus) had led scientists to assume that Asia was the continent from which human forms had spread. Shortly after the end of World War I, Leakey was sent to a public school in Weymouth, England, and later attended St. John's College, Cambridge. Suffering from severe headaches resulting from a sports injury, he took a year off from his studies and joined a fossil-hunting expedition to Tanganyika (now Tanzania). This experience, combined with his studies in anthropology at Cambridge (culminating in a degree in 1926), led Leakey to devote his time to the search for the origins of humanity, which he believed would be found in Africa. Anatomist and anthropologist Raymond A. Dart's discovery of early human remains in South Africa was the first concrete evidence that this view was correct. Leakey's next expedition was to northwest Kenya, near Lakes Nakuru and Naivasha, where he uncovered materials from the Late Stone Age; at Kariandusi he discovered a 200,000-year-old hand ax. In 1928 Leakey married Henrietta Wilfrida Avern, with whom he had two children: Priscilla, born in 1930, and Colin, born in 1933; the couple divorced in the mid-1930s. In 1931 Leakey made his first trip to Olduvai Gorge—a 30-mi (48-km) ravine in Tanzania—the site that was to be his richest source of human remains. He had been discouraged from excavating at Olduvai by Hans Reck, a German paleontologist who had fruitlessly sought evidence of prehistoric humans there. Leakey's first discoveries at the site consisted of both animal fossils, important in attempts to date the particular stratum (or layer of earth) in which they were found, and, significantly, flint tools. These tools, dated to approximately one million years ago, were conclusive evidence of the presence of hominids—a family of erect primate mammals that use only two feet for locomotion—in Africa at that early date; it was not until 1959, however, that the first fossilized hominid remains were found there. In 1932, near Lake Victoria, Leakey found remains of Homo sapiens (modern man), the so-called Kanjera skulls (dated to 100,000 years ago) and the Kanam jaw (dated to 500,000 years ago); Leakey's claims for the antiquity of this jaw made it a controversial find among other paleontologists, and Leakey hoped he would find other, independent evidence for the existence of Homo sapiens from an even earlier period—the Lower Pleistocene. In the mid-1930s, a short time after his divorce from Wilfrida, Leakey married his second wife, Mary Douglas
Nicol; she was to make some of the most significant discoveries of Leakey's team's research. The couple eventually had three children: Philip, Jonathan, and Richard E. Leakey. During the 1930s, Leakey also became interested in the study of the Paleolithic period in Britain, in both human remains and geology, and he and Mary Leakey carried out excavations at Clacton in southeast England. Until the end of the 1930s, Leakey concentrated on the discovery of stone tools as evidence of human habitation; after this period he devoted more time to the unearthing of human and prehuman fossils. His expeditions to Rusinga Island, at the mouth of the Kavirondo Gulf in Kenya, during the 1930s and early 1940s produced a large number of finds, especially of remains of Miocene apes. One of these apes, which Leakey named Proconsul africanus, had a jaw lacking the so-called simian shelf that normally characterizes the jaws of apes; this was evidence that Proconsul represented a stage in the progression from ancient apes to humans. In 1948 Mary Leakey found a nearly complete Proconsul skull, the first fossil ape skull ever unearthed; this was followed by the unearthing of several more Proconsul remains. Louis Leakey began his first regular excavations at Olduvai Gorge in 1952; however, the Mau Mau (an anti-white secret society) uprising in Kenya in the early 1950s disrupted his paleontological work and induced him to write Mau Mau and the Kikuyu, an effort to explain the rebellion from the perspective of a European with an insider's knowledge of the Kikuyu. A second work, Defeating Mau Mau, followed in 1954. During the late 1950s, the Leakeys continued their work at Olduvai. In 1959, while Louis was recuperating from an illness, Mary Leakey found substantial fragments of a hominid skull that resembled the robust australopithecines—African hominids possessing small brains and near-human dentition—found in South Africa earlier in the century. Louis Leakey, who quickly reported the find to the journal Nature, suggested that it represented a new genus, which he named Zinjanthropus boisei, the genus name meaning "East African man" and the species name commemorating Charles Boise, one of Leakey's benefactors. This species, now called Australopithecus boisei, was later believed by Leakey to have been an evolutionary dead end, existing contemporaneously with Homo rather than representing an earlier developmental stage. In 1961, at Fort Ternan, Leakey's team located fragments of a jaw that Leakey believed were from a hitherto unknown genus and species of ape, one he designated Kenyapithecus wickeri, which he believed was a link between ancient apes and humans dating from 14 million years ago; it therefore represented the earliest hominid. In 1967, however, an older skull, one that had been found two decades earlier on Rusinga Island and which Leakey had
[Photograph: Louis Leakey. (The Library of Congress.)]
originally given the name Ramapithecus africanus, was found to have hominid-like lower dentition; he renamed it Kenyapithecus africanus, and Leakey believed it was an even earlier hominid than Kenyapithecus wickeri. Leakey's theories about the place of these Lower Miocene fossil apes in human evolution have been among his most widely disputed. During the early 1960s, a member of Leakey's team found fragments of the hand, foot, and leg bones of two individuals at a site near where Zinjanthropus had been found, but in a slightly lower and, apparently, slightly older layer. These bones appeared to be those of a creature more like modern humans than Zinjanthropus, possibly a species of Homo that lived at approximately the same time, with a larger brain and the ability to walk fully upright. As a result of the newly developed potassium-argon dating method, it was discovered that the bed from which these bones had come was 1.75 million years old. The bones were, apparently, the evidence for which Leakey had been searching for years: skeletal remains of Homo from the Lower Pleistocene. Leakey designated the creature whose remains these were as Homo habilis ("man with ability"), a creature who walked upright and had dentition resembling that of modern humans, hands capable of toolmaking, and a large cranial capacity. Leakey saw this hominid as a direct ancestor of Homo erectus and modern humans. Not unexpectedly, Leakey was attacked by other scholars, as this identification of the
fragments moved the origins of the genus Homo substantially further back in time. Some scholars felt that the new remains were those of australopithecines, if relatively advanced ones, rather than very early examples of Homo. Health problems during the 1960s curtailed Leakey's field work; it was at this time that his Centre for Prehistory and Paleontology in Nairobi became the springboard for the careers of such primatologists as Jane Goodall and Dian Fossey in the study of nonhuman primates. A request came in 1964 from the Israeli government for assistance with the technical as well as the fundraising aspects of the excavation of an early Pleistocene site at Ubeidiya. This produced evidence of human habitation dating back 700,000 years, the earliest such find outside Africa. During the 1960s, others, including Mary Leakey and the Leakeys' son Richard, made significant finds in East Africa; Leakey turned his attention to the investigation of a problem that had intrigued him since his college days: determining when humans had reached the North American continent. Concentrating his investigation in the Calico Hills in the Mojave Desert, California, he sought evidence of early humans in the form of stone tools, as he had done in East Africa. The discovery, in sediment dated from 50,000 to 100,000 years old, of some pieces of chalcedony (a translucent quartz) that resembled manufactured tools stirred an immediate controversy; at that time, scientists believed that humans had settled in North America approximately 20,000 years ago. Many archaeologists, including Mary Leakey, criticized Leakey's California methodology—and his interpretations of the finds—as scientifically unsound, but Leakey, still charismatic and persuasive, was successful in obtaining funding from the National Geographic Society and, later, several other sources. Human remains were not found in conjunction with the supposed stone tools, and many scientists have not accepted these "artifacts" as anything other than rocks. Shortly before Louis Leakey's death, Richard Leakey showed his father a skull he had recently found near Lake Rudolf (now Lake Turkana) in Kenya. This skull, removed from a deposit dated to 2.9 million years ago, had a cranial capacity of approximately 800 cubic centimeters, putting it within the range of Homo and apparently vindicating Leakey's long-held belief in the extreme antiquity of that genus; it also appeared to substantiate Leakey's interpretation of the Kanam jaw. Leakey died of a heart attack in early October 1972, in London. Some scientists have questioned Leakey's interpretations of his discoveries. Other scholars have pointed out that two of the most important finds associated with him were actually made by Mary Leakey but became widely known when they were interpreted and publicized by him; Leakey had even encouraged criticism through his tendency to
publicize his somewhat sensationalistic theories before they had been sufficiently tested. Critics have cited both his tendency toward hyperbole and his penchant for claiming that his finds were the "oldest," the "first," or the "most significant"; in a 1965 National Geographic article, for example, Melvin M. Payne pointed out that Leakey, at a Washington, D.C., press conference, claimed that his discovery of Homo habilis had made all previous scholarship on early humans obsolete. Leakey has also been criticized for his eagerness to create new genera and species for new finds rather than trying to fit them into existing categories. Leakey, however, recognized the value of publicity for the fundraising efforts necessary for his expeditions. He was known as an ambitious man with a penchant for stubbornly adhering to his interpretations, and he used the force of his personality to communicate his various finds and the subsequent theories he devised to scholars and the general public. Leakey's response to criticism was that scientists have trouble divesting themselves of their own theories in the light of new evidence. "Theories on prehistory and early man constantly change as new evidence comes to light," Leakey remarked, as quoted by Payne in National Geographic. "A single find such as Homo habilis can upset long-held—and reluctantly discarded—concepts. A paucity of human fossil material and the necessity for filling in blank spaces extending through hundreds of thousands of years all contribute to a divergence of interpretations. But this is all we have to work with; we must make the best of it within the limited range of our present knowledge and experience." Much of the controversy derives from the lack of consensus among scientists about what defines "human": to what extent are toolmaking, dentition, cranial capacity, and an upright posture defining characteristics, as Leakey asserted? Louis Leakey's significance lies in the ways he changed views of early human development. He pushed back the date when the first humans were believed to have appeared, to a time earlier than previous research had suggested. He showed that human evolution began in Africa rather than Asia, as had been maintained. In addition, he created research facilities in Africa and stimulated explorations in related fields, such as primatology (the study of primates). His work is notable as well for the sheer number of finds, not only of the remains of apes and humans but also of the plant and animal species that comprised the ecosystems in which they lived. These finds by Leakey and his team filled numerous gaps in scientific knowledge of the evolution of human forms. They provided clues to the links between prehuman, apelike primates and early humans, and they demonstrated that human evolution may have followed more than one parallel path, one of which led to modern humans, rather than a single line, as earlier scientists had maintained.
[Michael Sims]
RESOURCES
BOOKS
Cole, S. Leakey's Luck: The Life of Louis Seymour Bazett Leakey, 1903–1972. Harcourt, 1975.
Isaac, G., and E. R. McCown, eds. Human Origins: Louis Leakey and the East African Evidence. Benjamin-Cummings, 1976.
Johanson, D. C., and M. A. Edey. Lucy: The Beginnings of Humankind. Simon & Schuster, 1981.
Leakey, M. Disclosing the Past. Doubleday, 1984.
Leakey, R. One Life: An Autobiography. Salem House, 1984.
Malatesta, A., and R. Friedland. The White Kikuyu: Louis S. B. Leakey. McGraw-Hill, 1978.
Mary Douglas Nicol Leakey (1913 – 1996)
English paleontologist and anthropologist
For many years Mary Leakey lived in the shadow of her husband, Louis Leakey, whose reputation, coupled with the prejudices of the time, led him to be credited with some of his wife's discoveries in the field of early human archaeology. Yet she established a substantial reputation in her own right and came to be recognized as one of the most important paleoanthropologists of the twentieth century. It was Mary Leakey who was responsible for some of the most important discoveries made by Louis Leakey's team. Although her close association with Louis Leakey's work on Paleolithic sites at Olduvai Gorge—a 30-mi (48-km) ravine in Tanzania—led to her being considered a specialist in that particular area and period, she in fact worked on excavations dating from as early as the Miocene (approximately 18 million years ago) to those as recent as the Iron Age of a few thousand years ago. Mary Leakey was born Mary Douglas Nicol on February 6, 1913, in London. Her mother was Cecilia Frere, the great-granddaughter of John Frere, who had discovered prehistoric stone tools at Hoxne, Suffolk, England, in 1797. Her father was Erskine Nicol, a painter who was himself the son of an artist and who had a deep interest in Egyptian archaeology. When Mary was a child, her family made frequent trips to southwestern France, where her father took her to see the Upper Paleolithic cave paintings. She and her father became friends with Elie Peyrony, the curator of the local museum, and there she was exposed to the vast collection of flint tools dating from that period of human prehistory. She was also allowed to accompany Peyrony on his excavations, though the archaeological work was not conducted in what would now be considered a scientific way: artifacts were removed from the site without careful study of the place in the earth where each had been found, obscuring valuable data that could have been used in dating the artifact and analyzing its context. On a later trip, in 1925, she was taken
to Paleolithic caves by the Abbé Lemozi, parish priest of Cabrerets, France, who had written papers on cave art. After her father's death in 1926, Mary Nicol was taken to Stonehenge and Avebury in England, where she began to learn about the archaeological activity in that country and, after meeting the archaeologist Dorothy Liddell, to realize the possibility of archaeology as a career for a woman. By 1930 Mary Nicol had undertaken coursework in geology and archaeology at the University of London and had participated in a few excavations in order to obtain field experience. One of her lecturers, R. E. M. Wheeler, offered her the opportunity to join his party excavating St. Albans, England, the ancient Roman site of Verulamium; although she remained at that site for only a few days, finding the work there poorly organized, she began her career in earnest shortly thereafter, excavating Neolithic (New Stone Age) sites at Hembury, Devon, where she worked between 1930 and 1934. Her main area of expertise was stone tools, and she was exceptionally skilled at making drawings of them. During the 1930s Mary met Louis Leakey, who was to become her husband. Leakey was by this time well known for his finds of early human remains in East Africa; at Mary and Louis's first meeting he asked her to help him with the illustrations for his 1934 book, Adam's Ancestors: An Up-to-Date Outline of What Is Known about the Origin of Man. In 1934 Mary Nicol and Louis Leakey worked at an excavation in Clacton, England, where the skull of a hominid—a family of erect primate mammals that use only two feet for locomotion—had recently been found and where Louis was investigating Paleolithic geology as well as fauna and human remains. The excavation led to Mary Leakey's first publication, a 1937 report in the Proceedings of the Prehistoric Society. By this time, Louis Leakey had decided that Mary should join him on his next expedition to Olduvai Gorge in Tanganyika (now Tanzania), which he believed to be the most promising site for discovering early Paleolithic human remains. On the journey to Olduvai, Mary stopped briefly in South Africa, where she spent a few weeks with an archaeological team and learned more about the scientific approach to excavation, studying each find in situ and paying close attention to the details of the geological and faunal material surrounding each artifact. This knowledge was to assist her in her later work at Olduvai and elsewhere. At Olduvai, among her earliest discoveries were fragments of a human skull; these were some of the first such remains found at the site, and it would be twenty years before any others were found there. Mary Nicol and Louis Leakey then returned to England. Leakey's divorce from his first wife was made final in the mid-1930s, and he and Mary Nicol were then married; the couple returned to Kenya
in January of 1937. Over the next few years, the Leakeys excavated Neolithic and Iron Age sites at Hyrax Hill, Njoro River Cave, and the Naivasha Railway Rock Shelter, which yielded a large number of human remains and artifacts. During World War II, the Leakeys began to excavate at Olorgesailie, southwest of Nairobi, but because of the complicated geology of that site, dating the material found there was difficult. It did prove to be a rich source of material, however: in 1942 Mary Leakey uncovered hundreds, possibly thousands, of hand axes there. Her first major discovery in the field of prehuman fossils was most of the skull of a Proconsul africanus on Rusinga Island, in Lake Victoria, Kenya, in 1948. Proconsul was believed by some paleontologists to be a common ancestor of apes and humans, an animal whose descendants developed into two branches on the evolutionary tree: the Pongidae (great apes) and the Hominidae (who eventually evolved into true humans). Proconsul lived during the Miocene, approximately 18 million years ago. This was the first time a fossil ape skull had ever been found—only a small number have been found since—and the Leakeys hoped that this would be the ancestral hominid that paleontologists had sought for decades. The absence of a "simian shelf," a reinforcement of the jaw found in modern apes, was one of the features of Proconsul that led the Leakeys to infer that this was a direct ancestor of modern humans. Proconsul is now generally believed to be a species of Dryopithecus, closer to apes than to humans. Many of the finds at Olduvai were primitive stone hand axes, evidence of human habitation; it was not known, however, who had made them. Mary's concentration had been on the discovery of such tools, while Louis's goal had been to learn who had made them, in the hope that the date for the appearance of toolmaking hominids could be moved back to an earlier point. In 1959 Mary unearthed part of the jaw of an early hominid she designated Zinjanthropus (meaning "East African Man") and whom she referred to as "Dear Boy"; the early hominid is now considered to be a species of Australopithecus—apparently related to the two kinds of australopithecine found in South Africa, Australopithecus africanus and Australopithecus robustus—and has been given the species designation boisei in honor of Louis Leakey's sponsor Charles Boise. By means of the recently developed technique of potassium-argon dating, the fragment was determined to be 1.75 million years old, a finding that pushed back the date for the appearance of hominids in Africa. Despite the importance of this find, however, Louis Leakey was slightly disappointed, as he had hoped that the excavations would unearth not another australopithecine but an example of Homo living at that early date. He was seeking evidence for his theory that more than one hominid form lived at Olduvai at the same time; these forms were the australopithecines, who eventually died out, and some early form of Homo, which
survived—owing to toolmaking ability and larger cranial capacity—to evolve into Homo erectus and, eventually, the modern human. Leakey hoped that Mary Leakey’s find would prove that Homo existed at that early level of Olduvai. The discovery he awaited did not come until the early 1960s, with the identification of a skull found by their son Jonathan Leakey that Louis designated as Homo habilis (“man with ability”). He believed this to be the true early human responsible for making the tools found at the site. In her autobiography, Disclosing the Past, released in 1984, Mary Leakey reveals that her professional and personal relationship with Louis Leakey had begun to deteriorate by 1968. As she increasingly began to lead the Olduvai research on her own, and as she developed a reputation in her own right through her numerous publications of research results, she believes that her husband began to feel threatened. Louis Leakey had been spending a vast amount of his time in fundraising and administrative matters, while Mary was able to concentrate on field work. As Louis began to seek recognition in new areas, most notably in excavations seeking evidence of early humans in California, Mary stepped up her work at Olduvai, and the breach between them widened. She became critical of his interpretations of his California finds, viewing them as evidence of a decline in his scientific rigor. During these years at Olduvai, Mary made numerous new discoveries, including the first Homo erectus pelvis to be found. Mary Leakey continued her work after Louis Leakey’s death in 1972. From 1975 she concentrated on Laetoli, Tanzania, which was a site earlier than the oldest beds at Olduvai. She knew that the lava above the Laetoli beds was dated to 2.4 million years ago, and the beds themselves were therefore even older; in contrast, the oldest beds at Olduvai were two million years old. Potassium-argon dating has since shown the upper beds at Laetoli to be approximately 3.5 million years old. In 1978 members of her team found two trails of hominid footprints in volcanic ash dated to approximately 3.5 million years ago; the form of the footprints gave evidence that these hominids walked upright, thus moving the date for the development of an upright posture back significantly earlier than previously believed. Mary Leakey considers these footprints to be among the most significant finds with which she has been associated. In the late 1960s Mary Leakey received an honorary doctorate from the University of the Witwatersrand in South Africa, an honor she accepted only after university officials had spoken out against apartheid. Among her other honorary degrees are a D.S.Sc. from Yale University and a D.Sc. from the University of Chicago. She received an honorary D.Litt. from Oxford University in 1981. She has also received the Gold Medal of the Society of Women Geographers. Louis Leakey was sometimes faulted for being too quick to interpret the finds of his team and for his propensity
for developing sensationalistic, publicity-attracting theories. In recent years Mary Leakey had been critical of the conclusions reached by her husband—as well as by some others—but she did not add her own interpretations to the mix. Instead, she has always been more concerned with the act of discovery itself; she wrote that it is more important for her to continue the task of uncovering early human remains to provide the pieces of the puzzle than it is to speculate and develop her own interpretations. Her legacy lies in the vast amount of material she and her team have unearthed; she leaves it to future scholars to deduce its meaning. [Michael Sims]
RESOURCES BOOKS Cole, S. Leakey’s Luck: The Life of Louis Seymour Bazett Leakey, 1903–1972. Harcourt, 1975. Isaac, G., and E. R. McCown, eds. Human Origins: Louis Leakey and the East African Evidence. Benjamin-Cummings, 1976. Johanson, D. C., and M. A. Edey. Lucy: The Beginnings of Humankind. Simon & Schuster, 1981. Leakey, L. By the Evidence: Memoirs, 1932–1951. Harcourt, 1974. Leakey, R. One Life: An Autobiography. Salem House, 1984. Malatesta, A., and R. Friedland. The White Kikuyu: Louis S. B. Leakey. McGraw-Hill, 1978. Moore, R. E. Man, Time, and Fossils: The Story of Evolution. Knopf, 1961. Reader, J. Missing Links. Little, Brown, 1981.
Richard Erskine Frere Leakey (1944 – ) African-born English paleontologist and anthropologist Richard Erskine Frere Leakey was born on December 19, 1944, in Nairobi, Kenya. Continuing the work of his parents, Leakey has pushed the date for the appearance of the first true humans back even further than they had, to nearly three million years ago. This represents nearly a doubling of the previous estimates. Leakey also has found more evidence to support his father’s still controversial theory that there were at least two parallel branches of human evolution, of which only one was successful. The abundance of human fossils uncovered by Richard Leakey’s team has provided an enormous number of clues as to how the various fossil remains fit into the puzzle of human evolution. The team’s finds have also helped to answer, if only speculatively, some basic questions: When did modern humans’ ancestors split off from the ancient apes? On what continent did this take place? At what point did they develop the characteristics now considered defining human attributes? What are the relationships among, and the chronology of, the various genera and species of the fossil remains that have been found?
While accompanying his parents on an excavation at Kanjera near Lake Victoria at the age of six, Richard Leakey made his first discovery of fossilized animal remains, part of an extinct variety of giant pig. Richard Leakey, however, was determined not to “ride upon his parents’ shoulders,” as Mary Leakey wrote in her autobiography, Disclosing the Past. Several years later, as a young teenager in the early 1960s, Richard demonstrated a talent for trapping wildlife, which prompted him to drop out of high school to lead photographic safaris in Kenya. His paleontological career began in 1963, when he led a team of paleontologists to a fossil-bearing area near Lake Natron in Tanganyika (now Tanzania), a site that was later dated to approximately 1.4 million years ago. A member of the team discovered the jaw of an early hominid—a member of the family of erect primate mammals that use only two feet for locomotion—called an Australopithecus boisei (then named Zinjanthropus). This was the first discovery of a complete Australopithecus lower jaw and the only Australopithecus skull fragment found since Mary Leakey’s landmark discovery in 1959. Jaws provide essential clues about the nature of a hominid, both in terms of its structural similarity to other species and, if teeth are present, its diet. Richard Leakey spent the next few years occupied with more excavations, the most important result of which was the discovery of a nearly complete fossil elephant. In 1964 Richard married Margaret Cropper, who had been a member of his father’s team at Olduvai the year before. It was at this time that he became associated with his father’s Centre for Prehistory and Paleontology in Nairobi. In 1968, at the age of 23, he became administrative director of the National Museum of Kenya. While his parents had mined with great success the fossil-rich Olduvai Gorge, Richard Leakey concentrated his efforts in northern Kenya and southern Ethiopia. In 1967 he served as the leader of an expedition to the Omo Delta area of southern Ethiopia, a trip financed by the National Geographic Society. At a site dated to approximately 150,000 years ago, members of his team located portions of two fossilized human skulls believed to be from examples of Homo sapiens, or modern humans. While the prevailing view at the time was that Homo sapiens emerged around 60,000 years ago, these skulls were dated at 130,000 years old. While on an airplane trip, Richard Leakey flew over the eastern portion of Lake Rudolf (now Lake Turkana) on the Ethiopia-Kenya border, and he noticed from the air what appeared to be ancient lake sediments, a kind of terrain that he felt looked promising as an excavation site. He used his next National Geographic Society grant to explore this area. The region was Koobi Fora, a site that was to become Richard Leakey’s most important area for excavation. At Koobi Fora his team uncovered more than four hundred hominid fossils and an abundance of stone tools, such tools
being a primary indication of the presence of early humans. Subsequent excavations near the Omo River in Kenya, beginning in 1968, unearthed more examples of early humans, the first found being another Australopithecus lower jaw fragment. At the area of Koobi Fora known as the KBS tuff (tuff being volcanic ash; KBS standing for the Kay Behrensmeyer Site, after a member of the team), stone tools were found. Preliminary dating of the site placed the area at 2.6 million years ago; subsequent tests over the following few years determined the now generally accepted age of 1.89 million years. In July of 1969, Richard Leakey came across a virtually complete Australopithecus boisei skull—lacking only the teeth and lower jaw—lying in a river bed. A few days later a member of the team located another hominid skull nearby, comprising the back and base of the cranium. The following year brought the discovery of many more fossil hominid remains, at the rate of nearly two per week. Among the most important finds was the first hominid femur to be found in Kenya, which was soon followed by several more. It was at about this time that Leakey obtained a divorce from his first wife, and in October of 1970, he married Meave Gillian Epps, who had been on the 1969 expedition. In 1972, Richard Leakey’s team uncovered a skull that appeared to be similar to the one identified by his father and called Homo habilis (“man with ability”). This was the early human that Louis Leakey maintained had achieved the toolmaking skills that precipitated the development of a larger brain capacity and led to the development of the modern human—Homo sapiens. This skull was more complete and apparently somewhat older than the one Louis Leakey had found and was thus the earliest example of the species Homo yet discovered. They labeled the new skull, which was found below the KBS tuff, “Skull 1470,” and this proved to be among Richard Leakey’s most significant discoveries. The fragments consisted of small pieces of all sides of the cranium, and, unusually, the facial bones, enough to permit a reasonably complete reconstruction. Larger than the skulls found in 1969 and 1970, this example had approximately twice the cranial capacity of Australopithecus and more than half that of a modern human—nearly 800 cubic centimeters. At the time, Leakey believed the fragments to be 2.9 million years old (although a more recent dating of the site would place them at less than 2 million years old). Basing his theory in part on these data, Leakey developed the view that these early hominids may have lived as early as 2.5 or even 3.5 million years ago and lent support to the theory that Homo habilis was not a descendant of the australopithecines, but a contemporary. By the late 1960s, relations between Richard Leakey and his father had become strained, partly because of real or imagined competition within the administrative structure of the Centre for Prehistory, and partly because of some
divergences in methodology and interpretation. Shortly before Louis Leakey’s death, however, the discovery of Skull 1470 by Richard Leakey’s team allowed Richard to present his father with apparent corroboration of one of his central theories. Richard Leakey did not make his theories of human evolution public until 1974. At this time, scientists were still grappling with Louis Leakey’s interpretation of his findings that there had been at least two parallel lines of human evolution, only one of which led to modern humans. After Louis Leakey’s death, Richard Leakey reported that, based on new finds, he believed that hominids diversified between 3 and 3.5 million years ago. Various lines of australopithecines and Homo coexisted, with only one line, Homo, surviving. The australopithecines and Homo shared a common ancestor; Australopithecus was not ancestral to Homo. As did his father, Leakey believes that Homo developed in Africa, and it was Homo erectus who, approximately 1.5 million years ago, developed the technological capacity to begin the spread of humans beyond their African origins. In Richard Leakey’s scheme, Homo habilis developed into Homo erectus, who in turn developed into Homo sapiens, the present-day human. As new finds are made, new questions arise. Are newly discovered variants proof of a plurality of species, or do they give evidence of greater variety within the species that have already been identified? To what extent is sexual dimorphism responsible for the apparent differences in the fossils? In some scientific circles, the discovery of fossil remains at Hadar in Ethiopia by paleoanthropologist Donald Carl Johanson and others, along with the more recent revised dating of Skull 1470, cast some doubt on Leakey’s theory in general and on his interpretation of Homo habilis in particular. Johanson believed that the fossils he found at Hadar and the fossils Mary Leakey found at Laetoli in Tanzania, and which she classified as Homo habilis, were actually all australopithecines; he termed them Australopithecus afarensis and claimed that this species is the common ancestor of both the later australopithecines and Homo. Richard Leakey has rejected this argument, contending that the australopithecines were not ancestral to Homo and that an earlier common ancestor would be found, possibly among the fossils found by Mary Leakey at Laetoli. The year 1975 brought another significant find by Leakey’s team at Koobi Fora: the team found what was apparently the skull of a Homo erectus, according to Louis Leakey’s theory a descendant of Homo habilis and probably dating to 1.5 million years ago. This skull, labeled “3733,” represents the earliest known evidence for Homo erectus in Africa. Richard Leakey began to suffer from health problems during the 1970s, and in 1979 he was diagnosed with a serious kidney malfunction. Later that year he underwent a
kidney transplant operation, his younger brother Philip being the donor. During his recuperation Richard completed his autobiography, One Life, which was released in 1984, and following his recovery, he renewed his search for the origins of the human species. The summer of 1984 brought another major discovery: the so-called Turkana boy, a nearly complete skeleton of a Homo erectus, missing little but the hands and feet, and offering, for the first time, the opportunity to view many bones of this species. It was shortly after the unearthing of Turkana boy—whose skeletal remains indicate that he was a twelve-year-old youngster who stood approximately five-and-a-half feet tall—that the puzzle of human evolution became even more complicated. The discovery of a new skull, called the Black Skull, with an Australopithecus boisei face but a cranium that was quite apelike, introduced yet another complication, possibly a fourth branch in the evolutionary tree. Leakey became the Director of the Wildlife Conservation and Management Department for Kenya (Kenya Wildlife Service) in 1989 and in 1999 became head of the Kenyan civil service. [Michael Sims]
RESOURCES BOOKS Leakey, M. Disclosing the Past: An Autobiography. Doubleday, 1984. Leakey, R., and R. Lewin. Origins Reconsidered: In Search of What Makes Us Human. Doubleday, 1992. Reader, J. Missing Links. Little, Brown, 1981.
Leaking underground storage tank Leaking underground storage tanks (LUSTs) that hold toxic substances have come under new regulatory scrutiny in the United States because of the health and environmental hazards posed by the materials that can leak from them. These storage tanks typically hold petroleum products and other toxic chemicals beneath gas stations and other petroleum facilities. An estimated 63,000 of the nation’s underground storage tanks have been shown to leak contaminants into the environment or are considered to have the potential to leak at any time. One reason for the instability of underground storage tanks is their construction. Only five percent of underground storage tanks are made of corrosion-protected steel, while 84 percent are made of bare steel, which corrodes easily. Another 11 percent of underground storage tanks are made of fiberglass. Hazardous materials seeping from some of the nation’s six million underground storage tanks can contaminate aquifers, the water-bearing rock units that supply much of the earth’s drinking water. An aquifer, once contaminated, can be ruined as a source of fresh water. In particular, benzene has been found
to be a contaminant of groundwater as a result of leaks from underground gasoline storage tanks. Benzene and other volatile organic compounds have been detected in bottled water despite manufacturers’ claims of purity. According to the Environmental Protection Agency (EPA), more than 30 states reported groundwater contamination from petroleum products leaking from underground storage tanks. States also reported water contamination from radioactive waste leaching from storage containment facilities. Other reported pollution problems include leaking hazardous substances that are corrosive, explosive, readily flammable, or chemically reactive. While water pollution may be the most visible consequence of leaks from underground storage tanks, fires and explosions are real dangers in some areas. The EPA is charged with exploring, developing, and disseminating technologies and funding mechanisms for cleanup. The primary job itself, however, is left to state and local governments. Actual cleanup is sometimes funded by the Leaking Underground Storage Tank trust fund established by Congress in 1986. Under the Superfund Amendments and Reauthorization Act, owners and operators of underground storage tanks are required to take corrective action to prevent leakage. See also Comprehensive Environmental Response, Compensation and Liability Act; Groundwater monitoring; Groundwater pollution; Storage and transport of hazardous materials; Toxic Substances Control Act [Linda Rehkopf]
RESOURCES BOOKS Epstein, L., and K. Stein. Leaking Underground Storage Tanks—Citizen Action: An Ounce of Prevention. New York: Environmental Information Exchange (Environmental Defense Fund), 1990.
PERIODICALS Breen, B. “A Mountain and a Mission.” Garbage 4 (May-June 1992): 52–57. Hoffman, R. D. R. “Stopping the Peril of Leaking Tanks.” Popular Science 238 (March 1991): 77–80.
OTHER U.S. Environmental Protection Agency. Office of Underground Storage Tanks (OUST). June 13, 2002 [June 21, 2002].
Aldo Leopold (1887 – 1948) American conservationist, ecologist, and writer Leopold was a noted forester, game manager, conservationist, college professor, and ecologist. Yet he is known worldwide for A Sand County Almanac, a little book considered an important, influential work in the conservation movement
of the twentieth century. In it, Leopold established the land ethic, guidelines for respecting the land and preserving its integrity. Leopold grew up in Iowa, in a house overlooking the Mississippi River, where he learned hunting from his father and an appreciation of nature from his mother. He received a master’s degree in forestry from Yale and spent his formative professional years working for the United States Forest Service in the American Southwest. In the Southwest, Leopold began slowly to consider preservation as a supplement to Gifford Pinchot’s “conservation as wise use—greatest good for the greatest number” land management philosophy that he learned at Yale and in the Forest Service. He began to formulate arguments for the preservation of wilderness and the sustainable development of wild game. Formerly a hunter who encouraged the elimination of predators to save the “good” animals for hunters, Leopold became a conservationist who remembered with sadness the “dying fire” in the eyes of a wolf he had killed. In the Journal of Forestry, he began to speculate that perhaps Pinchot’s principle of highest use itself demanded “that representative portions of some forests be preserved as wilderness.” Leopold must be recognized as one of a handful of originators of the wilderness idea in American conservation history. He was instrumental in the founding of the Wilderness Society in 1935, which he described in the first issue of Living Wilderness as “one of the focal points of a new attitude—an intelligent humility toward man’s place in nature.” In a 1941 issue of that same journal, he asserted that wilderness also has critical practical uses “as a base-datum of normality, a picture of how healthy land maintains itself,” and that wilderness was needed as a living “land laboratory.” This thinking led to the first large area designated as wilderness in the United States. In 1924, some 574,000 acres (232,000 ha) of the Gila National Forest in New Mexico was officially named a wilderness area. Four years before, the much smaller Trappers Lake valley in Colorado was the first area designated “to be kept roadless and undeveloped.” Aldo Leopold is also widely acknowledged as the founder of wildlife management in the United States. His classic text on the subject, Game Management (1933), is still in print and widely read. Leopold tried to write a general management framework, drawing upon and synthesizing species monographs and local manuals. “Details apply to game alone, but the principles are of general import to all fields of conservation,” he wrote. He wanted to coordinate “science and use” in his book and felt strongly that land managers could either try to apply such principles, or be reduced to “hunting rabbits.” Here can be found early uses of concepts still central to conservation and management, such as limiting factor, niche, saturation point, and carrying capacity. Leopold later became the first professor of game
management in the United States at the University of Wisconsin. Leopold’s A Sand County Almanac, published in 1949, a year after his death, is often described as “the bible of the environmental movement” of the second half of the twentieth century. The Almanac is a beautifully written source of solid ecological concepts such as trophic linkages and biological community. The book extends basic ecological concepts, forming radical ideas to reformulate human thinking and behavior. It exhibits an ecological conscience, a conservation aesthetic, and a land ethic. He advocated his concept of ecological conscience to fill in a perceived gap in conservation education: “Obligations have no meaning without conscience, and the problem we face is the extension of the social conscience from people to land.” Lesser known is his attention to the aesthetics of land: according to the Almanac, an acceptable land aesthetic emerges only from learned and sensitive perception of the connections and needs of natural communities. The last words in the Almanac are that a true conservation aesthetic is developed “not of building roads into lovely country, but of building receptivity into the still unlovely human mind.” Leopold derived his now famous land ethic from an ecological conception of community. All ethics, he maintained, “rest upon a single premise: that the individual is a member of a community of interdependent parts.” He argued that “the land ethic simply enlarges the boundaries of the community to include soils, waters, plants, and animals, or collectively: the land.” Perhaps the most widely quoted statement from the book argues that “a thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise.” Leopold’s land ethic was first proposed in a Journal of Forestry article in 1933 and later expanded in the Almanac. It is a plea to care for land and its biological complex, instead of considering it a commodity. As Wallace Stegner noted, Leopold’s ideas were heretical in 1949, and to some people still are. “They smack of socialism and the public good,” he wrote. “They impose limits and restraints. They are anti-Progress. They dampen American initiative. They fly in the face of the faith that land is a commodity, the very foundation stone of American opportunity.” As a result, Stegner and others do not think Leopold’s ethic had much influence on public thought, though the book has been widely read. Leopold recognized this. “The case for a land ethic would appear hopeless but for the minority which is in obvious revolt against these ‘modern’ trends,” he commented. Nevertheless, the land ethic is alive and still flourishing, in an ever-growing minority. Even Stegner argued that “Leopold’s land ethic is not a fact but [an on-going] task.” Leopold did not shrink from that task, being actively involved in many
Aldo Leopold examining a gray partridge. (Photograph by Robert Oetking. University of Wisconsin-Madison Archives. Reproduced by permission.)
conservation associations, teaching management principles and the land ethic to his classes, bringing up all five of his children to become conservationists, and applying his beliefs directly to his own land, a parcel of “logged, fire-swept, overgrazed, barren” land in Sauk County, Wisconsin. As his work has become more recognized and more influential, many labels have been applied to Leopold by contemporary writers. He is a “prophet” and “intellectual touchstone” to Roderick Nash, a “founding genius” to J. Baird Callicott, “an American Isaiah” to Stegner, and the “Moses of the new conservation impulse” to Donald Fleming. In a sense, he may have been all of these, but more than anything else, Leopold was an applied ecologist who tried to put into practice the principles he learned from the land. [Gerald R. Young Ph.D.]
RESOURCES BOOKS Callicott, J. B., ed. Companion to A Sand County Almanac: Interpretive and Critical Essays. Madison: University of Wisconsin Press, 1987. Flader, S. L., and J. B. Callicott, eds. The River of the Mother of God and Other Essays by Aldo Leopold. Madison: University of Wisconsin Press, 1991.
Fritzell, P. A. “A Sand County Almanac and The Conflicts of Ecological Conscience.” In Nature Writing and America: Essays Upon a Cultural Type. Ames: Iowa State University Press, 1990. Leopold, A. Game Management. New York: Charles Scribner’s Sons, 1933. ———. A Sand County Almanac. New York: Oxford University Press, 1949. Meine, C. Aldo Leopold: His Life and Work. Madison: University of Wisconsin Press, 1988. Oelschlaeger, M. “Aldo Leopold and the Age of Ecology.” In The Idea of Wilderness: From Prehistory to the Age of Ecology. New Haven, CT: Yale University Press, 1991. Strong, D. H. “Aldo Leopold.” In Dreamers and Defenders: American Conservationists. Lincoln: University of Nebraska Press, 1988.
Less developed countries Less developed countries (LDCs) have lower levels of economic prosperity, health care, and education than most other countries. Development, or improvement in economic and social conditions, encompasses various aspects of general welfare, including infant survival, expected life span, nutrition, literacy rates, employment, and access to material goods. LDCs are identified by their relatively poor ratings in these categories. In addition, most LDCs are marked by high population growth, rapidly expanding cities, low levels of technological development, and weak economies dominated by agriculture and the export of natural resources. Because of their limited economic and technological development, LDCs tend to have relatively little international political power compared to more developed countries (MDCs) such as Japan, the United States, and Germany. A variety of standard measures, or development indices, are used to assess development stages. These indices are generalized statistical measures of quality of life for individuals in a society. Multiple indices are usually considered more accurate than a single number such as Gross National Product, because such figures tend to give imprecise and simplistic impressions of conditions in a country. One of the most important of the multiple indices is the infant mortality rate. Because children under five years old are highly susceptible to common diseases, especially when they are malnourished, infant mortality is a key to assessing both nutrition and access to health care. Expected life span, the average age adults are able to reach, is used as a measure of adult health. Daily calorie and protein intake per person are collective measures that reflect the ability of individuals to grow and function effectively. Literacy rates, especially among women, who are normally the last to receive an education, indicate access to schools and preparation for technologically advanced employment. Fertility rates are a measure of the number of children produced per family or per woman in a population and are regarded as an important measure of the confidence parents
have in their children’s survival. High birth rates are associated with unstable social conditions because a country with a rapidly growing population often cannot provide its citizens with food, water, sanitation, housing space, jobs, and other basic needs. Rapidly growing populations also tend to undergo rapid urbanization. People move to cities in search of jobs and educational opportunities, but in poor countries the cost of providing basic infrastructure in an expanding city can be debilitating. As most countries develop, they pass from a stage of high birth rates to one of low birth rates, as child survival becomes more certain and a family’s investment in educating and providing for each child increases. Most LDCs were colonies under foreign control during the past 200 years. Colonial powers tended to undermine social organization, local economies, and natural resource bases, and many recently independent states are still recovering from this legacy. Thus, much of Africa, which provided a wealth of natural resources to Europe between the seventeenth and twentieth centuries, now lacks the effective and equitable social organization necessary for continuing development. Similarly, much of Central America (colonized by Spain in the sixteenth century) and portions of South and Southeast Asia (colonized by England, France, the Netherlands, and others) remain less developed despite their wealth of natural resources. The development processes necessary to improve standards of living in LDCs may involve more natural resource extraction, but usually the most important steps involve carefully choosing the goods to be produced, decreasing corruption among government and business leaders, and easing the social unrest and conflicts that prevent development from proceeding. All of these are extraordinarily difficult to do, but they are essential for countries trying to escape from poverty. See also Child survival revolution; Debt for nature swap; Economic growth and the environment; Indigenous peoples; Shanty towns; South; Sustainable development; Third World; Third World pollution; Tropical rain forest; World Bank [Muthena Naseri]
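The multiple-index idea can be made concrete with a small calculation. The Python sketch below is purely illustrative: the indicator names, normalization bounds, and equal weighting are assumptions chosen for the example, not any standard published index.

```python
# Illustrative composite development index built from several indicators.
# Indicator names, bounds, and equal weights are hypothetical choices.

def normalize(value, worst, best):
    """Rescale an indicator to the 0-1 range, where 1 is the best bound."""
    return (value - worst) / (best - worst)

def composite_index(country):
    scores = [
        # Infant mortality and fertility are inverted: lower raw values
        # indicate more development, so their normalized score is flipped.
        1 - normalize(country["infant_mortality_per_1000"], 0, 200),
        normalize(country["life_expectancy_years"], 25, 85),
        normalize(country["literacy_rate_pct"], 0, 100),
        1 - normalize(country["fertility_rate"], 1, 8),
    ]
    return sum(scores) / len(scores)  # simple equal-weight average

# Hypothetical figures, not real statistics for any country.
example = {
    "infant_mortality_per_1000": 90,
    "life_expectancy_years": 52,
    "literacy_rate_pct": 60,
    "fertility_rate": 5.5,
}
print(f"composite index: {composite_index(example):.2f}")  # about 0.49
```

Combining indicators this way rewards balanced progress: a country cannot score well on the composite by excelling in one category while infant mortality or literacy lags far behind.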
RESOURCES BOOKS Gill, S., and D. Law. The Global Political Economy. Baltimore: Johns Hopkins University Press, 1991. World Bank. World Development Report: Development and the Environment. Oxford, England: Oxford University Press, 1992. World Bank. World Development Report 2000/2001: Attacking Poverty. Oxford, England: Oxford University Press, 2000.
Leukemia Leukemia is a disease of the blood-forming organs. Primary tumors are found in the bone marrow and lymphoid tissues,
specifically the liver, spleen, and lymph nodes. The characteristic common to all types of leukemia is the uncontrolled proliferation of leukocytes (white blood cells) in the blood stream. This results in a lack of normal bone marrow growth, and bone marrow is replaced by immature and undifferentiated leukocytes or “blast cells.” These immature and undifferentiated cells then migrate to various organs in the body, disrupting normal organ development and function. Leukemia occurs with varying frequencies at different ages, but it is most frequent among the elderly, who account for about 27,000 cases a year in the United States, compared with 2,200 cases a year among younger people. Acute lymphoblastic leukemia, the type most common in children, is responsible for two-thirds of all childhood cases. Acute nonlymphoblastic leukemia and chronic lymphocytic leukemia are most common among adults—they are responsible for 8,000 and 9,600 cases a year respectively. The geographical areas of highest incidence are the United States, Canada, Sweden, and New Zealand. While there is clear evidence that some leukemias are linked to genetic traits, the origin of this disease in most cases remains unknown. It seems clear, however, that environmental exposure to radiation, toxic substances, and other risk factors plays an important role in many leukemias. See also Cancer; Carcinogen; Radiation exposure; Radiation sickness
Lichens Lichens are composed of fungi and algae. Varying in color from pale whitish green to brilliant red and orange, lichens usually grow attached to rocks and tree trunks and appear as thin, crusty coatings, as networks of small, branched strands, or as flattened, leaf-like forms. Some common lichens are reindeer moss and the red “British soldiers.” There are approximately 20,000 known lichen species. Because they often grow under cold, dry, inhospitable conditions, they are usually the first plants to colonize barren rock surfaces. The fungus and the alga form a symbiotic relationship within the lichen. The fungus forms the body of the lichen, called the thallus. The thallus attaches itself to the surface of a rock or tree trunk, and the fungal cells take up water and nutrients from the environment. The algal cells grow among the fungal filaments and perform photosynthesis, as do other plant cells, to form carbohydrates. Lichens are essential in providing food for other organisms, breaking down rocks, and initiating soil building. They are also important indicators and monitors of air pollution effects. Since lichens grow attached to rock and tree surfaces, they are fully exposed to airborne pollutants, and chemical analysis of lichen tissues can be used to measure the quantity
of pollutants in a particular area. For example, sulfur dioxide, a common emission from power plants, is a major air pollutant. Many studies show that as the concentrations of sulfur dioxide in the air increase, the number of lichen species decreases. The disappearance of lichens from an area may be indicative of other, widespread biological impacts. Sometimes, lichens are the first organisms to transfer contaminants to the food chain. Lichens are abundant through vast regions of the arctic tundra and form the main food source for caribou (Rangifer tarandus) in winter. The caribou are hunted and eaten by northern Alaskan Eskimos in spring and early summer. When the effects of radioactive fallout from weapons-testing in the arctic tundra were studied, it was discovered that lichens absorbed virtually all of the radionuclides that were deposited on them. Strontium-90 and cesium-137 were two of the major radionuclide contaminants. As caribou grazed on the lichens, these radionuclides were absorbed into the caribou’s tissues. At the end of the winter, caribou flesh contained three to six times as much cesium-137 as it did in the fall. When the caribou flesh was consumed by the Eskimos, the radionuclides were transferred to them as well. See also Indicator organism; Symbiosis [Usha Vedagiri]
RESOURCES BOOKS Connell, D. W., and G. J. Miller. Chemistry and Ecotoxicology of Pollution. New York: Wiley, 1984. Smith, R. L., and T. M. Smith. Ecology and Field Biology. 6th ed. Upper Saddle River, NJ: Prentice Hall, 2002. Weier, T. E., et al. Botany: An Introduction to Plant Biology. New York: Wiley, 1982.
Life cycle assessment Life cycle assessment (or LCA) refers to a process in industrial ecology by which the products, processes, and facilities used to manufacture specific products are each examined for their environmental impacts. A balance sheet is prepared for each product that considers: the use of materials; the consumption of energy; the recycling, re-use, and/or disposal of non-used materials and energy (in a less-enlightened context, these are referred to as “wastes”); and the recycling or re-use of products after their commercial life has passed. By taking a comprehensive, integrated look at all of these aspects of the manufacturing and use of products, life cycle assessment finds ways to increase efficiency, to re-use, reduce, and recycle materials, and to lessen the overall environmental impacts of the process.
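The balance-sheet idea lends itself to a simple tabulation. The following Python fragment is a minimal sketch, not an implementation of any standard LCA methodology such as the ISO 14040 framework; the stage names and all quantities are invented for illustration.

```python
# Minimal life-cycle inventory sketch: tally materials, energy, and waste
# for each stage of a hypothetical product's life, crediting recycled
# material against the waste stream. All figures are illustrative.

stages = [
    # (stage name, materials_kg, energy_MJ, waste_kg, recycled_kg)
    ("raw material extraction", 12.0, 40.0, 3.0, 0.5),
    ("manufacturing",            8.0, 65.0, 2.5, 1.5),
    ("distribution",             0.5, 20.0, 0.2, 0.0),
    ("use",                      0.0, 90.0, 0.0, 0.0),
    ("end of life",              0.0,  5.0, 4.0, 3.0),
]

materials = sum(s[1] for s in stages)
energy = sum(s[2] for s in stages)
net_waste = sum(s[3] - s[4] for s in stages)  # recycling offsets waste

print(f"materials used: {materials:.1f} kg")
print(f"energy consumed: {energy:.1f} MJ")
print(f"net waste: {net_waste:.1f} kg")
```

Raising the recycled quantities at the manufacturing and end-of-life stages immediately lowers the net-waste line, which is the kind of opportunity a full assessment is designed to expose.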
Lignite see Coal
Limits to Growth (1972) and Beyond the Limits (1992) Published at the height of the oil crisis in the 1970s, the Limits to Growth study is credited with lifting environmental concerns to an international and global level. Its fundamental conclusion is that if rapid growth continues unabated in the five key areas of population, food production, industrialization, pollution, and consumption of nonrenewable natural resources, the planet will reach the limits of growth within 100 years. The most probable result will be a “rather sudden and uncontrollable decline in both population and industrial capacity.” The study grew out of an April 1968 meeting of 30 scientists, educators, economists, humanists, industrialists, and national and international civil servants who had been brought together by Dr. Aurelio Peccei, an Italian industrial manager and economist. Peccei and the others met at the Accademia dei Lincei in Rome to discuss the “present and future predicament of man,” and from their meeting came the Club of Rome. Early meetings of the club resulted in a decision to initiate the Project on the Predicament of Mankind, intended to examine the array of problems facing all nations. Those problems ranged from poverty amidst plenty and environmental degradation to the rejection of traditional values and various economic disturbances. In the summer of 1970, Phase One of the project took shape during a series of meetings in Bern, Switzerland, and Cambridge, Massachusetts. At a two-week meeting in Cambridge, Professor Jay Forrester of the Massachusetts Institute of Technology (MIT) presented a global model for analyzing the interacting components of world problems. Professor Dennis Meadows led an international team in examining the five basic components, mentioned above, that determine growth on this planet and its ultimate limits. The team’s research culminated in the 1972 publication of the study, which touched off intense controversy and further research. Underlying the study’s dramatic conclusions is the central concept of exponential growth, which occurs when a quantity increases by a constant percentage of the whole in a constant time period. “For instance, a colony of yeast cells in which each cell divides into two cells every ten minutes is growing exponentially,” the study explains. The model used to capture the dynamic quality of exponential growth is a System Dynamics model, developed over a 30-year period at MIT, which recognizes that the structure of any
system determines its behavior as much as any individual parts of the system. The components of a system are described as “circular, interlocking, sometimes time-delayed.” Using this model (called World3), the study ran scenarios—what-if analyses—to reach its view of how the world will evolve if present trends persist. “Dynamic modeling theory indicates that any exponentially growing quantity is somehow involved with a positive feedback loop,” the study points out. “In a positive feedback loop a chain of cause-and-effect relationships closes on itself, so that increasing any one element in the loop will start a sequence of changes that will result in the originally changed element being increased even more.” In the case of world population growth, the births per year act as a positive feedback loop. For instance, in 1650, world population was half a billion and was growing at a rate of 0.3 percent a year. In 1970, world population was 3.6 billion and was growing at a rate of 2.1 percent a year. Both the population and the rate of population growth have been increasing exponentially. But in addition to births per year, the dynamic system of population growth includes a negative feedback loop: deaths per year. Positive feedback loops create runaway growth, while negative feedback loops regulate growth and hold a system in a stable state. For instance, a thermostat regulates temperature: when a room reaches a certain temperature, the thermostat shuts off the heating system until the temperature decreases enough to restart it. With population growth, both the birth and death rates were relatively high and irregular before the Industrial Revolution. But with the spread of medicines and longer life expectancies, the death rate has declined while the birth rate has risen. Given these trends, the study predicted a worldwide jump in population to seven billion over 30 years. This same dynamic of positive and negative feedback loops applies to the other components of the world system. The growth in world industrial capital, with the positive input of investment, creates rising industrial output, such as houses, automobiles, textiles, consumer goods, and other products. On the negative feedback side, depreciation, or the capital discarded each year, draws down the level of industrial capital. This feedback is “exactly analogous to the death rate loop in the population system,” the study notes. And, as with world population, the positive feedback loop is “strongly dominant,” creating steady growth in worldwide industrial capital and the use of raw materials needed to create products. This system in which exponential growth is occurring, with positive feedback loops outstripping negative ones, will push the world to the limits of exponential growth. The study asks what will be needed to sustain world economic and population growth until and beyond the year 2000 and concludes that two main categories of ingredients can be
defined. First, there are physical necessities that support all physiological and industrial activity: food, raw materials, fossil and nuclear fuels, and the ecological systems of the planet that absorb waste and recycle important chemical substances. Arable land, fresh water, metals, forests, and oceans are needed to obtain those necessities. Second, there are social necessities needed to sustain growth, including peace, social stability, education, employment, and steady technological progress. Even assuming that the best possible social conditions exist for the promotion of growth, the earth is finite and therefore continued exponential growth will reach the limits of each physical necessity. For instance, about 1 acre (0.4 ha) of arable land is needed to grow enough food per person. With that need for arable land, even if all the world’s arable land were cultivated, current population growth rates would still create a “desperate land shortage before the year 2000,” the study concludes. The availability of fresh water is another crucial limiting factor, the study points out. “There is an upper limit to the fresh water runoff from the land areas of the earth each year, and there is also an exponentially increasing demand for that water.” This same analysis is applied to nonrenewable resources, such as metals, coal, iron, and other necessities for industrial growth. World demand is rising steadily and at some point demand for each nonrenewable resource will exceed supply, even with recycling of these materials. For instance, the study predicts that even if 100 percent recycling of chromium from 1970 onward were possible, demand would exceed supply in 235 years. Similarly, while it is not known how much pollution the world can take before vital natural processes are disrupted, the study cautions that the danger of reaching those limits is especially great because there is usually a long delay between the time a pollutant is released and the time it begins to negatively affect the environment. While the study foretells worldwide collapse if exponential growth trends continue, it also argues that the necessary steps to avert disaster are known and are well within human capabilities. Current knowledge and resources could guide the world to a sustainable equilibrium society provided that a realistic, long-term goal and the will to achieve that goal are pursued. The sequel to the 1972 study, Beyond the Limits, was not sponsored by the Club of Rome, but it was written by three of the original authors. While the basic analytical framework remains the same in the later work—drawing upon the concepts of exponential growth and feedback loops to describe the world system—its conclusions are more severe. No longer does the world only face a potential of “overshooting” its limits. “Human use of many essential resources and generation of many kinds of pollutants have
already surpassed rates that are physically sustainable,” according to the 1992 study. “Without significant reductions in material and energy flows, there will be in the coming decades an uncontrolled decline in per capita food output, energy use, and industrial output.” However, like its predecessor, the later study sounds a note of hope, arguing that decline is not inevitable. To avoid disaster requires comprehensive reforms in policies and practices that perpetuate growth in material consumption and population. It also requires a rapid, drastic jump in the efficiency with which we use materials and energy. Both the earlier and the later study were received with great controversy. For instance, economists and industrialists charged that the earlier study ignored the fact that technological innovation could stretch the limits to growth through greater efficiency and diminishing pollution levels. When the sequel was published, some critics charged that the World3 model could have been refined to include more realistic distinctions between nations and regions, rather than looking at all trends on a world scale. Different continents, rich and poor nations, and various regions all differ, yet the model ignores those differences; critics argued that this makes it unrealistic, even though modeling techniques have evolved significantly since World3 was first developed. See also Sustainable development [David Clarke]
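The feedback-loop arithmetic behind these conclusions is easy to reproduce. The Python sketch below is a toy illustration, not the World3 model itself (which couples dozens of interacting loops); the population figures are the study’s own 1970 numbers, while the reserve and growth rate in the second function are hypothetical values chosen only to show how quickly exponential demand consumes a fixed stock.

```python
# Toy illustration of the study's two core mechanisms -- not World3.
# Population figures follow the study's 1970 example; the resource
# figures in the second function are hypothetical.

def years_to_reach(population, growth_rate, target):
    """Positive feedback loop: annual growth proportional to current size."""
    years = 0
    while population < target:
        population += population * growth_rate  # births minus deaths
        years += 1
    return years

# 3.6 billion growing at 2.1% a year reaches 7 billion in about 32 years.
print(years_to_reach(3.6e9, 0.021, 7.0e9))

def years_to_deplete(reserve, demand, demand_growth):
    """Exponentially growing annual demand drawn from a fixed reserve."""
    years = 0
    while reserve > 0:
        reserve -= demand
        demand *= 1 + demand_growth  # demand itself compounds each year
        years += 1
    return years

# A reserve equal to 100 years of current use lasts only about 47 years
# once demand grows 3 percent annually.
print(years_to_deplete(reserve=100.0, demand=1.0, demand_growth=0.03))
```

The closed-form version of the second calculation, ln(rs + 1)/r for a reserve of s years of current use growing at rate r, is essentially what the study tabulates as a resource’s “exponential index”; for s = 100 and r = 0.03 it gives about 46 years, matching the loop.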
RESOURCES BOOKS Meadows, D., et al. The Limits to Growth: A Report for The Club of Rome’s Project on the Predicament of Mankind. New York: Universe Books, 1972. Meadows, D., D. L. Meadows, and J. Randers. Beyond the Limits: Confronting Global Collapse, Envisioning a Sustainable Future. Post Mills, VT: Chelsea Green, 1992.
Limnology Derived from the Greek word limne, meaning marsh or pond, the term limnology was first used in reference to lakes by F. A. Forel (1841–1912) in 1892 in a paper titled “Le Léman: Monographie Limnologique,” a study of what we now call Lake Geneva in Switzerland. Limnology, also known as aquatic ecology, refers to the study of fresh water communities within continental boundaries. It can be subdivided into the study of lentic (standing water habitats such as lakes, ponds, bogs, swamps, and marshes) and lotic (running water habitats such as rivers, streams, and brooks) environments. Collectively, limnologists study the morphological, physical, chemical, and biological aspects of these habitats.
Raymond L. Lindeman
(1915 – 1942)
American ecologist Few scholars or scientists, even those much published and long-lived, leave singular, indelible imprints on their disciplines. Yet Raymond Lindeman, who in 26 short years published just six articles, was described shortly after his death by G. E. Hutchinson as “one of the most creative and generous minds yet to devote itself to ecological science,” and the last of those six papers, “The Trophic-Dynamic Aspect of Ecology”—published posthumously in 1942—continues to be considered one of the foundational papers in ecology, an article “path-breaking in its general analysis of ecological succession in terms of energy flow through the ecosystem,” an article based on an idea that Edward Kormondy has called “the most significant formulation in the development of modern ecology.” Immediately after completing his doctorate at the University of Minnesota, Lindeman accepted a one-year Sterling fellowship at Yale University to work with G. Evelyn Hutchinson, the Dean of American limnologists. He had published chapters of his thesis one by one and at Yale worked to revise the final chapter, refining it with ideas drawn from Hutchinson’s lecture notes and from their discussions about the ecology of lakes. Lindeman submitted the manuscript to Ecology with Hutchinson’s blessings, but it was rejected based on reviewers’ claims that it was speculation far beyond the data presented from research on three lakes, including Lindeman’s own doctoral research on Cedar Bog Lake. After input from several well-known ecologists, further revisions, and with further urging from Hutchinson, the editor finally overrode the reviewers’ comments and accepted the manuscript; it was published in the October 1942 issue of Ecology, a few months after Lindeman died in June of that year. The important advances made by Lindeman’s seminal article included his use of the ecosystem concept, which he was convinced was “of fundamental importance in interpreting the data of dynamic ecology,” and his explication of the idea that “all function, and indeed all life” within ecosystems depends on the movement of energy through such systems by way of trophic relationships. His use of ecosystem went beyond the little attention paid to it by Hutchinson and beyond Tansley’s labeling of the unit seven years earlier to open up “new directions for the analysis of the functioning of ecosystems.” More than half a century after Lindeman’s article, and despite recent revelations on the uncertainty and unpredictability of natural systems, a majority of ecologists probably still accept ecosystem as the basic unit in ecology and, in those systems, energy exchange as the basic process. Lindeman was able to effectively demonstrate a way to bring together or synthesize two quite separate traditions
in ecology: autecology, grounded in physiological studies of individual organisms and species, and synecology, focused on studies of communities, aggregates of individuals. He believed, and demonstrated, that ecological research would benefit from a synthesis of these organism-based approaches and a focus on the energy relationships that tied organism and environment into one unit—the ecosystem—suggesting as a result that the biotic and the abiotic could not realistically be disentangled, especially in ecology. Half a decade or so of work on Cedar Bog Lake, and half a dozen articles, would seem a thin stem on which to base a legacy. But that legacy rests on Lindeman’s synthesis of that work in one singular, seminal paper, in which he created one of the significant stepping stones from a mostly descriptive discipline toward a more sophisticated and modern theoretical ecology. [Gerald J. Young Ph.D.]
RESOURCES PERIODICALS Cook, Robert E. “Raymond Lindeman and the Trophic-Dynamic Concept in Ecology.” Science 198, no. 4312 (October 1977): 22–26. Lindsey, Alton A. “The Ecological Way.” Naturalist-Journal of the Natural History Society of Minnesota 31 (Spring 1980): 1–6. Reif, Charles B. “Memories of Raymond Laurel Lindeman.” The Bulletin of the Ecological Society of America 67, no. 1 (March 1986): 20–25.
Liquid metal fast breeder reactor The liquid metal fast breeder reactor (LMFBR) is a nuclear reactor that has been modified to increase the efficiency with which non-fissionable uranium-238 is converted to fissionable plutonium-239, which can be used as fuel in the production of nuclear power. The reactor uses “fast” rather than “slow” neutrons to strike a uranium-238 nucleus, resulting in the formation of plutonium-239. In a second modification, it uses a liquid metal, usually sodium, rather than neutron-absorbing water as a more efficient coolant. Since the reactor produces new fuel as it operates, it is called a breeder reactor. The main appeal of breeder reactors is that they provide an alternative way of obtaining fissionable materials. The supply of natural uranium in the earth’s crust is fairly large, but it will not last forever. Plutonium-239 from breeder reactors might become the major fuel used in reactors built a few hundred or thousand years from now. However, the potential of LMFBRs has not as yet been realized. One serious problem involves the use of liquid sodium as coolant. Sodium is a highly corrosive metal and in an LMFBR it is converted into a radioactive form, sodium-24. Accidental release of the coolant from such a plant could, therefore, constitute a serious environmental hazard.
Liquid metal fast breeder reactor. (McGraw-Hill Inc. Reproduced by permission.)
In addition, plutonium itself is difficult to work with. It is one of the most toxic substances known to humans, and its half-life of 24,000 years means that its release presents long-term environmental problems. Small-scale pilot LMFBRs have been tested in the United States, France, Great Britain, and Germany since 1966, and all have turned out to be far more expensive than had been anticipated. The major United States research program, based at Clinch River, Tennessee, began in 1970. In 1983, the U.S. Congress refused to continue funding the project due to its slow and unsatisfactory progress. See also Nuclear fission; Nuclear Regulatory Commission; Radioactivity; Radioactive waste management [David E. Newton]
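The scale of that long-term problem follows directly from the half-life arithmetic: the fraction of plutonium-239 remaining after t years is (1/2) raised to the power t/24,000. A minimal Python sketch of this standard decay calculation:

```python
# Fraction of plutonium-239 remaining after t years, from its half-life.
# Standard radioactive decay arithmetic; 24,000 years per the article.

HALF_LIFE_YEARS = 24_000

def fraction_remaining(years):
    return 0.5 ** (years / HALF_LIFE_YEARS)

for years in (1_000, 24_000, 100_000, 240_000):
    print(f"after {years:>7,} years: {fraction_remaining(years):.3f}")
# after   1,000 years: 0.972
# after  24,000 years: 0.500
# after 100,000 years: 0.056
# after 240,000 years: 0.001
```

Ten half-lives, roughly a quarter of a million years, must pass before the inventory falls to about a thousandth of its original amount, which is why any release is treated as an essentially permanent hazard.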
RESOURCES BOOKS Cochran, Thomas B. The Liquid Metal Fast Breeder Reactor: An Environmental and Economic Critique. Baltimore: Johns Hopkins University Press, 1974. Mitchell III, W., and S. E. Turner. Breeder Reactors. Washington, DC: U.S. Atomic Energy Commission, 1971.
OTHER International Nuclear Information System. Links to Fast Reactor Related Sites. June 7, 2002 [June 21, 2002].
Liquified natural gas Natural gas is a highly desirable fuel in many respects. It burns with the release of a large amount of energy, producing almost entirely carbon dioxide and water as waste products. Except for possible greenhouse effects of carbon dioxide, these compounds produce virtually no environmental hazard. Transporting natural gas through transcontinental pipelines is inexpensive and efficient where topography allows the laying of pipes. Oceanic shipping is difficult, however, because of the flammability of the gas and the high volumes involved. The most common way of dealing with these problems is to condense the gas first and then transport it in the form of liquified natural gas (LNG). But LNG must be maintained at temperatures of about -260°F (-160°C) and protected from leaks and flames during loading and unloading. See also Fossil fuels
Lithology Lithology is the study of rocks, emphasizing their macroscopic physical characteristics, including grain size, mineral composition, and color. Lithology and its related field, petrography (the description and systematic classification of rocks), are subdisciplines of petrology, which also considers microscopic and chemical properties of minerals and rocks as well as their origin and decay.
Littoral zone In marine systems, littoral zone is synonymous with intertidal zone and refers to the area on marine shores that is periodically exposed to air during low tide. The freshwater littoral zone is that area near the shore characterized by submerged, floating, or emergent vegetation. The width of a particular littoral zone may vary from several miles to a few feet. These areas typically support an abundance of organisms and are important feeding and nursery areas for fishes, crustaceans, and birds. The distribution and abundance of individual species in the littoral zone is dependent on predation and competition as well as tolerance of physical factors. See also Neritic zone; Pelagic zone
Loading

The term loading has a wide variety of specialized meanings in various fields of science. In general, all refer to the addition of something to a system, just as loading a truck means filling it with objects. In the science of acoustics, for example, loading refers to the process of adding materials to a speaker in order to improve its acoustical qualities. In environmental science, loading is used to describe the contribution made to any system by some component. One might analyze, for example, how an increase in chlorofluorocarbon (CFC) loading in the stratosphere might affect the concentration of ozone there.
Logging

Logging is the systematic process of cutting down trees for lumber and wood products. The method of logging called clearcutting, in which entire areas of forest are cleared, is the most prevalent practice used by lumber companies. Clearcutting is the cheapest and most efficient way to harvest a forest's available resources. This practice drastically alters the forest ecosystem, however, and many plants and animals are displaced or destroyed by it. After a forest is clearcut, forestry management techniques may be introduced in order to manage the growth of new trees on the cleared land.
Selective logging is an alternative to clearcutting. In selective logging, only certain trees in a forest are chosen to be logged, usually on the basis of their size or species. Because a smaller percentage of trees is taken, the forest is protected from destruction, and fragile plants and animals in the forest ecosystem are more likely to survive. Newer techniques offer further alternatives for preserving the forest. For example, the Shelterwood Silvicultural System harvests mature trees in phases. First, part of the original stand is removed to promote growth of the remaining trees. Regeneration then follows naturally, using seeds provided by the remaining trees. Once regeneration has occurred, the remaining mature trees are harvested.

Early logging equipment included long, two-man straight saws and teams of animals to drag trees away. After World War II, technological advances made logging easier. The bulldozer and the helicopter allowed loggers to enter new and previously untouched areas, and the chainsaw allowed loggers to cut down many more trees each day. Today, enormous machines known as feller-bunchers take the place of human loggers. These machines use a hydraulic clamp that grasps an individual tree and huge shears that cut through it in one swift motion.

High demand for lumber and forest products has driven widespread commercial logging. Certain methods of timber harvesting allow for subsequent regeneration, while others cause deforestation, the irreversible creation of a non-forest condition. Deforestation significantly changed the landscape of the United States. Some observers remarked as early as the mid-1700s upon the rapid changes made to the forests from the East Coast to the Ohio River Valley. Often, forests were simply burned away so that early settlers could build farms upon the rich soil that had been created by the forest ecosystem.

The immediate results of deforestation are major changes to Earth's landscapes and diminishing wildlife habitats. Longer-term results of deforestation, including unrestrained commercial logging, may include damage to Earth's atmosphere and the unbalancing of living ecosystems. Forests help to remove carbon dioxide from the air and, through the process of photosynthesis, release oxygen into the air. A single acre of temperate forest releases more than six tons of oxygen into the atmosphere every year. In the last 150 years, deforestation, together with the burning of fossil fuels, has raised the amount of carbon dioxide in the atmosphere by more than 25%. It has been theorized that this has contributed to global warming, the gradual increase in Earth's surface temperature attributed to the accumulation of heat-trapping gases. Human beings are still learning how to measure their need for wood against their need for a viable environment for themselves and other life forms.
Although human activity, especially logging, has decimated many of the world's forests and the life within them, some untouched forests still remain. These are known as old-growth or ancient-growth forests. Old-growth forests are at the center of a heated debate between environmentalists, who wish to preserve them, and the logging industry, which continually seeks new and profitable sources of lumber and other forest products. Very little of the original uncut North American forest still remains. It has been estimated that the United States has lost over 96% of its old-growth forests. This loss continues as logging companies become more attracted to ancient-growth forests, which contain larger, more profitable trees. A majority of old-growth forests in the United States are in Alaska and the Pacific Northwest. On the global level, barely 20% of the old-growth forests still remain, and the South American rainforests account for a significant portion of these. About 1% of the Amazon rainforest is deforested each year. At the present rate of logging around the world, old-growth forests could be gone within the first few decades of the twenty-first century unless effective conservation programs are instituted.

As technological advancements of the twentieth century dramatically increased the efficiency of logging, there was also a growth in understanding of the contribution of the forest to the overall health of the environment, including the effect of logging upon that health. Ecologists, scientists who study the complex relationships within natural systems, have determined that logging can affect the health of air, soil, water, plant life, and animals. Clearcutting, for instance, was at one time considered a healthy forestry practice, as proponents claimed that clearing a forest enabled the growth of new plant life, sped the process of regeneration, and prevented fires. The American Forest Institute, an industry group, ran an ad in the 1970s that stated, "I'm clearcutting to save the forest." Ecologists have since come to understand that clearcutting old-growth forests has a devastating effect on plant and animal life and affects the health of the forest ecosystem from its rivers to its soil. Old-growth trees, for example, provide an ecologically diverse habitat, including woody debris and fungi that contribute to nutrient-rich soil. Furthermore, many species of plants and wildlife, some still undiscovered, are dependent upon old-growth forests for survival. The huge canopies created by old-growth trees protect the ground from water erosion when it rains, and their roots help to hold the soil together. This in turn maintains the health of rivers and streams, upon which fish and other aquatic life depend. In the Pacific Northwest, for example, ecologists have connected the health of the salmon population with the health of the forests and the logging practices therein. Ecologists now understand that clearcutting and the planting of new trees, no matter how scientifically managed, cannot replace the wealth of biodiversity maintained by old-growth forests.
The pace of logging is dictated by the consumer demand for lumber and wood products. In the United States, for instance, the average size of new homes doubled between 1970 and 2000, and the forests ultimately bear the burden of the increasing consumption of lumber. In the face of widespread logging, environmentalists have become more desperate to protect ancient forests.

There is a history of controversy between the timber industry and environmentalists regarding the relationship between logging and the care of forests. On the one hand, the logging industry has seen forests as a source of wealth, economic growth, and jobs. On the other hand, environmentalists have viewed these same forests as a source of recreation and spiritual renewal, and as living systems that maintain overall environmental health. In the 1980s, a controversy raged between environmentalists and the logging industry over the protection of the northern spotted owl, a threatened species of bird whose habitat is the old-growth forest of the Pacific Northwest. Environmentalists appealed to the Endangered Species Act of 1973 to protect some of these old-growth forests. In other logging controversies, some environmentalists chained themselves to old-growth trees to prevent their destruction, and one activist, Julia Butterfly Hill, lived in an old-growth California redwood tree for two years in the 1990s to prevent it from being cut down. The clash between environmentalists and the logging industry may become more intense as the demand for wood increases and supplies decrease. In recent years, however, these opposing views have been tempered by discussion of concepts such as responsible forest management to create sustainable growth, in combination with preservation of protected areas.

Most of the logging in the United States occurs in the national forests. From the point of view of the U.S. Forest Service, logging provides jobs, helps manage the forest in some respects, prevents logging in other parts of the world, and helps eliminate the danger of forest fires. To meet the demands of the logging industry, the national forests have been developed with a labyrinth of logging roads and contain vast areas that have been devastated by clearcutting. There are enough logging roads in the national forests of the United States to circle the earth 15 times; these roads speed soil erosion, which washes away fertile topsoil and pollutes streams and rivers. The Roadless Initiative was established in 2001 to protect 60 million acres (24 million ha) of national forests. The initiative was designed by the Clinton administration to discourage logging and taxpayer-supported road building on public lands. The goal was to establish total and permanent protection for designated roadless areas. Advocates of the initiative contended that roadless areas encompassed
some of the best wildlife habitats in the nation, while forest service officials argued that banning road building would significantly reduce logging in these areas. Under the initiative, more than half of the 192 million acres (78 million ha) of national forest would still remain available for logging and other activities. This initiative was considered one of the most important environmental protection measures of the Clinton administration.

Illegal logging has become a problem with the growing worldwide demand for lumber. For example, the World Bank predicted that if Indonesia did not halt all current logging, it would lose its entire forest within 10 to 15 years. Estimates indicate that up to 70% of the wood harvested in Indonesia comes from illegal logging practices, and much of the timber being taken is sent to the United States. Indigenous peoples of Indonesia are being displaced from their traditional territories. Wildlife, including endangered tigers, elephants, rhinos, and orangutans, are also being displaced and may be threatened with extinction. In 2002 Indonesia placed a temporary moratorium on logging in an effort to stop illegal logging.

Other countries around the world were addressing logging issues in the early twenty-first century. In China, 160 million acres (65 million ha) out of 618 million acres (250 million ha) were put under state protection. Loggers turned in their tools to become forest rangers, working for the government to safeguard trees from illegal logging. China has set aside millions of acres of forests for protection, particularly those forests that are crucial sources of fresh water. China also announced that it was planning to further reduce its timber output in order to restore and enhance the life-sustaining abilities of its forests.

[Douglas Dupler]
RESOURCES
BOOKS
Dietrich, William. The Final Forest: The Battle for the Last Great Trees of the Pacific Northwest. New York: Simon & Schuster, 1992.
Durbin, Kathie. Tree Huggers: Victory, Defeat and Renewal in the Northwest Ancient Forest Campaign. Seattle, WA: Mountaineers, 1996.
Hill, Julia Butterfly. The Legacy of Luna: The Story of a Tree, a Woman, and the Struggle to Save the Redwoods. San Francisco: Harper, 2000.
Luoma, Jon R. The Hidden Forest. New York: Henry Holt and Co., 1999.
Nelson, Sharlene P., and Ted Nelson. Bull Whackers to Whistle Punks: Logging in the Old West. New York: Watts, 1996.
PERIODICALS
Alcock, James. "Amazon Forest Could Disappear, Soon." Science News, July 14, 2001.
De Jong, Mike. "Optimism Over Lumber." Maclean's, November 29, 2001, 16.
Kerasote, Ted. "The Future of our Forests." Audubon, January/February 2001, 44.
Murphy, Dan. "The Rise of Robber Barons Speeds Forest Decline." Christian Science Monitor, August 14, 2001, 8.
OTHER
American Lands Home Page. [cited July 2002].
Global Forest Watch Home Page. [cited July 2002].
SmartWood Program of the Rainforest Alliance. [cited July 2002].
Logistic growth

Assuming that immigration and emigration balance, population size increases when births exceed deaths. As population size increases, population density increases, and the supply of limited available resources per organism decreases; there is less food and less space available for each individual. As food, water, and space decline, fewer births or more deaths may occur, and this imbalance continues until the number of births is equal to the number of deaths at a population size that can be sustained by the available resources. This equilibrium level is called the carrying capacity for that environment.

A temporary and rapid increase in population may be due to a period of optimum growth conditions, including physical and biological factors. Such an increase may push a population beyond the environmental carrying capacity. This sudden burst will be followed by a decline, and the population will then fluctuate around the carrying capacity. Other population controls, such as predators and weather extremes (drought, frost, and floods), keep populations below the carrying capacity. Some environmentalists believe that the human population has exceeded the earth's carrying capacity.

Logistic growth, then, refers to growth rates that are regulated by internal and external factors that establish an equilibrium with environmental resources. The sigmoid (idealized S-shaped) curve illustrates this logistic growth, in which environmental factors limit population growth. In this model, a low-density population begins to grow slowly, then goes through an exponential or geometric phase, and then levels off at the environmental carrying capacity. See also Exponential growth; Growth limiting factors; Sustainable development; Zero population growth

[Muthena Naseri]
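The entry describes the model verbally; in its standard mathematical form (the Verhulst equation, not written out in the entry), logistic growth is

$$ \frac{dN}{dt} \;=\; rN\left(1 - \frac{N}{K}\right) $$

where N is population size, r is the intrinsic (exponential) rate of increase, and K is the carrying capacity. When N is small, growth is nearly exponential; as N approaches K, the factor in parentheses goes to zero and growth levels off, producing the sigmoid curve described above.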
Dr. Bjørn Lomborg (1965 – )
Danish political scientist

In 2001, Cambridge University Press published The Skeptical Environmentalist: Measuring the Real State of the World by
the Danish statistician Bjørn Lomborg. The book triggered a firestorm of criticism, with many well-known scientists denouncing it as an effort to "confuse legislators and regulators, and poison the well of public environmental information." In January 2002, Scientific American published a series of articles by five distinguished environmental scientists contesting Lomborg's claims. To some observers, the ferocity of the attack was surprising. Why so much furor over a book that claims to have good news about our environmental condition?

Lomborg portrays himself as a "left-wing, vegetarian, Greenpeace member," but says he worries about the unrelenting "doom and gloom" of mainstream environmentalism. He describes what he regards as an all-pervasive ideology that says, among other things, "Our resources are running out. The population is ever growing, leaving less and less to eat. The air and water are becoming ever more polluted. The planet's species are becoming extinct in vast numbers. The forests are disappearing, fish stocks are collapsing, and coral reefs are dying." This ideology has pervaded the environmental debate so long, Lomborg says, "that blatantly false claims can be made again and again, without any references, and yet still be believed."

In fact, Lomborg tells us, these allegations of the collapse of ecosystems are "simply not in keeping with reality. We are not running out of energy or natural resources. There will be more and more food per head of the world's population. Fewer and fewer people are starving. In 1900 we lived for an average of 30 years; today we live 67. According to the UN we have reduced poverty more in the last 50 years than in the preceding 500, and it has been reduced in practically every country." He goes on to challenge conventional scientific assessments of global warming, forest losses, fresh water scarcity, energy shortages, and a host of other environmental problems. Is Lomborg being deliberately (and, some would say, hypocritically) optimistic, or are others being unreasonably pessimistic? Is this simply a case of regarding the glass as half full versus half empty?

The inspiration to look at environmental statistics, Lomborg says, was a 1997 interview with the controversial economist Dr. Julian L. Simon in Wired magazine. Simon, who died in 1998, spent a good share of his career arguing that the "litany" of the Green movement, human overpopulation leading to starvation and resource shortages, was premeditated hyperbole and fear mongering. The truth, Simon claimed, is that the quality of human life is improving, not declining. Lomborg felt sure that Simon's allegations were "simple American right-wing propaganda." It should be a simple matter, he thought, to gather evidence to show how wrong Simon was. Back at his university in Denmark, Lomborg set out with 10 of his sharpest students to study Simon's
claims. To their surprise, the group found that while not everything Simon said was correct, his basic conclusions seemed sound. When Lomborg began to publish these findings in a series of newspaper articles in the London Guardian in 1998, he stirred up a hornet's nest. Some of his colleagues at the University of Aarhus set up a website to denounce the work. When the whole book came out, their fury only escalated. Altogether, between 1998 and 2002, more than 400 articles appeared in newspapers and popular magazines either attacking or defending Lomborg and his conclusions.

In general, the debate divides between mostly conservative supporters on one side and progressive environmental activists and scientists on the other. The Wall Street Journal described The Skeptical Environmentalist as "superbly documented and readable." A review in the Daily Telegraph (London) declared it "the most important book on the environment ever written." A review in the Washington Post said it is a "richly informative, lucid book, a magnificent achievement." And The Economist, which started the debate by publishing his first articles, called the book "a triumph" and announced that "this is one of the most valuable books on public policy—not merely on environmental policy—to have been written in the past ten years."

Among most environmentalists and scientists, on the other hand, Lomborg has become anathema. A widely circulated list of "Ten things you should know about The Skeptical Environmentalist" charged that the book is full of pseudo-scholarship, statistical fallacies, distorted quotations, inaccurate or misleading citations, misuse of data, interpretations that contradict well-established scientific work, and many other serious errors. The list accuses Lomborg of having no professional credentials or training, and of having done no professional research, in ecology, climate science, resource economics, environmental policy, or the other fields covered by his book. In essence, the critics complain, "Who is this guy, and how dare he say all this terrible stuff?" Harvard University professor E. O. Wilson, one of the world's most distinguished biologists, deplores what he calls "the Lomborg scam," and says that he and his kind "are the parasite load on scholars who earn success through the slow process of peer review and approval." It often seems that more scorn and hatred are focused on those, like Lomborg, who are viewed as turncoats and heretics than on those who are actually out despoiling the environment and squandering resources.

Perhaps the most withering criticism of Lomborg concerns his reporting of statistics and research results. Stephen Schneider, a distinguished climate scientist from Stanford University, for instance, writes in Scientific American that "most of [Lomborg's] nearly 3,000 citations are to secondary literature and media articles. Moreover, even when cited, the peer-reviewed articles come elliptically from those studies
that support his rosy view that only the low end of the uncertainty ranges [of climate change] will be plausible. IPCC authors, in contrast, were subjected to three rounds of review by hundreds of outside experts. They didn't have the luxury of reporting primarily from the part of the community that agrees with their individual views."

Lomborg also criticizes extinction rate estimates as much too large, citing evidence from places like Brazil's Atlantic Forest, where about 90% of the forest has been cleared without large numbers of recorded extinctions. Thomas Lovejoy, chief biodiversity adviser to the World Bank, responds, "First, this is a region with very few field biologists to record either species or their extinction. Second, there is abundant evidence that if the Atlantic forest remains as reduced and fragmented as it is, [it] will lose a sizable fraction of the species that at the moment are able to hang on."

Part of the problem is that Lomborg is unabashedly anthropocentric. He dismisses the value of biodiversity, for example: as long as there are plants and animals to supply human needs, what does it matter if a few non-essential species go extinct? In Lomborg's opinion, poverty, hunger, and human health problems are much more important than endangered species or possible climate change. He is not opposed to reducing greenhouse gas emissions, for instance, but argues that rather than spend billions of dollars per year trying to meet Kyoto standards, we could provide a healthy diet, clean water, and basic medical services to everyone in the world, thereby saving far more lives than we might by reducing global climate change. Furthermore, Lomborg believes, solar energy will probably replace fossil fuels within 50 years anyway, making worries about increasing CO2 concentrations moot.

Lomborg infuriates many environmentalists by being intentionally optimistic, cheerfully predicting that progress in population control, use of renewable energy, and unlimited water supplies from desalination technology will spread to the whole world, thus avoiding crises in resource supplies and human impacts on our environment. Others, particularly Lester Brown of the Worldwatch Institute and Professor Paul Ehrlich of Stanford University, according to Lomborg, seem to deliberately adopt worst-case scenarios. Protagonists on both sides of this debate use statistics selectively and engage in deliberate exaggeration to make their points. As Stephen Schneider, one of the most prominent anti-Lomborgians, said in an interview in Discover in 1989, "[We] are not just scientists but human beings as well. And like most people we'd like to see the world a better place. To do that we need to get some broad-based support, to capture the public's imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. Each of us has
to decide what the right balance is between being effective and being honest."

As is often the case in complex social issues, there is truth and error on both sides of this debate. It takes good critical thinking skills to make sense of the flurry of charges and countercharges. In the end, what you believe depends on your perspective and your values. Future events will show whether Bjørn Lomborg or his critics are correct in their interpretations and predictions. In the meantime, it is probably healthy to have the vigorous debate engendered by strongly held beliefs and articulate partisans from many different perspectives.

In November 2001, Lomborg was selected as a Global Leader for Tomorrow by the World Economic Forum, and in February 2002, he was named director of Denmark's national Environmental Assessment Institute. In addition to the use of statistics in environmental issues, his professional interests are the simulation of strategies in collective action dilemmas, the simulation of party behavior in proportional voting systems, and the use of surveys in public administration.

[William Cunningham Ph.D.]
RESOURCES
BOOKS
Lomborg, Bjørn. The Skeptical Environmentalist: Measuring the Real State of the World. Cambridge: Cambridge University Press, 2001.
PERIODICALS
Bell, Richard C. "Media Sheep: How did The Skeptical Environmentalist Pull the Wool over the Eyes of so Many Editors?" Worldwatch 15, no. 2 (2002): 11–13.
Dutton, Denis. "Greener than you think." Washington Post, October 21, 2001.
Schneider, Stephen. "Global Warming: Neglecting the Complexities." Scientific American 286 (2002): 62–65.
Wade, Nicholas. "Bjørn Lomborg: A Chipper Environmentalist." The New York Times, August 7, 2001.
OTHER
Anti-Lomborgian Web Site. December 2001 [cited July 9, 2002].
Bjørn Lomborg Home Page. 2002 [cited July 9, 2002].
Regis, Ed. "The Doomslayer: The environment is going to hell, and human life is doomed to only get worse, right? Wrong. Conventional wisdom, meet Julian Simon, the Doomslayer." Wired, February 1997 [cited July 9, 2002].
Wilson, E. O. "Vanishing Point: On Bjørn Lomborg and Extinction." Grist, December 12, 2001 [cited July 9, 2002].
World Resources Institute and World Wildlife Fund. Ten Things Environmental Educators Should Know About The Skeptical Environmentalist. January 2002 [cited July 9, 2002].
London Dumping Convention see Convention on the Prevention of Marine Pollution by Dumping of Waste and Other Matter (1972)
Barry Holstun Lopez
(1945 – )
American environmental writer

Barry Lopez has often called his own nonfiction writing natural history, and he is often categorized as a nature writer. This partially describes his work, but limiting him to that category is misleading, partly because his work transcends the kinds of subjects implicit in that classification. He could as well, for example, be called a travel writer, though that label also does not completely describe his work. In addition, he has published a number of unusual works of fiction and even has one children's book, Crow and Weasel, to his credit.

Barry Lopez was born in Port Chester, New York, but he spent several early childhood years in rural southern California and, at the age of 10, returned East to grow up in New York City. He earned a BA degree from the University of Notre Dame, followed by an MAT from the University of Oregon in 1968. His initial goal was to teach, but in the late 1960s he set out to become a professional writer, and since 1970 he has earned his living writing (as well as by lecturing and giving readings).

Lopez's nonfiction writing transcends the category of natural history because his real topic, as Rueckert suggests, is the search for human relationships with nature, relationships that are "dignified and honorable." Natural history as a category of literature implies a focus on primeval nature undisturbed by human activity, or at least on nature as it exists in its own right rather than as humans relate to it. He is a practitioner of what some have called "the new naturalism," a search for the human as mirrored in nature. Lopez's focus, then, is human ecology, the interactions of human beings with the world around them, especially the natural world. Even his most "natural" book of natural history, Of Wolves and Men, is not just about the natural history of wolves but about how that species' existence or "being" in the wild is affected by human perceptions and actions, and about the image of the wolf in human minds.

His fiction can be called unusual, partly because it is often presented as brief notes or sketches, and partly because it frequently blends legends, factual observations of nature, and personal meditations. Everything Lopez writes, however, is in the form of a story, whether fiction, natural history, folklore, or travel writing. His story-telling makes all of his writing enjoyable to read, easy to access and ingest, and often memorable.
Lopez, as a storyteller, occupies the spaces between truth-teller and mystic, between natural scientist and folklorist. He has written of wolves and humans, and then of people with blue skins who could not speak and, apparently, subsisted only on clean air. He writes of the land in reality and the land in our imaginations, frequently in the same text. His writings on natural history provide the reader with great detail about the places in the world he describes, but his fiction can force the shock of recognition of places in the mind. In 1998, Lopez was a finalist for the National Magazine Award in Fiction for The Letters of Heaven, and in 1999 he received the Lannan residency fellowship.

Barry Lopez is in part a naturalist, in the best sense of that word. He is also something of an anthropologist, and a student of folklore and mythology. He travels widely, but he studies his own home place and local environment intently. And, of course, he is a writer.

[Gerald L. Young Ph.D.]
RESOURCES
BOOKS
Lopez, Barry. About this Life: Journeys on the Threshold of Memory. Random, 1998.
———. Arctic Dreams: Imagination and Desire in a Northern Landscape. New York: Scribner, 1986.
———. Crossing Open Ground. New York: Scribner, 1988.
———. Desert Notes: Reflections in the Eye of a Raven. Kansas City: Sheed, Andrews & McMeel.
———. Lessons from the Wolverine. Illustrated by Tom Pohrt. University of Georgia Press, 1997.
———. Light Action in the Caribbean. Knopf, 2000.
Rueckert, W. H. "Barry Lopez and the Search for a Dignified and Honorable Relationship With Nature." In Earthly Words: Essays on Contemporary American Nature and Environmental Writers. Ed. J. Cooley. Ann Arbor: University of Michigan Press, 1994.
PERIODICALS
Paul, S. "Barry Lopez." Hewing to Experience: Essays and Reviews on Recent American Poetry and Poetics, Nature and Culture. Iowa City: University of Iowa Press, 1989.
Los Angeles Basin

The second most populous city in the United States, Los Angeles has perhaps the most fascinating environmental history of any urban area in the country. The Los Angeles Basin, into which more than 80 communities of Los Angeles County are crowded, is a trough-shaped region bounded on three sides by the Santa Monica, Santa Susana, San Gabriel, San Bernardino, and Santa Ana Mountains. On its fourth side, the county looks out over the Pacific Ocean.

The earliest settlers arrived in the Basin in 1769, when the Spaniard Gaspar de Portolá and his expedition set up camp
along what is now known as the Los Angeles River. The site was eventually given the name El Pueblo de la Reyna de Los Angeles (the Town of the Queen of the Angels). For the first century of its history, Los Angeles grew very slowly; its population in 1835 was only 1,250. By the end of the century, however, the first signs of a new trend appeared. In response to the promises of sunshine, warm weather, and "easy living," immigrants from the East Coast began to arrive in the Basin. Its population more than quadrupled between 1880 and 1890, from 11,183 to 50,395. The rush was on, and it has scarcely abated today. The metropolitan population grew from 102,000 in 1900 to 1,238,000 in 1930, to 3,997,000 in 1950, and to 9,838,861 in 2000.

The pollution facing Los Angeles today results from a complex mix of natural factors and intense population growth. The first reports of Los Angeles's famous photochemical smog go back to 1542. The "many smokes" described by Juan Cabrillo in that year were not the same as today's smog, but they occurred because of the same geographic and climatic conditions that are responsible for modern environmental problems. The Los Angeles Basin has one of the highest probabilities of experiencing thermal inversions of any area in the United States. An inversion is an atmospheric condition in which a layer of cold air becomes trapped beneath a layer of warm air. That situation is just the reverse of the more common atmospheric condition, in which a warm layer near the ground is covered by a cooler layer above it. The warm air has a tendency to rise, and the cool air has a tendency to sink; as a result, natural mixing occurs. When a thermal inversion occurs, in contrast, the denser cool air remains near the ground while the less dense air above it tends to stay where it is. Smoke and other pollutants released into a thermal inversion are unable to rise and tend to be trapped in the cool lower layer. Furthermore, horizontal movements of air, which might clear out pollution in other areas, are blocked by the mountains surrounding Los Angeles County. The lingering haze of the "many smokes" described by Cabrillo could have been nothing more than the smoke from campfires trapped by inversions that must have existed even in 1542.

As population and industrial growth occurred in Los Angeles during the second half of the twentieth century, the amount of pollutants trapped in thermal inversions also grew. By the 1960s, Los Angeles had become a classic example of how modern cities were being choked by their own wastes.
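The inversion mechanism can be stated compactly in terms of the vertical temperature gradient (the numerical lapse rate below is a standard atmospheric average, not a figure from this entry):

$$ \text{normal conditions: } \frac{dT}{dz} \approx -6.5\ {}^{\circ}\mathrm{C\,km^{-1}}, \qquad \text{inversion layer: } \frac{dT}{dz} > 0 $$

Because a parcel of air warmed at the surface rises only as long as it remains warmer than its surroundings, a layer in which temperature increases with height acts as a lid, holding pollutants in the cool air below it.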
The geographic location of the Los Angeles Basin contributes another factor to the city's special environmental problems. Sunlight warms the Basin for most of the year and attracts visitors and new residents. Solar energy also fuels reactions between components of Los Angeles's polluted air, producing chemicals even more toxic than those from which they came. The complex mixture of noxious compounds produced in Los Angeles has been given the name smog, reflecting the combination of human (smoke) and natural (fog) factors that make it possible. Smog, also called ground-level ozone, can cause a myriad of health problems, including breathing difficulties, coughing, chest pains, and congestion. It may also exacerbate asthma, heart disease, and emphysema.

As Los Angeles grew in area and population, conditions that guaranteed a continuation of smog increased. The city and surrounding environs eventually grew to cover 400 square miles (1,036 square kilometers), a widespread community held together by freeways and cars. A major oil company bought the city's public transit system, then closed it down, ensuring the wide use of automobile transportation. Thus, gases produced by the combustion of gasoline added to the city's increasing pollution levels.

Los Angeles and the State of California have been battling air pollution for over 20 years. California now has some of the strictest emission standards of any state in the nation, and Los Angeles has begun to develop mass transit systems once again. For an area that has long depended on the automobile, however, the transition to public transportation has not been an easy one. But some measurable progress has been made in controlling ground-level ozone. In 1976, smog exceeded the acceptable state standard of 0.09 ppm on a staggering 237 days of the year. By 2001, the number had dropped to 121 days. Still, much work remains to be done; in 2000, 2001, and 2002 Los Angeles topped the American Lung Association's annual list of the most ozone-polluted cities and counties.

Another of Los Angeles's population-induced problems is its enormous demand for water. As early as 1900, it was apparent that the Basin's meager water resources would be inadequate to meet the needs of the growing urban area. The city turned its sights on the Owens Valley, 200 mi (322 km) to the northeast in the Sierra Nevada. After a lengthy dispute, the city won the right to tap the water resources of this distant valley. A 200-mile water diversion public works project, the Los Angeles Aqueduct, was completed in 1913. This development did not satisfy the area's growing need for water, however, and in the 1930s a second canal was built. This canal, the Colorado River Aqueduct, carries water from the Colorado River to Los Angeles over a distance of 444 mi (714 km). Even this proved to be inadequate, and the search for additional water sources has gone on almost without stop. In fact, one of the great ongoing debates in California is between legislators from
Northern California, where the state's major water resources are located, and their counterparts from Southern California, where the majority of the state's people live. Since the latter contingent is larger in number, it has won many of the battles so far over distribution of the state's water resources.

Of course, Los Angeles has also experienced many of the same problems as urban areas in other parts of the world, regardless of its special geographical character. For example, the Basin was at one time a lush agricultural area, with some of the best soil and growing conditions found anywhere. From 1910 to 1950, Los Angeles County was the wealthiest agricultural region in the nation. But as urbanization progressed, more and more farmland was sacrificed for commercial and residential development. During the 1950s, an average of 3,000 acres (1,215 hectares) of farmland per day was taken out of production and converted to residential, commercial, industrial, or transportation use.

One of the mixed blessings faced by residents of the Los Angeles Basin is the existence of large oil reserves in the area. On the one hand, the oil and natural gas contained in these reserves is a valuable natural resource. On the other hand, the presence of working oil wells in the middle of a modern metropolitan area creates certain problems. One is aesthetic, as busy pumps in the midst of barren or scraggly land contrast with sleek new glass and steel buildings. Another petroleum-related difficulty is land subsidence. As oil and gas are removed from underground, the land above begins to sink. This phenomenon was first observed as early as 1937. Over the next two decades, subsidence reached 16 ft (5 m) at the center of the Wilmington oil fields, and horizontal shifting of up to 9 ft (2.74 m) was also recorded. Estimates of subsidence of up to 45 ft (14 m) spurred the county to begin remedial measures in the 1950s. These measures included the construction of levees to prevent seawater from flowing into the subsided area and the repressurizing of oil zones with water injection. They have been largely successful, at least to the present time, in halting the subsidence of the oil field. Los Angeles's annual ritual of pumping and storing water in underground aquifers in anticipation of the long, dry summer season has also been responsible for elevation shifts in the region. Researchers with the United States Geological Survey (USGS) observed that the ground surface of a 20 by 40 km (about 12 by 25 mi) area of Los Angeles rises and falls approximately 10–11 cm (about 4 in) annually in conjunction with the water storage activities.

As if population growth itself were not enough, the Basin poses its own set of natural challenges to the community. The area has a typical Mediterranean climate, with long hot summers and short winters with little rain. Summers are also the occasion of Santa Ana winds,
severe windstorms in which hot air sweeps down out of the mountains and across the Basin. Urban and forest fires that originate during a Santa Ana wind not uncommonly go out of control, causing enormous devastation to both human communities and the natural environment.

The Los Angeles Basin also sits within a short distance of one of the most famous fault systems in the world, the San Andreas Fault, and other minor faults spread out around Los Angeles on every side. Earthquakes are common in the Basin, and the most powerful earthquake in Southern California history, an 8.25-magnitude event, struck the region in 1857. Sixty miles (97 kilometers) from the quake's epicenter, the tiny community of Los Angeles was spared; the military base at Fort Tejon was destroyed, although only two lives were lost in the disaster. Like San Franciscans, residents of the Los Angeles Basin live not wondering if another earthquake will occur, but only when "The Big One" will hit. See also Air quality; Atmospheric inversion; Environmental Protection Agency (EPA); Mass transit; Oil drilling

[David E. Newton and Paula Anne Ford-Martin]
RESOURCES
BOOKS
Davis, Mike. Ecology of Fear: Los Angeles and the Imagination of Disaster. New York: Vintage Books, 1999.
Gumprecht, Blake. The Los Angeles River: Its Life, Death, and Possible Rebirth. Baltimore: Johns Hopkins University Press, 2001.
PERIODICALS
Hecht, Jeff. "Finding Fault." New Scientist 171 (August 2001): 8.
OTHER
American Lung Association. State of the Air 2002 Report. [cited July 9, 2002].
South Coast Air Quality Management District. Smog Levels. [cited July 9, 2002].
United States Geological Survey (USGS). Earthquake Hazards Program: Northern California. [cited July 9, 2002].
Love Canal

Probably the most infamous of the nation's hazardous waste sites, the Love Canal neighborhood of Niagara Falls, New York, was largely evacuated of its residents in 1980 after testing revealed high levels of toxic chemicals and genetic damage. Between 1942 and 1953, the Olin Corporation and the Hooker Chemical Corporation buried over 20,000 tons of deadly chemical waste in the canal, much of which is known to be capable of causing cancer, birth defects, miscarriages, and other health disorders. In 1953, Hooker deeded the land to the local board of education but did not clearly warn of the deadly nature of the chemicals buried
Weeds grow around boarded up homes in Love Canal, New York in 1980. (Corbis-Bettmann. Reproduced by permission.)
there, even when homes and playgrounds were built in the area. The seriousness of the situation became apparent in 1976, when years of unusually heavy rains raised the water table and flooded basements. As a result, houses began to reek of chemicals, and children and pets experienced chemical burns on their feet. Plants, trees, gardens, and even some pets died. Soon neighborhood residents began to experience an extraordinarily high number of illnesses, including cancer, miscarriages, and deformities in infants.

Alarmed by the situation, and frustrated by inaction on the part of local, state, and federal governments, a 27-year-old housewife named Lois Gibbs began to organize her neighbors. In 1978 they formed the Love Canal Homeowners Association and began a two-year fight to have the government relocate them to another area. In August 1978 the New York State Health Commissioner recommended that pregnant women and young children be evacuated from the area, and subsequent studies documented the extraordinarily high rate of birth defects, miscarriages, genetic damage, and other health effects.
In 1979, for example, of 17 pregnant women in the neighborhood, only two gave birth to normal children; four had miscarriages, two suffered stillbirths, and nine had babies with defects. Eventually, the state of New York declared the area "a grave and imminent peril" to human health. Several hundred families were moved out of the area, and the others were advised to leave. The school was closed and barbed wire was placed around it. In October 1980 President Jimmy Carter declared Love Canal a national disaster area. In the end, some 60 families decided to remain in their homes, rejecting the government's offer to buy their properties. The cost of the cleanup of the area has been estimated at $250 million. Ironically, twelve years after the neighborhood was abandoned, the state of New York approved plans to allow families to move back to the area, and homes were allowed to be sold.

Love Canal is not the only hazardous waste site in the country that has become a threat to humans, only the best known. Indeed, the United States Environmental Protection Agency has estimated that up to 2,000 hazardous waste disposal sites in the United States may pose "significant risks
to human health or the environment," and has called the toxic waste problem "one of the most serious problems the nation has ever faced." See also Contaminated soil; Hazardous waste site remediation; Hazardous waste siting; Leaching; Storage and transport of hazardous material; Stringfellow Acid Pits; Toxic substance

[Lewis G. Regenstein]
RESOURCES
BOOKS
Gibbs, Lois. Love Canal: My Story. Albany: State University of New York Press, 1982.
Regenstein, L. G. How to Survive in America the Poisoned. Washington, DC: Acropolis Books, 1982.
PERIODICALS
Brown, M. H. "Love Canal Revisited." Amicus Journal 10 (Summer 1988): 37–44.
———. "A Toxic Ghost Town: Ten Years Later, Scientists Are Still Assessing the Damage From Love Canal." The Atlantic 263 (July 1989): 23–26.
Kadlecek, M. "Love Canal—10 Years Later." Conservationist 43 (November-December 1988): 40–43.
Sir James Ephraim Lovelock (1919 – )
English chemist

Sir James Lovelock is the framer of the Gaia hypothesis and the developer of, among many other devices, the electron capture gas chromatographic detector. The highly selective nature and great sensitivity of this detector made possible not only the identification of chlorofluorocarbons in the atmosphere but also the measurement of many pesticides, thus providing the raw data that underlie Rachel Carson's Silent Spring.

Lovelock was born in Letchworth Garden City, earned his degree in chemistry from Manchester University, and took a Ph.D. in medicine from the London School of Hygiene and Tropical Medicine. His early studies in medical topics included work at Harvard University Medical School and Yale University. He spent three years as a professor of chemistry at Baylor University College of Medicine in Houston, Texas, and it was from that position that his work with the Jet Propulsion Laboratory for NASA began.
Sir James E. Lovelock at his home in Cornwall with the device he invented to measure chlorofluorocarbons in the atmosphere. (Photograph by Anthony Howarth. Photo Researchers Inc. Reproduced by permission.)
The Gaia hypothesis, Lovelock's most significant contribution to date, grew out of his work with his colleagues at the lab. While attempting to design experiments for life detection on Mars, Lovelock, Dian Hitchcock, and later Lynn Margulis posed the question, "If I were a Martian, how would I go about detecting life on Earth?" Looking at the problem in this way, the team soon realized that our atmosphere is a clear sign of life: it is impossible as a product of strictly chemical equilibria. One consequence of viewing life on this or another world as a single homeostatic organism is that energy will be found concentrated in certain locations, frequently or even predominantly as chemical energy, rather than spread evenly. Thus, against all probability, the earth has an atmosphere containing about 21% free oxygen and has had about this much for millions of years.

Lovelock bestowed on this superorganism comprising the whole of the biosphere the name "Gaia," one spelling of the name of the Greek earth-goddess, at the suggestion of a neighbor, William Golding, author of Lord of the Flies. The hypothesis was initially attacked as requiring the whole of life on earth to have purpose, and hence, in some sense, common intelligence. In response, Lovelock developed a computer model called "Daisyworld," in which the presence of black and white daisies alone controlled the global temperature of the planet to a nearly constant value despite a major increase in the heat output of its sun. The concept that the biosphere keeps the environment constant has been attacked as sanctioning
environmental degradation, and accusers took a cynical view of Lovelock's service to the British petrochemical industry. However, the hypothesis has served the environmental community well in suggesting many ideas for further studies, virtually all of which have given results predicted by the hypothesis.

Since 1964, Lovelock has operated a private consulting practice, first out of his home in Bowerchalke, near Salisbury, England, and later from a home near Launceston, Cornwall. He has authored over 200 scientific papers, covering research that ranged from techniques for freezing and successfully reviving hamsters to global systems science, which he has proposed to call geophysiology. Lovelock has been honored by his peers worldwide with numerous awards and honorary degrees, including election as a Fellow of the Royal Society, and he was named a Commander of the British Empire by Queen Elizabeth in 1990.
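The Daisyworld model described above is simple enough to sketch in a few dozen lines of code. The following Python sketch is not Lovelock's original program; it assumes the parameter values usually quoted for the model (a solar flux of 917 W/m², daisy albedos of 0.25 and 0.75, a bare-ground albedo of 0.5, a death rate of 0.3, and an optimum growth temperature of 22.5°C), together with a standard linearized rule, with q of roughly 20 K, for how much warmer or cooler each daisy patch runs than the planetary mean:

SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
FLUX = 917.0      # solar flux commonly used for Daisyworld (W m^-2)
ALB_WHITE, ALB_BLACK, ALB_BARE = 0.75, 0.25, 0.50
Q = 20.0          # linearized local-heating parameter (K)
DEATH = 0.3       # daisy death rate
T_OPT = 295.5     # optimal growth temperature, 22.5 degrees C (K)

def growth_rate(t):
    """Parabolic growth response; zero outside roughly 5-40 degrees C."""
    return max(0.0, 1.0 - 0.003265 * (T_OPT - t) ** 2)

def step(white, black, lum, dt=0.02):
    """Advance daisy cover by one Euler step at solar luminosity lum."""
    bare = max(0.0, 1.0 - white - black)
    albedo = white * ALB_WHITE + black * ALB_BLACK + bare * ALB_BARE
    t_planet = (FLUX * lum * (1.0 - albedo) / SIGMA) ** 0.25
    # Dark patches run warmer than the planetary mean, light ones cooler.
    t_white = Q * (albedo - ALB_WHITE) + t_planet
    t_black = Q * (albedo - ALB_BLACK) + t_planet
    white += dt * white * (growth_rate(t_white) * bare - DEATH)
    black += dt * black * (growth_rate(t_black) * bare - DEATH)
    # Keep a small seed population so daisies can re-invade later.
    return max(white, 0.01), max(black, 0.01), t_planet

white = black = 0.01
for i in range(51):                   # ramp the sun from 60% to 160%
    lum = 0.6 + 0.02 * i
    for _ in range(3000):             # let the system settle
        white, black, t_planet = step(white, black, lum)
    print(f"L={lum:.2f}  white={white:.2f}  black={black:.2f}  "
          f"T={t_planet - 273.15:6.1f} C")

Ramping the luminosity upward, the sketch reproduces the qualitative Daisyworld result: dark daisies appear first and warm the young, faint planet, light daisies dominate as the sun brightens, and the planetary temperature stays near the growth optimum over a wide range of luminosities, with no purpose or foresight built into either species.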
[James P. Lodge Jr.]
RESOURCES
BOOKS
Joseph, L. E. Gaia: The Growth of an Idea. New York: St. Martin's Press, 1990.
Lovelock, J. The Ages of Gaia: A Biography of Our Living Earth. New York: Norton, 1988.
———. Gaia: A New Look at Life on Earth. Oxford: Oxford University Press, 1979.
———, and M. Allaby. The Greening of Mars. New York: St. Martin's Press, 1984.
Amory Bloch Lovins (1947 – )
American physicist and energy conservationist

Amory Lovins is a physicist specializing in environmentally safe and sustainable energy sources. Born in 1947 in Washington, D.C., Lovins attended Harvard and Oxford universities. He has had a distinguished career as an educator and scientist. After resigning his academic post at Oxford in 1971, Lovins became the British representative of Friends of the Earth. He has been Regents' Lecturer at the University of California at Berkeley and has served as a consultant to the United Nations and other international and environmental organizations.

Lovins is a leading critic of hard energy paths and an outspoken proponent of soft alternatives. According to Lovins, an energy path is "hard" if the route from source to use is complex and circuitous; requires extensive, expensive, and highly complex technological means and centralized power to produce, transmit, and store the energy; produces toxic wastes or other unwanted side effects; has hazardous social uses or implications; and tends over time to harden even more, as other options are
foreclosed or precluded and the populace becomes ever more dependent on energy from a particular source. A hard energy path can be seen in the case of fossil fuel energy. Once readily abundant and cheap, petroleum fueled the internal combustion engines and other machines on which humans have come to depend, but as that energy source becomes scarcer, oil companies must go farther afield to find it, potentially causing more environmental damage.

As oil supplies run low and become more expensive, the temptation is to sustain the level of energy use by turning to another, and even harder, energy path: nuclear energy. With its complex technology, its hazards, its long-lived and highly toxic wastes, its myriad military uses, and the possibility of its falling into the hands of dictators or terrorists, nuclear power is perhaps the hardest energy path. No less important are the social and political implications of this hard path: radioactive wastes will have to be stored somewhere; nuclear power plants and plutonium transport routes must be guarded; we must make trade-offs between the ease, convenience, and affluence of people presently living and the health and well-being of future generations; and so on. A hard energy path is also one that, once taken, forecloses other options because, among other considerations, the initial investment and costs of entry are so high as to render the decision, once made, nearly irreversible. The longer-term economic and social costs of taking the hard path, Lovins argues, are astronomically high and incalculable.

Soft energy paths, by contrast, are shorter, more direct, less complex, and cheaper (at least over the long run); they are inexhaustible and renewable, have few if any unwanted side effects, have minimal military uses, and are compatible with decentralized local forms of community control and decision-making. The old windmill on the family farm offers an early example of such a soft energy source; newer versions of the windmill, adapted to the generation of electricity, supply a more modern example. Other soft technologies include solar energy, biomass furnaces burning peat, dung, or wood chips, and methane from the rotting of vegetable matter, manure, and other cheap, plentiful, and readily available organic material.

Much of Lovins's work has dealt with the technical and economic aspects, as well as the very different social impacts and implications, of these two competing energy paths. The Rocky Mountain Institute, a resource consulting agency, was founded by Lovins and his wife, Hunter, in 1982. In 1989, Lovins and his wife won the Onassis Foundation's first Delphi Prize for their "essential contribution towards finding alternative solutions to energy problems."

[Terence Ball]
Amory Lovins. (Reproduced by permission of the Rocky Mountain Institute.)
RESOURCES
BOOKS
Nash, H., ed. The Energy Controversy: Amory B. Lovins and His Critics. San Francisco: Friends of the Earth, 1979.
Lovins, A. B. Soft Energy Paths. San Francisco: Friends of the Earth, 1977.
———, and L. H. Lovins. Energy Unbound: Your Invitation to Energy Abundance. San Francisco: Sierra Club Books, 1986.
PERIODICALS
Louma, J. R. "Generate 'Nega-Watts' Says Fossil Fuel Foe." New York Times (April 2, 1993): B5, B8.
Lowest Achievable Emission Rate

Governments have explored a number of mechanisms for reducing the amount of pollutants released to the air by factories, power plants, and other stationary sources. One mechanism is to require that a new or modified installation release no more pollutants than the lowest level that can be maintained by existing technological means, as determined by law or regulation. These limits are known as the Lowest Achievable Emission Rate (LAER). The Clean Air Act of 1970 required, for example, that any new source in an area where minimum air pollution standards were not being met had to conform to the LAER standard. See also Air quality; Best Available Control Technology (BAT); Emission standards
Low-head hydropower

The term hydropower often suggests giant dams capable of transmitting tens of thousands of cubic feet of water per minute, yet such dams are responsible for only about six percent of all the electricity produced in the United States today. Hydropower facilities do not have to be massive structures. At one time in the United States, and still in many places around the world, electrical power was generated at low-head facilities: dams where the vertical drop through which water passes is relatively short and/or where water flow is relatively modest. Indeed, the first commercial hydroelectric facility in the world consisted of a waterwheel on the Fox River in Appleton, Wisconsin. The facility, opened in 1882, generated enough electricity to operate lighting systems at two paper mills and one private residence.

Electrical demand grew rapidly in the United States during the early twentieth century, and hydropower supplied much of that demand. By the 1930s, nearly 40 percent of the electricity used in this country was produced by hydroelectric facilities; in some Northeastern states, hydropower accounted for 55-85 percent of the electricity produced.

A number of social, economic, political, and technical changes soon began to alter that pattern. Perhaps most important was the vastly increased efficiency of power plants operated by fossil fuels. The fraction of electrical power from such plants rose to more than 80 percent by the 1970s. In addition, the United States began to move from a decentralized energy system, in which many local energy companies met the needs of local communities, to large, centralized utilities that served many counties or states. In the 1920s, more than 6,500 electric power companies existed in the nation. As the government came to recognize power companies as monopolies, that number began to drop rapidly. Companies that owned a handful of low-head dams on one or more rivers could no longer compete with their giant cousins that operated huge plants powered by oil, natural gas, or coal. As a result, hundreds of small hydroelectric plants around the nation were closed down. According to one study, over 770 low-head hydroelectric plants were abandoned between 1940 and 1980. In some states, the loss of low-head generating capacity was especially striking: between 1950 and 1973, Consumers Power Company, one of Michigan's two electric utilities, sold off 44 hydroelectric plants.

Some experts believe that low-head hydropower should receive more attention today. Social and technical factors still prevent low-head power from seriously competing
Environmental Encyclopedia 3
Low-level radioactive waste
ing with other forms of energy on a national scale. But it may meet the needs of local communities in special circumstances. For example, a project has been undertaken to rehabilitate four low-head dams on the Boardman River in northwestern Michigan. The new facility is expected to increase the electrical energy available to nearby Traverse City and adjoining areas by about 20 percent. Low-head hydropower appears to have a more promising future in less-developed parts of the world. For example, China has more than 76,000 low-head dams that generate a total of 9,500 megawatts of power. An estimated 50 percent of rural townships depend on such plants to meet their electrical needs. Low-head hydropower is also of increasing importance in nations with fossil-fueled plants and growing electricity needs. Among the fastest growing of these are Peru, India, the Philippines, Costa Rica, Thailand, and Guatemala. See also Alternative energy sources; Electric utilities; Wave power [David E. Newton]
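The scale of a hydropower site, low-head or otherwise, follows from a standard physics relation: power is the product of turbine efficiency, water density, gravitational acceleration, flow rate, and head. The sketch below applies that relation; the site figures are hypothetical, chosen only to suggest the scale of a low-head installation.
```python
# Illustrative calculation of hydroelectric output from head and flow.
# The relation P = eta * rho * g * Q * H is standard physics; the site
# numbers below are hypothetical, not data from this article.
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydro_power_kw(flow_m3s: float, head_m: float, efficiency: float = 0.85) -> float:
    """Electrical power in kilowatts for a turbine of the given efficiency."""
    return efficiency * RHO_WATER * G * flow_m3s * head_m / 1000.0

# A low-head site with a 3 m drop and 10 m^3/s of flow yields roughly
# 250 kW, enough for a small community rather than a regional grid.
print(round(hydro_power_kw(flow_m3s=10.0, head_m=3.0)))
```
Because power scales linearly with both head and flow, a dam with one-hundredth the head of a giant facility needs one hundred times the flow to match its output, which is why low-head plants suit local rather than national demand.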
RESOURCES BOOKS Lapedes, D. N., ed. McGraw-Hill Encyclopedia of Energy. New York: McGraw-Hill, 1976.
PERIODICALS Kakela, P., G. Chilson, and W. Patric. “Low-Head Hydropower for Local Use.” Environment (January-February 1984): 31–38.
Low-input agriculture see Sustainable agriculture
Low-level radioactive waste Low-level radioactive waste consists of materials used in a variety of medical, industrial, commercial, and research applications. They tend to release a low level of radiation that dissipates in a relatively short period of time. Although care must be taken in handling such materials, they pose little health or environmental risk. Among the most common low-level radioactive materials are rags, papers, protective clothing, and filters. Such materials are often stored temporarily in sealed containers at their use site. They are then disposed of by burial at one of three federal sites: Barnwell, South Carolina; Beatty, Nevada; or Hanford, Washington. See also Hanford Nuclear Reservation; High-level radioactive waste; Radioactive waste; Radioactivity
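How quickly the radiation in such waste “dissipates” can be made concrete with the standard half-life relation. The following sketch is illustrative; cobalt-60, with a half-life of about 5.3 years, is a common constituent of low-level waste.
```python
# Fraction of a radionuclide remaining after a given time, using the
# standard decay law N/N0 = (1/2)**(t / half_life). Cobalt-60 is used
# here purely as an example radionuclide.
def remaining_fraction(elapsed_years: float, half_life_years: float) -> float:
    return 0.5 ** (elapsed_years / half_life_years)

for years in (5.3, 26.5, 53.0):  # one, five, and ten half-lives
    print(years, remaining_fraction(years, 5.3))
# After ten half-lives (about 53 years) less than 0.1% of the original
# activity remains, in contrast to high-level waste isotopes whose
# half-lives run to thousands of years.
```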
LUST see Leaking underground storage tank
Sir Charles Lyell (1797 – 1875) Scottish geologist Lyell was born in Kinnordy, Scotland, the son of well-to-do parents. When Lyell was less than a year old, his father moved his family to the south of England where he leased a house near the New Forest in Hampshire. Lyell spent his boyhood there, surrounded by his father's collection of rare plants. At the age of seven, Lyell became ill with pleurisy and while recovering began to collect and study insects. As a young man he entered Oxford to study law, but he also became interested in mineralogy after attending lectures by the noted geologist William Buckland. Buckland advocated the theories of Abraham Gottlob Werner, a neptunist who postulated that a vast ocean once covered the earth and that the various rocks resulted from chemical and mechanical deposition underwater, over a long period of time. This outlook was more in keeping with the Biblical story of Genesis than that of the vulcanists or plutonists, who subscribed to the idea that volcanism, along with erosion and deposition, was the major force sculpting the Earth. While on holidays with his family, Lyell made the first of many observations in hopes of confirming the views of Buckland and Werner. However, he continued to study law and was eventually called to the bar in 1822. Lyell practiced law until 1827 while still devoting time to geology. Lyell traveled to France and Italy where he collected extensive data which caused him to reject the neptunist philosophy. He instead drew the conclusion that volcanic activity and erosion by wind and weather were primarily responsible for the different strata rather than the deposition of sediments from a “world ocean.” He also rejected the catastrophism of Georges Cuvier, who believed that global catastrophes, such as the biblical Great Flood, periodically destroyed life on Earth, thus accounting for the different fossils found in each rock layer. Lyell believed change was a gradual process that occurred over a long period of time at a constant rate. This theory, known as uniformitarianism, had been postulated 50 years earlier by Scottish geologist James Hutton. It was Lyell, though, who popularized uniformitarianism in his work The Principles of Geology, which is now considered a classic text in this field. By 1850 his views and those of Hutton had become the standard among geologists. However, unlike many of his colleagues, Lyell adhered so strongly to uniformitarianism that he rejected the possibility of even limited catastrophe. Today most scientists accept that catastrophes such as meteor impacts played an important, albeit supplemental, role in the earth's evolution. In addition to his championing of uniformitarianism, Lyell named several divisions of geologic time such as the Eocene, Miocene, and Pliocene Epochs. He also estimated the age of some of the oldest fossil-bearing rocks known at
that time, assigning them the then startling figure of 240 million years. Even though Lyell came closer than his contemporaries to guessing the correct age, it is still less than half the figure currently accepted by geologists. While working on The Principles of Geology, Lyell formed a close friendship with Charles Darwin, who had outlined his evolutionary theory in The Origin of Species. Both scientists quickly accepted the work of the other (Lyell was one of two scientists who presented Darwin's work to the influential Linnean Society). Lyell even extended evolutionary theory to include humans at a time when Darwin was unwilling to do so. In his The Antiquity of Man (1863), Lyell argued that humans were much more ancient than creationists (those who interpreted the Book of Genesis literally) and catastrophists believed, basing his ideas on archaeological artifacts such as ancient ax heads. Lyell was knighted for his work in 1848 and created a baronet in 1864. He also served as president of the Geological Society and set up the Lyell Medal and the Lyell Fund. He died in 1875 while working on the twelfth edition of his Principles of Geology.
RESOURCES BOOKS Adams, Alexander B. “Reading the Earth's Story: Charles Lyell—1797–1875.” In Eternal Quest: The Story of the Great Naturalists. New York: G. P. Putnam's Sons, 1969. Wilson, Leonard G. Charles Lyell—The Years to 1841: The Revolution in Geology. New Haven, CT: Yale University Press, 1972.
PERIODICALS Camardi, Giovanni. “Charles Lyell and the Uniformity Principle.” Biology and Philosophy 14, no. 4 (October 1999): 537–560. Kennedy, Barbara A. “Charles Lyell and ‘Modern Changes of the Earth’: the Milledgeville Gully.” Geomorphology 40 (2001): 91–98.
Lysimeter A device for 1) measuring percolation and leaching losses from a column of soil under controlled conditions, or 2) measuring gains (precipitation, irrigation, and condensation) and losses (evapotranspiration) by a column of soil. Many kinds of lysimeters exist: weighing lysimeters record the weight changes of a block of soil; non-weighing lysimeters enclose a block of soil so that losses or gains must occur through the surface; suction lysimeters are devices for removing water and dissolved chemicals from locations within the soil.
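The quantities a lysimeter measures are tied together by the soil water balance. The sketch below states that balance in code and solves it for evapotranspiration, as a weighing lysimeter allows; the names and numbers are illustrative rather than taken from any particular instrument.
```python
# Soil water balance for a lysimeter column (all terms in mm of water):
#   storage change = precipitation + irrigation + condensation
#                    - evapotranspiration - drainage (percolation)
# A weighing lysimeter measures the storage change directly, so the
# balance can be rearranged to estimate evapotranspiration.
def evapotranspiration_mm(precip: float, irrigation: float, condensation: float,
                          drainage: float, storage_change: float) -> float:
    return precip + irrigation + condensation - drainage - storage_change

# Example: 25 mm rain, 10 mm irrigation, no condensation, 8 mm drainage,
# and a measured 12 mm gain in storage imply 15 mm of evapotranspiration.
print(evapotranspiration_mm(25.0, 10.0, 0.0, 8.0, 12.0))
```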
Lythrum salicaria see Purple loosestrife
M
Robert Helmer MacArthur (1930 – 1972)
Canadian biologist and ecologist Few scientists have combined the skills of mathematics and biology to open new fields of knowledge the way Robert H. MacArthur did in his pioneering work in evolutionary ecology. Guided by a wide-ranging curiosity for all things natural, MacArthur had a special interest in birds and much of his work dealt primarily with bird populations. His conclusions, however, were not specific to ornithology but transformed both population biology and biogeography in general. Robert Helmer MacArthur was born in Toronto, Ontario, Canada, on April 7, 1930, the youngest son of John Wood and Olive (Turner) MacArthur. While Robert spent his first seventeen years attending public schools in Toronto, his father shuttled between the University of Toronto and Marlboro College in Marlboro, Vermont, as a professor of genetics. Robert MacArthur graduated from high school in 1947 and immediately immigrated to the United States to attend Marlboro College. He received his undergraduate degree from Marlboro in 1951 and a master’s degree in mathematics from Brown University in 1953. Upon receiving his doctorate in 1957 from Yale University under the direction of G. Evelyn Hutchinson, MacArthur headed for England to spend the following year studying ornithology with David Lack at Oxford University. When he returned to the United States in 1958, he was appointed Assistant Professor of Biology at the University of Pennsylvania. As a doctoral student at Yale, MacArthur had already proposed an ecological theory that encompassed both his background as a mathematician and his growing knowledge as a naturalist. While at Pennsylvania, MacArthur developed a new approach to the frequency distribution of species. One of the problems confronting ecologists is measuring the numbers of a specific species within a geographic area— one cannot just assume that three crows in a 10-acre corn field means that in a 1000-acre field there will be 300 crows.
Much depends on the number of species occupying a habitat, species competition within the habitat, food supply, and other factors. MacArthur developed several ideas relating to the measurement of species within a known habitat, showing how large masses of empirical data relating to numbers of species could be processed in a single model by employing the principles of information theory. By taking the sum of the products of the frequencies of occurrence of each species and the logarithms of those frequencies, complex data could be addressed more easily (an illustrative calculation appears at the end of this entry). The most well-known theory of frequency distribution MacArthur proposed in the late 1950s is the so-called broken stick model. This model had been suggested by MacArthur as one of three competing models of frequency distribution. He proposed that competing species divide up available habitat in a random fashion and without overlap, like the segments of a broken stick. In the 1960s, MacArthur himself declared the model obsolete. The procedure of using competing explanations and theories simultaneously and comparing results, rather than relying on a single hypothesis, was also characteristic of MacArthur's later work. In 1958, MacArthur initiated a detailed study of warblers in which he analyzed their niche division, or the way in which different species come to be best suited for a narrow ecological role in their common habitat. His work in this field earned him the Mercer Award of the Ecological Society of America. In the 1960s, he studied the so-called “species-packing problem.” Different kinds of habitat support widely different numbers of species. A tropical rain forest habitat, for instance, supports a great many species, while arctic tundra supports relatively few. MacArthur proposed that the number of species crowding a given habitat correlates to niche breadth. The book The Theory of Island Biogeography, written with biodiversity expert Edward O. Wilson and published in 1967, applied these and other ideas to isolated habitats such as islands. The authors explained the species-packing problem in an evolutionary light, as an equilibrium between the rates at which new species arrive or develop and the extinction rates of species already present.
These rates vary with the size of the habitat and its distance from other habitats. In 1965 MacArthur left the University of Pennsylvania to accept a position at Princeton University. Three years later, he was named Henry Fairfield Osborn Professor of Biology, a chair he held until his death. In 1971, MacArthur discovered that he suffered from a fatal disease and had only a few years to live. He decided to concentrate his efforts on encapsulating his many ideas in a single work. The result, Geographic Ecology: Patterns in the Distribution of Species, was published shortly before his death the following year. Besides a summation of work already done, Geographic Ecology was a prospectus of work still to be carried out in the field. MacArthur was a Fellow of the American Academy of Arts and Sciences. He was also an Associate of the Smithsonian Tropical Research Institute, and a member of both the Ecological Society and the National Academy of Sciences. He married Elizabeth Bayles Whittemore in 1952; they had four children: Duncan, Alan, Donald, and Elizabeth. Robert MacArthur died of renal cancer in Princeton, New Jersey, on November 1, 1972, at the age of 42. RESOURCES BOOKS Carey, C. W. “MacArthur, Robert Helmer.” In American National Biography, Vol. 14, edited by J. A. Garraty and M. C. Carnes. New York: Oxford University Press, 1999. Gillispie, Charles Coulson, ed. Dictionary of Scientific Biography. Vols. 17–18. New York: Scribner, 1990. MacArthur, Robert. Geographic Ecology: Patterns in the Distribution of Species. New York: Harper & Row, 1972. ———. The Biology of Populations. New York: Wiley, 1966. ———, and E. O. Wilson. The Theory of Island Biogeography. Princeton: Princeton University Press, 1967. Notable Scientists: From 1900 to the Present. Farmington Hills, MI: Gale Group, 2002.
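The two quantitative ideas sketched in this entry, the information-theoretic diversity measure and the broken stick model, can be stated concretely in a few lines of code. The following sketch is illustrative: the census counts are invented, and the formulas are the standard textbook forms rather than code from MacArthur's own work.
```python
import math

def shannon_diversity(counts):
    """Diversity as the sum of the products of species frequencies and
    the logarithms of those frequencies: H = -sum(p_i * ln(p_i))."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def broken_stick(num_species):
    """Expected relative abundance of the i-th most abundant of S species
    when a resource 'stick' is broken at random without overlap:
    p_i = (1/S) * sum over j from i to S of 1/j."""
    s = num_species
    return [sum(1.0 / j for j in range(i, s + 1)) / s for i in range(1, s + 1)]

counts = [55, 20, 12, 8, 5]  # hypothetical census of five species
print(round(shannon_diversity(counts), 3))  # larger H means a more even community
print([round(p, 3) for p in broken_stick(len(counts))])
```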
Mad cow disease Mad cow disease, a relatively newly discovered malady, was first identified in Britain in 1986, when farmers noticed that their cows' behavior had changed. The cows began to shake and fall, became unable to walk or even stand, and eventually died or had to be killed. It was later determined that a variation of this fatal neurological disease, formally known as Bovine Spongiform Encephalopathy (BSE), could be passed on to humans. It is still not known to what extent the population of Britain and perhaps other countries is at risk from consumption of contaminated meat and animal by-products. The significance of the BSE problem lies in its as yet unquantifiable potential not only to damage Britain's $7.5 billion beef
industry, but also to endanger millions of people with the threat of a fatal brain disease. A factor that stood out in the autopsies of infected animals was the presence of holes and lesions in the brains, which were described as resembling a sponge or Swiss cheese. This was the first clue that BSE was a subtype of untreatable, fatal brain diseases called transmissible spongiform encephalopathies (TSEs). These include a very rare human malady known as Creutzfeldt-Jakob Disease (CJD), which normally strikes just one person in a million, usually elderly or middle-aged. In contrast to previous cases of CJD, the new bovine-related CJD in humans is reported to affect younger people, and manifests with unusual psychiatric and sensory abnormalities that differentiate it from the endemic CJD. The BSE-related CJD has a delayed onset that includes shaky limb movements, sudden muscle spasms, and dementia. As the epidemic of BSE progressed, the number of British cows diagnosed began doubling almost yearly, growing from some 7,000 cases in 1989, to 14,000 in 1990, to over 25,000 in 1991. The incidence of CJD in Britain was simultaneously increasing, almost doubling between 1990 and 1994 and reaching 55 cases by 1994. In response to the problem and growing public concern, the government's main strategy was to issue reassurances. However, it did undertake two significant measures to try to safeguard public health. In July 1988, it ostensibly banned meat and bone meal from cow feed, but failed to strictly enforce the action. In November 1989, it passed a law intended to remove those bovine body parts considered to be the most highly infective (brain, spinal cord, spleen, tonsils, intestines, and thymus) from the public food supply. A 1995 government report revealed that half of the time, the law was not being adhered to by slaughterhouses. Thus, livestock—and the public—continued to be potentially exposed to BSE. As the disease continued to spread, so did public fears that it might be transmissible to humans, and could represent a serious threat to human health. But the British government, particularly the Ministry of Agriculture, Fisheries, and Food (MAFF), anxious to protect the multibillion dollar cattle industry, insisted that there was no danger to humans. However, on March 20, 1996, in an embarrassing reversal, the government officially admitted that there could be a link between BSE and the unusual incidence of CJD among young people. At the time, 15 people had been newly diagnosed with CJD. Shocking the nation and making headlines around the world, the Minister of Health Stephen Dorrell announced to the House of Commons that consumption of contaminated beef was “the most likely explanation” for the outbreak of a new variant CJD in 10 people under the age of 42, including several teenagers. Four dairy farmers, including some with infected herds, had also contracted CJD, as had a Frenchman who died in January 1996.
British authorities estimated that some 163,000 British cows had contracted BSE. But other researchers, using the same database, put the figure at over 900,000, with 729,000 of them having been consumed by humans. In addition, an unknown number had been exported to Europe, traditionally a large market for British cattle and beef. Many non-meat products may also have been contaminated. Gelatin, made from ligaments, bones, skin, and hooves, is found in ice cream, lipstick, candy, and mayonnaise; keratin, made from hooves, horns, nails, and hair, is contained in shampoo; fat and tallow are used in candles, cosmetics, deodorants, soap, margarine, detergent, lubricants, and pesticides; and protein meal is made into medical and pharmaceutical products, fertilizer, and food additives. Bone meal from dead cows is used as fertilizer on roses and other plants, and is handled and often inhaled by gardeners. In reaction to the government announcement, sales of beef dropped by 70%, cattle markets were deserted, and even hamburger chains stopped serving British beef. Prime Minister Major called the temporary reaction “hysteria” and blamed the press and opposition politicians for fanning it. On March 25, 1996, the European Union banned the import of British beef, which had since 1990 been excluded from the United States and 14 other countries. Shortly afterwards, in an attempt to have the European ban lifted, Britain announced that it would slaughter all of its 1.2 million cows over the age of 30 months (an age before which cows do not show symptoms of BSE), and began the arduous task of killing and incinerating 22,000 cows a week. The government later agreed to slaughter an additional 100,000 cows considered most at risk from BSE. A prime suspect in causing BSE is a by-product derived from the rendering process, in which the unusable parts of slaughtered animals are boiled down or “cooked” at high temperatures to make animal feed and other products. One such product, called meat and bone meal (MBM), is made from the ground-up, cooked remains of slaughtered livestock—cows, sheep, chickens, and hogs—and made into nuggets of animal feed. Some of the cows and sheep used in this process were infected with fatal brain disease. (Although MBM was ostensibly banned as cattle feed in 1988, spinal cords continued to be used.) It is theorized that sheep could have played a major role in initially infecting cows with BSE. For over 200 years, British sheep have been contracting scrapie, another TSE that results in progressive degeneration of the brain. Scrapie causes the sheep to tremble and itch, and to “scrape” or rub up against fences, walls, and trees to relieve the sensation. The disease, first diagnosed in British sheep in 1732, may have recently jumped the species barrier when cows ate animal feed that contained brain and spinal cord tissue from diseased sheep. In 1999 the World Health Organization
(WHO) implored high-risk countries to assess outbreaks of BSE-like manifestations in sheep and goat stocks. In August 2002, sheep farms in the United Kingdom demonstrated to the WHO that no increase in illnesses potentially linked to BSE occurred in non-cattle livestock. However, that same year, the European Union Scientific Steering Committee (SSC) on the risk of BSE identified the United Kingdom and Portugal as hotspots for BSE infection of domestic cattle relative to other European nations. Scrapie and perhaps these other spongiform brain diseases are believed to be caused not by a virus (as originally thought) but rather by a form of infectious protein-like particles called prions, which are extremely tenacious, surviving long periods of high-intensity cooking and heating. They are, in effect, a new form of contagion. The first real insights into the origins of these diseases were gathered in the 1950s by Dr. D. Carleton Gajdusek, who was awarded the 1976 Nobel Prize in Medicine for his work. His research on the fatal degenerative disease “kuru” among the cannibals of Papua New Guinea, which resulted in the now-familiar brain lesions and cavities, revealed that the malady was caused by consuming or handling the brains of relatives who had just died. In the United States, Department of Agriculture officials say that the risk of BSE and other related diseases is believed to be small, but cannot be ruled out. No BSE has been detected in the United States, and no cattle or processed beef is known to have been imported from Britain since 1989. However, several hundred mink in Idaho and Wisconsin have died from an ailment similar to BSE, and many of them ate meat from diseased “downer” cows, those that fall and cannot get up. Some experts believe that BSE can occur spontaneously, without apparent exposure to the disease, in one or two cows out of a million every year. This would amount to an estimated 150–250 cases annually among the United States cow population of some 150 million. Moreover, American feed processors render the carcasses of some 100,000 downer cows every year, thus utilizing for animal feed cows that are possibly seriously and neurologically diseased. In June 1997, the Food and Drug Administration (FDA) announced a partial ban on using in cattle feed remains from dead sheep, cattle, and other animals that chew their cud. But the ruling exempts from the ban some animal protein, as well as feed for poultry, pigs, and pets. In March of that year, a coalition of consumer groups, veterinarians, and federal meat inspectors had urged the FDA to include pork in the animal feed ban, citing evidence that pigs can develop a form of TSE, and that some may already have done so. The coalition had recommended that the United States adopt a ban similar to Britain's, where protein from all mammals is excluded from animal feed, and some criticized the FDA's action as “totally inadequate in protecting consumers and public health.”
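The spontaneous-incidence estimate above is a simple rate-times-population calculation, reproduced in the sketch below for concreteness; a rate of one to two cases per million in a herd of 150 million gives 150 to 300 cases a year, bracketing the figure cited in this entry.
```python
# Expected spontaneous BSE cases per year: incidence rate times herd size.
# The figures are those cited in the entry; the arithmetic is illustrative.
herd_size = 150_000_000  # U.S. cow population as given above

for rate_per_million in (1, 2):
    expected_cases = rate_per_million * herd_size / 1_000_000
    print(rate_per_million, "per million ->", int(expected_cases), "cases/year")
```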
The United States Centers for Disease Control (CDC) has reclassified the CJD that is associated with interspecies transmission of the BSE disease-causing agent. This form is termed new variant CJD (nvCJD) to distinguish it from the extremely rare form of CJD that is not associated with BSE contagion. According to the CDC, there have been 79 nvCJD deaths reported worldwide. By April 2002, the global incidence of nvCJD had increased to 125 documented reports. Of these, most (117) were from the United Kingdom. Other countries reporting nvCJD included France, Ireland, and Italy. The CDC stresses that nvCJD should not be confused with the endemic form of CJD. In the United States, CJD seldom occurs in adults under 30 years old, having a median age of death of 68 years. In contrast, nvCJD, associated with BSE, tends to affect a much younger segment of society. In the United Kingdom, the median age of death from nvCJD is 28 years. As of April 2002, no cases of nvCJD had been reported in the United States, and all known worldwide cases of nvCJD have been associated with countries where BSE is known to exist. The first possible infection of a U.S. resident was documented and reported by the CDC in 2002. A 22-year-old citizen of the United Kingdom living in Florida was preliminarily diagnosed with nvCJD during a visit abroad. Unfortunately, the only way to verify a diagnosis of nvCJD is via brain biopsy or autopsy. If confirmed, the CDC and Florida Department of Health claim that this case would be the first reported in the United States. The outlook for BSE is uncertain. Since tens of millions of people in Britain may have been exposed to the infectious agent that causes BSE, plus an unknown number in other countries, some observers fear that a latent epidemic of serious proportions could be in the offing. (There is also concern that some of the four million Americans now diagnosed with Alzheimer's disease may actually be suffering from CJD.) There are others who feel that a general removal of most infected cows and animal brain tissue from the food supply has prevented a human health disaster. But since the incubation period for CJD is thought to be 7–40 years, it will be some time before it is known how many people are already infected and before the extent of the problem becomes apparent.
French farmers protest against an allowance they must pay to bring dead animals to the knackery—a service that was free of charge prior to mad cow disease.
[Lewis G. Regenstein]
RESOURCES BOOKS Rhodes, R. Deadly Feasts. New York: Simon & Schuster, 1997.
PERIODICALS Blakeslee, S. “Fear of Disease Prompts New Look at Rendering.” The New York Times, March 11, 1997.
Lanchester, J. “A New Kind of Contagion.” The New Yorker, December 2, 1996.
Madagascar Described as a crown jewel among earth's ecosystems, this 1,000-mi long (1,610-km) island-continent is a microcosm of Third World ecological problems. It abounds with unique species which are being threatened by the exploding human population. Many scientists consider Madagascar the world's foremost conservation priority. Since 1984 united efforts have sought to slow the island's deterioration, in the hope of providing a model for treating other problem areas. Madagascar is the world's fourth largest island, with a rain forest climate in the east, deciduous forest in the west, and thorn scrub in the south. Its Malagasy people are descended from African and Indonesian seafarers who arrived about 1,500 years ago. Most farm the land using ecologically devastating slash and burn agriculture, which has turned Madagascar into the most severely eroded land on earth. It has been described as an island with the shape, color, and fertility of a brick; second growth forest does not do well. Having been separated from Africa for 160 million years, this unique land was sufficiently isolated during the last 40 million years to become a laboratory of evolution. There are 160,000 unique species, mostly in the rapidly disappearing eastern rain forests. These include 65 percent of its plants, half of its birds, and all of its reptiles and mammals. Sixty percent of the earth's chameleons live here. Lemurs, displaced elsewhere by monkeys, have evolved into 26 species. Whereas Africa has only one species of baobab tree, Madagascar has six, and one is termite resistant. The thorn scrub abounds with potentially useful poisons evolved for plant defense. One species of periwinkle provides a substance effective in the treatment of childhood (lymphocytic) leukemia. Humans have been responsible for the loss of 93 percent of tropical forest and two-thirds of rain forest. Four-fifths of the land is now barren as the result of habitat destruction set in motion by the exploding human population (3.2 percent growth per year). Although nature reserves date from 1927, few Malagasy have ever experienced their island's biological wonders; urbanites disdain the bush, and peasants are driven by hunger. If they can see Madagascar's rich ecosystems first hand, it may engender respect which, in turn, may encourage understanding and protection. The people are awakening to their loss and the impact this may have on all Madagascar's inhabitants. Pride in their island's unique biodiversity is growing. The World Bank has provided $90 million to develop and implement a 15-year Environmental Action Plan. One private preserve in
the south is doing well, and many other possibilities exist for the development of ecotourism. If population growth can be controlled, and high-yield farming replaces slash and burn agriculture, there is yet hope for preserving the diversity and uniqueness of Madagascar. See also Deforestation; Erosion; Tropical rain forest [Nathan H. Meleen]
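A growth rate of 3.2 percent per year compounds quickly; the short sketch below computes the implied population doubling time as an illustration of why the entry calls the growth explosive.
```python
import math

def doubling_time_years(annual_rate: float) -> float:
    """Doubling time for steady compound growth: t = ln(2) / ln(1 + r)."""
    return math.log(2) / math.log(1 + annual_rate)

# At 3.2% annual growth the population doubles in roughly 22 years.
print(round(doubling_time_years(0.032), 1))
```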
RESOURCES BOOKS Attenborough, D. Bridge to the Past: Animals and People of Madagascar. New York: Harper, 1962. Harcourt, C., and J. Thornback. Lemurs of Madagascar and the Comoros: The IUCN Red Data Book. Gland, Switzerland: IUCN, 1990. Jenkins, M. D. Madagascar: An Environmental Profile. Gland, Switzerland: IUCN, 1987.
PERIODICALS Jolly, A. “Madagascar: A World Apart.” National Geographic 171 (February 1987): 148–183.
Magnetic separation An ongoing problem of environmental significance is solid waste disposal. As the land needed simply to throw out solid wastes becomes less available, recycling becomes a greater priority in waste management programs. One step in recycling is the magnetic separation of ferrous (iron-containing) materials. In a typical recycling process, wastes are first shredded into small pieces and then separated into organic and inorganic fractions. The inorganic fraction is then passed through a magnetic separator, where ferrous materials are extracted. These materials can then be purified and reused as scrap iron. See also Iron minerals; Resource recovery
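Conceptually, the separator acts as a filter over the shredded inorganic stream. The sketch below models that step in code; the item names and the ferrous flag are hypothetical stand-ins for what a real separator senses magnetically.
```python
# Conceptual model of the magnetic separation step in a recycling line:
# the inorganic fraction is split into ferrous scrap and residue.
# The items and "ferrous" flags are illustrative, not real plant data.
shredded_inorganics = [
    {"item": "steel can", "ferrous": True},
    {"item": "aluminum foil", "ferrous": False},
    {"item": "iron bolt", "ferrous": True},
    {"item": "glass shard", "ferrous": False},
]

ferrous_scrap = [m["item"] for m in shredded_inorganics if m["ferrous"]]
residue = [m["item"] for m in shredded_inorganics if not m["ferrous"]]
print(ferrous_scrap)  # destined for purification and reuse as scrap iron
print(residue)        # remaining inorganics for further processing
```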
Malaria Malaria is a disease that affects hundreds of millions of people worldwide. In the developing world malaria contributes to a high infant mortality rate and a heavy loss of work time. Malaria is caused by the single-celled protozoan parasite, Plasmodium. The disease follows two main courses: tertian (three day) malaria and quartan (four day) malaria. Plasmodium vivax causes benign tertian malaria with a low mortality (5%), while Plasmodium falciparum causes malignant tertian malaria with a high mortality (25%) due to interference with the blood supply to the brain (cerebral malaria). Quartan malaria is rarely fatal. Plasmodium is transmitted from one human host to another by female mosquitoes of the genus Anopheles.
Thousands of parasites in the salivary glands of the mosquito are injected into the human host when the mosquito takes blood. The parasites (in the sporozoite stage) are carried to the host's liver where they undergo massive multiplication into the next stage (cryptozoites). The parasites are then released into the blood stream, where they invade red blood cells and undergo additional division. This division ruptures the red blood cells and releases the next stage (the merozoites), which invade and destroy other red blood cells. This red blood cell destruction phase is intense but short-lived. The merozoites finally develop into the next stage (gametocytes), which are ingested by the biting mosquito. The pattern of chills and fever characteristic of malaria is caused by the massive destruction of the red blood cells by the merozoites and the accompanying release of parasitic waste products. The attacks subside as the immune response of the human host slows the further development of the parasites in the blood. People who are repeatedly infected gradually develop a limited immunity. Relapses of malaria long after the original infection can occur from parasites that have remained in the liver, since treatment with drugs kills only the parasites in the blood cells and not in the liver. Malaria can be prevented or cured by a wide variety of drugs (quinine, chloroquine, proguanil (Paludrine), or pyrimethamine). However, resistant strains of the common species of Plasmodium mean that some prophylactic drugs (chloroquine and pyrimethamine) are no longer totally effective. Malaria is controlled either by preventing contact between humans and mosquitoes or by eliminating the mosquito vector. Outdoors, individuals may protect themselves from mosquito bites by wearing protective clothing, applying mosquito repellents to the skin, or by burning mosquito coils that produce smoke containing insecticidal pyrethrins. Inside houses, mosquito-proof screens and nets keep the vectors out, while insecticides (DDT) applied inside the house kill those that enter. The aquatic stages of the mosquito can be destroyed by eliminating temporary breeding pools, by spraying ponds with synthetic insecticides, or by applying a layer of oil to the surface waters. Biological control includes introducing fish (Gambusia) that feed on mosquito larvae into small ponds. Organized campaigns to eradicate malaria are usually successful, but the disease is sure to return unless the measures are vigilantly maintained. See also Epidemiology; Pesticide [Neil Cumberlidge Ph.D.]
RESOURCES BOOKS Bullock, W. L. People, Parasites, and Pestilence: An Introduction to the Natural History of Infectious Disease. Minneapolis: Burgess Publishing Company, 1982.
Knell, A. J., ed. Malaria: A Publication of the Tropical Programme of the Wellcome Trust. New York: Oxford University Press, 1991. Markell, E. K., M. Voge, and D. T. John. Medical Parasitology. 7th ed. Philadelphia: Saunders, 1992. Phillips, R. S. Malaria. Institute of Biology’s Studies in Biology, No. 152. London: E. Arnold, 1983.
Male contraceptives Current research into male contraceptives could make family planning more equitable between men and women. It also has the potential to address population growth and its detrimental effects on the environment. While condoms provide good barrier protection from unwanted pregnancies, they are not as effective as oral contraceptives for women. Likewise, vasectomies are very effective, but few men are willing to undergo the surgery. There are three general categories of male contraceptives being explored. The first category functionally mimics a vasectomy by physically blocking the vas deferens, the channel that carries sperm from the seminiferous tubules to the ejaculatory duct. The second uses heat to induce temporary sterility. The third involves medications to halt sperm production. In essence, this third category concerns the development of “The Pill” for men. Despite its near 100% effectiveness, there are two major disadvantages to vasectomy that make it unattractive to many men as an option for contraception. The first is the psychological component relating to surgery. Although vasectomies are relatively non-invasive, when compared to taking a pill the procedure seems drastic. Second, although vasectomies are reversible, the rate of return to normal fertility is only about 40%. Therefore, newer “vas occlusive” methods offer alternatives to vasectomy with completely reversible effects. Vas occlusive devices block the flow of sperm in the vas deferens or render the sperm dysfunctional. The most recent form of vas occlusive male contraception, called Reversible Inhibition of Sperm Under Guidance (RISUG), involves the use of a styrene-based polymer combined with the solvent DMSO (dimethyl sulfoxide). The complex is injected into the vas deferens, where it partially occludes the passage of sperm and also disrupts sperm cell membranes. As sperm cells contact the RISUG complex, they rupture. It is believed that a single injection of RISUG may provide contraception for up to 10 years. Large safety and efficacy trials examining RISUG are being conducted in India. Two additional vas occlusive methods of male contraception involve the injection of polymers into the vas deferens. Both methods involve injection of a liquid form of polymer, microcellular polyurethane (MPU) or medical-grade silicone rubber (MSR), into the vas deferens where it
hardens within 20 minutes. The resulting plug provides a barrier to sperm. The technique was developed in China, and since 1983 some 300,000 men have reportedly undergone this method of contraception. Reversal of MPU and MSR plugs requires surgical removal of the polymers. Another method involving silicone plugs (called the Shug for short) offers an alternative to injectable plugs. This double-plug design offers a back-up plug should sperm make their way past the first. Human sperm is optimally produced at a temperature a few degrees below body temperature. Infertility is induced if the temperature of the testes is elevated. For this reason, men trying to conceive are often encouraged to avoid wearing snugly-fitting undergarments. The thermal suspensory method of male contraception utilizes specially designed suspensory briefs that use natural body heat or externally applied heat to suppress spermatogenesis. Such briefs hold the testes close to the body during the day, ideally near the inguinal canal where local body heat is greatest. Sometimes this method is also called artificial cryptorchidism, since it simulates the infertility seen in men with undescended testicles. When worn all day, suspensory briefs lead to a gradual decline in sperm production. The safety of briefs that contain heating elements to warm the testes is being evaluated. Externally applied heat in such briefs would provide results in a fraction of the time required using body heat. Other forms of thermal suppression of sperm production utilize simple hot water heated to about 116°F (46.7°C). Immersion of the testicles in the warm water for 45 minutes daily for three weeks is said to result in six months of sterility followed by a return to normal fertility. A newer, but essentially identical, method of thermal male contraception uses ultrasound. This simple, painless, and convenient method, which uses ultrasonic waves to produce the same heating, results in six-month, reversible sterility within only 10 minutes. Drug therapy is also being evaluated as a potential form of male contraception. Many drugs have been investigated in male contraception. An intriguing possibility is the observation that a particular class of blood pressure medications, called calcium channel blockers, induces reversible sterility in many men. One such drug, nifedipine, is thought to induce sterility by blocking calcium channels of sperm cell membranes. This reportedly results in cholesterol deposition and membrane instability of the sperm, rendering them incapable of fertilization. Herbal preparations have also been used as male contraceptives. Gossypol, a constituent of cottonseed oil, was found to be an effective and reliable male contraceptive in very large-scale experiments conducted in China. Unfortunately, an unacceptable number of men experienced persistent sterility when gossypol therapy was discontinued. Additionally, up to 10% of men treated with gossypol experienced kidney problems in the studies conducted in
China. Because of the potential toxicity of gossypol, the World Health Organization concluded that research on this form of male contraception should be abandoned. Most recently, a form of sugar that sperm interact with in the fertilization process has been isolated from the outer coating of human eggs. An enzyme in sperm, called N-acetyl-beta-D-hexosaminidase (HEX-B), cuts through the protective outer sugar layer of the egg during fertilization. A decoy sugar molecule that mimics the natural egg coating is being investigated. The synthetic sugar would bind specifically to the sperm HEX-B enzyme, curtailing the sperm's ability to penetrate the egg's outer coating. Related experiments in male rats have shown effective and reversible contraceptive properties. Perhaps the most researched method of male contraception using drugs involves hormones. Like female contraceptive pills, Male Hormone Contraceptives (MHCs) seek to stop sperm production by suppressing the hormones that direct the development of sperm. Many hormones in the human body work by feedback mechanisms: when levels of one hormone are low, another hormone is released that results in an increase in the first. The goal of MHCs is to artificially raise hormone levels so that, through this feedback, the release of the hormones required for sperm production is suppressed. The best MHC produced only provides about 90% sperm suppression, which is not enough to reliably prevent conception. Also, for poorly understood reasons, some men do not respond to the MHC preparations under investigation. Despite initial promise, more research is needed to make MHCs competitive with female contraception. Response failure rates for current MHC drugs range from 5–20%. [Terry Watkins]
RESOURCES ORGANIZATIONS Contraceptive Research and Development Program (CONRAD), Eastern Virginia Medical School, 1611 North Kent Street, Suite 806, Arlington, VA USA 22209 (703) 524-4744, Fax: (703) 524-4770, Email:
[email protected],
Malignant tumors see Cancer
Man and the Biosphere Program The Man and the Biosphere (MAB) program is a global system of biosphere reserves begun in 1971 and organized by the United Nations Educational, Scientific and Cultural Organization (UNESCO). MAB reserves are designed to conserve natural ecosystems and biodiversity and to incorporate the sustainable use of natural ecosystems by humans in their operation. The intention is that local human needs will be met in ways compatible with resource conservation. Furthermore, if local people benefit from tourism and the harvesting of surplus wildlife, they will be more supportive of programs to preserve wilderness and protect wildlife. MAB reserves differ from traditional reserves in a number of ways. Instead of a single boundary separating nature inside from people outside, MAB reserves are zoned into concentric rings consisting of a core area, a buffer zone, and a transition zone. The core area is strictly managed for wildlife and all human activities are prohibited, except for restricted scientific activity such as ecosystem monitoring. Surrounding the core area is the buffer zone, where nondestructive forms of research, education, and tourism are permitted, as well as some human settlements. Sustainable light resource extraction such as rubber tapping, collection of nuts, or selective logging is permitted in this area. Preexisting settlements of indigenous peoples are also allowed. The transition zone is the outermost area, and here increased human settlements, traditional land use by native peoples, experimental research involving ecosystem manipulations, major restoration efforts, and tourism are allowed. The MAB reserves have been chosen to represent the world's major types of regional ecosystems. Ecologists have identified some 14 types of biomes and 193 types of ecosystems around the world, and about two-thirds of these ecosystem types are represented so far in the 276 biosphere reserves now established in 72 countries. MAB reserves are not necessarily pristine wilderness. Many include ecosystems that have been modified or exploited by humans, such as rangelands, subsistence farmlands, or areas used for hunting and fishing. The concept of biosphere reserves has also been extended to include coastal and marine ecosystems, although in this case the use of core, buffer, and transition areas is inappropriate. The establishment of a global network of biosphere reserves still faces a number of problems. Many of the MAB reserves are located in debt-burdened developing nations, because many of these countries lie in the biologically rich tropical regions. Such countries often cannot afford to set aside large tracts of land, and they desperately need the short-term cash promised by the immediate exploitation of their lands. One response to this problem is the debt-for-nature swap, in which a conservation organization buys the debt of a nation at a discount rate from banks in exchange for that nation's commitment to establish and protect a nature reserve. Many reserves are effectively small, isolated islands of natural ecosystems surrounded entirely by developed land. The protected organisms in such islands are liable to suffer genetic erosion, and many have argued that a single large
reserve would suffer less genetic erosion than several smaller reserves that cumulatively protect the same amount of land. It has also been suggested that reserves sited as close to each other as possible, and corridors that allow movement between them, would increase the habitat and gene pool available to most species. [Neil Cumberlidge Ph.D.]
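The three-zone design described in this entry amounts to a nested set of permissions, which can be summarized in a small table in code. The sketch below paraphrases the entry's zoning rules; the labels are illustrative summaries, not UNESCO's formal criteria.
```python
# Conceptual encoding of MAB reserve zonation as described in this entry.
# Activity labels are illustrative summaries, not official designations.
MAB_ZONES = {
    "core": {"ecosystem monitoring", "restricted research"},
    "buffer": {"non-destructive research", "education", "tourism",
               "light extraction", "some settlements"},
    "transition": {"settlements", "traditional land use",
                   "experimental research", "restoration", "tourism"},
}

def allowed(zone: str, activity: str) -> bool:
    """True if the activity is compatible with the given zone."""
    return activity in MAB_ZONES.get(zone, set())

print(allowed("core", "tourism"))    # False: core areas exclude tourism
print(allowed("buffer", "tourism"))  # True
```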
RESOURCES BOOKS Gregg, W. P., and S. L. Krugman, eds. Proceedings of the Symposium on Biosphere Reserves. Atlanta, GA: U.S. National Park Service, 1989. Office of Technology Assessment. Technologies to Maintain Biological Diversity. Philadelphia: Lippincott, 1988.
PERIODICALS Batisse, M. “Developing and Focusing the Biosphere Reserve Concept.” Nature and Resources 22 (1986): 1–10.
Manatees A relative of the elephant, manatees are totally aquatic, herbivorous mammals of the family Trichechidae. This group arose 15–20 million years ago during the Miocene epoch, a time which also favored the development of a tremendous diversity of aquatic plants along the coast of South America. Manatees are adapted to both marine and freshwater habitats and are divided into three distinct species: the Amazonian manatee (Trichechus inunguis), restricted to the freshwaters of the Amazon River; the West African manatee (Trichechus senegalensis), found in the coastal waters from Senegal to Angola; and the West Indian manatee (Trichechus manatus), ranging from the northern South American coast through the Caribbean to the southeastern coastal waters of the United States. Two other species, the dugong (Dugong dugon) and Steller's sea cow (Hydrodamalis gigas), along with the manatees, make up the order Sirenia. Steller's sea cow is now extinct, having been exterminated by man in the mid-1700s for food. These animals can weigh 1,000–1,500 lb (454–680 kg) and grow to be more than 12 ft (3.7 m) long. Manatees are unique among aquatic mammals because of their herbivorous diet. They are non-ruminants; therefore, unlike cows and sheep, they do not have a chambered stomach. They do have, however, extremely long intestines (up to 150 ft/46 m) that contain a paired blind sac where bacterial digestion of cellulose takes place. Other unique traits of the manatee include horizontal replacement of molar teeth and the presence of only six cervical, or neck, vertebrae, instead of seven as in nearly all other mammals. The intestinal sac and tooth replacement are adaptations designed to counteract the defenses evolved by the plants that the manatees eat. Several plant species contain tannins, oxalates, and nitrates, which are toxic, but which may be detoxified in the manatee's intestine. Other plant species contain silica spicules, which, due to their abrasiveness, wear down the manatee's teeth, necessitating tooth replacement.
Manatee with a researcher, Homosassa Springs, Florida. (Photograph by Faulkner. Photo Researchers Inc. Reproduced by permission.)
The life span of manatees is long, greater than 30 years, but their reproductive rate is low, with gestation being 13 months and females giving birth to one calf every two years. Because of this the potential for increasing the population is low, thus leaving the population vulnerable to environmental problems. Competition for food is not a problem. In contrast to terrestrial herbivores, which have a complex division of food resources and competition for the high-energy land plants, manatees have limited competition from sea turtles. This is minimized by different feeding strategies employed within the two groups. Sea turtles eat blades of seagrasses at greater depths than manatees feed, and manatees tend to eat not only the blades, but also the rhizomes of these plants, which contain more energy for the warm-blooded mammals. Because manatees are docile creatures and a source of food, they have been exploited by man to the point of
extinction. There are currently between 1,500 and 3,000 in
the U.S. Because manatees are also slow moving, a more recent threat is taking its toll on these shallow-swimming animals. Power boat propellers have struck hundreds of manatees in recent years, causing 90% of the man-related manatee deaths. This has also resulted in permanent injury or scarring to others. Conservation efforts, such as the Marine Mammal Protection Act of 1972 and the Endangered Species Act of 1973, have helped reduce some of these problems but much more will have to be done to prevent the extirpation of the manatees. [Eugene C. Beckham]
RESOURCES BOOKS Ridgway, S. H., and R. Harrison, eds. Handbook of Marine Mammals. Vol. 3, The Sirenians and Baleen Whales. London: Academic Press, 1985.
OTHER
Manatees of Florida. [cited May 2002]. Save the Manatee Club. [cited May 2002].
Mangrove swamp Mangrove swamps or forests are the tropical equivalent of temperate salt marshes. They grow in protected coastal embayments in tropical and subtropical areas around the world, and some scientists estimate that 60-75 percent of all tropical shores are populated by mangroves. The term “mangrove” refers to individual trees or shrubs that are angiosperms (flowering plants) and belong to more than 80 species within 12 genera and five families. Though unrelated taxonomically, they share some common characteristics. Mangroves only grow in areas with minimal wave action, high salinity, and low soil oxygen. All of the trees have shallow roots, form pure stands, and have adapted to the harsh environment in which they grow. The mangrove swamp or forest community as a whole is called a mangal. Mangroves typically grow in a sequence of zones from seaward to landward. This zonation is most highly pronounced in the Indo-Pacific regions, where 30-40 species of mangroves grow. Starting from the shoreline and moving inland, the sequence of genera there is Avicennia followed by Rhizophora, Bruguiera, and finally Ceriops. In the Caribbean, including Florida, only three species of trees normally grow: red mangroves (Rhizophora mangle) represent the pioneer species growing on the water's edge, black mangroves (Avicennia germinans) are next, and white mangroves (Laguncularia racemosa) grow mostly inland. In addition, buttonwood (Conocarpus erectus) often grows between the white mangroves and the terrestrial vegetation. Mangrove trees have made special adaptations to live in this environment. Red mangroves form stilt-like prop roots that allow them to grow at the shoreline in water up to several feet deep. Like cacti, they have thick succulent leaves which store water and help prevent loss of moisture. They also produce seeds which germinate directly on the tree, then drop into the water, growing into a long, thin seedling known as a “sea pencil.” These seedlings are denser at one end and thus float with the heavier hydrophilic (water-loving) end down. When the seedlings reach shore, they take root and grow. One acre of red mangroves can produce three tons of seeds per year, and the seeds can survive floating on the ocean for more than 12 months. Black mangroves produce straw-like roots called pneumatophores which protrude out of the sediment, thus enabling them to take oxygen out of the air instead of the anaerobic sediments. Both white and black mangroves have salt glands at the base of their leaves which help in the regulation of osmotic pressure. Mangrove swamps are important to humans for several reasons. They provide water-resistant wood used in construction, charcoal, medicines, and dyes. The mass of prop roots at the shoreline also provides an important habitat for a rich assortment of organisms, such as snails, barnacles,
oysters, crabs, periwinkles, jellyfish, tunicates, and many species of fish. One group of these fish, the mud skippers (Periophthalmus), has large bulging eyes, seems to skip over the mud, and crawls up on the prop roots to catch insects and crabs. Birds such as egrets and herons feed in these productive waters and nest in the tree branches. Prop roots tend to trap sediment and can thus form new land with young mangroves. Scientists reported a growth rate of 656 feet (200 m) per year in one area near Java. These coastal forests can also serve as helpful buffers against strong storms. Despite their importance, mangrove swamps are fragile ecosystems whose ecological importance is commonly unrecognized. They are being adversely affected worldwide by increased pollution, use of herbicides, filling, dredging, channelizing, and logging. See also Marine pollution; Wetlands [John Korstad]
Mangrove creek in the Everglades National Park. (Photograph by Max & Bea Hunn. Visuals Unlimited. Reproduced by permission.)
RESOURCES BOOKS Castro, P., and M. E. Huber. Marine Biology. St. Louis: Mosby, 1992. Nybakken, J. W. Marine Biology: An Ecological Approach. 2d ed. New York: Harper & Row, 1988. Tomlinson, P. B. The Botany of Mangroves. Cambridge: Cambridge University Press, 1986.
Smith, R. E. Ecology and Field Biology. 4th ed. New York: Harper & Row, 1990.
PERIODICALS Lugo, A. E., and S. C. Snedaker. “The Ecology of Mangroves.” Annual Review of Ecology and Systematics 5 (1974): 39–64. Rützler, K., and C. Feller. “Mangrove Swamp Communities.” Oceanus 30 (1988): 16–24.
Manure see Animal waste
Manville Corporation see Asbestos
Marasmus A severe deficiency of all nutrients, categorized along with other protein-energy malnutrition disorders. Marasmus, which means “to waste,” can occur at any age but is most commonly found in infants (children under one year old). Starvation resulting from marasmus is a result of protein and carbohydrate deficiencies. In developing countries and impoverished populations, early weaning from breast feeding and overdilution of commercial formulas place infants at high risk of marasmus. Because of the deficiency in intake of all dietary nutrients, metabolic processes--especially liver functions--are preserved, while growth is severely retarded. Caloric intake is too low to support metabolic activity such as protein synthesis or storage of fat. If the condition is prolonged, muscle tissue wasting will result. Fat wasting and anemia are common and severe. Severe vitamin A deficiency commonly results in blindness, although if caught early, this process can be reversed. Death will occur in 40% of children left untreated.
Mariculture Mariculture is the cultivation and harvest of marine flora and fauna in a controlled saltwater environment. Sometimes called marine fish farming, marine aquaculture, or aquatic farming, mariculture involves some degree of human intervention to enhance the quality and/or quantity of a marine harvest. This may be achieved by feeding practices, protection from predators, breeding programs, or other means. Fish, crustaceans, salt-water plants, and shellfish may be farm-raised for bait, fishmeal and fish oil production, scientific research, biotechnology development, and repopulating threatened or endangered species. Ornamental
fish are also sometimes raised by fish farms for commercial sales. The most widespread use of aquaculture, however, is the production of marine life for human food consumption. With seafood consumption steadily rising and overfishing of the seas a growing global problem, mariculture has been hailed as a low-cost, high-yield source of animal-derived protein. According to the Fisheries Department of the United Nations’ Food and Agriculture Organization (FAO), over 33 million metric tons of fish and shellfish encompassing 220 different species are cultured (or farmed) worldwide, representing an estimated $49 billion in 1999. Pound for pound, China leads the world in aquaculture production with 32.5% of world output. In comparison, the United States is only responsible for 4.4% of global aquaculture output by weight. Just 7% of the total U.S. aquatic production (both farmed and captured resources) is attributable to aquaculture (compared to 62% of China’s total aquatic output). In the United States, Atlantic salmon and channel catfish represent the largest segments of aquaculture production (34% and 40%, respectively, in 1997). Though most farmed seafood is consumed domestically, the United States imports over half of its total edible seafood annually, representing a $7 billion annual trade deficit in 2001. The Department of Commerce launched an aquaculture expansion program in 1999 with the goal of increasing domestic seafood supply derived from aquaculture production to $5 billion annually by the year 2025. According to the U.S. Joint Subcommittee on Aquaculture, U.S. aquaculture interests harvested 842 million pounds of product at an estimated value of $987 million in 1999. To encourage further growth of the U.S. aquaculture industry, the National Aquaculture Act was passed in 1980 (with amendments in 1985 and 1996). The Act established funding and mandated the development of a national aquaculture plan that would encourage “aquaculture activities and programs in both the public and private sectors of the economy; that will result in increased aquacultural production, the coordination of domestic aquaculture efforts, the conservation and enhancement of aquatic resources, the creation of new industries and job opportunities, and other national benefits.” In the United States, aquaculture is regulated by the U.S. Department of Agriculture (USDA) and the Department of Commerce through the National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration (NOAA). State and local authorities may also have some input into the location and practices of mariculture facilities if they are located within an area governed by a Coastal Zone Management Plan (CZMP). Coastal Zone Management Plans, as authorized by the Coastal Zone 867
Management Act (CZMA) of 1972, allow individual states to determine the appropriate use and development of their respective coastal zones. Subsequent amendments to the CZMA have also made states eligible for federal funding for the creation of state plans, procedures, and regulations governing mariculture activities in the coastal zone.

Tilapia, catfish, striped bass, yellow perch, walleye, salmon, and trout are just a few of the fresh- and saltwater finned fish species farmed in the United States. Crawfish, shrimp, and shellfish are also cultured in the U.S. Some shellfish, such as oysters, mussels, and clams, are “planted” as juveniles and farmed to maturity, when they are harvested and sold. Shellfish farmers buy the juvenile shellfish (known as “spat”) from shellfish hatcheries and nurseries. Oysters and mussels are attached to lines or nets and placed in a controlled ocean environment, while clams are buried in the beach or in sandy substrate below the low-tide line. All three of these shellfish species feed on plankton filtered from salt water.

But just as overfarming takes a toll on terrestrial natural resources, aquaculture without appropriate environmental management can damage native ecosystems. Farmed fish are raised in open-flow pens and nets. Because large populations of farmed fish are often raised in confined areas, disease spreads easily and rapidly among them, and farmed fish often transmit sea lice and other parasites and diseases to wild fish, wiping out or crippling native stocks. Organic pollution from effluent, the waste products of farmed fish, can build up and suffocate marine life on the sea floor below farming pens. This waste includes fish feces, which contribute to nutrient loading, and the chemicals and drugs used to keep the fish disease-free and promote growth. It also contains excess fish food, which often includes dyes that make farmed fish flesh more closely resemble that of its wild counterparts.

Farmed fish that escape from their pens interbreed with wild fish and weaken the genetic line of the native stock. If escaped fish are diseased, they can trigger outbreaks among indigenous marine life. Infectious salmon anemia, a viral disease that plagued fish farms in New Brunswick and Scotland in the 1990s, was eventually found in native salmon; in 2001 the disease appeared for the first time at U.S. Atlantic salmon farms off the coast of Maine.

The use of drugs in farmed fish and shellfish intended for human consumption is regulated by the U.S. Food and Drug Administration (FDA). In recent years, antibiotic resistance has become a growing issue in aquaculture, as fish have been treated with a wide array of human and veterinary antibiotic drugs to prevent disease. The commercial development of genetically engineered, or transgenic, farmed fish is also regulated by the FDA. As of May 2002, no transgenic fish had been cleared by the FDA for human consumption.
The impact that transgenic fish may have on the survival and reproduction of native species will have to be followed closely if and when commercial farming begins.

As mandated by the 1938 Mitchell Act, the NMFS funds 25 salmon hatcheries in the Columbia River Basin of the Pacific Northwest, the largest federal marine fishery program in the United States. These aquaculture facilities were originally introduced to help repopulate salmon stocks that had been lost to, or severely depleted by, hydroelectric dam projects. However, some environmentalists charge that the salmon hatcheries may actually be endangering wild salmon further by competing for local habitat and weakening the genetic line of native species.

Without careful resource management, aquaculture may eventually take an irreversible toll on other, non-farmed marine species. Small pelagic fish, such as herring, anchovy, and chub, are captured and processed into fish food compounds for high-density carnivorous fish farms. According to the FAO, fish farming at its current rate consumes twice as much wild fish in feeding farmed carnivores as aquaculture produces in fish harvests: an average of 1.9 kg of wild fish is required for every kilogram of fish farmed.

However economically attractive mariculture has been, it has also led to serious habitat modification and destruction, especially in mangrove forests. In the past twenty years, the area of mangrove forests worldwide has dwindled by 35%. Though some of that loss is due to active herbicide control of mangroves, their conversion to salt flats, and the industrial harvesting of forest products (wood chips and lumber), mariculture is responsible for 52% of the world’s mangrove losses. Mangrove forests are important to the environment because these ecosystems form buffer zones between saltwater areas and freshwater and land areas. Mangroves act as filters for agricultural nutrients and pollutants, trapping these contaminants before they reach the deeper waters of the ocean. They also prevent coastal erosion, provide spawning and nursery areas for fish and shellfish, host a variety of migratory wildlife (birds, fish, and mammals), and support habitats for a number of endangered species.

Shrimp farming, in particular, has played a major role in mangrove forest reduction. Growing from 3% to 30% in less than 15 years, commercial shrimp farming has profoundly affected coastal mangroves, which are cut down to create shrimp and prawn ponds. In the Philippines, 50% of mangrove environments were converted to ponds, and between 50% and 80% of those in Southeast Asia were lost to pond culture as well.

Shrimp mariculture places high demands on resources. It requires large supplies of juvenile shrimp, which can seriously deplete natural shrimp stocks, and large quantities of
shrimp meal to feed them. There is also considerable waste derived from shrimp production, which pumps organic matter and nutrients into the ponds, causing eutrophication, algal blooms, and oxygen depletion in the ponds themselves or even downstream. Many shrimp farmers must apply pesticides, fungicides, parasiticides, and algicides to the ponds between harvests to sterilize them and mitigate the effects of nutrient loading. Shrimp ponds also have extremely short life spans, usually about 5–10 years, forcing their abandonment and the cutting of more mangrove forest to create new ponds.

Mariculture also limits other marine activities along coastal waters. Some aquaculture facilities occupy large expanses of ocean along beaches, which then become commercial and recreational no-fishing zones. These nursery areas are also sensitive to disturbance by recreational activities such as boating or swimming and to the introduction of pathogens by domestic or farm animals.

[Paula Anne Ford-Martin]
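To put the FAO feed-conversion figure quoted above in concrete terms, a rough worked example (the 10-ton harvest is hypothetical, chosen only for illustration; it is not an FAO datum): at 1.9 kg of wild-caught feed fish per kilogram of farmed fish,

wild fish consumed = 1.9 × 10 metric tons = 19 metric tons,

so a farm harvesting 10 metric tons of carnivorous fish would remove roughly 19 metric tons of small pelagic fish from the sea, a net loss of about 9 metric tons of fish biomass.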
RESOURCES
BOOKS
Food and Agriculture Organization of the United Nations. The State of World Fisheries and Aquaculture. Rome, Italy: FAO, 2000. [cited June 5, 2002].
Jahncke, Michael L., et al. Public, Animal, and Environmental Aquaculture Health Issues. New York: John Wiley & Sons, 2002.
Olin, Paul. “Current Status of Aquaculture in North America.” In Aquaculture in the Third Millennium: Technical Proceedings of the Conference on Aquaculture in the Third Millennium, Bangkok, Thailand, 20–25 February 2000. Rome, Italy: FAO, 2000.
PERIODICALS
Barcott, Bruce, and Natalie Fobes. “Aquaculture’s Troubled Harvest.” Mother Jones 26, no. 6 (November–December 2001): 38.
Naylor, Rosamond, et al. “Effect of Aquaculture on World Fish Supplies.” Nature (June 29, 2000).
OTHER
“National Aquaculture Policy, Planning, and Development.” 16 USC 2801. [cited June 4, 2002].
National Marine Fisheries Service, National Oceanic and Atmospheric Administration. Aquaculture. [cited July 2002].
ORGANIZATIONS
The Northeast Regional Aquaculture Center, University of Massachusetts Dartmouth, Violette Building, Room 201, 285 Old Westport Road, Dartmouth, MA USA 02747-2300; (508) 999-8157; Fax: (508) 999-8590; Toll Free: (866) 472-NRAC (6722); Email: [email protected]; http://www.umassd.edu/specialprograms/NRAC/
Marine ecology and biodiversity
Understanding the nature of ocean life and the patterns of its diversity represents a difficult challenge. Not only are there technical difficulties involved in studying life under water (high pressure, the need for oxygen tanks, lack of light), but there is also an urgency to develop a greater understanding of marine life as the links between ocean processes and the larger patterns of terrestrial life become better known. Our current understanding of oceanic life is based on three principal concepts: size, complexity, and spatial distribution.

Our knowledge of the size of the ocean’s domain is grounded in three great discoveries of the past few centuries. When Magellan’s expedition first circumnavigated the earth, it inadvertently demonstrated that the oceans are a continuous water body rather than a series of discrete bodies of water. Some time later, the encompassing nature of the world ocean was further clarified by the discovery that it is a chemically uniform aqueous system: the principal ions (sodium, chloride, and sulfate) occur in nearly constant proportions throughout. The third discovery, still under way, is that the ocean is composed of comparatively immense ecological systems. Thus in most ways the oceans are a unified system, which is the first defining characteristic of the marine environment.

There is, however, a dichotomy between the integral nature of the ocean and the external forces acting upon it. Mechanical, thermodynamic, chemical, and biological forces create variation through such things as differential heating, the Coriolis force, wind, dissolved gases, salinity differences, and evaporation. These actions in turn set in motion controls that move the system toward physical equilibrium through feedback mechanisms. The physical changes then interact with biological systems in nonlinear ways, that is, out of synchronization with the external stimuli, and become quite difficult to predict. This is the second broad characteristic of the oceans: complexity.

The third major aspect of ocean life is that life itself is sparse relative to the overall volume of the oceans, but locally productive systems can create immense populations and/or sites of exceptionally high species diversity. Life is arranged in active layers dictated by nutrients and light in the horizontal plane and by vertical currents (downwelling and upwelling) in the vertical plane. Life decreases through the depth zones, from the epipelagic zone in the top 328 ft (100 m) of the water column to the bottom layers of water, and then increases again at the benthic layer, the water-substrate interface. Life also decreases from the littoral zones along the world’s shorelines to the open ocean, interrupted by certain areas with special life-supporting systems, such as floating beds of sargasso weed.

In the past twenty years the focus of conservation has shifted to include not only individual species or habitats,
but also a phenomenon called biological diversity, or biodiversity for short. Biological diversity encompasses three to four levels. Genetic diversity is the degree of genotypic difference among the individuals that constitute a population of organisms; species diversity refers to the number of species in an area; and community diversity to the number of different community types in a landscape. The highest level, landscape diversity, has seldom been applied in aqueous environments and will not be discussed here. Species diversity is commonly taken to stand for biological diversity as a whole: few marine groups other than marine mammals have received much genetic study, and community functions are well known from only a few systems, so it is the taxonomic interpretation of diversity, at the level of species or of higher taxa such as families, classes, orders, and phyla, that is most commonly discussed.

Of all the species that we know, roughly 16% are from the seas. General diversity patterns in the sea are similar to those on land: there are more small species than large ones, and more tropical species than temperate or polar species. There are centers of diversity for specific taxa, and the structure of communities and ecosystems is based on particular patterns of energy availability. For example, estuary systems are productive because nitrogen is imported from the land, while coral reefs are also productive but use scarce nutrients efficiently through specially adapted filter-feeding mechanisms. Abyssal communities, on the other hand, derive their entire energy supply from detritus falling from the upper levels of the ocean. Perhaps the most specifically adapted of all life forms are the hydrothermal vent communities, which employ chemosynthesis rather than photosynthesis for primary production. Water temperature, salinity, and pressure create differing ecosystems in ways that are distinctly different from terrestrial systems. In addition, the boundaries between systems may be dynamic, and they are certainly more difficult to detect than on land.

Most marine biodiversity occurs at higher taxonomic levels; while the land holds more species, most of them are arthropods. Most of the phyla that we now know (32 of 33) are either marine or both marine and non-marine, while only one is exclusively non-marine. Thus most of the major life plans exist in the sea. We are now learning that the number of species in the ocean is probably underestimated as we discover more cryptic species, very similar organisms that are actually distinct, many of which have been found on the deep ocean floor. This is one of the important diversity issues in the marine environment. Originally, the depths were cast as biological deserts; however, that view may have been promoted by a lack of samples, the small size of many benthic invertebrates, and the low density of benthic populations in the deep sea beds.
Improved sampling since the 1960s changed that view to one of the ocean floor as a highly species-diverse environment. The deep sea is characterized by a few individuals in each of many species; rarity dominates. In shallow-water benthic environments, by contrast, there are large, dense populations dominated by a few species. At least three theoretical explanations for this pattern have been offered. The stability-time hypothesis suggests that ocean bottoms have been very stable environments over long periods of time; this condition produces very finely tuned adaptations to narrow niches and results in many closely related species. The disturbance or “cropper” hypothesis suggests that intense predation on limited food sources prevents populations from reaching high levels; because the food supply is dominated by detrital rain, generalist feeders abound and compete for the same food, resulting in only small differences between species. A third hypothesis is that the area of the deep sea bed is so large that it supports many species, following from the species-area relationship used in island biogeography theory. The picture of species numbers and relative rarity is still not fully resolved.

In general, some aspects of marine biology are well studied. Rocky intertidal life has been the subject of major ecological research and has yielded important theoretical advances. Similarly, coral reefs are the subject of many studies of life-history adaptation, evolutionary biology, and ecology. Physiology and morphology research has used many marine animals as examples of how organisms function under extreme conditions.

This newfound knowledge is timely. Until now we have treated the oceans as an inexhaustible source of food and a sink for our wastes, yet we now realize they are neither. Relative to the land, the sea is in good ecological condition, but to prevent major ecological problems in the marine environment we need to increase human knowledge rapidly and manage our behavior toward the oceans very conservatively, a difficult task as long as the ocean is treated as a common resource.

[David Duffus]
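The species-area relationship invoked by the third hypothesis above is conventionally written as a power law (this is the standard island-biogeography formulation, not an equation given in this article’s sources):

S = cA^z

where S is the number of species, A is the area of habitat, and c and z are empirically fitted constants (z typically falls between about 0.2 and 0.35 for islands). Applied to the deep sea, the sheer extent of the sea bed by itself predicts a large number of species.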
Marine Mammals Protection Act (1972)
The Marine Mammals Protection Act (MMPA) was initially passed by Congress in 1972 and is the most comprehensive federal law aimed at the protection of marine mammals. The MMPA prohibits the taking (i.e., harassing, hunting, capturing, or killing, or attempting to harass, hunt, capture, or kill) of any marine mammal on the high seas by persons or vessels subject to the jurisdiction of the United States. It also prohibits the taking of marine mammals in waters or on land subject to United States jurisdiction, as well as the importation into the United States of marine mammals, parts thereof, or products made from such animals.
[Figure: An example of a marine ecosystem. (Illustration by Hans & Cassidy.)]
The MMPA provides that civil and criminal penalties apply to illegal takings. It specifically charges the National Marine Fisheries Service (NMFS) with responsibility for the protection and conservation of marine mammals, and the NMFS is given statutory authority to grant or deny permits to take whales, dolphins, and other mammals from the oceans.

The original legislation established “a moratorium on the taking of marine mammals and marine mammal products...during which time no permit may be issued for the taking of any marine mammal and no marine mammal may be imported into the United States.” Four types of exceptions allowed limited numbers of marine mammals to be taken: (1) animals taken for scientific research and public display, after a specified review process; (2) marine mammals taken incidentally to commercial fishing operations prior to October 21, 1974; (3) animals taken by Native Americans and Inuit Eskimos for subsistence or for the production of traditional crafts or tools; and (4) animals taken under a temporary
exemption granted to persons who could demonstrate economic hardship as a result of the MMPA (this exemption was to last no more than a year and was to be eliminated in 1974). The MMPA also sought specifically to reduce the number of marine mammals killed in purse-seine and drift-net operations by the commercial tuna industry.

The language used in the legislation is particularly notable in that it makes clear that the MMPA is intended to protect marine mammals and their supporting ecosystem, rather than to maintain or increase commercial harvests: “[T]he primary objective in management should be to maintain the health and stability of the marine ecosystem. Whenever consistent with this primary objective, it should be the goal to obtain an optimum sustainable population keeping in mind the optimum carrying capacity of the habitat.” All regulations governing the taking of marine mammals must take these considerations into account. Permits require a full public hearing process, with the opportunity for judicial review for both the applicant and any person opposed to the permit. No permits may be issued for the taking or importation of a pregnant or nursing female, for
taking in an inhumane manner, or for taking animals on the endangered species list.

Subsidiary legislation and several court decisions have modified, upheld, and extended the original MMPA. Globe Fur Dyeing Corporation v. United States upheld the constitutionality of the statutory prohibition on killing marine mammals less than eight months old or still nursing. In Committee for Humane Legislation v. Richardson, the District of Columbia Court of Appeals ruled that the NMFS had violated the MMPA by permitting tuna fishermen to use the purse-seine or drift-net method for catching yellowfin tuna, which resulted in the drowning of hundreds of thousands of porpoises. Under the influence of the Reagan Administration, the MMPA was amended in 1981 specifically to allow this type of fishing, provided that the fishermen employed “the best marine mammal safety techniques and equipment that are economically and technologically practicable.” The Secretaries of Commerce and the Interior were empowered to authorize the taking of small numbers of marine mammals, provided that the species or population stocks involved were not already depleted and that either Secretary found that the total of such taking would have a negligible impact.

The 1984 reauthorization of the MMPA continued the tuna industry’s general permit to kill incidentally up to 20,500 porpoises per year, but provided special protection for two threatened species. The new legislation also required that yellowfin tuna could be imported only from countries whose rules are at least as protective of porpoises as those of the United States. In Jones v. Gordon (1985), a federal district court in Alaska ruled in effect that the National Environmental Policy Act provided regulations applicable to the MMPA permitting procedure; significantly, this decision made an environmental impact statement mandatory prior to the granting of a permit.

Presumably owing to the educational, organizing, and lobbying efforts of environmental groups and the resulting public outcry, the MMPA was amended in 1988 to provide a three-year suspension of the “incidental take” permits, so that more ecologically responsible standards could be developed. Subsequently, Congress prohibited the drift-netting method as of the 1990 season. In 1994, the MMPA was amended to include more concrete definitions of harassment, grouped as level A harassment (with the potential to injure a wild marine mammal) and level B harassment (with the potential to disturb a wild marine mammal by disrupting its behavior). The 1994 amendments also placed new restrictions on photography permits and specified
that scientific research conducted under the level B harassment provisions must limit its influence on the marine mammals being studied.

[Lawrence J. Biskowski]
RESOURCES
BOOKS
Dolgin, E. L., and T. G. P. Guilbert, eds. Federal Environmental Law. St. Paul, MN: West Publishing Co., 1974.
Freedman, W. Federal Statutes on Environmental Protection. New York: Quorum Books, 1987.
PERIODICALS
Hofman, J. “The Marine Mammals Protection Act: A First of Its Kind Anywhere.” Oceanus 32 (Spring 1989): 7–16.
OTHER
National Marine Fisheries Service. [cited June 2002]. <http://www.nmfs.noaa.gov>.
North American Water and Power Alliance
Numerous schemes were suggested in the 1960s to accomplish large-scale water transfers between major basins in North America, and one of the best known is the North American Water and Power Alliance (NAWAPA). The plan was devised by the Ralph M. Parsons Company of Los Angeles “to divert 36 trillion gallons of water (per year) from the Yukon River in Alaska (through the Great Bear and Great Slave Lakes) southward to 33 states, seven Canadian provinces, and northern Mexico.” The proposed NAWAPA system would bring water in immense quantities from western Canada and Alaska through the plains and desert states all the way down to the Rio Grande watershed and into the northern Mexican states of Sonora
and Chihuahua. The Rocky Mountain Trench, Peace River, Great Slave Lake, Lesser Slave Lake, North Saskatchewan River, Columbia River, Lake Winnipeg, Hudson River, James Bay, and numerous tributaries are part of the proposed NAWAPA feeder system designed to channel water from Canada to Mexico.

A second feeder system in the plan would channel large quantities of water into the western portion of Lake Superior. This influx of water would flush the pollutants dumped into the Great Lakes out into the Atlantic Ocean. It would also boost the area’s capacity for generating hydroelectricity. The NAWAPA plan was also designed to develop hydroelectric plants in northern Quebec and Ontario whose power would be diverted to the United States. The James Bay hydropower project in Quebec was completed in the early 1970s; it flooded an area of 4,250 mi² (11,000 km²), and 90% of its power goes directly to the northeastern United States and Ohio. The James Bay II Project will eventually incorporate over 80 dams, divert three major rivers, and flood traditional Cree land. The majority of its hydroelectric output will also go to the United States.

Proponents of inter-basin transfers tend to focus on the impending water shortages in the western and southwestern United States. In the Great Plains, the Ogallala Aquifer is rapidly being depleted. The Black Mesa water table is almost exhausted, owing to the excessive quantities of water used in mining operations, and California has been consistently unable to meet the needs of both its industries and its population. Supporters of NAWAPA have long argued that this plan is the only way the nation can solve these problems. On February 22, 1965, Newsweek hailed the NAWAPA plan as “the greatest, the most colossal, stupendous, supersplendificent public works project in history.”

NAWAPA was described as “a monstrous concept—a diabolical thesis” by a former chairperson of the International Joint Commission. Much of the opposition to the plan in the 1960s was nationalist rather than environmental in character: the plans were viewed as an attempt to appropriate Canadian resources. Today, many people are asking whether it is necessary or even right to hydrologically reengineer the ecosystems of North America in order to meet the water needs of the United States. Environmentalists point out that entire ecosystems in many western states have already been disrupted by various water projects. They argue that it is time to investigate other methods, such as conservation, which would bring water consumption down to levels sustainable by the watersheds of the plains and deserts.

[Debra Glidden]
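For a sense of scale, the diversion figure quoted above can be converted to metric units; the arithmetic below is an illustrative calculation, not a number taken from the NAWAPA plan itself:

36 × 10¹² gal × 3.785 L/gal ≈ 1.36 × 10¹⁴ L ≈ 136 km³ of water per year,

a little over half the mean annual discharge of the Columbia River (roughly 240 km³ per year).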
RESOURCES
BOOKS
Reisner, M. Cadillac Desert: The American West and Its Disappearing Water. New York: Viking Press, 1986.
Royal Society of Canada. Water Resources of Canada. Ottawa: Royal Society of Canada, 1968.
Welsh, F. How to Create a Water Crisis. Boulder: Johnson Publishing, 1985.
PERIODICALS
Higgins, J. “Hydro-Quebec and Native People.” Cultural Survival Quarterly 11 (1987).
OTHER
Canadian Council of Resource Ministers. Water Diversion Proposals of North America. Ottawa: Canadian Council of Resource Ministers, 1968.
Northern spotted owl
The northern spotted owl (Strix occidentalis caurina) is one of three subspecies of the spotted owl (Strix occidentalis). Adults are brown, irregularly marked with white or light brown spots. The face is round, with dark brown eyes and a dull yellow bill. The birds are 16–19 in (41–48 cm) long and have wingspans of about 42 in (107 cm). The average male weighs 1.2 lb (544 g), whereas the average female weighs 1.4 lb (635 g).

This subspecies of the spotted owl is found only in the southwestern portion of British Columbia, western Washington, western Oregon, and the western coastal region of California south to San Francisco Bay. Occasionally the bird is found on the eastern slopes of the Cascade Mountains in Washington and Oregon. It is estimated that there are about 3,000–5,000 individuals of this subspecies. The other two subspecies are the California spotted owl (S. o. occidentalis), found in the coastal ranges and on the western slopes of the Sierra Nevada mountains from Tehama to San Diego counties, and the Mexican spotted owl (S. o. lucida), found from northern Arizona, southeastern Utah, and southwestern Colorado south through western Texas to central Mexico.

It is thought that spotted owls mate for life and are monogamous. Breeding does not occur until the birds are two to three years of age. The typical clutch size is two, but sometimes as many as four eggs are laid, in March or early April. The incubation period is 28–32 days. The female incubates the eggs while the male brings food to the newly hatched young. The owlets leave the nest for the first time when they are around 32–36 days old. Without fully mature wings, the young are not yet able to fly well and must often climb back to the nest using their talons and beaks. Juvenile survivorship may be only 11%.

Spotted owls hunt by sitting quietly on elevated perches and diving swiftly on their prey. They forage during the night and spend most of the day roosting. Mammals make up over 90% of the spotted owl’s diet. The most
important prey species is the northern flying squirrel (Glaucomys sabrinus), which makes up about 50% of the owl’s diet. Woodrats and hares are also important. In all, 30 species of mammals, 23 species of birds, two reptile species, and even some invertebrates have been found in the diets of spotted owls.