Handbook of Research on Technoethics
Rocci Luppicini, University of Ottawa, Canada
Rebecca Adell, Eck MacNeely Architects, USA
Volume I
Information Science Reference
Hershey • New York
Director of Editorial Content: Kristin Klinger
Senior Managing Editor: Jennifer Neidig
Managing Editor: Jamie Snavely
Assistant Managing Editor: Carole Coulson
Typesetter: Sean Woznicki
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Published in the United States of America by Information Science Reference (an imprint of IGI Global) 701 E. Chocolate Avenue, Suite 200 Hershey PA 17033 Tel: 717-533-8845 Fax: 717-533-8661 E-mail:
[email protected] Web site: http://www.igi-global.com and in the United Kingdom by Information Science Reference (an imprint of IGI Global) 3 Henrietta Street Covent Garden London WC2E 8LU Tel: 44 20 7240 0856 Fax: 44 20 7379 0609 Web site: http://www.eurospanbookstore.com Copyright © 2009 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.
Library of Congress Cataloging-in-Publication Data Handbook of research on technoethics / Rocci Luppicini and Rebecca Adell, Editors. p. cm. Includes bibliographical references and index. Summary: “This book traces the emergence of the new interdisciplinary field of technoethics by exploring its conceptual development, important issues, and key areas of current research. Compiling 50 authoritative articles from leading researchers on the ethical dimensions of new technologies”--Provided by publisher. ISBN 978-1-60566-000-4 (hard cover) -- ISBN 978-1-60566-001-1 (ebook) 1. Technology--Moral and ethical aspects. I. Luppicini, Rocci. II. Adell, Rebecca. BJ59.H36 2009 174’.96--dc22 2008007623
British Cataloguing in Publication Data A Cataloguing in Publication record for this book is available from the British Library. All work contributed to this book set is original material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
If a library purchased a print copy of this publication, please go to http://www.igi-global.com/agreement for information on activating the library's complimentary electronic access to this publication.
Editorial Advisory Board
John M. Artz, George Washington University, USA
Ginger Taylor, National Research Council of Canada, Canada
Gary Boyd, Concordia University, Canada
List of Contributors
Akbari, M. / The University of Guilan, Iran ................................................... 439
Allan, S. / Bournemouth University, UK ........................................................ 373
Anderson, A. / University of Plymouth, UK ..................................................... 373
Andrews, Cecilia / University of New South Wales, Australia ................................... 806
Bagheri, Alireza / University of Toronto, Canada .............................................. 112
Barger, Robert N. / University of Notre Dame, USA ............................................. 794
Bauer, Keith / Marquette University, USA ...................................................... 170
Billinger, Michael S. / Edmonton Police Service, Canada ........................................ 44
Butt, Adeel I. / Simon Fraser University, Canada .............................................. 354
Butt, Arsalan / Simon Fraser University, Canada ............................................... 354
Candor, Jennifer / Gahanna Lincoln High School, USA ........................................... 409
Capurro, Rafael / Stuttgart Media University, Germany ......................................... 339
Cerqui, Daniela / Université de Lausanne, Switzerland .......................................... 32
Chan, Helen Yue-lai / Hong Kong Polytechnic University, China ................................. 316
Charlesworth, Matthew / The Jesuit Institute, South Africa .................................... 186
Conger, Sue / University of Dallas, USA ....................................................... 767
Córdoba, José-Rodrigo / University of Hull, UK ................................................ 712
Cortés Pascual, Pilar Alejandra / University of Zaragoza, Spain ......................... 222, 426
Cortez, J. José / Syracuse University, USA .................................................... 651
Crowell, Charles R. / University of Notre Dame, USA ..................................... 700, 794
de Vries, Marc J. / Delft University of Technology, The Netherlands ............................ 20
Fait, Stefano / University of St. Andrews, Scotland ........................................... 145
Fleischmann, Kenneth R. / University of Maryland, College Park, USA ........................... 391
Flicker, Sarah / York University, Canada ...................................................... 295
Fortwengel, Gerrhard / University for Health Sciences, Medical Informatics and Technology, Austria ... 126
Gearhart, Deb / Troy University, USA .......................................................... 263
Gomberg, Anna / University of Notre Dame, USA ................................................. 700
Guta, Adrian / University of Toronto, Canada .................................................. 295
Haghi, A.K. / The University of Guilan, Iran .................................................. 439
Hongladarom, Soraj / Chulalongkorn University, Thailand ....................................... 496
Iannone, A. Pablo / Central Connecticut State University, USA ................................. 558
Ibrahim, Y. / University of Brighton, UK ...................................................... 512
Jones, D. Gareth / University of Otago, New Zealand ........................................... 609
Kashmeery, Amin / University of Durham, UK .................................................... 752
Kaupins, Gundars / Boise State University, USA ................................................ 825
Klang, Mathias / University of Lund, Sweden & University of Göteborg, Sweden .................. 593
Lee, Joyce Yi-Hui / University of Bath, UK .................................................... 623
Lewis, Edward / University of New South Wales, Australia ...................................... 806
Luppicini, Rocci / University of Ottawa, Canada ................................................. 1
Macer, Darryl / Regional Unit for Social and Human Sciences in Asia and the Pacific (RUSHSAP), UNESCO, Thailand ... 85
Matthias, Andreas / Lingnan University, Hong Kong ............................................. 635
McMahon, Joan D. / Towson University, USA ..................................................... 729
Miah, Andy / University of the West of Scotland, Scotland ...................................... 69
Minch, Robert / Boise State University, USA ................................................... 825
Mottaghitalab, V. / The University of Guilan, Iran ............................................ 439
Murphy, Timothy F. / University of Illinois College of Medicine, USA .......................... 162
Nakada, Makoto / University of Tsukuba, Japan ................................................. 339
Narvaez, Darcia / University of Notre Dame, USA ............................................... 700
Nkala, Busi / Chris Hani Baragwanath Hospital, South Africa ................................... 328
Norman, Cameron / University of Toronto, Canada ............................................... 295
Ostermann, Herwig / University for Health Sciences, Medical Informatics and Technology, Austria ... 126
Pang, Samantha Mei-che / Hong Kong Polytechnic University, China .............................. 316
Panteli, Niki / University of Bath, UK ........................................................ 623
Paterson, Barbara / Marine Biology Research Institute, Zoology Department, University of Cape Town, South Africa ... 736
Petersen, A. / Monash University, Australia ................................................... 373
Pullen, Darren / University of Tasmania, Australia ............................................ 680
Ribble, Mike / Kansas State University, USA ................................................... 250
Robbins, Russell W. / Marist College, USA ..................................................... 391
Roberts, Lynne D. / Curtin University of Technology, Australia .......................... 542, 575
Rogerson, Simon / De Montfort University, UK .................................................. 752
Rouvroy, Antoinette / European University Institute, Italy .................................... 454
Rowe, Neil C. / U.S. Naval Postgraduate School, USA ........................................... 529
Rueda, Eduardo A. / Universidad Javeriana, Colombia ........................................... 474
Ruzic, Fjodor / Institute for Informatics, Croatia ............................................ 843
Ryder, Martin / University of Colorado at Denver, USA ......................................... 232
Schnackenberg, Heidi L. / SUNY Plattsburgh, USA ............................................... 668
Sewry, David / Rhodes University, South Africa ................................................ 186
Stahl, Bernd Carsten / De Montfort University, UK ............................................. 752
Staudinger, Roland / University for Health Sciences, Medical Informatics and Technology, Austria ... 126
Stuehlinger, Verena / University for Health Sciences, Medical Informatics and Technology, Austria ... 126
Sullins, John P. / Sonoma State University, USA ............................................... 205
Thorseth, May / NTNU Norwegian University of Science and Technology, Norway ................... 278
Vega, Edwin S. / SUNY Plattsburgh, USA ........................................................ 668
Visala, Seppo / University of Tampere, Finland ................................................ 103
Wallace, William A. / Rensselaer Polytechnic Institute, USA ................................... 391
Warner, Zachary B. / SUNY Plattsburgh, USA .................................................... 668
Warwick, Kevin / University of Reading, UK ..................................................... 32
Whitaker, Maja I. / University of Otago, New Zealand .......................................... 609
Wilkinson, C. / University of the West of England, Bristol, UK ................................ 373
Table of Contents
Preface ............................................................................................................................................... xxx Acknowledgment ............................................................................................................................ xxxii
VOLUME I

Section I
Theoretical Frameworks in Technoethics

Chapter I
The Emerging Field of Technoethics ............................................................... 1
Rocci Luppicini, University of Ottawa, Canada

Chapter II
A Multi-Disciplinary Approach to Technoethics ................................................... 20
Marc J. de Vries, Delft University of Technology, The Netherlands

Chapter III
Technoethics: An Anthropological Approach ....................................................... 32
Daniela Cerqui, Université de Lausanne, Switzerland
Kevin Warwick, University of Reading, UK

Chapter IV
A Technoethical Approach to the Race Problem in Anthropology .................................... 44
Michael S. Billinger, Edmonton Police Service, Canada

Chapter V
The Ethics of Human Enhancement in Sport ........................................................ 69
Andy Miah, University of the West of Scotland, Scotland

Chapter VI
Education of Ethics of Science and Technology Across Cultures .................................. 85
Darryl Macer, Regional Unit for Social and Human Sciences in Asia and the Pacific (RUSHSAP), UNESCO, Thailand
Chapter VII
Planning, Interests, and Argumentation ......................................................... 103

Seppo Visala, University of Tampere, Finland

Section II
Research Areas of Technoethics

Chapter VIII
Ethics Review on Externally-Sponsored Research in Developing Countries ........................ 112
Alireza Bagheri, University of Toronto, Canada

Chapter IX
Social and Ethical Aspects of Biomedical Research .............................................. 126
Gerrhard Fortwengel, University for Health Sciences, Medical Informatics and Technology, Austria
Herwig Ostermann, University for Health Sciences, Medical Informatics and Technology, Austria
Verena Stuehlinger, University for Health Sciences, Medical Informatics and Technology, Austria
Roland Staudinger, University for Health Sciences, Medical Informatics and Technology, Austria

Chapter X
Ethical Aspects of Genetic Engineering and Biotechnology ...................................... 145
Stefano Fait, University of St. Andrews, Scotland

Chapter XI
Nanoscale Research, Ethics, and the Military ................................................... 162
Timothy F. Murphy, University of Illinois College of Medicine, USA

Chapter XII
Healthcare Ethics in the Information Age ....................................................... 170
Keith Bauer, Marquette University, USA

Chapter XIII
Ethical Theories and Computer Ethics ........................................................... 186
Matthew Charlesworth, The Jesuit Institute, South Africa
David Sewry, Rhodes University, South Africa

Chapter XIV
Artificial Moral Agency in Technoethics ........................................................ 205
John P. Sullins, Sonoma State University, USA

Chapter XV
Ethical Controversy over Information and Communication Technology ............................. 222
Pilar Alejandra Cortés Pascual, University of Zaragoza, Spain
Chapter XVI
The Cyborg and the Noble Savage: Ethics in the War on Information Poverty ..................... 232
Martin Ryder, University of Colorado at Denver, USA

Chapter XVII
Becoming a Digital Citizen in a Technological World ........................................... 250
Mike Ribble, Kansas State University, USA

Chapter XVIII
Technoethics in Education for the Twenty-First Century ........................................ 263
Deb Gearhart, Troy University, USA

Chapter XIX
The Ethics of Global Communication Online ..................................................... 278
May Thorseth, Norwegian University of Science and Technology, Norway
Section III
Case Studies and Applications in Technoethics

Chapter XX
Engaging Youth in Health Promotion Using Multimedia Technologies: Reflecting on 10 Years of TeenNet Research Ethics and Practice ........................................................... 295
Cameron Norman, University of Toronto, Canada
Adrian Guta, University of Toronto, Canada
Sarah Flicker, York University, Canada

Chapter XXI
Ethical Challenges of Engaging Chinese in End-of-Life Talk .................................... 316
Samantha Mei-che Pang, Hong Kong Polytechnic University, Hong Kong
Helen Yue-lai Chan, Hong Kong Polytechnic University, Hong Kong

Chapter XXII
Community Education in New HIV Prevention Technologies Research ............................... 328
Busi Nkala, Chris Hani Baragwanath Hospital, South Africa

Chapter XXIII
The Public / Private Debate: A Contribution to Intercultural Information Ethics ............... 339
Makoto Nakada, University of Tsukuba, Japan
Rafael Capurro, Stuttgart Media University, Germany
Chapter XXIV
Ethical, Cultural and Socio-Economic Factors of Software Piracy Determinants in a Developing Country: Comparative Analysis of Pakistani and Canadian University Students ................... 354
Arsalan Butt, Simon Fraser University, Canada
Adeel I. Butt, Simon Fraser University, Canada

Chapter XXV
Nanoethics: The Role of News Media in Shaping Debate .......................................... 373
A. Anderson, University of Plymouth, UK
S. Allan, Bournemouth University, UK
A. Petersen, Monash University, Australia
C. Wilkinson, University of the West of England, Bristol, UK

Chapter XXVI
Computing and Information Ethics Education Research ........................................... 391
Russell W. Robbins, Marist College, USA
Kenneth R. Fleischmann, University of Maryland, College Park, USA
William A. Wallace, Rensselaer Polytechnic Institute, USA

Chapter XXVII
The Ethical Dilemma over Money in Special Education ........................................... 409
Jennifer Candor, Gahanna Lincoln High School, USA

Chapter XXVIII
Educational Technoethics Applied to Career Guidance ........................................... 426
Pilar Alejandra Cortés Pascual, University of Zaragoza, Spain

Chapter XXIX
The Scholarship of Teaching Engineering: Some Fundamental Issues .............................. 439
A.K. Haghi, The University of Guilan, Iran
V. Mottaghitalab, The University of Guilan, Iran
M. Akbari, The University of Guilan, Iran
VOLUME II

Section IV
Emerging Trends and Issues in Technoethics

Chapter XXX
Which Rights for Which Subjects? Genetic Confidentiality and Privacy in the Post-Genomic Era ... 454
Antoinette Rouvroy, European University Institute, Italy
Chapter XXXI
Predictive Genetic Testing, Uncertainty, and Informed Consent ................................. 474
Eduardo A. Rueda, Universidad Javeriana, Colombia

Chapter XXXII
Privacy, Contingency, Identity, and the Group .................................................. 496
Soraj Hongladarom, Chulalongkorn University, Thailand

Chapter XXXIII
The Ethics of Gazing: The Politics of Online Pornography ...................................... 512
Y. Ibrahim, University of Brighton, UK

Chapter XXXIV
The Ethics of Deception in Cyberspace .......................................................... 529
Neil C. Rowe, U.S. Naval Postgraduate School, USA

Chapter XXXV
Cyber Identity Theft ........................................................................... 542
Lynne D. Roberts, Curtin University of Technology, Australia

Chapter XXXVI
Walking the Information Overload Tightrope ..................................................... 558
A. Pablo Iannone, Central Connecticut State University, USA

Chapter XXXVII
Cyber-Victimization ............................................................................ 575
Lynne D. Roberts, Curtin University of Technology, Australia

Chapter XXXVIII
Spyware ........................................................................................ 593
Mathias Klang, University of Lund, Sweden & University of Göteborg, Sweden

Chapter XXXIX
In Vitro Fertilization and the Embryonic Revolution ........................................... 609
D. Gareth Jones, University of Otago, New Zealand
Maja I. Whitaker, University of Otago, New Zealand

Chapter XL
Inter-Organizational Conflicts in Virtual Alliances ........................................... 623
Joyce Yi-Hui Lee, University of Bath, UK
Niki Panteli, University of Bath, UK

Chapter XLI
From Coder to Creator: Responsibility Issues in Intelligent Artifact Design ................... 635
Andreas Matthias, Lingnan University, Hong Kong
Chapter XLII
Historical Perspective of Technoethics in Education ........................................... 651
J. José Cortez, Syracuse University, USA

Chapter XLIII
Podcasting and Vodcasting in Education and Training ........................................... 668
Heidi L. Schnackenberg, SUNY Plattsburgh, USA
Edwin S. Vega, SUNY Plattsburgh, USA
Zachary B. Warner, SUNY Plattsburgh, USA

Chapter XLIV
Technoethics in Schools ........................................................................ 680
Darren Pullen, University of Tasmania, Australia
Section V
Further Reading in Technoethics

Chapter XLV
Moral Psychology and Information Ethics: Psychological Distance and the Components of Moral Behavior in a Digital World ............................................................... 700
Charles R. Crowell, University of Notre Dame, USA
Darcia Narvaez, University of Notre Dame, USA
Anna Gomberg, University of Notre Dame, USA

Chapter XLVI
A Critical Systems View of Power-Ethics Interactions in Information Systems Evaluation ........ 712
José-Rodrigo Córdoba, University of Hull, UK

Chapter XLVII
Ethical Issues in Web-Based Learning ........................................................... 729
Joan D. McMahon, Towson University, USA

Chapter XLVIII
We Cannot Eat Data: The Need for Computer Ethics to Address the Cultural and Ecological Impacts of Computing ........................................................................... 736
Barbara Paterson, Marine Biology Research Institute, Zoology Department, University of Cape Town, South Africa

Chapter XLIX
Current and Future State of ICT Deployment and Utilization in Healthcare: An Analysis of Cross-Cultural Ethical Issues .................................................................. 752
Bernd Carsten Stahl, De Montfort University, UK
Simon Rogerson, De Montfort University, UK
Amin Kashmeery, University of Durham, UK
Chapter L
Emerging Technologies, Emerging Privacy Issues ................................................. 767
Sue Conger, University of Dallas, USA

Chapter LI
Ethics of “Parasitic Computing”: Fair Use or Abuse of TCP/IP Over the Internet? ................ 794
Robert N. Barger, University of Notre Dame, USA
Charles R. Crowell, University of Notre Dame, USA

Chapter LII
Simulating Complexity-Based Ethics for Crucial Decision Making in Counter Terrorism ........... 806
Cecilia Andrews, University of New South Wales, Australia
Edward Lewis, University of New South Wales, Australia

Chapter LIII
Legal and Ethical Implications of Employee Location Monitoring ................................ 825
Gundars Kaupins, Boise State University, USA
Robert Minch, Boise State University, USA

Chapter LIV
New Ethics for E-Business Offshore Outsourcing ................................................. 843
Fjodor Ruzic, Institute for Informatics, Croatia
Detailed Table of Contents
Preface ............................................................................................................................................... xxx Acknowledgment ............................................................................................................................ xxxii
VOLUME I

Section I
Theoretical Frameworks in Technoethics

In Section I, the introductory chapter, “The Emerging Field of Technoethics”, traces the development of Technoethics to its larger historical and theoretical context. This helps to situate the reader within the emerging field of Technoethics, which has developed over the last forty years in a broad range of contexts. Chapter II, entitled “A Multi-Disciplinary Approach to Technoethics”, proposes that a multidisciplinary approach to Technoethics is required because technology is complicated and the cooperation of many kinds of experts is needed to ensure its ethical use. Chapter III, entitled “Technoethics: An Anthropological Approach”, adopts an anthropological perspective that views technological devices as the result of a designing and building process transmitting social values, the impact of which can be properly assessed only once these values are understood. Chapter IV, “A Technoethical Approach to the Race Problem in Anthropology”, expands on the anthropological approach to Technoethics by linking it to human variation. In a different area, Chapter V, entitled “The Ethics of Human Enhancement in Sport”, outlines a Technoethics for sport by addressing the relationship between sport ethics and bioethics. This chapter attempts to establish the conditions under which a Technoethics of sport should be approached, taking into account the varieties and forms of technology in sport. In an effort to address a perceived need for international standards of ethics in science and technology, Chapter VI, “Education of Ethics of Science and Technology Across Cultures”, uses Lawrence Kohlberg’s moral development theory to explain bioethical maturity within a universal framework consisting of three stages of common interest. Finally, Chapter VII, “Planning, Interests, and Argumentation”, discusses the challenges of reaching rationally motivated consensus within organizational frameworks. The chapter explores important communication issues in Technoethics through Rawls’ theory of justice and Habermas’ communicative rationality.
Chapter I
The Emerging Field of Technoethics ............................................................... 1
Rocci Luppicini, University of Ottawa, Canada

This chapter traces the development of Technoethics to its larger historical and theoretical context. This helps to situate the reader within the emerging field of Technoethics, which has developed over the last forty years in a broad range of contexts.
Chapter II
A Multi-Disciplinary Approach to Technoethics ................................................... 20
Marc J. de Vries, Delft University of Technology, The Netherlands

In this chapter, de Vries maintains that a multidisciplinary approach to technoethics is needed because technology is so inherently complicated. Furthermore, ethics also has to address the production and design of artifacts, taking advantage of the cooperation of many kinds of experts in this endeavor.

Chapter III
Technoethics: An Anthropological Approach ....................................................... 32
Daniela Cerqui, Université de Lausanne, Switzerland
Kevin Warwick, University of Reading, UK

Cerqui and Warwick assert that in the ethics of technology, it is necessary to change our view of people and see them as ‘things’ in some contexts. The authors refer to Kant, for whom humans were to be seen only as ends, not as means (means being equated with things in this text). Moral mediators are then things that can acquire the same sort of appreciation as humans. Furthermore, the chapter shows that in our moral appreciation of technology we should take into account that humans and ‘things’ often make up ‘cyborgs’.

Chapter IV
A Technoethical Approach to the Race Problem in Anthropology .................................... 44
Michael S. Billinger, Edmonton Police Service, Canada

Billinger argues that the concept of ‘race’ is fundamentally flawed and that scholars have an ethical obligation to develop a solution which encourages us to rethink the ways in which we categorize human groups. This is a well-structured chapter which surveys a large body of literature, develops an effective line of argument, and is appropriately referenced. The sections discussing the ethical dimension of the race problem and possible solutions are particularly interesting.

Chapter V
The Ethics of Human Enhancement in Sport ........................................................ 69
Andy Miah, University of the West of Scotland, Scotland

Miah describes a technoethical approach to sport, focusing on the important relationship between sport ethics and bioethics. The chapter reviews historical evidence on ethics and policy making with respect to sport technologies to help contextualise this work within the broader medical ethical sphere. It also provides useful examples of recent cases of hypoxic training and gene doping.

Chapter VI
Education of Ethics of Science and Technology Across Cultures .................................. 85
Darryl Macer, Regional Unit for Social and Human Sciences in Asia and the Pacific (RUSHSAP), UNESCO, Thailand
This chapter examines some of the cultural variation in the ethical factors associated with the use of science and technology. The issues discussed include access to technology, social justice, professional ethics, and value systems. The appropriate implementation of international standards in ethics of science and technology and bioethics is considered. There is global agreement that people should be taught the ethics of science and technology, and new teaching materials and methods are discussed. The goals of ethics education, as explained in the Action Plan for Bioethics Education developed at the 2006 UNESCO Asia-Pacific Conference on Bioethics Education, include knowledge, skills, and personal moral development.

Chapter VII
Planning, Interests, and Argumentation ......................................................... 103
Seppo Visala, University of Tampere, Finland

Rawls’ theory of justice and Habermas’ communicative rationality are described and compared in this chapter. The question of how to reach rationally motivated consensus within organizational development is addressed, and interesting links are made to current work in the ethics of communication.

Section II
Research Areas of Technoethics

In Section II, key areas of research in Technoethics are presented which focus on important ethical and social aspects of human activity affected by technology. Chapters VIII and IX focus on areas of research ethics connected to technology and its influence. Chapter VIII, “Ethics Review on Externally-Sponsored Research in Developing Countries”, provides a glimpse at a pivotal area of Technoethics and current research innovation. The chapter deals with the issue of ethical review of externally-sponsored research in developing countries, with an emphasis on research protocols designed and/or funded in a developed country involving subjects recruited from developing countries. Chapter IX, “Social and Ethical Aspects of Biomedical Research”, provides an overview of Technoethics in biomedical research, focusing on key relations between the ethical, social, and legal dimensions of such research. This chapter contributes to Technoethics by identifying general principles applicable to all biomedical research while remaining sensitive to varying ethical and social dimensions within specific research contexts. Chapter X, “Ethical Aspects of Genetic Engineering and Biotechnology”, delves into an important area of Technoethics concerned with ethical issues in key areas of engineering and modern biotechnology, including genetic engineering, gene patenting, chimeras, commodification of life, and genetic testing. The chapter frames the discussion within a historical review of eugenics. In Chapter XI, “Nanoscale Research, Ethics, and the Military”, ethical concerns are raised about the use of nanoscale technology in military and medical research. Chapter XII, “Healthcare Ethics in the Information Age”, reviews current debates in telehealth and explores how telehealth ethical standards work to protect patient confidentiality, improve healthcare relationships, and diminish instances of compromised access and equity in the healthcare system.
In Chapter XIII, “Ethical Theories and Computer Ethics”, a historical perspective on the development of computer ethics is provided using a number of ethical theories (Divine Command; Ethics of Conscience; Ethical Egoism; Ethics of Duty; Ethics of Respect; Ethics of Rights; Utilitarianism; Ethics of Justice; Virtue Ethics), with an eye to new theoretical contributions from Floridi and Sanders (2003) on information ethics. Chapter XIV, “Artificial Moral Agency in Technoethics”, deals with the possibility of assigning artificial moral agency and responsibility to increasingly autonomous technologies. In Chapter XV, “Ethical Controversy over Information and Communication Technology”, a theoretical analysis is conducted to discern positive and negative aspects of new information and communication technologies (ICT). Chapter XVI, “The Cyborg and the Noble Savage: Ethics in the War on Information Poverty”, deals with technical and social barriers that define the so-called ‘digital divide’, with an emphasis on how the ‘One Laptop Per Child’ project addresses the problem of digital poverty.
Chapter XVII, “Becoming a Digital Citizen in a Technological World”, delves into the topic of digital citizenship and offers insightful suggestions on how to define and promote digital citizenship. Chapter XVIII, “Technoethics in Education for the Twenty-First Century”, defines educational Technoethics and explores key issues related to the use of Technoethics for educational administrative concerns. It also provides suggestions on how teachers can improve instruction to address ethical issues. Chapter XIX, “The Ethics of Global Communication Online”, deals with ethical issues arising from employing online technology for communication that contributes to the development of “fundamentalist knowledge” through deliberation.
Chapter VIII
Ethics Review on Externally-Sponsored Research in Developing Countries ........................ 112
Alireza Bagheri, University of Toronto, Canada

Bagheri deals with the issue of ethical review of externally-sponsored research in developing countries, that is, research following protocols that have been designed and/or funded in a developed country but involving subjects recruited from developing countries. The author emphasizes the issues which local ethics committees should consider when reviewing externally-sponsored research involving local populations recruited in developing countries. The chapter calls for further empirical studies on the role of ethics committees with regard to such research.

Chapter IX
Social and Ethical Aspects of Biomedical Research .............................................. 126
Gerrhard Fortwengel, University for Health Sciences, Medical Informatics and Technology, Austria
Herwig Ostermann, University for Health Sciences, Medical Informatics and Technology, Austria
Verena Stuehlinger, University for Health Sciences, Medical Informatics and Technology, Austria
Roland Staudinger, University for Health Sciences, Medical Informatics and Technology, Austria

The chapter reviews biomedical research ethics, concentrating on the implications of the ethical, social, and legal dimensions of such research. The authors effectively delineate a number of core and peripheral ethical and social issues, while elucidating general principles applicable to all biomedical research.

Chapter X
Ethical Aspects of Genetic Engineering and Biotechnology ...................................... 145
Stefano Fait, University of St. Andrews, Scotland

Fait addresses the ethical issues associated with modern biotechnology and genetic interventions to improve human life. The chapter explores key issues and concerns with the social impact of genetic engineering, gene patenting, chimeras, commodification of life, and genetic testing.

Chapter XI
Nanoscale Research, Ethics, and the Military ................................................... 162
Timothy F. Murphy, University of Illinois College of Medicine, USA

Murphy explores the significance of nanoscale research for military applications, including new information systems, improved protective gear, improved performance of military personnel, and innovations in medical diagnosis and treatment.
Chapter XII
Healthcare Ethics in the Information Age ....................................................... 170
Keith Bauer, Marquette University, USA

Bauer examines key debates about the meaning of telehealth, showing how new and emerging telehealth ethical standards work to protect patient confidentiality, improve healthcare relationships, and diminish instances of compromised access and equity in the healthcare system. The chapter also explores various emerging technologies to show how their implementation can ensure that their benefits outweigh their risks.

Chapter XIII
Ethical Theories and Computer Ethics ........................................................... 186
Matthew Charlesworth, The Jesuit Institute, South Africa
David Sewry, Rhodes University, South Africa

This chapter provides an excellent historical review and examination of the foundations for the study of computer ethics. To this end, the authors explore a number of key ethical theories (Divine Command; Ethics of Conscience; Ethical Egoism; Ethics of Duty; Ethics of Respect; Ethics of Rights; Utilitarianism; Ethics of Justice; Virtue Ethics) and offer a fresh perspective on new developments in computer ethics.

Chapter XIV
Artificial Moral Agency in Technoethics ........................................................ 205
John P. Sullins, Sonoma State University, USA

Sullins posits that artificial agents created or synthesized by technologies create unique challenges to current ideas of moral agency. The author explores how technoethics must consider artificial agents as artificial moral agents (AMA) that warrant moral concern. The chapter thus extends current notions of moral agency to include artificial agents.

Chapter XV
Ethical Controversy over Information and Communication Technology ............................. 222
Pilar Alejandra Cortés Pascual, University of Zaragoza, Spain

This chapter focuses on current perceptions of information and communication technologies (ICT) along with the dilemmas revolving around their use. The author provides a theoretical analysis of various ICT characteristics, and presents the results of two work modules conducted in the Observation Laboratory of Technoethics for Adults (LOTA) project.

Chapter XVI
The Cyborg and the Noble Savage: Ethics in the War on Information Poverty ..................... 232
Martin Ryder, University of Colorado at Denver, USA

This chapter reviews work on technical and social aspects of the ‘digital divide’ and explores the recent ‘One Laptop Per Child’ project as one response to the problem of digital poverty.
Ryder provides an insightful look at the notion of cultural hegemony and how the imposition of new technologies should take local control and user agency into consideration.

Chapter XVII
Becoming a Digital Citizen in a Technological World ........................................... 250
Mike Ribble, Kansas State University, USA

Ribble examines digital technology’s uses and abuses and asserts that existing solutions to address such abuses are inadequate. The chapter then argues for the development of digital citizenship and offers an innovative model to help define and promote digital citizenship.

Chapter XVIII
Technoethics in Education for the Twenty-First Century ........................................ 263
Deb Gearhart, Troy University, USA

Gearhart investigates key issues related to technoethics for educational administration and provides information that teachers can use to improve instruction on ethical issues. The author defines technoethics for education and also provides useful suggestions to guide teachers. Technoethics, for the purposes of ethical instruction, is defined as the study of moral, legal, and social issues involving technology.

Chapter XIX
The Ethics of Global Communication Online ..................................................... 278
May Thorseth, Norwegian University of Science and Technology, Norway

Thorseth addresses the ethical implications of employing online technology for communication that contributes to the development of “fundamentalist knowledge” through deliberation. The chapter argues that it is preferable instead to develop Internet technologies that stimulate imaginative powers to help combat abuses of power.
Section III
Case Studies and Applications in Technoethics

Section III introduces a series of case studies in various areas of Technoethics. Chapter XX, “Engaging Youth in Health Promotion Using Multimedia Technologies: Reflecting on 10 Years of TeenNet Research Ethics and Practice”, provides a case study on ethical challenges connected to rapidly changing online environments as a medium for dialogue and communication. This chapter draws on more than a decade of research and action with TeenNet, a youth-focused research group based at the University of Toronto. Chapter XXI, “Ethical Challenges of Engaging Chinese in End-of-Life Talk”, explores the case of end-of-life decision making in Hong Kong. It juxtaposes a discussion of traditional beliefs and principles with the results of an intervention designed to assist in making end-of-life decisions. Chapter XXII, “Community Education in New HIV Prevention Technologies Research”, is a case study framed within the context of HIV prevention technologies. To this end, it examines the Belmont Principles and explores the importance of educating communities as a key strategy to support ethical research in this area. Chapter XXIII, “The Public / Private Debate: A Contribution to Intercultural Information Ethics”, examines cultural differences in Eastern and Western conceptions of “the public” and “the private” in relation to the information society. This chapter discusses Japanese worldviews by drawing on the results of surveys carried out by the authors and providing a Seken-Shakai-Ikai framework. Chapter XXIV, “Ethical, Cultural and Socio-Economic Factors of Software Piracy Determinants in a Developing Country: Comparative Analysis of Pakistani and Canadian University Students”, offers an interesting comparative analysis of perceptions of key factors connected to the urgent problem of software piracy. Chapter XXV, “Nanoethics: The Role of News Media in Shaping Debate”, examines the portrayal of nanotechnology in the media, with a focus on the United Kingdom. This chapter draws on existing survey data to argue that the public must have better access to critical perspectives on the development and use of technology in order to participate in a more meaningful dialogue, socially and politically. In Chapter XXVI, “Computing and Information Ethics Education Research”, the importance of CIE (computer and information ethics) education for IT professionals is explored through a review of “famous” IT technology failures that caused accidents with losses of human life, followed by a discussion of information ownership and privacy and an examination of one current system used to assist CIE teaching. In Chapter XXVII, “The Ethical Dilemma over Money in Special Education”, a case is presented of one school district’s use of technology to support students with special education needs. Through the use of a district case study, this chapter shows how funding for special educational technology has declined over time and begins to draw inferences about why this decline has an ethical component. Chapter XXVIII, “Educational Technoethics Applied to Career Guidance”, examines the concept of educational technoethics and investigates the aims and means of technoethics. This case focuses on current activities within the Observation Laboratory on Technoethics for Adults (LOTA) as well as its implications for professional orientation. Chapter XXIX, “The Scholarship of Teaching Engineering: Some Fundamental Issues”, provides an overview of critical factors likely to have a significant effect on the sustainability of engineering as a discipline. It introduces key issues such as learning and teaching methodologies, as well as the perceived effects of e-development.
Chapter XX
Engaging Youth in Health Promotion Using Multimedia Technologies: Reflecting on 10 Years of TeenNet Research Ethics and Practice ........................................................... 295
Cameron Norman, University of Toronto, Canada
Adrian Guta, University of Toronto, Canada
Sarah Flicker, York University, Canada

This chapter explores ethical challenges in using new technologies for health promotion. The authors synthesize more than a decade of research conducted by TeenNet, a youth-focused research group based at the University of Toronto.

Chapter XXI
Ethical Challenges of Engaging Chinese in End-of-Life Talk .................................... 316
Samantha Mei-che Pang, Hong Kong Polytechnic University, Hong Kong
Helen Yue-lai Chan, Hong Kong Polytechnic University, Hong Kong

This chapter examines end-of-life decision making in Hong Kong. To this end, the authors review traditional beliefs and discuss the results of an intervention model designed to assist in making these difficult decisions.

Chapter XXII
Community Education in New HIV Prevention Technologies Research ............................... 328
Busi Nkala, Chris Hani Baragwanath Hospital, South Africa

Nkala discusses the importance of community education as a key strategy to support ethical research within the context of HIV prevention technologies and genetic research. The chapter presents a review and critique of the Belmont Principles and offers several strategies to advance work in this area.
Chapter XXIII
The Public / Private Debate: A Contribution to Intercultural Information Ethics ............... 339
Makoto Nakada, University of Tsukuba, Japan
Rafael Capurro, Stuttgart Media University, Germany

Nakada and Capurro explore cultural aspects of Eastern and Western conceptions of “the public” and “the private” in relation to the information society. The chapter draws on a large body of existing scholarship as well as the results of surveys conducted by the authors themselves.

Chapter XXIV
Ethical, Cultural and Socio-Economic Factors of Software Piracy Determinants in a Developing Country: Comparative Analysis of Pakistani and Canadian University Students ................... 354
Arsalan Butt, Simon Fraser University, Canada
Adeel I. Butt, Simon Fraser University, Canada

This chapter explores demographic, ethical, and socio-economic factors connected to software piracy as a social norm among a developing country’s university students. The authors present a comparative study of university students from Pakistan and Canada. Their findings regarding software piracy behavior provide evidence of both unique and shared behaviors between the groups studied.

Chapter XXV
Nanoethics: The Role of News Media in Shaping Debate .......................................... 373
A. Anderson, University of Plymouth, UK
S. Allan, Bournemouth University, UK
A. Petersen, Monash University, Australia
C. Wilkinson, University of the West of England, Bristol, UK

This chapter discusses nanotechnology in the media, particularly in the United Kingdom. The authors draw on survey data demonstrating that the media tend to emphasize the benefits of nanotechnology rather than the social implications of technological advancement. The argument presented is that the public must have better access to critical perspectives on the development and use of technology in order to participate in meaningful dialogue and informed decision making.

Chapter XXVI
Computing and Information Ethics Education Research ........................................... 391
Russell W. Robbins, Marist College, USA
Kenneth R. Fleischmann, University of Maryland, College Park, USA
William A. Wallace, Rensselaer Polytechnic Institute, USA

This chapter discusses the importance of CIE (computer and information ethics) education for IT professionals. It reviews selected IT technology failures that caused accidents with losses of human life and explores current topics of concern including information ownership, privacy, information quality, and CIE education. It also reviews a current computerized interactive system designed to assist CIE teaching.
Chapter XXVII
The Ethical Dilemma over Money in Special Education ........................................... 409
Jennifer Candor, Gahanna Lincoln High School, USA

This chapter explores one school district’s use of technology to support students with special education needs. A case study is conducted which demonstrates how funding for special educational technology has declined over time. It also explores the ethical implications of this decline and offers recommendations.

Chapter XXVIII
Educational Technoethics Applied to Career Guidance ........................................... 426
Pilar Alejandra Cortés Pascual, University of Zaragoza, Spain

This chapter explores the concept of educational technoethics from two angles: the intrinsic values that technology and the means of communication include (the aim of technoethics) and their use as mediators of ethical values (the means of technoethics). It also reviews the implementation of the Observation Laboratory on Technoethics for Adults (LOTA) and discusses its implications for professional orientation.

Chapter XXIX
The Scholarship of Teaching Engineering: Some Fundamental Issues .............................. 439
A.K. Haghi, The University of Guilan, Iran
V. Mottaghitalab, The University of Guilan, Iran
M. Akbari, The University of Guilan, Iran

This chapter sketches out key critical factors related to the sustainability of engineering as a discipline. It introduces a number of important issues, including learning and teaching methodologies, the effects of e-development, and the importance of communications.
VOLUME II

Section IV
Emerging Trends and Issues in Technoethics

Section IV discusses issues and trends in Technoethics. Chapter XXX, “Which Rights for Which Subjects? Genetic Confidentiality and Privacy in the Post-Genomic Era”, discusses the challenging position of the individual legal subject in the context of human genetics. This chapter examines individuals’ rights and confidentiality issues in genetic testing, along with predispositions and risks. It raises important considerations surrounding confidentiality, intra-familial disclosure, and familial management of genetic information. Chapter XXXI, “Predictive Genetic Testing, Uncertainty, and Informed Consent”, extends the discussion of genetic testing from a slightly different angle. It discusses legitimate ways of coping with uncertainties within the informed consent process of predictive genetic testing and presents a three-dimensional model of uncertainty that includes the role of genes in pathogenesis and the advisability for patients of undergoing predictive genetic testing. In Chapter XXXII, “Privacy, Contingency, Identity, and the Group”, presuppositions about privacy are examined. The chapter asserts that a concept of privacy based on the presupposition of the individual and his or her rights is not sufficient. The chapter addresses problems in genomics research and the emerging privacy concerns of certain ethnic groups. Chapter XXXIII, “The Ethics of Gazing: The Politics of Online Pornography”, examines the ethics of the gaze, the politics of looking, and how these can violate moral and ethical boundaries in society. This chapter helps to situate current debates on online pornography, along with its ethical and legal implications for users and researchers.
In a different area, Chapter XXXIV, “The Ethics of Deception in Cyberspace”, focuses on the ethics of deception in online interactions, examining the forms deception takes in cyberspace and the conditions under which it may or may not be justified. In Chapter XXXV, “Cyber Identity Theft”, current tensions between using technological solutions to reduce online identity theft and protecting privacy and civil liberties are explored. The chapter does so by reviewing work on cyber identity theft and its impact on governments, law enforcement agencies, and individuals. Chapter XXXVI, “Walking the Information Overload Tightrope”, discusses the current state of information overload in society. This chapter provides a detailed look at the types and effects of increasing information in a technological society. In Chapter XXXVII, “Cyber-Victimization”, the author provides salient details concerning criminal victimization online and the ways that various countries and organizations have tried to counter it. Chapter XXXVIII, “Spyware”, reviews work on spyware and the problems it raises, and the related difficulties in characterizing privacy and spyware in a manner useful for addressing these problems. It describes and assesses ways of addressing the problems through technology and the courts, and the regulatory issues involved. It concludes that more information should be made available to those affected by spyware so that they can engage in the dialogue needed to develop sound ways of dealing with the problems. In Chapter XXXIX, “In Vitro Fertilization and the Embryonic Revolution”, recent advances in assisted reproductive technologies (ARTs) and in vitro fertilization (IVF) are discussed with an emphasis on their ethical dimensions. The chapter makes the argument that there are two main conceptions of these embryonic technologies, public versus scientific, which presents a profound challenge for researchers and ethicists. Chapter XL, “Inter-Organizational Conflicts in Virtual Alliances”, explores strategic and cultural aspects of inter-organizational conflict, along with key ethical implications. This chapter posits two interesting conceptual frameworks of conflict tendencies to illustrate important considerations within global virtual alliances. In Chapter XLI, “From Coder to Creator: Responsibility Issues in Intelligent Artifact Design”, the problem of dealing with the advent of increasingly autonomous technologies is explored and suggestions are offered. Chapter XLII, “Historical Perspective of Technoethics in Education”, provides historical grounding for educational Technoethics and the promotion of the basic concepts of common good, citizenship, and democratic values connected to the history of public schooling. In Chapter XLIII, “Podcasting and Vodcasting in Education and Training”, the how and why of pod/vodcasting are addressed, giving special attention to legal and ethical dilemmas arising from this constantly evolving technology. Finally, in Chapter XLIV, “Technoethics in Schools”, the use of information technology in education is considered with special attention to the role of teachers and parents. This chapter provides a review of existing work on the legal and ethical use of digital resources and materials. Recommendations are offered for greater teacher understanding of the many issues that surround the use of digital technology in schools and how to help students become technologically responsible.
Chapter XXX Which Rights for Which Subjects? Genetic Confidentiality and Privacy in the Post-Genomic Era .............................................................................................................................. 454 Antoinette Rouvroy, European University Institute, Italy This chapter addresses the right to know / not to know about genetic susceptibilities and risks when genetic tests exist. It examines the assumption that more information necessarily increases liberty and enhances autonomy. It also explores issues of confidentiality and intra-familial disclosure of genetic information. Chapter XXXI Predictive Genetic Testing, Uncertainty, and Informed Consent ....................................................... 474 Eduardo A. Rueda, Universidad Javeriana, Colombia This chapter explores strategies for coping with uncertainties connected to informed consent procedures within predictive genetic testing. To this end, it covers a number of key issues including dimensions of
uncertainty, the role of genes in pathogenesis, treatment of patients, institutional aspects, informational considerations, and the need for transparency within the informed consent process. Chapter XXXII Privacy, Contingency, Identity, and the Group .................................................................................. 496 Soraj Hongladarom, Chulalongkorn University, Thailand This chapter analyzes existing assumptions about ‘group privacy’ and ‘individual privacy.’ The chapter argues that the notion of individual privacy is inadequate to deal with complex privacy issues, such as privacy concerns in genomics research and the privacy of certain ethnic groups. Chapter XXXIII The Ethics of Gazing: The Politics of Online Pornography ............................................................... 512 Y. Ibrahim, University of Brighton, UK This chapter delves into the world of Internet pornography. The chapter addresses key issues and pervasive problems to raise general awareness and to inform research and practice. Chapter XXXIV The Ethics of Deception in Cyberspace ............................................................................................. 529 Neil C. Rowe, U.S. Naval Postgraduate School, USA This chapter examines the issue of intra-familial disclosure and the role of health practitioners regarding patient confidentiality. The chapter also explores pervasive ethical conflicts between individual rights to confidentiality and the duty to prevent harm to others. Chapter XXXV Cyber Identity Theft ........................................................................................................................... 542 Lynne D. Roberts, Curtin University of Technology, Australia This chapter reviews work on online criminal victimization within the broader context of crime, focusing on the victims rather than on the crimes themselves, and using a broad concept of “cyber-crimes” that includes crimes committed not only within cyberspace but also using digital technology. This matter has become more and more important as the Internet has grown into a social platform. It is expected that the number of Internet users (and potential victims) will continue to grow. Chapter XXXVI Walking the Information Overload Tightrope .................................................................................... 558 A. Pablo Iannone, Central Connecticut State University, USA This chapter asks: What is information overload? At what levels of existence does it occur? Are there any features common to information overload at all these levels? What are information overload’s types? What are information overload’s current and future trends? What problems do they pose? How can they be addressed in both effective and morally justified ways? It argues that there is anarchy concerning the
meaning of information overload, that information overload’s precise characterization is best left open at this stage in the inquiry, that information overload occurs at the biological, psychological, and social levels, and that it is relational. It also argues that there are at least two overall types of information overload—quantitative and semantic—involving various kinds and current and likely future trends which pose problems requiring specific ways of dealing with them. The essay closes by outlining how to identify effective and morally justified ways of dealing with information overload. Chapter XXXVII Cyber-Victimization ........................................................................................................................... 575 Lynne D. Roberts, Curtin University of Technology, Australia This chapter provides an overview of criminal victimization online. The focus is on the impact of cybercrimes on victims and the associated legal, technical, educational and professional responses to cybervictimization. The focus on cyber-victimization is situated within the broader context of responses to victims of crime in off-line settings. The form of cyber-crimes will continue to change as new ICTs and applications emerge. Continued research into the prevalence, types and impacts of cyber-victimization is required in order to inform victim service provision and effectively address the needs of current and future cyber-victims. Chapter XXXVIII Spyware .............................................................................................................................................. 593 Mathias Klang, University of Lund, Sweden & University of Göteborg, Sweden It is well known that technology can be used to effectively monitor the behavior of crowds and individuals, and in many cases this knowledge may be the motivation for people to behave differently than if they were not under surveillance. This internalization of surveillance has been widely discussed in privacy literature. This chapter argues that the integrity of the computer user is not protected under law and that any rights the user may believe she has are easily circumvented. Chapter XXXIX In Vitro Fertilization and the Embryonic Revolution ........................................................................ 609 D. Gareth Jones, University of Otago, New Zealand Maja I. Whitaker, University of Otago, New Zealand This chapter examines recent advances in assisted reproductive technologies (ARTs) and in vitro fertilization (IVF). It then explores the ethical dimensions of a multidisciplinary dialogue on such technologies. The chapter argues that there are two main conceptions of these types of embryonic technologies - public vs. scientific - which pose a difficult challenge for bioethicists. Chapter XL Inter-Organizational Conflicts in Virtual Alliances ............................................................................ 623 Joyce Yi-Hui Lee, University of Bath, UK Niki Panteli, University of Bath, UK
This chapter explores strategic and cultural aspects of inter-organizational conflict. To this end, it discusses various types of conflict that arise in virtual inter-organizational alliances. Chapter XLI From Coder to Creator: Responsibility Issues in Intelligent Artifact Design .................................... 635 Andreas Matthias, Lingnan University, Hong Kong This chapter addresses the problem of dealing with harm caused by the advent of technologies that are increasingly autonomous. The chapter discusses the vectors of increasing speed and complexity, along with the implication that these vectors are causing humans to lose control of their creations. Chapter XLII Historical Perspective of Technoethics in Education ......................................................................... 651 J. José Cortez, Syracuse University, USA This chapter explores the topic of technoethics as an applied field of ethics and research, viewed from a historical perspective of education in the United States and its embrace of technology. The underlying intent is to inform the readers’ understanding of the basic concepts of common good, citizenship, and democratic values that are the underlying precepts associated with the history of public schooling in the United States. Additionally, the author discusses the increasingly critical need for educators to address the social and ethical dilemmas associated with new technological developments and their application to educational settings. Chapter XLIII Podcasting and Vodcasting in Education and Training ...................................................................... 668 Heidi L. Schnackenberg, SUNY Plattsburgh, USA Edwin S. Vega, SUNY Plattsburgh, USA Zachary B. Warner, SUNY Plattsburgh, USA This chapter examines the how and why of podcasting and vodcasting. It provides the reader with useful examples, along with a focused discussion of legal and ethical dilemmas including ownership, lack of US government control, and future possibilities. Chapter XLIV Technoethics in Schools ..................................................................................................................... 680 Darren Pullen, University of Tasmania, Australia The chapter describes the technical experience of students today and the role of teachers and parents in guiding the proper use of information technology. It provides a review of relevant literature on the topic and calls for teachers to enhance their understanding of the social, ethical, legal, and human issues that surround the use of digital technology in schools.
Section V
Further Reading in Technoethics

Section V provides a useful collection of additional readings chosen by the editors for readers interested in deepening their understanding of selected areas of Technoethics. These chapters help shed new light on multiple areas where technoethical inquiry is being applied, including psychology, information systems evaluation, web-based learning, computing, healthcare, national security, law, and e-business. A closer look at these additional readings reveals the ongoing expansion of Technoethics into important areas of human activity becoming increasingly intertwined with new technologies.
Chapter XLV Moral Psychology and Information Ethics: Psychological Distance and the Components of Moral Behavior in a Digital World ................................................................................................ 700 Charles R. Crowell, University of Notre Dame, USA Darcia Narvaez, University of Notre Dame, USA Anna Gomberg, University of Notre Dame, USA The authors in this chapter examine ethical aspects of information technology and the problem of psychological distance from a moral psychology standpoint. A model is posited to help explain the complex interrelation of sensitivity, motivation, judgment, and action within a technologically advanced society. Chapter XLVI A Critical Systems View of Power-Ethics Interactions in Information Systems Evaluation ............. 712 José-Rodrigo Córdoba, University of Hull, UK This chapter draws on the work of Michel Foucault to advance a critical systems view that addresses the role of power and ethics in guiding information systems evaluation. The author provides useful strategies to improve current and future practices in information systems evaluation. Chapter XLVII Ethical Issues in Web-Based Learning ............................................................................................... 729 Joan D. McMahon, Towson University, USA The author focuses on key ethical issues connected to web-based learning, including course integrity, advisory procedures, intellectual property, academic freedom, and succession planning. A number of useful strategies are offered to assist instructors, administrators, and researchers working in web-based learning environments. Chapter XLVIII We Cannot Eat Data: The Need for Computer Ethics to Address the Cultural and Ecological Impacts of Computing ............................................................................................... 736 Barbara Paterson, Marine Biology Research Institute, Zoology Department, University of Cape Town, South Africa
This chapter investigates the cultural underpinnings of computing. It discusses computing technology as a product of a Western tradition. The chapter asserts that computer ethics could advance understanding of computing and its influence on cultural diversity, non-Westernized traditions, and environmental conservation. Chapter XLIX Current and Future State of ICT Deployment and Utilization in Healthcare: An Analysis of Cross-Cultural Ethical Issues .................................................................................... 752 Bernd Carsten Stahl, De Montfort University, UK Simon Rogerson, De Montfort University, UK Amin Kashmeery, University of Durham, UK This chapter provides a cross-cultural analysis of new ethical issues created by the growing reliance on ICT use in healthcare. The authors offer the reader various scenarios to help situate the discussion in practical areas of healthcare. Chapter L Emerging Technologies, Emerging Privacy Issues ............................................................................ 767 Sue Conger, University of Dallas, USA This chapter explores the complex relationship between emerging technologies and the new privacy issues created by these technologies. It does so by focusing on ethical issues connected to RFID chips, global positioning systems, and smart motes. Chapter LI Ethics of “Parasitic Computing”: Fair Use or Abuse of TCP/IP Over the Internet? .......................... 794 Robert N. Barger, University of Notre Dame, USA Charles R. Crowell, University of Notre Dame, USA The authors in this chapter address the ethical aspects of using the TCP/IP Internet protocol to break up complex tasks and distribute processing across remote computers. Key ethical questions surrounding parasitic computing are raised. Chapter LII Simulating Complexity-Based Ethics for Crucial Decision Making in Counter Terrorism ............... 806 Cecilia Andrews, University of New South Wales, Australia Edward Lewis, University of New South Wales, Australia This chapter delves into current strategies and practices used by governments, military units, and other groups in the battle against terrorism. The authors put forth a systems planning approach intended to guide ethical decision making in counter-terrorism contexts.
Chapter LIII Legal and Ethical Implications of Employee Location Monitoring ................................................... 825 Gundars Kaupins, Boise State University, USA Robert Minch, Boise State University, USA The authors in this chapter focus on a serious set of new legal and ethical issues connected to employee location monitoring. The gaps in international and American laws governing employee location monitoring are addressed, and strategies are offered to deepen understanding of key legal and ethical implications within the organizational context. Chapter LIV New Ethics for E-Business Offshore Outsourcing ............................................................................. 843 Fjodor Ruzic, Institute for Informatics, Croatia This chapter discusses corporate social responsibility in e-business. It explores ethical issues in one important area of e-business, namely, offshore outsourcing. An analysis of ethical aspects of technology in this domain is provided, along with useful suggestions on how to advance e-business ethical guidelines to assist individual companies within an increasingly globalized business context.
Preface
The Handbook of Research on Technoethics was inspired by innovative work carried out by a group of dedicated scholars from diverse academic backgrounds who share a deep concern with the rapidly expanding world of technology and the new ethical issues arising through its growing influence in society. Mario Bunge’s first attempt at articulating this field in the 1970s had an important influence on how the field has evolved. He contributed by raising important questions about the type of relationships that engineers and technologists ought to have with the technologies they create. This spurred work within the Philosophy of Technology and a variety of areas of Applied Ethics which helped shape Technoethics as a field of inquiry with a practical focus on all areas of human conduct affected by technological development. In 2005, the long-awaited Encyclopedia of Science, Technology, and Ethics, edited by Carl Mitcham, was another important work in the field. This was a four-volume publication with a vast list of contributing authors and entries. Although this work was limited to mostly short descriptive pieces, it attested to the widespread scholarly interest revolving around ethical issues in science and technology. It also raised awareness among scholars of the need for future work providing more in-depth coverage of ethical issues focused primarily on technology. The rapid advancement of technology in contemporary society, combined with growing scholarly attention to its social and ethical implications, continues to raise new ethical considerations requiring special attention from this newly formed field of Technoethics.
Organization

It proved to be a protracted battle to present such a broad set of chapters in a way that best reflected developments in the field. The first strategy considered was to organize chapters based solely on existing areas of Applied Ethics with a technology focus. This had the advantage of demonstrating to the reader the broad scope of Technoethics and its connections to existing work in Applied Philosophy. The disadvantage was that many branches of Applied Ethics share overlapping issues that need to be understood in their entirety (e.g., privacy issues are important in computer ethics and medical ethics). The second strategy, and the one adopted in this project, was to organize book chapters by key area and by key issue in order to best represent the spirit of current scholarship in Technoethics. Introductory theoretical essays and practical case studies were also added to help situate the reader and provide detailed examples of how issues in Technoethics manifest themselves in specific real-world situations.
Scope

The Handbook of Research on Technoethics should be of interest to students, instructors, researchers, ethicists, and technology scholars who need expert knowledge about technology and ethics to inform current work in technology. This handbook is organized into five sections: Section I. Theoretical Frameworks in Technoethics, Section II. Research Areas of Technoethics, Section III. Case Studies in Technoethics, Section IV. Emerging Trends and Issues in Technoethics, and Section V. Further Reading in Technoethics. Section I introduces the reader to Technoethics and related issues. It provides an overview of various theoretical perspectives connected to work in Technoethics. Contributions from experts cover diverse conceptual and historical developments. Section II introduces key areas of research in Technoethics, which group the field around key areas of human conduct affected by technology. Section III introduces a series of case studies in various areas of Technoethics where research on ethical aspects of technology is taking root. Section IV discusses emerging issues and trends in Technoethics and new directions for the field. Section V provides a useful collection of additional readings chosen by the editors for readers interested in deepening their understanding of selected areas of Technoethics. Because the issues related to technology and ethics are so broad, the Handbook of Research on Technoethics is necessarily selective. It attempts to advance, in its own modest way, a selective synthesis of key contemporary work on ethical aspects of technology to help guide future scholarship within a society shaped by and shaping technological developments. Despite the modest aims of this project, the editors realize that it is not possible to please all scholars, technologists, and general readers. It is hoped that this publication will stimulate the interest of sufficient numbers to continue developing this field. Critical comments and suggestions are welcome so that improvements can be made to assist the development of a second edition.
Rocci Luppicini and Rebecca Adell Co-editors
Acknowledgment
Without the continual support of IGI Global, the Handbook of Research on Technoethics would not have been possible. Seminal works from the Handbook contributors played and continue to play a key role in shaping this evolving field. The editors would like to acknowledge the important contributions of Mario Bunge, a pioneer in Technoethics, and Carl Mitcham, who helped ground the study of technology and ethics within the Philosophy of Technology and Science and Technology Studies (STS). Thanks also go to Matthew Charlesworth, David Sewry, and Marc de Vries for their critical feedback on early drafts of the handbook introduction. The majority of the chapter authors included in this handbook also served as referees for chapters written by other authors. Special thanks go to all those who provided constructive and comprehensive reviews. Nonetheless, the co-editors take full responsibility for any errors, omissions, or weaknesses in this work. Furthermore, this volume marks the first dedicated reference work on Technoethics in the English language to date. It is hoped that this work provides solid grounding for developing future editions as the field continues to evolve. Sincerely, Rocci Luppicini and Rebecca Adell Co-Editors
Section I
Theoretical Frameworks in Technoethics
Chapter I
The Emerging Field of Technoethics Rocci Luppicini University of Ottawa, Canada
Abstract

Over the last 30 years, a growing body of work has focused on the ethical dimensions of technology in a variety of contexts impacting society. The purpose of this paper is to trace the emergence of this new interdisciplinary field by exploring its conceptual development, important issues, and key areas of current technoethics scholarship. The first part of this paper introduces key concepts and provides a skeletal description of the field’s historical background and rationale. The second part identifies key areas and issues in technoethics in an effort to help inform scholarship in technoethics. This paper is based on the premise that it is of vital importance to encourage dialogue aimed at determining the ethical use of technology, guarding against the misuse of technology, and formulating common principles to help guide new advances in technological development and application to benefit society.
Introduction

The ethical use of new technologies is important in society today, particularly in areas where technological advances have a transforming effect on society. Moor (2005) referred to this transforming effect of technology as a technological revolution, which he argued was connected to growing ethical problems. Moor developed Moor’s Law, which holds that, as the social impact of technological revolutions grows, ethical problems increase
(Moor, 2005, p. 117). This phenomenon is believed to occur not simply because an increasing number of people are affected by technology, but because revolutionary technology provides novel opportunities for action about which well-thought-out ethical policies have not yet been developed. What is important to note is the juxtaposition of technological growth with the growth of ethical needs.
From this perspective, technology is recognized not as a solution to existing ethical problems, but as an integral part of societal development which fosters change and new ethical considerations to address. This highlights the importance of ethics within the context of technological growth. The relationship between ethics and technology is of seminal importance to society and raises questions that continue to challenge learned scholars from a variety of fields and academic backgrounds. For instance, new life-preserving technologies, stem cell research, and cloning technologies are redefining current work in bioethics. Similarly, the development of new forms of surveillance and anonymity are redefining privacy laws and the right of privacy. Increased scholarly attention to ethical issues arising from technological transformations of work and life has created a need for a new framework dedicated to ethical considerations of all aspects of technology. Under this framework, which would become known as technoethics, old ethical questions of privacy and free speech are given new shape and urgency within a technologically advanced society. Technoethics first emerged as an interdisciplinary field in the 1970s. Although research in technoethics had been done earlier than this, official work under this heading began with Mario Bunge, the first scholar to coin the term “technoethics” (Bunge, 1977). Bunge viewed technologists and engineers as professionals closely connected to technology with increased moral and social responsibility for technological innovations and applications. In order to meet these increased responsibilities, Bunge advocated the creation of new types of ethical and moral theories that highlight the special problems posed by science and technology (Bunge, 1977). For Bunge, technology was a broad term encompassing general technologies, techniques, and applications, as well as social and conceptual considerations.
For this reason, Bunge believed technologists and other professionals working with technology had a unique moral responsibility for the outcomes of technological progress. As stated by Bunge (1977), “the technologist must be held not only technically but also morally responsible for whatever he designs or executes: not only should his artifacts be optimally efficient but, far from being harmful, they should be beneficial, and not only in the short run but also in the long term.” In addition to coining the name of this field, Bunge brought to the forefront the core idea that technology should be moderated by moral and social controls and that the pursuit of such technology-related issues requires special consideration and expertise, what eventually would become the field of technoethics.
Rationale: Why Technoethics and Why Now?

The rationale for technoethics derives from efforts to provide a solid grounding framework for technology-focused sub-areas of Applied Ethics, distinguished from other areas of scholarship, and to guard against potential limitations that may threaten the sustainability of technology-focused ethical inquiry. First, the advent of technology in many areas of human activity has given rise to a plethora of technology-focused programs of ethical inquiry scattered across multiple disciplines and fields. Efforts to reach an understanding of ethical aspects of the various types of technology are challenged by the tendencies within academia to create silos of information in separate fields and disciplines. Technoethics helps connect separate knowledge bases around a common theme (technology). To this end, technoethics is holistic in orientation and provides an umbrella for grounding all sub-areas of applied ethics focused on technology-related areas of human activity including business, politics, globalization, health and medicine, and research and development.
Second, technoethics is an interdisciplinary field based on a relational orientation to technology and human activity which creates new knowledge and builds on technology-focused areas of ethical inquiry. While existing technology-focused areas of inquiry in applied ethics typically apply bio-centric ethical principles to situations with technology, technoethics is techno- and bio-centric (biotechno-centric). This represents a major distinction from traditional work in ethics and applied ethics, which highlights living (biological) entities as the centre of ethical concern, what Floridi (2003) considers a limitation of biocentric theories, which typically focus on applying principles to real-world situations. Technoethics provides a unique theoretical foundation to extend existing scholarship on technology and ethics. This is because in technoethics, both technology and biology are central. This means that technology acquires a new status previously reserved only for living entities. In other words, technology can be assigned certain rights and responsibilities by the designers and developers of technology as well as by those affected by it. Technoethics recognizes that there are important ethical considerations when addressing the conduct of an individual with or without a specific technology. For instance, the ethical considerations associated with nations at war are drastically different in the pre- and post-nuclear era. This is because our relationship with technology creates new opportunities for action and raises new ethical considerations that did not exist previously. This distinguishes technoethics from other branches of applied ethics while building on work derived from key sub-areas of applied ethics and technology studies concerned with ethical aspects of technology. In terms of field parameters, technoethics is constituted by a variety of sub-areas of applied ethics (e.g., computer ethics, Internet ethics, biotech ethics, etc.) but is not reducible to any one sub-area (or set). Instead, technoethics treats technology and biology as a central organizing construct for
all existing (and potential) technologies in relation to human activity and other living entities, rather than equating technoethics with specific technologies (e.g., computers, Internet, nanotechnology). An important scholar in computer ethics, Johnson (1999), has argued that computer ethics will disappear as a separate discipline in the near future when computing becomes a mature technology and computer use becomes part of ordinary human action. As indicated by Johnson: What was for a time an issue of computer ethics becomes simply an ethical issue. Copying software becomes simply an issue of intellectual property. Selling software involves certain legal and moral liabilities. Computer professionals understand they have responsibilities. Online privacy violations are simply privacy violations. So as we come to presume computer technology as part of the world we live in, computer ethics as such is likely to disappear. (Johnson, 1999) Under the framework of technoethics, the disappearance of ethical issues surrounding ‘mature’ technologies is not the end of technology-focused programs of ethical inquiry. Rather, it is a progression in technoethics where unresolved ethical questions get resolved or redirected as ethical standards. In this sense, technoethics acts as an important organizing framework for past, present, and future work in technology and ethics. While there may be some debate over how many areas are key areas of technoethics or how it should evolve as a field in the future, there is now sufficient interest in technoethics for it to be recognized as an emerging interdisciplinary field in its own right. As expressed by Gearhart (2008): However viewed, technoethics must exist as a field worthy of study in its own right and not because it can provide a useful means to other ends. To endure as a separate field, there must be a unique domain for technoethics distinct from the domain of moral education, distinct even from the domains
of other kinds of professional and applied ethics. Technology raises special ethical issues; hence, technoethics deserves special status. The purpose of this chapter is to explore technoethics as a new field of interdisciplinary study. It attempts to demonstrate how technoethics emerged as a field focused on social and ethical issues arising from technological transformations of work and life. The main objective is to bring attention to the historical and theoretical groundwork for technoethics. This is accomplished in two steps: (1) providing a skeleton of relevant background information forming the basis for technoethics, and (2) discussing core sub-areas and issues that are currently defining the parameters of technoethics.
Background

This section identifies key definitions and multidisciplinary roots that bear on an analysis of technology and key areas of applied ethics, to help inform the reader about the development of this emerging field.
Definitions

As a formal discipline, ethics refers to a branch of philosophy concerned with customs and values of people within society. It focuses on fundamental questions of right and wrong, good and evil, and responsibility and accountability. In the most general sense, ethical theories are theories and beliefs about moral standards and how they affect conduct. According to the Oxford Concise Dictionary, “Ethics are the moral principles influencing conduct” (p. 490). This entails that ethical considerations are connected to what people do and how they should conduct themselves in the world. Technology is a term defined in a variety of ways and, therefore, must be situated within the present discussion. Although the Greek root
of the word technology (techné) referred generally to art or craft, within the present context, technology is defined more closely in relation to human activity. This is because a great deal of what people do and how they do it depends on the use of technology, particularly in areas where technology contributes added efficiency and effectiveness to human practice. In this paper, technology is a relational concept applying to the design, development, or application of devices, machines, and techniques for human use. Technoethics has been defined in a variety of ways under various terminology. Bunge (1977) pioneered technoethics as a branch of technology study concerned with the special responsibilities of technologists and engineers. Hans Jonas (1985) defined technoethics (ethics of technology) as the ethics involved in the development of new technology and how technology alters the power of individuals. Galvan (2001) defined technoethics as the “sum total of ideas that bring into evidence a system of ethical reference that justifies that profound dimension of technology as a central element in the attainment of a ‘finalized’ perfection of man.” Bao and Xiang (2006) defined technoethics as the behavioral norm and ethical basis for the global community. For the purposes of this paper, technoethics is defined as an interdisciplinary field concerned with all ethical aspects of technology within a society shaped by technology. It deals with human processes and practices connected to technology which are embedded within social, political, and moral spheres of life. It also examines social policies and interventions occurring in response to issues generated by technology development and use. This includes critical debates on the responsible use of technology for advancing human interests in society. To this end, it attempts to provide conceptual grounding to clarify the role of technology in relation to those affected by it and to help guide ethical problem-solving and decision making in areas of activity that rely on technology.
Scholars in technoethics (many of whom do not call themselves technoethicists) tend to view technology in relation to human activity (and the activity of other living entities) and at the centre of ethical inquiry. Ursula Franklin’s ethical writings on technology and society in The Real World of Technology are in line with this view of technology conceptualized in terms of its relation to human practice (Franklin, 1990). As will be discussed later in the paper, technoethics provides an umbrella for a growing body of interdisciplinary scholarship and sub-areas of Applied Ethics concerned with the ethical aspects of technology and its use. It also provides unique grounding for extending existing work within an emerging interdisciplinary field. This paper is based on the premise that it is of vital importance to encourage dialogue aimed at determining the ethical use of technology, guarding against the misuse of technology, and formulating common principles to help guide new advances in technological development and application to benefit society.
Philosophy of Technology, Technocritical Scholarship, and Applied Ethics

Although the philosophy of technology can be traced to the early Greek use of the term techne (art, or craft knowledge), 20th century philosophy of technology viewed technology in terms of tools and techniques. Scholarly works contributed in this area are too numerous to describe in a paper focused on tracing the roots of technoethics, but some key figures and works should be highlighted. In The Question Concerning Technology, Martin Heidegger argued that modern technology allowed new relationships to the world that were not previously possible (Heidegger, 1977). Heidegger viewed this type of technological relation between human beings and the world as a challenging one because it meant that entities in the world
(including people) were objects capable of being manipulated. This work foreshadowed the advent of networked objects and their complex relations. Subsequent writings expanding this body of work include: Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (Winner, 1977), Technology and the Character of Contemporary Life: A Philosophical Inquiry (Borgmann, 1984), Thinking Through Technology (Mitcham, 1994), Questioning Technology (Feenberg, 1999), Technology and the Good Life (Higgs, Light & Strong, 2000), and Readings in the Philosophy of Technology (Kaplan, 2004). This growing body of knowledge was extremely important in grounding philosophical inquiry on the human side of technology, that is, technology as it bears on human work and life. Concerted efforts to connect ethics and technology began materializing through the work of Hans Jonas (Jonas, 1979, 1985) and recent publications within the philosophy of technology (Mitcham, 1997, 2005). In The Imperative of Responsibility: In Search of an Ethics for the Technological Age (Jonas, 1979), Jonas explored ethical questions specific to the technological age in an effort to advance an ethics of technology. This work explored the role of ethics in the development of new technology as well as how the standard ethical questions are changed by technology. This work continued in On Technology, Medicine and Ethics (Jonas, 1985) by focusing attention on how medicine and other human endeavors are altered by the development of new technologies. In bridging the philosophy of technology into the interdisciplinary field of science, technology, and society (STS) studies, Carl Mitcham provided further grounding for a field of technoethics in linking ethics and technology. Thinking Ethics in Technology: Hennebach Lectures and Papers, 1995-1996 (Mitcham, 1997) is a noteworthy work applying philosophical inquiry to questions of ethics arising in science, technology, and engineering. This work was expanded in Mitcham’s edited Encyclopedia of Science, Technology, and Ethics
(Mitcham, 2005). Additional efforts to advance philosophical inquiry into ethics and technology are contributed by the Society for Philosophy and Technology (established in 1976) through the publication of an academic journal called Techné: Research in Philosophy and Technology (1995 to present). Technology was also the locus of scholarly work within the social sciences and humanities through technocritical scholarship. Technocriticism emerged from critical theory and focused on the study of technological change. Technocriticism expanded the scope of technology studies into an examination of private and public uses of technology, along with the relations among these different uses. The Technological Society (Ellul, 1964) addressed technology from a deterministic standpoint, exploring technological control over humanity and its potential threat to human dignity and freedom. Ellul viewed technology (as technique) as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity” (p. xxv). This text helped map out some of the potential threats and injustices that could be created in the absence of critical discourse that goes beyond technological efficiency. In a slightly different vein, The Real World of Technology (Franklin, 1990) is framed within a feminist standpoint with the goal of advancing understanding of the way technology changes social relationships by redefining conceptions of power and accountability. One particularly important element in this work is the conceptualization of technology as a form of practice (ways of doing something). In this view, technology as practice is embedded in society within its various organizations, procedures, symbols, language acts and mindsets. This technocritical work highlighted the importance of contextualizing technology within societal structures, activities, and practices in the human world. To this end, Franklin advocates the need to advance an ethical framework for
guiding technological development in line with technoethics. Although the body of work derived from the Philosophy of Technology and technocritical writings is primarily concerned with how technology influences social order, the ethical considerations raised in this reviewed work helped set the stage for the advent of technoethics in two important ways. First, this work encouraged scholars to devote special attention to key areas of technology in order to ground growing efforts to develop ethical guidelines for technology use in society. Second, this work bridged philosophical inquiry into technology with related work within the applied Social Sciences. It did so by drawing on empirical studies to help situate the discourse on technology and ethics within the context of human activity. The outcome of this is that many scholars became more aware of the need to connect technological development to ethical considerations entrenched in how people live, work, and play. General areas of inquiry within Philosophy and the Humanities focus on issues such as technological determinism, ontology, the digital divide, power, and control issues. General questions include, “Is technology immoral?”, “Is the use of technology right or wrong?”, “Are there formal principles governing the rights and responsibilities of technology users?”, and “Should artificial intelligence have rights, and if so, what are they?” In addition to a basis in the philosophy of technology, technoethics has strong connections to applied ethics. Growing interest among philosophers and ethicists in applied ethics, with its focus on practical problems, was a key contributor in facilitating the emergence of technoethics. The types of questions raised in applied ethics focus on problems that affect people living in society. Does equality entail the elimination of differences in sex roles, or could there be equal status for different roles? Is it acceptable to give advantages to members of marginalized groups because they have been discriminated
against in the past? What ethical obligations do citizens of rich countries have to those from poor countries? To what extent should people be responsible for safeguarding the environment? What do people living in today’s society owe to those not yet born? What obligations do people have to uphold animal rights? In terms of contributions to technoethics, applied ethics further grounded technoethics by bringing philosophical inquiry into the context of real-world human problems and bringing real events, practices, and research activity into focus. To conclude, ethical considerations arising from key areas of the philosophy of technology, technology studies in the humanities, and specialty areas within applied ethics focusing on technology redefined traditional ethical questions in terms of the ways in which technology increases or decreases the power of individuals, institutions, and society. This placed ethical questions related to technology on new ground requiring special status, which led to the emergence of technoethics. This set the stage for a number of areas of technoethics to evolve, which are discussed in the next section.
Identifying Key Areas and Issues in Technoethics

There are challenges in any effort to delineate the broad interdisciplinary field of technoethics. One useful way to describe technoethics is by identifying its key areas of academic research and study derived from various branches of applied ethics and other areas of academic scholarship with a technology focus. This has the advantage of conveying to the reader the broad scope of technoethics and its strong connections to existing academic work. The disadvantage of this is that many branches of applied ethics have overlapping issues that need to be understood (e.g., privacy issues are important in computer ethics and medical ethics). In an effort to accommodate this overlap, the field is described in terms of the ethical issues
arising from technology and its use. Based on an extensive review of the literature and researcher experience, key areas of technoethics and related issues are illustrated in Figure 1 and discussed below. Figure 1 illustrates how key areas of technoethics (countries) map onto ethical issues (bodies of water) and challenges (cities) of fundamental human practices as they are altered by new technologies. For instance, health and medical technoethics focuses attention on questions that have been exacerbated by genetic research and challenges related to information protection and confidentiality. In Internet ethics and cyberethics, users are continually challenged by new forms of surveillance and cybercrime that threaten user privacy and anonymity. This suggests that issues such as privacy flow through, and are key concerns in, multiple areas of technoethics. It is worth noting that this conceptual map is not exhaustive and that, as history demonstrates, maps can be modified and augmented as geographical boundaries are refined and new countries are named. An overview of key areas of technoethics and selected issues is provided below.
Computer Ethics

The origins of computer ethics can be traced to the work of Norbert Wiener, who spearheaded cybernetics as the science of control and communication in animals and machines in the 1940s and 1950s (Wiener, 1948, 1954). These developments led Wiener (1948) to recognize both the good and evil inherent in artificial machines. In his seminal text, The Human Use of Human Beings, Wiener was the first scholar to explore basic questions of computer ethics (Wiener, 1954). In the 1970s and 1980s, work in this area continued through critical scholarship on the human aspects of computer use (Weizenbaum, 1976) and ethical guidelines for computer use (Moor, 1985; Johnson, 1985). Up until the mid-1980s, computer ethics was best understood narrowly as the ethical consideration of computer use. Since the early 1990s,
Figure 1. Conceptual map of technoethics
the rapid development of information technology significantly influenced many aspects of life and the nature of computer ethics. Beginning in the mid-1990s, theoretical work in information ethics emerged as an outgrowth of work in computer ethics (and other areas). It focused on ethical issues arising from the development and application of information technologies used in computing. It is based on the notion of the “infosphere”, a term coined by Floridi to describe the informational environment constituted by informational entities, including processes and interactions (Floridi & Sanders, 2003). Floridi and Sanders (2003) viewed information as moving “from being a necessary prerequisite for any morally responsible action to being its primary object.” In recent
years, the advancement of information technology, combined with work on information ethics, helped extend the boundaries of computer ethics to include the study of social and ethical impacts of information technology. Computer ethics is a key area of technoethics that focuses on the human use of technology in a number of areas including graphic interfaces, visual technology, artificial intelligence, and robotics. Some key questions of interest in this area include, “How do we gauge the ethical use of new computer technologies?”, “What are the ethical implications of allowing monopolies to control software and hardware development?”, “What are the ethical considerations in computer production?”, and “What are the responsibilities to stakeholders for those involved in computer technology management?”
Engineering Ethics

Engineering ethics developed as a branch of applied ethics dealing with professional standards of engineers and their moral responsibilities to the public. This was particularly salient in areas where engineers applied new technical and scientific knowledge to provide solutions to conflicting societal needs. Important areas and applications of engineering include: aerospace, agriculture, architecture, bioengineering, chemical engineering, civil engineering, construction, electrical engineering, industrial engineering, mechanical engineering, nuclear energy creation, and software design. Engineering ethics was a precursor and catalyst to the field of technoethics within the technoethics framework described by Bunge (1977) and a forerunner in professional technoethics (see below). Engineering ethics addressed concerns for the value dimension of engineering through technological innovation, public interest, and media attention. Many university degree programs in engineering require the completion of courses in engineering ethics. Johnson’s (1991) Ethical Issues in Engineering and Unger’s (1982) Controlling Technology: Ethics and the Responsible Engineer are two hallmark texts in this area and in professional technoethics. Some key questions of interest in this area include “Who should be responsible for the negative impacts of engineering on society?”, “How should the public be informed of engineering risks?”, and “How do we deal with the fact that many engineering applications can be used for evil?”
Internet Ethics and Cyberethics

Interest in Internet ethics arose in the late 1980s and early 1990s with the advent of new challenges in computing revolving around the growth of the Internet and new computing technologies for the Internet (e.g., spyware, antivirus software, web browser cookies). One important event occurred in 1989, when the Internet Architecture Board
(IAB) created a policy concerning Internet ethics. This policy provided the first comprehensive set of general guidelines to guard against unethical activity including gaining unauthorized access to Internet resources, disrupting the intended use of the Internet, compromising the integrity of computer-based information, and compromising the privacy of users (Internet Architecture Board, 1989). In a similar vein, cyberethics has advanced Internet research on serious ethical challenges and debates. One area of cyberethics, cyberdemocracy, focuses on whether or not the Internet is a democratic technology fostering digital citizenship, construed as normative rules and practices for governing appropriate and responsible behavior when using technology (Ribble & Bailey, 2004). For instance, Regan (1996) explores ethical considerations derived from the ongoing debate over free speech versus censorship on the Internet, Adam (2002) examined the problem of cyberstalking, and Jenkins (2001) investigated Internet pornography and child exploitation. Other areas of research in cyberethics include cyber identity theft (online theft of identity or knowledge of factual information that identifies an individual), identity fraud (online use of false and/or stolen identities to obtain money or other benefits illegally), and phishing (using the Internet to ‘fish’ for credit card numbers, bank account information, and other personal information to be used illegally). Other important work focuses on the creation of guidelines for Internet conduct, commonly referred to as netiquette (Internet etiquette). Netiquette guidelines cover procedures for basic online behaviors such as posting messages and maintaining civility in discussions, but also special guidelines unique to the electronic nature of forum messages. Key questions in Internet ethics and cyberethics include, “How do we deal with Internet abuse and misuse such as piracy, pornography, and hate speech?”, “Who should have access to the Internet and who should be in control?”, “How should we
protect young Internet users from unnecessary risks derived from Internet use?”, “What are the ethical responsibilities of Internet researchers to research participants?” and “What are the ethical responsibilities of Internet researchers to protect the identity and confidentiality of data derived from the Internet?”
Media and Communication Technoethics

Media and communication technoethics is an area of technoethics concerned with ethical issues and responsibilities arising in the use of mass media and communication technology. It has roots in media studies, discourse ethics, organizational communications, and communication theory, with pioneering work contributed by leading scholars including Marshall McLuhan and Jurgen Habermas. On the media side, Marshall McLuhan was a leading communication theorist of the 1960s and 1970s who spearheaded scholarly work in communication media and media discourse which helped connect communication and technology considerations. McLuhan’s The Gutenberg Galaxy: The Making of Typographic Man (1962) was a seminal exploration of how various communication technologies influence cognitive organization, which in turn impacts social organization. McLuhan (1962) argued, “If a new technology extends one or more of our senses outside us into the social world, then new ratios among all of our senses will occur in that particular culture.” Although this work did not ascribe a moral character to technology, it highlighted the need for human awareness of technology’s cognitive effects in shaping individual and societal conceptions. This work was extended in Understanding Media: The Extensions of Man (1964), where he argued that media characteristics (rather than media content) affect the society in which they operate. His popularly quoted statement, “the medium is the message”, helped demonstrate the need for an approach to studying communication that
acknowledges how technological progress makes communications more pervasive and powerful than ever before, with a wide range and diversity of media available. This provided grounding for a close connection between technology and communication studies. Another major contribution to media and communication technoethics was derived through connections made between communication and ethics, such as Habermas’s discourse ethics (1990) and his attempt to connect argumentative procedures used in everyday practice to the normative validity governing interactions. Based on Kantian deontological ethics, Habermas provided a unique analysis of communicative structures explaining the obligatory nature of morality rooted in universal obligations of communicative rationality. According to Habermas’s principle of discourse ethics, “Only those norms can claim to be valid that meet (or could meet) with the approval of all affected in their capacity as participants in a practical discourse” (Habermas, 1990). Other approaches to discourse ethics and online deliberation can be found in Benhabib (1992) and Thorseth (2006). This body of work is important to technoethics in that it situates ethical and moral inquiry within intersubjective processes of communication taking place in the real world of human practice. Since the 1990s, work under various communications research programs furthered connections between ethics, technology, and communication in a variety of communication contexts. Some key areas include: ethics in technical communications (Allen & Voss, 1997), ethics in virtual organizations, and netiquette and other ethical issues in online communication (Rotenberg, 1998). In line with Habermas’s work in discourse ethics, Allen and Voss (1997) explored ethics as values and designed procedures for addressing ethical issues in technical communications, such as identifying stakeholders and their interests, identifying relevant values related to the topic, determining which values and interests
are in conflict, and weighing those conflicting values and interests against one another. In exploring implications for computer network design, Rotenberg (1998) addressed ethical issues related to online communications privacy, namely confidentiality, anonymity, and data protection. Understanding the new ethical responsibilities and challenges in online communication is particularly important for researchers working in these areas (see Mann & Stewart, 2000). Some key questions of interest in this area include, "Is democratic decision making possible through online deliberation?", "How does Internet filtering affect global communication?", "How can virtual organizations resolve communication conflicts and satisfy stakeholder interests?", and "What procedures should be followed to protect individual identity and anonymity in online communications?"
Professional Technoethics

Professional ethics within the framework of technoethics (henceforth professional technoethics) concerns all ethical considerations that revolve around the role of technology within professional conduct. As an important area of technoethics, professional technoethics identifies and analyzes issues of ethical responsibility for professionals working with technology. Although it applies to any profession, there is more emphasis on professions that actively use new technologies, such as engineering, journalism, or medicine. Among the technoethical issues considered from this perspective are those having to do with computer professionals' role in designing, developing, and maintaining computer hardware and software systems, and journalists' role in protecting informant identity, accessing information, and presenting information for public viewing. Key questions include, "How will each technological innovation affect people
in the short term and the long term?", "What are the ethical responsibilities of professionals using technology to contribute to helping mankind?", and "Do the outcomes of professional work with technology prioritize the rights of some individuals over others?"
Educational Technoethics

Educational technoethics is an area of technoethics concerned with the ethical issues and outcomes associated with using technology for educational aims. Much of the early work in educational technoethics emerged in the field of educational technology through the work of experts like Nichols (1987, 1994) and Boyd (1991). Nichols (1987) explored the negative aspects of educational technology while attempting to derive moral guidelines for ethical use. In a slightly different vein, Boyd (1991) explored the idea of emancipatory educational technology as a means to foster positive educational values using new technology. The motivation behind this early work was to demonstrate that technology required ethical examination in order to be employed suitably in education. One particularly influential work by Cortés (2005) provided a thorough examination of technoethics and the role of educational technoethics. Cortés recognized that because technoethics entails an ethical purpose, this purpose must be taken into account in examining the use of technology in education. Current areas of research in educational technoethics include the misuse of the Internet to commit plagiarism (Lathrop & Foss, 2000) and other academic offences (Underwood & Szabo, 2003), access to educational technology, diversity issues in online learning, and educational technology uses and misuses. For instance, the popularization of the Internet and the World Wide Web (WWW) in the mid-1990s contributed to student plagiarism within educational institutions. The Internet and the
WWW provided students with easy access and new opportunities to cut and paste assignments (or portions of assignments) from other people's work or to purchase term papers online (Ercegovac & Richardson, 2004). This led to the development of a number of computer programs to detect the unethical use of research papers and deter Internet plagiarism (Turnitin, 2007). Underwood and Szabo (2003) reviewed ethical problems in e-learning and individual propensities toward cheating. Although technologies have been developed to detect plagiarism, ethical issues remain in how detected cases should be handled. Some key questions of interest in educational technoethics include, "How do advances in educational technology affect access to new educational resources and the growing digital divide?", "How do new educational technologies influence multicultural education and inclusion?", and "What are the ethical considerations in online plagiarism and how should such situations be dealt with?"
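To make concrete the kind of text-matching logic that underlies plagiarism-detection software of the sort mentioned above, the following minimal sketch compares two passages by the overlap of their word n-grams. It is an illustrative toy, not the algorithm used by Turnitin or any other commercial detector; the function names, example texts, and similarity threshold are our own assumptions.

```python
# Illustrative sketch only: commercial detectors such as Turnitin use far more
# sophisticated (and proprietary) fingerprinting and database matching.

def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission: str, source: str, n: int = 3) -> float:
    """Jaccard similarity between the n-gram sets of two texts (0.0 to 1.0)."""
    a, b = ngrams(submission, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    source = "The medium is the message because it shapes the scale of human action"
    submission = "McLuhan held that the medium is the message because it shapes human action"
    score = similarity(submission, source)
    # The 0.15 threshold is an arbitrary assumption for this toy example.
    print(f"overlap score: {score:.2f}",
          "-> flag for review" if score > 0.15 else "-> no flag")
```

Even this toy makes the ethical point in the text visible: a detector can only flag textual overlap; deciding whether a flagged case constitutes misconduct, and what should follow from it, remains a human judgment.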
Biotech Ethics

Biotech ethics is linked to advances in bioethics and medical ethics. Bioethics emerged in the mid-1960s and early 1970s with the introduction of renal dialysis and in vitro fertilization (IVF). Bioethics evolved as an ethics of biological science and medicine, relying on philosophical inquiry to help analyze important ethical issues. With a specific focus on medicine and medical practices, medical ethics dealt with ethical conflicts in medicine, health care, and nursing. With the advent of reproductive medicine and research in the 1970s, bioethics and medical ethics began dealing with important ethical issues complicated by new technologies, including abortion, animal rights, eugenics, euthanasia, in vitro fertilization, and reproductive rights. Also during the 1970s, biotechnology emerged as biological technologies began to be used in agriculture and food science to modify products for human use. In the early 1980s and 1990s, the use of biotechnologies spread
rapidly to medical research, health care, and industrial applications. Hans Jonas's On Technology, Medicine and Ethics (1985) was a pioneering text dealing with the social and ethical problems in medicine created by technology. Another catalyst was the introduction of the Human Genome Project by the U.S. Department of Energy in 1986 to begin identifying all human genes and mapping out the sequence of the human genome (U.S. Department of State International Information Programs, 2007). This led to a number of technological advances which sparked debate and created the need for a biotech ethics to address ethical considerations arising in many areas of biological technology, including cloning, genetically modified food, gene therapy, human genetic engineering, drug production, reprogenetics, and stem cell research. New applications in biotechnology continue to advance work in many areas (e.g., genetics, biochemistry, chemical engineering, information technology, biometrics, nanotechnology, and robotics) while raising new ethical questions that further ground biotech ethics as an important branch of technoethics (Schummer & Baird, 2006). Some key questions of interest in this area include, "Who should have ownership and control of harvested DNA, human tissue, and other genetic material?", "How should information from genetic tests be protected to avoid infringing on human rights?", and "How do we deal with the fact that developing countries are deprived of the access to applications of biotechnology (the biotech divide) available in other countries?"
Environmental Technoethics

Environmental technoethics is a newly emerging branch of technoethics based in environmental ethics and concern over technological innovations that impact the environment and life. Environmental technoethics can be traced to the domain of environmental ethics and growing interest among scholars in the 1960s and 1970s in the relationship of human beings to the natural environment
(Passmore, 1974; Rolston III, 1975; Stone, 1972). This early work helped direct attention to the ethical aspects of natural environments. It did not stress other types of environments influenced by technological innovation, including constructed environments, urban landscapes, and extraterrestrial environments. Although not directly focused on ethical considerations, Jane Jacobs's The Death and Life of Great American Cities (1961) raised technoethical concerns in its forward-thinking critique of American urban renewal policies of the 1950s. This text highlighted the importance of environmental technoethics for guiding urban renewal. A recent collection of readings on environmental ethics by Schmidtz and Willott (2002) examines ethical aspects of technologically infused (constructed) environments, such as the effects of cities on resource consumption, poverty in a global context, and the growth of the human population within a technological society. As a relatively new area of technoethics, environmental technoethics focuses on the human use of technology in a number of areas that have a definite or possible connection with natural and constructed environments, including transport, urban planning, mining, sanitation, marine contamination, and terraforming. For instance, the ethics of terraforming (Earth shaping) originated in science fiction writing in the 1940s (Williamson, 1942) and became popularized in later science fiction writing (Robinson, 1992, 1993, 1996). Some key questions of interest in this area include, "How do we help improve the lives of people living in cities suffering from infrastructural decay?", "What are the rights of individuals in environmental construction and management?", and "What responsibilities do we have to extraterrestrial environments and their living material?"
Nanoethics

Nanotechnology broadly refers to the alteration of matter at the level of atoms and molecules. In the 1980s, Drexler's (1986) Engines of Creation:
The Coming Era of Nanotechnology was an early scholarly work mapping out contributions in nanoscale research and development. The emergence of nanotechnology and its applications has contributed to the advancement of research in a variety of disciplines, including computer science, engineering, biology, and chemistry. Because of health and safety issues and potential risks to the environment, nanotechnology has provoked serious public debate and a number of ethical concerns. Hunt and Mehta (2006) provide an up-to-date overview of key developments in nanotechnology risks and the ethical issues connected to this potentially dangerous technological innovation. As such, nanoethics is a new area of technoethics concerned with the ethical and social issues associated with developments in nanotechnology. Some key questions of interest in this area include, "What are the potential health and safety risks of nanotechnology applications and who is responsible?", "What are the rights of individuals in affecting nanotechnology applications?", and "What responsibilities do we have to protect society from the risks of nanotechnology advancement?"
Military Technoethics

Military technoethics derives from military ethics and the study of the ethical aspects of military conduct. Within contemporary studies of the military, advanced technologies have become increasingly dominant in military action. This raises a number of new ethical questions regarding the appropriate use of advanced technologies in situations where the military must act. An edited collection by Smith (1985) was important in focusing scholarly attention on technological change within the military. This text helped draw attention to the need to understand war technology and how it redefined relationships. Another useful text by Hartle (1989) addressed the ethical and moral principles governing military decisions from the perspective of an American army scholar. This type of technology-oriented inquiry highlights military technoethics as an
interesting new area of technoethics in development. At this early stage, military technoethics is primarily concerned with ethical issues associated with technology use in military action. Some key questions of interest in this area include, "What are the actual and possible risks of nuclear weapons?" and "Who should be responsible for controlling advanced military technology?"
FUTURE TRENDS

The future of technoethics is in the hands of scholars capable of discerning important connections between technology and ethics within a complex multidisciplinary framework. The complexity of technoethics is partly attributable to the variety of areas it encompasses (i.e., computer ethics, engineering ethics, educational technoethics, media ethics, communication technoethics, Internet ethics, cyberethics, professional technoethics, biotech ethics, medical ethics, nanoethics, and environmental technoethics). Because ethical considerations within technoethics are embedded within rapidly changing domains of technology, discerning ethical issues properly requires considerable effort. For this reason, an important future trend in technoethics scholarship focuses on the development of new strategies for conducting research in technoethics. Moor (2005) suggests that an ethical approach to technology can be fostered by: (1) taking into account that ethics is an ongoing and dynamic enterprise, (2) creating multidisciplinary collaborations among ethicists, scientists, and technologists, and (3) developing more sophisticated ethical analyses. Building on work from Tavani (2004), one practical approach to future research on ethics and technology focuses on the identification of controversial practices as moral problems, followed by an analysis and synthesis of factual data associated with these problems. Depending on the orientation of the research, the outcome may lead to new prescriptive models for guiding
technology-related decision making, or to new moral theories and principles related to technoethics. It is expected that technoethics will continue to expand, with new areas added as technology progresses in the 21st century.
CONCLUSION

Over the last 30 years, a growing body of scholarship in technoethics under a variety of labels has expanded the scope of this field while, at the same time, building on the pioneering work on technoethics as a domain of professional responsibility for engineers and others working closely with technology (Bunge, 1977). In terms of development, efforts to address the ethical responsibilities of technologists and engineers within society are as much a part of technoethics today as they were in the 1970s. Moreover, work in various areas of technoethics has extended the scope of technoethics beyond the context of engineering and professional ethics to other areas of scholarship in ethics and technology, including computer ethics, Internet ethics and cyberethics, media and communication technoethics, biotech ethics, environmental technoethics, professional technoethics, and educational technoethics. Table 1 presents the main areas of technoethics, along with key figures and selected topics covered in this paper.
FUTURE RESEARCH DIRECTIONS

One promising area of future research focuses on the intersection of computer ethics and cyberethics, with questions revolving around the moral rights and responsibilities of artificial agents. As technological development allows us to automate many everyday operations under the control of artificial agents, we must either accept that nobody is responsible for the outcomes of automated processes or find some
Table 1. Key areas of technoethics

Computer ethics. Key figures: Wiener (1948), Johnson (1985). Selected issues: interface design, software piracy. Sample question: "What are the responsibilities of technologists to those affected by their work?"

Engineering ethics. Key figures: Bunge (1977), Johnson (1991). Selected issues: engineering conduct, quality assurance. Sample question: "How should responsibility be assigned for the negative impacts of engineering on society?"

Internet ethics and cyberethics. Key figures: Internet Architecture Board (1989), Ribble and Bailey (2004). Selected issues: privacy, cybercrime. Sample question: "What are the ethical responsibilities of Internet researchers to research participants?"

Educational technoethics. Key figures: Cortés (2005), Gearhart (2000). Selected issues: access to education, plagiarism. Sample question: "How do advances in educational technology affect access to new educational resources and the growing digital divide?"

Biotech ethics. Key figures: Jacobs (1961), Jonas (1985). Selected issues: reproductive technologies, stem cell research. Sample question: "Who should have ownership and control of harvested DNA, human tissue, and other genetic material?"

Media and communication technoethics. Key figures: McLuhan (1962), Habermas (1990). Selected issues: freedom of speech, online discourse. Sample question: "How can virtual organizations resolve communication conflicts and satisfy stakeholder interests?"

Professional technoethics. Key figures: Unger (1982), Johnson (1991). Selected issues: conflict of interest, professional responsibility. Sample question: "What are the ethical responsibilities of professionals using technology to contribute to helping mankind?"

Environmental technoethics. Key figures: Jacobs (1961), Schmidtz and Willott (2002). Selected issues: sustainable development, terraforming. Sample question: "How do we assign responsibility in environmental construction and management?"

Nanoethics. Key figures: Drexler (1986), Hunt and Mehta (2006). Selected issues: health and safety, environmental risk. Sample question: "What are the potential risks with nanotechnology applications and how should responsibility be assigned?"

Military technoethics. Key figures: Smith (1985), Hartle (1989). Selected issues: military technology, nuclear weapons. Sample question: "Who should be responsible for controlling advanced military technology?"
way to assign responsibility and accountability to artificial agents, as discussed by Matthias (2004). Preliminary work is already emerging in this area. For instance, Stahl (2006) provides one approach for ascribing quasi-responsibility to computers independent of personhood or agency. From a different angle, Johnson (2006) differentiates moral agency from moral entities, arguing that computer systems should be considered moral entities but not moral agents. This work is still in its infancy and raises a number of challenging questions for future research, including, "Should artificial agents be held legally responsible for their actions, and if so, how?", "Are the conditions
for moral responsibility the same for people as for artificial agents?", and "How does increased dependence on technology affect personal agency and moral responsibility?" A second area of future research focuses on the advancement of ethical policies and laws in technoethics for dealing with the unique ethical issues arising from new opportunities for thought and action made possible through technological developments. This is particularly salient in the abovementioned areas of computer ethics and cyberethics, where humans are creating more autonomous and self-regulated technological innovations. In an effort to advance this work, a preliminary
formulation of the Law of Technoethics is offered here, building on insightful work by Moor (2005). Moor's law holds that ethical problems increase as technological revolutions increase their social impact (Moor, 2005). As a biotechnocentric field, technoethics views the relation between technology and living entities as paramount. The Law of Technoethics derived from this provides the following: ethical rights and responsibilities assigned to technology and its creators increase as technological innovations increase their social impact. The Law of Technoethics makes two important contributions. First, it addresses the need for accountability among the professionals who create technology. This follows Bunge's (1977) prescriptions in the original conceptualization of technoethics. Second, it addresses the changing nature of technology and how it affects society. Future work may focus on how to assign ethical rights and responsibilities under various conditions and within different contexts.
REFERENCES

Adam, A. (2002). Cyberstalking and Internet pornography: Gender and the gaze. Ethics and Information Technology, 4, 133-142.

Allen, L., & Voss, D. (1997). Ethics in technical communication: Shades of gray. New York: Wiley Computer Publishing, John Wiley & Sons.

Borgmann, A. (1984). Technology and the character of contemporary life: A philosophical inquiry. Chicago: University of Chicago Press.

Bunge, M. (1977). Towards a technoethics. Monist, 60(1), 96-107.

Cavalier, R. (2005). The impact of the Internet on our moral lives. NY: State University of New York Press.

Cortés, P. A. (2005). Educational technology as a means to an end. Educational Technology Review, 13(1), 73-90.

Drexler, E. (1986). Engines of creation: The coming era of nanotechnology. New York: Anchor Press/Doubleday.

Ellul, J. (1964). The technological society. NY: Vintage Books.

Ercegovac, Z., & Richardson, J., Jr. (2004). Academic dishonesty, plagiarism included, in the digital age: A literature review. College and Research Libraries. Retrieved June 30, 2007, from http://privateschool.about.com/cs/forteachers/a/cheating.htm

Ethics. (2007). In Encyclopædia Britannica. Retrieved June 4, 2007, from Encyclopædia Britannica Online: http://www.britannica.com/eb/article-252578

Feenberg, A. (1999). Questioning technology. Routledge.

Floridi, L. (1999). Information ethics: On the theoretical foundations of computer ethics. Ethics and Information Technology, 1(1), 37-56.

Floridi, L., & Sanders, J. (2003). Computer ethics: Mapping the foundationalist debate. Ethics and Information Technology, 4(1), 1-24.

Habermas, J. (1990). Moral consciousness and communicative ethics. Cambridge, MA: MIT Press.

Hartle, A. (1989). Moral issues in military decision making. Lawrence: University of Kansas Press.

Heidegger, M. (1977). The question concerning technology. In W. Lovitt (Ed.), The question concerning technology and other essays (pp. 13-39). New York: Harper and Row.

Higgs, E., Light, A., & Strong, D. (2000). Technology and the good life. Chicago: University of Chicago Press.

Hunt, G., & Mehta, M. (2006). Nanotechnology: Risk, ethics and law. London: Earthscan.

Internet Architecture Board. (1989). Ethics and the Internet (RFC 1087). Retrieved June 4, 2007, from http://tools.ietf.org/html/rfc1087

Jacobs, J. (1961). The death and life of great American cities. New York: Random House and Vintage Books.

Jenkins, P. (2001). Beyond tolerance: Child pornography on the Internet. New York: New York University Press.

Johnson, D. (1985). Computer ethics. Englewood Cliffs, NJ: Prentice-Hall.

Johnson, D. (1991). Ethical issues in engineering. Englewood Cliffs, NJ: Prentice-Hall.

Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195-204.

Jonas, H. (1979). The imperative of responsibility: In search of ethics for the technological age. Chicago: University of Chicago Press.

Jonas, H. (1985). On technology, medicine and ethics. Chicago: University of Chicago Press.

Lathrop, A., & Foss, K. (2000). Student cheating and plagiarism in the Internet era: A wake-up call. Englewood, CO: Libraries Unlimited.

Mann, C., & Stewart, F. (2000). Internet communication and qualitative research: A handbook for researching online. London: Sage.

Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175-183.

McLuhan, M. (1962). The Gutenberg galaxy. Toronto: McGraw Hill.

McLuhan, M. (1964). Understanding media: The extensions of man. Toronto: McGraw Hill.

Mitcham, C. (1994). Thinking through technology. Chicago: University of Chicago Press.

Mitcham, C. (1997). Thinking ethics in technology: Hennebach lectures and papers, 1995-1996. Golden, CO: Colorado School of Mines Press.

Mitcham, C. (2005). Encyclopedia of science, technology, and ethics. Detroit: Macmillan Reference USA.

Moor, J. H. (1985). What is computer ethics? In T. W. Bynum (Ed.), Computers and ethics (pp. 266-275). Basil Blackwell.

Moor, J. H. (2005). Why we need better ethics for emerging technologies. Ethics and Information Technology, 7, 111-119.

Nichols, R. G. (1987). Toward a conscience: Negative aspects of educational technology. Journal of Visual/Verbal Languaging, 7(1), 121-137.

Nichols, R. G. (1994). Searching for moral guidance about educational technology. Educational Technology, 34(2), 40-48.

Olt, M. R. (2002). Ethics and distance education: Strategies for minimizing academic dishonesty in online assessment. Online Journal of Distance Learning Administration, 5(3).

Passmore, J. (1974). Man's responsibility for nature. London: Duckworth.

Plagiarism.org. (2007). Homepage. Retrieved June 30, 2007, from http://www.plagiarism.org/

Regan, S. (1996). Is there free speech on the Internet? Censorship in the global information infrastructure. In R. Shields (Ed.), Cultures of Internet: Virtual spaces, real histories, living bodies. London: Sage.

Ribble, M., & Bailey, G. (2004). Digital citizenship: Focus questions for implementation. Learning & Leading with Technology, 32(2), 12-15.

Rolston, H., III. (1975). Is there an ecological ethic? Ethics, 85, 93-109.

Rotenberg, M. (1998). Communications privacy: Implications for network design. In R. N. Stichler & R. Hauptman (Eds.), Ethics, information and technology: Readings. Jefferson, NC: McFarland & Company.

Schmidtz, D., & Willott, E. (2002). Environmental ethics: What really matters, what really works. New York: Oxford University Press.

Schummer, J., & Baird, D. (2006). Nanotechnology challenges: Implications for philosophy, ethics and society. London: World Scientific.

Sparrow, R. (1999). The ethics of terraforming. Environmental Ethics, 21(3), 227-240.

Stahl, B. C. (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology, 8, 205-213.

Stallard, C. H., & Cocker, J. S. (2001). The promise of technology in schools: The next 20 years. Lanham, MD: Scarecrow Press.

Stone, C. D. (1972). Should trees have standing? Southern California Law Review, 45, 450-501.

Swierstra, T. (1997). From critique to responsibility: The ethical turn in the technology debate. Society for Philosophy and Technology, 3(1). Retrieved January 12, 2007, from http://scholar.lib.vt.edu/ejournal/SPT/v3n1/swierstra.html

Tavani, H. T. (2004). Ethics and technology: Ethical issues in an age of information and communication technology. Hoboken, NJ: John Wiley & Sons.

Thorseth, M. (2006). Worldwide deliberation and public reason online. Ethics and Information Technology, 8, 243-252.

Turnitin. (2007). Retrieved January 12, 2007, from http://www.turnitin.com/static/home.html

Underwood, J., & Szabo, A. (2003). Academic offences and e-learning: Individual propensities in cheating. British Journal of Educational Technology, 34(4), 467-477.

Unger, S. (1982). Controlling technology: Ethics and the responsible engineer. NY: Holt, Rinehart and Winston.

U.S. Department of State International Information Programs. (2007). Frequently asked questions about biotechnology. USIS Online. Available from http://usinfo.state.gov/

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. New York: Freeman.

Wiener, N. (1948). Cybernetics: Or control and communication in the animal and the machine. Cambridge, MA: The Technology Press.

Wiener, N. (1954). The human use of human beings (2nd ed.). Doubleday Anchor.

Winner, L. (1977). Autonomous technology: Technics-out-of-control as a theme in political thought. Cambridge, MA: MIT Press.

Zubrin, R. (1996). The case for Mars: The plan to settle the red planet and why we must (pp. 248-249). NY: Simon & Schuster/Touchstone.
ADDITIONAL READINGS

Bunge, M. (1967). Scientific research II: The search for truth. New York: Springer.

Bunge, M. (1976). The philosophical richness of technology. In Proceedings of the Biennial Meeting of the Philosophy of Science Association, Volume 2: Symposia and invited papers (pp. 153-172).

Dreyfus, H. (2002). Thinking in action: On the Internet. London: Routledge.

Esquirol, J. (Ed.). (2003). Tecnoética: Actas del II Congreso Internacional de Tecnoética. Barcelona: Publicaciones Universitat de Barcelona.
Freedman, D. (2006). The technoethics trap. Inc. Magazine. Retrieved June 30, 2007, from http://www.inv.com/magazine/20060301/column-freedman_Printer_Friendly.html

Galván, J. (2001). Technoethics: Acceptability and social integration of artificial creatures. Retrieved June 30, 2007, from http://www.eticaepolitica.net/tecnoetica/jmg_acceptability%5Bit%5D.htm

Kaplan, D. (2004). Readings in the philosophy of technology. Rowman & Littlefield.

Moor, J., & Weckert, J. (2004). Nanoethics: Assessing the nanoscale from an ethical point of view. In D. Baird, A. Nordmann, & J. Schummer (Eds.), Discovering the nanoscale (pp. 301-310). Amsterdam: IOS Press.

Winner, L. (1992). Technology and democracy. Dordrecht and Boston: Reidel/Kluwer.
KEY TERMS

Applied Ethics: A branch of philosophy concerned with the use of ethical values within society.

Artificial Intelligence: Artificial intelligence (AI) refers to the creation of computer programs and devices for simulating brain functions and
activity. It also refers to the research program aimed at designing and building intelligent artifacts.

Bioethics: Bioethics is an area of applied ethics open to interpretation, with a variety of different views posited on the topic.

Discourse Ethics: Discourse ethics is an area of philosophical inquiry and communication study focused on rational communication and the normative values underlying discourse.

Ethics: Ethics is the study of moral conduct, i.e., conduct regarded as right or wrong, good or evil, or as what ought or ought not to be done.

Law of Technoethics: The Law of Technoethics holds that ethical rights and responsibilities assigned to technology and its creators increase as technological innovations increase their social impact.

Media Ethics: Media ethics is an area of applied ethics concerned with ethical principles and media standards.

Medical Ethics: Medical ethics is an area of applied ethics concerned with moral values in medicine, particularly in situations where moral values are in conflict.

Nanotechnology: Nanotechnology describes the alteration of matter at the level of atoms and molecules. Nanotechnology research is conducted in a variety of areas and disciplines.

Philosophy of Technology: The philosophy of technology is a branch of philosophy concerned with the nature of technology and its social implications.
Chapter II
A Multi-Disciplinary Approach to Technoethics

Marc J. de Vries
Delft University of Technology, The Netherlands
ABSTRACT

In this chapter it is argued that a multidisciplinary approach to technoethics is necessary to do justice to the complexity of technology. Normativity pervades all aspects of technology, including technological knowledge. As a consequence, ethical considerations should guide not only the production of artifacts, but also their design and the research that is needed to acquire knowledge about the artifact-in-design. Experts from different disciplines should cooperate to identify relevant ethical issues related to the various aspects of the reality in which the artifact will function.
INTRODUCTION: THE COMPLEXITY OF TECHNOLOGY

If there is one lesson that engineers and society in general have learned about technological development in the past decades, it is its complexity. In earlier days, such as the 1950s and 1960s, it seemed that technology was simply a matter of getting the technicalities worked out so that products would function well. The market would then absorb all new products, and there was always an interest in the latest gadgets. There was not yet any concern about environmental issues, nor were there economic constraints. There was not
yet a need to set limits on technological developments by legislation, and aesthetic issues could easily be dealt with afterwards, once the product had already almost been completed. It was as the slogan of the 1933 Chicago World's Fair suggested: Science Finds, Industry Applies, Man Conforms. It was only later, from the 1970s onward, that industrial companies were confronted with critical consumers, governments that wanted to exert an influence on technological developments, economic constraints, and a growing concern about natural resources. Then it became clear that in technological developments a range of aspects has to be taken into account in order to
bring about a successful product. Industrial companies have developed a whole range of methods to deal with this complexity, often formalized in ISO certification procedures (I have described these developments more extensively for the case of the research organization in the Philips Electronics company; see De Vries 2005ii). In all the aspects that create this complexity, ethics is somehow involved. For instance, one can question what is ethically permissible when trying to please the customer, or when trying to conform to new legislation without too much effort. It seems, though, as if ethical debates still tend to be reduced to only a few aspects. Often this is the aspect of environmental effects. In other cases, such as in communication technologies, it is perhaps the aspect of privacy. It may seem as if other aspects are less relevant. In general one can state that ethical debates are often reduced to risks, and particularly to calculated risks. In that case the ethical debate is reduced to the numerical aspect of reality. In this chapter it will be claimed that a proper ethical debate needs to take into account reality in its full complexity. As a consequence, such a debate needs input from knowledge about these various aspects. In other words, contributions from different disciplines are needed to get a proper insight into the ethical considerations surrounding new technological developments. A theoretical philosophical framework will be presented that is suitable for analyzing the complexity of technology and the range of disciplines that ought to contribute to ethical debates on technological developments.
BACKGROUND

Ethical considerations have probably always, in some way or other, accompanied the development of technology. But by the end of the 1960s ethical debates began to play a more explicit role, due to the fact that negative effects of technology started to become evident and social concern
about technological developments arose. This was also the period in which philosophers began to develop an interest in technology. Reflecting on the nature and effects of technology seemed to be a relevant contribution to discussions about how technological developments could be controlled socially. Until then this control had not been a great concern, as technology seemed to play a useful role in society and its development was not held back by economic concerns. But once it became evident that technology's role in society was less innocent than previously experienced, critical reflection on technology emerged as a new field in philosophy. At that time, it was primarily Continental philosophers who developed this new interest. Analytical philosophy at that time had a different agenda. It should be noted here that the distinction between Continental and analytical philosophy is nowadays becoming more and more problematic. But in those days, the differentiation between philosophers who were concerned with critical and cultural debates and those who were concerned with analyzing the meaning of terms was still pretty much the same as that between philosophers working on the European continent and philosophers working in Anglo-American countries. Two Continental philosophers who stand out in their critical writings about technology are Martin Heidegger and Jacques Ellul. Both saw technology as a threat to humanity. Heidegger stressed the way technology narrows our perspective on reality: instead of appreciating reality for its intrinsic value, through technology we tend to see it only as a resource, as something that still has to be processed to become valuable. Ellul pointed out that technology had become a system with a certain autonomy. People felt that they had lost control over technology, and in fact they had. Even for an individual engineer it was no longer possible to control the development of technology in any substantial way. In the case of both Heidegger and Ellul there is no ethics of any substance in their reflections. Although both pay much attention to the negative values that ac-
company technological developments (negative in their view, that is), they do not offer any practical guidelines for an ethics of technology. Well known is Heidegger's claim that "only a god can save us," now that we have become so dedicated to technology and its narrow view of reality; as he was agnostic, this is to be interpreted as a purely fatalistic expression. For Ellul there were hardly any possibilities to control technology, and therefore ethical guidelines would be powerless anyway. Not all Continental philosophers wrote about technology in a negative way. Karl Marx, for example, was very positive about the role of technology in bringing about a new communist society. As his view of history was fairly deterministic (the new society would come anyway), he, too, did not develop a clear ethics of technology. For him such an ethics was superfluous because, thanks to the role of technology, the coming of the communist society was not dependent on proper ethical behavior. So apart from Marx, the overall impression of Continental philosophical reflection on technology was not very favorable towards technological developments and their impact on society and culture. In most of these writings no difference was made between different technologies. The philosophers' claims were quite general. They also seemed not much informed by studies of the actual way new artifacts were developed. In the 1970s and 1980s a growing number of empirical studies of technological developments became available. These studies were inspired by similar studies of science that followed Thomas Kuhn's famous book The Structure of Scientific Revolutions (1962). In that book Kuhn showed that there are many more human values involved in science than suggested by earlier (positivist) philosophy of science. Later, other historians and sociologists started showing the same for technology. Out of this grew a whole 'school' called the Social Construction Of Technology (SCOT). The next step was that analytical philosophers took up an interest in technology. They realized
that the sweeping generalizations in the Continental philosophers' writings left much to be desired in terms of an accurate description of how technology really develops. They also realized that the empirical studies by historians and sociologists could serve as a useful resource for philosophical reflection aimed at analyzing the nature of technological developments. This could also create a more balanced view of the effects of technological developments, one that could replace the dark and gloomy images produced by the majority of Continental philosophers. Mitcham labeled this approach an 'engineering philosophy of technology', contrary to the 'humanities philosophy of technology' of the Continental philosophers (Mitcham 1994, p. 17). Some philosophers feared that this 'empirical turn' in the philosophy of technology (Kroes and Meijers 2000) might cause the whole issue of values, and related to that, the ethics of technology, to disappear, as the values were mostly hidden in the empirical studies. This, however, proved not to be the case. Analytical philosophers also took into account the normative aspects of technology. These aspects came into their reflections particularly via the concept of functions (Houkes 2006). Functions are what an artifact ought to do, not what it actually does. Ascribing the function of a screwdriver to a long and thin artifact with a wedged top means making a normative claim about the artifact: it should enable me to drive a screw into a piece of wood. As a consequence, knowledge about this artifact also has a normative component (De Vries 2006). It is knowledge not about things as they are (as in science), but about how they should be. This can also be recognized in the knowledge of norms and standards that is very much part of today's engineering knowledge. This opened ways of introducing ethical considerations into the study of the nature of artifacts and technological knowledge. In Continental philosophy, too, later developments led to a greater appreciation for ethical
considerations than in the days of Heidegger and Ellul. Contemporary philosophers of technology such as Don Ihde and Albert Borgmann, who consider themselves followers of Heidegger, have developed ethical guidelines for technology. But still, these guidelines are fairly limited to the way we experience reality (in the vein of Heidegger). Others, who followed more in the footsteps of Ellul, such as Andrew Feenberg and Langdon Winner, also paid attention to the ethics of technology, but the view they developed is limited in that they primarily focus on the way technology influences social order. Although the empirical studies they all draw from made them more aware of the complexity of technological developments, this had only a limited effect on their ethical considerations. The empirical studies gave a more precise picture of technological developments than the earlier Continentally oriented analyses. It became clear that different technologies develop in different ways (Sarlemijn 1993). This is due to the fact that a whole range of aspects influences these developments. Two technologies may be similar if seen from a physics point of view, but different if viewed from a social or economic point of view, and therefore they may develop in different ways. This means that ethical considerations for those technologies should not be limited to one aspect only (for example, the physical, the psychic or the social aspect), but should extend to the whole range of aspects that may have an influence. This demands a framework for analysis that captures this variety of aspects and indicates how ethics is involved in these aspects. In the next section such a framework will be presented.
A THEORETICAL FRAMEWORK FOR ETHICAL ANALYSES OF THE COMPLEXITY OF TECHNOLOGY

The framework that will be used here to analyze the ethics in the complexity of technology has been developed by a Dutch philosopher, Herman
Dooyeweerd. Dooyeweerd’s background was in law rather than in engineering, but his framework is quite suitable for analyzing the complexity of technology. He claimed that reality can only be described effectively in terms of a variety of aspects that can not be reduced to each other. Our direct and intuitive encounter with reality does not always reveal these aspects, as we experience reality as a whole. But human beings have the capability of separating out these aspects in an analytical stance. Thus they can consider the physical aspect of reality as separated out from reality as a whole. In doing that we start realizing that all things fall when dropped, often set other things in motion when hitting it, have properties related to heat and temperature, and can radiate or absorb light, just to mention some phenomena related to this aspect or reality. Humans can take a further step and examine these phenomena in a manner that sets more strict requirements for knowledge than ordinary daily-life beliefs abut these phenomena. That is what they do when they become involved in the scientific discipline of physics. Likewise they can focus on the economic aspect of reality and start realizing that all things can either be bought, buy or do both (when they are a slave). Doing this in a more systematic way brings them into the discipline of economics. In each of these disciplines certain regularities or ‘laws’ can be discovered. These are different for the different aspects. Even when there are similarities, such as in the case of the law of conservation, which holds both for energy in the physical aspect as in the economics aspect (“you can not spend your money and have it”), still the conservation of economical value can not be reduced to conservation of the physical means for payment (coins or bills). Based on such considerations Dooyeweerd concluded that the ambition of physicalists to reduce the scientific explanation of any phenomenon to physics (see, for instance, Seager 2000, for a more precise description of physicalism) is doomed to fail. His approach is very much a non-reductionist
approach. Analogies exist between the 'laws' in the various aspects, but this does not mean that they can be reduced one to the other. Dooyeweerd made an effort to spell out the various aspects. He ended up with fifteen (Dooyeweerd 1969). The philosophical school that grew out of his ideas, reformational philosophy, has always recognized the ambiguity in that number. It is mostly seen as one way of spelling out the complexity of reality, useful because it offers a perspective on the variety of aspects without turning each phenomenon in reality into the basis for a separate aspect. To a certain extent the list of fifteen aspects was also based on the variety of scientific disciplines existing in his time (he wrote his main publication about halfway through the 20th century). Table 1 presents the aspects as distinguished by Dooyeweerd himself. Dooyeweerd also made the observation that entities can act both as subjects and as objects in the various aspects. All entities can function as objects in all aspects, but not all can function as subjects. A tree is a subject in the biotic aspect (it lives); it can be an object in the kinematical aspect (it can be moved by the wind) and in the
developmental aspect (it can be cut down and made into planks). The tree cannot serve as a subject in all aspects. For instance, it can be bought (i.e., serve as an object in the economic aspect), but it cannot buy anything itself (serve as a subject in that same aspect). The same holds for animals. Only humans can be subjects in all aspects. Actions have consequences in each of the aspects. That is why only humans can be held responsible for their actions. There is a certain order in these aspects, as Dooyeweerd claimed. The higher aspects assume the existence of the lower aspects. To a certain extent this can be defended. It is not possible to conceptualize space without the concept of number (2-dimensional does not make sense without 2). Likewise motion is not possible without space. Life is not possible without a physical substrate. For the higher aspects Dooyeweerd's order claim seems more speculative. For instance, does one need language to have a social community or is it the other way round? But here, again, the analysis does not depend on the exact order, as long as we recognize that there is a transition from the psychic to the logical aspect, in that from then on only human beings can serve as subjects
Table 1. Aspects of reality according to Dooyeweerd (aspect: meaning; example of discipline in parentheses)

1. Numerical: things are countable (arithmetic)
2. Spatial: things occupy a certain space (geometry)
3. Kinematical: things can move or be moved (kinematics)
4. Physical: things can interact by mechanical cause-effect relations (physics)
5. Biotic: some things live or are part of other living beings' environment (biology)
6. Psychic/sensitive: people can observe things (psychology)
7. Logical/analytical: people can reason about things (logic)
8. Cultural/developmental: people develop things and make history (history)
9. Symbolic/linguistic: people represent things by names or other symbolic representations (linguistics)
10. Social: people can live and work together (sociology)
11. Economic: people can buy and sell things (economics)
12. Aesthetic: people can appreciate things for their beauty (aesthetics)
13. Juridical: people can make laws (legal sciences)
14. Ethical: people can assess things and events from an ethical point of view (ethics)
15. Pistic/belief: people can believe and trust in other people, things or higher beings (theology)
(I here assume that animals are still capable of having ‘mental states’, but they are not capable of self-conscious analytical reflection on reality, which is necessary to serve as a subject in the logical aspect; if one does not ascribe mental states to animals, the break is already after the biotic aspect, as it is for plants; see Scruton 1994, pp. 299-302).
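Dooyeweerd's subject/object distinction can be made concrete with a small illustrative sketch. The following Python fragment is only a reading aid, not a formalization the chapter itself offers: the aspect names come from Table 1, while the class, the example entities, and the "subject up to a limit" encoding are our own assumptions based on the discussion above (subject functions reach the biotic aspect for plants, the psychic aspect for animals, and all fifteen aspects for humans, while anything can function as an object in any aspect).

```python
# An illustrative model of Dooyeweerd's aspects; the framework itself is
# philosophical, so this encoding is only a reading aid, not a formalization.

ASPECTS = [
    "numerical", "spatial", "kinematical", "physical", "biotic",
    "psychic", "logical", "cultural", "symbolic", "social",
    "economic", "aesthetic", "juridical", "ethical", "pistic",
]

class Entity:
    """An entity functions as subject up to (and including) a given aspect,
    and as object in every aspect (e.g., a tree can be bought but cannot buy)."""

    def __init__(self, name: str, highest_subject_aspect: str):
        self.name = name
        self.limit = ASPECTS.index(highest_subject_aspect)

    def is_subject_in(self, aspect: str) -> bool:
        return ASPECTS.index(aspect) <= self.limit

    def is_object_in(self, aspect: str) -> bool:
        return True  # everything can function as an object in any aspect

tree = Entity("tree", "biotic")       # lives, but cannot observe or buy
animal = Entity("animal", "psychic")  # observes, but does not reason analytically
human = Entity("human", "pistic")     # subject in all fifteen aspects

for entity in (tree, animal, human):
    count = sum(entity.is_subject_in(a) for a in ASPECTS)
    print(f"{entity.name}: subject in {count} of {len(ASPECTS)} aspects")
```

Running the sketch prints that the tree is a subject in 5 aspects, the animal in 6, and the human in all 15, which mirrors the claim that only humans, acting as subjects across every aspect, can be held responsible for their actions.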
THE MULTIDISCIPLINARITY OF TECHNOETHICAL RESEARCH

Finally, Dooyeweerd claimed that each aspect is related to all the other aspects. This claim makes Dooyeweerd's approach interesting for examining the systems character of technologies, and several studies have already indicated the insightfulness of applying the Dooyeweerd approach to systems in technology, for example for information and communication technologies (Strijbos and Basden 2006). For technoethics this means that the question whether or not a new technology is ethically acceptable is not only a question in its own right, but is also related to all the considerations connected with the other aspects. For instance, it is related to the economic aspect: is it ethically acceptable to spend this or that amount of money on such a technology? It is also related to, for example, the juridical aspect: is it possible to translate ethical concerns into legislation for this particular technology? And, as a last example, it is related to the spatial aspect: is it ethically justifiable to use such and such an amount of space for this product (think, e.g., of the car taking away precious space in a downtown area)? The fact that ethical considerations pervade all aspects is very much the core of what this chapter intends to make clear: if technoethics is to be an area of research, then it is necessary that this research be multidisciplinary, because it needs to take into account all aspects of reality. Dooyeweerd's conceptualization in terms of aspects is meant to be an ontological one: the aspects exist in reality.
Therefore the aspects offer an ontological basis for the epistemological issue of multidisciplinarity. I will from now on use the term multidisciplinarity to indicate cooperation of experts from different disciplines. This is also the social approach that Margaret Boden (1997) takes in her taxonomy of multidisciplinarity (she uses the term 'interdisciplinarity', which to me seems more suitable for indicating a new sort of knowledge emerging from such cooperation; see De Vries 2005i, p. 46). One cannot separate out the ethical questions and isolate them from all other concerns about a technology-in-development or a technology-in-use. To be able to give a scientifically supported answer to the question of whether or not it is ethically acceptable to spend a certain amount of money on the development of a certain new technology, the discipline of economics needs to be called in because of its knowledge of the 'laws' that hold for the economic aspect. If we want scientific underpinning for an answer to the question of the possibilities of translating ethical concerns into legislation, the technoethicist must work together with the expert in legal sciences. And if we want a research-based answer to the question of whether or not it is justifiable to let a product occupy a certain amount of space, we need the knowledge of the city planner, who has knowledge about the spatial aspect of cities. This list of examples can be expanded to all aspects of reality. This claim is important when one intervenes in reality, because then knowledge about all the different aspects has to be brought together. As we saw, Dooyeweerd claimed that our first and most intuitive knowledge of reality is an integral knowledge in which the aspects are not yet separated. This separation happens when we start analyzing reality and then specialize in a certain discipline, related to a certain aspect. But when acting in reality, all aspects come together again, and so should our knowledge of all the aspects. Then we have again knowledge of reality as a whole, but now it is informed and enriched by
the knowledge about specific aspects that we have acquired in our disciplinary efforts. This notion makes us aware of the necessity to distinguish between the engineering sciences, in which we focus on knowledge of certain aspects (e.g., the physical aspect or the developmental aspect), and technology as intervention in reality (the designing and making of things). In the engineering sciences we can still afford to focus on certain aspects and leave out others, but in technology as the actual intervention in reality we must take into account all aspects. From this we can see that technoethical research must be multidisciplinary in nature in order to do justice to the complexity of reality, and more specifically, of technology. This, however, is by no means a simple matter. Experience shows that multidisciplinary research generally faces several barriers. In the first place, there is what C. P. Snow called the "two cultures" problem. There is mutual mistrust and lack of understanding between researchers in the natural sciences and those in the social and human sciences. These two groups use different terminology and different research methods. Even in cases where the same term is used, it carries such different connotations that misunderstandings arise. An example of this is the term 'correlation'. A psychologist finding a correlation of 0.40 between scores on two different attitude scales will consider this to be a significant relation; but when she tells this to a physicist, he will not be impressed, because for him only correlations of at least 0.98 are significant. This is because they have quite different research objects: if we see regularity in people's behavior, it is regularity in the midst of a lot of seemingly irregular behavior, but of an electron we expect that it behaves exactly the same way when an experiment is repeated (a worked illustration of this contrast follows at the end of this section). A second barrier is the usual organization of universities and research institutes. Mostly the traditional disciplines are the basis for this organization. There are faculties of mathematics, physics, chemistry, biology, engineering sciences, psychology, economics, sociology, and so on. Organizing or even applying for budget to do a
multidisciplinary research study is not an easy affair given such a disciplinary organization. Putting together a multidisciplinary research group whose members simultaneously have other obligations in their own faculties can cause a lot of difficulties. A third barrier is directly related to Dooyeweerd's claim that there are different 'laws' in the different aspects (although, as we noticed, laws in different aspects are sometimes similar, as in the case of the laws of conservation). Some of these differences have already been spelled out in the philosophy of science. One distinguishes, for instance, scientific disciplines whose aim is to find laws that apply equally in all places and at all times. These are called nomothetic disciplines (they seek to establish general laws). Physics is a typical example of such a discipline. Other disciplines aim for knowledge that describes precisely the peculiarity of certain things or events. The historical sciences are an example: they do not necessarily aim at finding general laws that explain a chain of events, but rather seek to give a very precise description of what exactly happened at a certain place and at a certain time. Such disciplines are called idiographic. In multidisciplinary research it may well be that disciplines from both categories need to work together. This means that knowledge of a highly generalized nature has to be combined with knowledge about specific situations. Another distinction is that between disciplines that study cause-effect relationships and those that study phenomena in which (human) intentions play a role. This differentiation is related to what Wilhelm Dilthey called the difference between 'erklären' and 'verstehen' (see, e.g., Scruton 1994, pp. 243-244). Physics is an example of the first category, as no intentions are ascribed to, e.g., electrons, and psychology is an example of the second category (at least, if one accepts that the old behaviorist ideal of explaining human behavior only in terms of cause-effect relationships has been abandoned). This is the reason why the law of conservation, which seems
to exist in both the physical and the economic aspect, does not work in the same way in these two aspects. The law of conservation of matter and energy works in a very strict way: the sum of matter and energy in any process always remains constant. In economics, though, the value of things depends on human intentions, and therefore one needs to distinguish between a physical coin, which one can only spend once, and one's shares on the stock market, whose value seems to change in the most random manner. Multidisciplinary research may involve the cooperation of disciplines from these two categories. From Schummer's recent review of the literature on multidisciplinarity (Schummer 2004) we can derive that there is not much philosophical literature on multidisciplinarity, in spite of its popularity in practice. Much of what exists is reductionist in nature: the search for a unification of disciplines (e.g. in the direction of physics or evolutionary biology). Other references (such as Boden) take a sociological rather than a philosophical (epistemological) view: they focus on cooperation between people from different disciplines rather than studying the content of the knowledge that is combined. Maybe the barriers sketched above are the cause of this lack of philosophical literature. Another cause may be the confusion that still exists with respect to the concept of a 'discipline' (Schummer 2004, pp. 10-11). Whatever the cause may be, the outcome is that not much is yet known about how multidisciplinary research can best be set up.
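To make the 'two cultures' contrast about correlations concrete, here is a minimal sketch (our illustration, not from the chapter) that simulates the two situations described above: noisy attitude scales sharing one latent trait, and repeated measurements of a law-like physical relation. The variable names, noise levels, and the Ohm's-law example are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Psychology-style data: two attitude scales driven by one latent trait
# plus large individual variation -- here r comes out around 0.4, which
# a psychologist would already consider a meaningful relation.
trait = rng.normal(size=n)
scale_a = trait + 1.2 * rng.normal(size=n)
scale_b = trait + 1.2 * rng.normal(size=n)
r_psych = np.corrcoef(scale_a, scale_b)[0, 1]

# Physics-style data: repeated measurements of a law-like relation
# (Ohm's law with R = 5 ohms) with tiny instrument error -- r is
# essentially 1, and anything below ~0.98 would suggest a faulty setup.
voltage = np.linspace(0.0, 10.0, n)
current = voltage / 5.0 + 0.01 * rng.normal(size=n)
r_phys = np.corrcoef(voltage, current)[0, 1]

print(f"attitude scales:      r = {r_psych:.2f}")   # roughly 0.4
print(f"voltage vs. current:  r = {r_phys:.4f}")    # roughly 0.999
```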
FUTURE TRENDS: NANOETHICS AS A CASE STUDY

To offer a perspective on the way future technological developments can be accompanied by a rich ethical discussion in which the whole range of relevant aspects is involved, the example of nanotechnology will be elaborated here. Nanotechnology is a technological domain that is often
mentioned as one of the most important ones for the coming decades. Nanotechnology could be one of the first developments in which ethical debates are organized in the early phases of development (Roco and Bainbridge 2002). In other technological developments, ethical debates often arose once the technology had already been developed to such an extent that the debates could only be used to fine-tune further development. But in nanotechnology there is an early awareness of the need for ethical reflection on this technology, even though most of it is still at the laboratory stage or even at the stage of speculation. This has even led to a new journal that is entirely dedicated to the ethics of nanotechnology (called NanoEthics, published by Springer). Nanotechnology is more an umbrella term than a concrete technology, but a common denominator is the manipulation of individual atoms and molecules as the ultimate, bottom-up way of building artifacts. This is the opposite of the top-down approach, in which small devices are created by breaking down larger ones. Nanotechnology is sometimes considered to be a mere extension of the existing trend of miniaturization, and most of what is at the production stage is not very different from existing technologies (except that the layers that are deposited are only a few atoms thick). Applications that are already in production are, for instance, coatings on sunglasses and coatings on clothes that make them self-cleaning. Most of what is at the laboratory stage is in two fields: electronics (integrated circuits at the nanoscale) and medicine (drugs that only release the active component near the organ or tissue where it is needed). Far-reaching are the prophecy-like writings of Eric Drexler (1986) and other representatives of the Foresight Institute. They foresee an enormous impact of nanotechnology on society once we are able to compose every desired structure by assembling atoms as if they were LEGO® blocks. Thus structures with any desired property would become feasible, assuming that the problem of the huge number of assembly actions that are needed to build such structures will be
solved. According to Drexler this is possible by using general assemblers as an intermediate step: first we build machines that build machines that build the desired structures and devices. Drexler claims that this is the way nature has also solved the problem (ribosomes would be the prototype for the general assemblers; others, though, have rejected this idea based on the observation that ribosomes are not general, but highly specialized assemblers). Let us now examine how ethics works in all the aspects in which nanoartifacts can act. We will go through the whole list only briefly. The aim of this exercise is more to show the sort of research agenda that emerges from these considerations than to give a full account of the ethics of nanotechnology. Several of the observations we will make have already been recorded in research and policy documents, but usually each of these documents focuses on specific aspects only. This analysis gives a fuller and more comprehensive impression of the complexity of nanoethics. Many experts in nanotechnology do not take Drexler's writings seriously, because according to them most of what he claims is not realistic and will never be realized. In our analysis, though, we will not limit ourselves to what is believed by all to be realizable, because the ethical acceptability of technologies-in-development does not depend on what can and what cannot be realized. The past has shown that more turns out to be realizable in the end than even experts had anticipated (well known is the expectation, in the early days of computers, that a few computers would suffice for the whole world). The direction in which one wants to go from an ethical point of view is at least as relevant as the distance one will be able to go in that direction. Therefore we will also take into account in our analysis such prophecies as Drexler's. One of the most important problems in realizing the ultimate aim of nanotechnology as people such as Eric Drexler envision it is the enormous number of atoms and molecules that
need to be processed in order to reach a structure at the microlevel. This concerns the numerical aspect of reality. As we saw, Drexler sees the solution in the use of general assemblers (a back-of-the-envelope sketch at the end of this section illustrates the numbers involved). But he noticed that in principle it is possible that this process will get out of hand and that the general assemblers will turn all matter in their environment into 'grey goo.' This raises the question of the ethical permissibility of manipulating such large numbers of individual atoms (note that this is different from manipulating large lumps of atoms together, as is done in traditional chemistry and physics). The spatial aspect is also important for nanotechnology, which is evident because the very name of this technology was derived from this aspect (the nanometer as the scale on which it operates). By definition nanotechnology works at a level at which we cannot see things. The devices that are built are so small that we cannot observe them. People already speculate about nanocameras so small that they would be totally undetectable. Is that ethically permissible? Privacy issues already arise for cameras that are detectable by a good observer; how much more pressing do these issues become when devices are truly invisible? The kinematic aspect and the physical aspect deal with the behavior of the nanoparticles. Although we have learnt a great deal about quantum mechanics, much of the behavior of nanoparticles and nanodevices is not yet fully understood. Is it ethically acceptable to do such manipulations without this understanding? Note again the difference from manipulating large lumps of atoms, for which perhaps the cause of the behavior was not always known (e.g. in the case of the development of steam engines), but the behavior itself was known. As for the biotic aspect, we do not yet know how nanodevices will interact with living beings. The possibility of a new asbestos-like problem has already been pointed out. These issues are all related to aspects in which nanoartifacts act as subjects (they come in numbers, they occupy space, they move and
interact with each other and with living beings). The biotic aspect is interesting in that respect because it raises the question of whether nanoartifacts can also be subjects in that aspect. In other words: can we claim that artifacts that have been built from scratch by manipulating individual (non-living?) atoms live? That would assume a view of life as something that is not principally different from the non-living, because both are the result of physical processes. But is it ethically correct to treat living tissue (assuming that it will be possible to derive living tissue from a process of manipulating individual atoms) as if it were a piece of cloth? Now we move to issues that are related to aspects in which nanoartifacts can only act as objects. The first of these is the psychic aspect: nanoartifacts can be perceived. But the way in which this happens is necessarily very indirect. Special devices called Scanning Tunneling Microscopes (STMs) scan the surface of layers of atoms and provide data that computers process into a pictorial representation of the layer. These pictures show 'fields' of bubbles, and each bubble represents an individual atom. 'Represents,' because physicists know that the atom is not really a sort of small ball, as the image suggests, but a cloud of statistical probabilities (the solution of wave equations). Such images illustrate articles on nanotechnology in popular magazines. The general public will think in terms of 'What You See Is What You Get' and therefore get a distorted image of what nanotechnology is about ('playing' with these tiny balls). How should one use such images in a manner that is ethically correct, i.e. that does not mislead people? Various problems relate to the logical aspect. Nanotechnology is so vaguely defined that any sort of analysis of it is hampered by confusion about what it is. This can be used in ways that are ethically questionable. In the developmental aspect it is a question whether nanotechnology is merely a next step in the evolution of materials science or a revolutionary new technology. If the first is the case, the ethical implications are much less
fundamental than in the second case. In the symbolic aspect, the name of the technology can give rise to ethical considerations, because it carries an almost magical connotation that is often used in applying for research money that perhaps would not have been acquired if a term such as 'materials research' had been used. So what is an ethically correct way of using the name 'nanotechnology'? The social aspect makes us aware of the possibility of a new social divide between people who will and people who will not have access to nanotechnology products. The more impact nanotechnology has on society, the more pressing this question will be. In the economic aspect we can pose the question of the ethical acceptability of investing in a technology about which there are still so many uncertainties. Should the money not rather be spent on more 'guaranteed' outcomes of existing technologies for the most urgent social and environmental problems? For the aesthetic aspect, the concept of harmony is important in Dooyeweerd's conceptualization. The question of whether or not nanoartifacts will function harmoniously with traditional artifacts has moral implications. The juridical aspect gives rise to the question of how ethically acceptable it is to develop a technology whose development is extremely difficult to guide by legislation, given all the uncertainties about its nature and effects. Finally there is the pistic or belief aspect. People such as Drexler express strong beliefs in the future of nanotechnology. This, in general, often serves as a driving force for technology (Schuurman 2003). How much belief in this technology can be justified ethically?
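Returning to the numerical aspect mentioned above, the following back-of-the-envelope sketch (our illustration; the atom count and placement rate are assumed orders of magnitude, not Drexler's own figures) shows why serial atom-by-atom assembly is hopeless, and why self-replicating general assemblers, with their exponential growth, both solve the problem and motivate the 'grey goo' worry.

```python
import math

atoms_in_a_gram = 5e22   # rough order of magnitude for light elements
ops_per_second = 1e6     # assumed atom-placement rate of one assembler

# Serial assembly: a single machine placing atoms one at a time
# would need on the order of a billion years for a gram-scale object.
serial_years = atoms_in_a_gram / ops_per_second / (3600 * 24 * 365)
print(f"one assembler, working alone: ~{serial_years:.1e} years")

# Exponential replication: if each assembler first builds a copy of
# itself, the population doubles every generation, so only a few dozen
# doublings are needed before the remaining work takes about a second.
doublings = math.ceil(math.log2(atoms_in_a_gram / ops_per_second))
print(f"doublings until ~1 second of work remains: {doublings}")

# The same exponential growth is what makes uncontrolled replication
# ('grey goo') an ethical worry in the numerical aspect.
```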
TECHNOETHICS BEYOND 'APPLIED ETHICS': A NEW CHALLENGE

The example of nanotechnology shows that the aspect-oriented analysis of ethical issues results in a broad agenda for ethical debate. The aspect-related considerations presented in the previous section are just a brief and fragmented sketch, and each of them needs to be elaborated. For
this, expertise from each of the aspects is needed. Therefore good nanoethics will be multidisciplinary in nature. Although the analysis presented above will in principle hold for any technology, the example of nanotechnology, as possibly one of the most prominent technologies of the future, illustrates par excellence the need for the multidisciplinarity of technoethical research. It will not be a simple matter to develop a truly multidisciplinary technoethics. In general, one can claim that technoethics is still in its infancy as an academic discipline. Most textbooks on engineering ethics merely apply the traditional ethical theories and approaches to the domain of engineering (see, for instance, Harris, Pritchard and Rabins 2000). Thus we get a deontic technoethics (consisting of codes of conduct), a utilitarian technoethics (leading to risk analyses) and a virtue-based technoethics (stimulating feelings of responsibility or loyalty to one's company), and all we are told to do is to find the correct mix of these approaches in specific situations. The utilitarian approach in particular is popular (for instance, it is the immediate translation of ethical issues in technology in Shrader-Frechette 1997). This, of course, is still a long way from an analysis of what makes technoethics different from ethics in other domains. Also, the issue of whether multidisciplinarity is more important for technoethics than for ethics in other domains has not even begun to be studied. The considerations presented here should therefore be seen as merely the beginning of a contribution to the development of a true discipline of technoethics. The Handbook in which this text is published will hopefully serve as a catalyst in that development. Technology has an enormous impact on our lives. Therefore it is very important to state the relevant ethical questions at an early stage of technological developments. A well-developed discipline of technoethics then becomes of great value.
CONCLUSION

In this chapter I have proposed a multidisciplinary approach to technoethics, based on a conceptual framework derived from the Dutch philosopher Dooyeweerd, in which reality is studied from fifteen different perspectives, each of which is related to one or more scientific disciplines. I have used the example of nanoethics to show that such an approach leads to a rich agenda for ethical debates and goes well beyond the level of risk calculations. I have also argued for developing technoethics as a discipline that goes beyond the level of 'applied ethics'. Hopefully this contribution will stimulate people to develop such a discipline.
REFERENCES

Boden, M. (1997). What is interdisciplinarity? In Cunningham, R. (Ed.), Interdisciplinarity and the organization of knowledge in Europe (pp. 13-26). Brussels: European Community.

Dooyeweerd, H. (1969). A new critique of theoretical thought, Vols. I-IV (transl. from Dutch by D.H. Freeman & W.S. Young). S.l.: The Presbyterian and Reformed Publishing Company.

Drexler, E. (1986). Engines of creation. New York: Anchor Press.

Harris, C.E., Pritchard, M.S. & Rabins, M.J. (2000). Engineering ethics: Concepts and cases. Belmont, CA: Wadsworth.

Houkes, W.N. (2006). Knowledge of artifact functions. Studies in the History and Philosophy of Science, 37, 102-113.

Kroes, P.A. & Meijers, A. (2000). Introduction: A discipline in search of its identity. In Mitcham, C., Kroes, P.A. & Meijers, A.W.M. (Eds.), The empirical turn in the philosophy of technology (pp. xvii-xxxv). Stanford: JAI Press.

Mitcham, C. (1994). Thinking through technology: The path between engineering and philosophy. Chicago: Chicago University Press.

Roco, M.C. & Bainbridge, W.S. (2002). Societal implications of nanoscience and nanotechnology. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Sarlemijn, A. (1993). Designs are cultural alloys: STeMPJE in design methodology. In Vries, M.J. de, Cross, N. & Grant, D.P. (Eds.), Design methodology and relationships with science (pp. 191-248). Dordrecht, The Netherlands: Kluwer Academic Publishers.

Schummer, J. (2004). Interdisciplinary issues in nanoscale research. In Baird, D., Nordmann, A. & Schummer, J. (Eds.), Discovering the nanoscale (pp. 9-20). Amsterdam: IOS Press.

Schuurman, E. (2003). Faith and hope in technology. Toronto, Ontario: Clements Publishing.

Scruton, R. (1994). Modern philosophy: An introduction and survey. London: Penguin Books.

Seager, W. (2000). Physicalism. In Newton-Smith (Ed.), A companion to the philosophy of science (pp. 340-342). Oxford: Blackwell.

Shrader-Frechette, K. (1997). Technology and ethical issues. In Shrader-Frechette, K. & Westra, L. (Eds.), Technology and values (pp. 25-32). Lanham, MD: Rowman & Littlefield.

Strijbos, S. & Basden, A. (Eds.) (2006). In search of an integrative vision for technology. Dordrecht: Springer.

Vries, M.J. de (2005i). Teaching about technology: An introduction to the philosophy of technology for non-philosophers. Dordrecht: Springer.

Vries, M.J. de (2005ii). 80 years of research at the Philips Natuurkundig Laboratorium, 1914-1994. Amsterdam: Amsterdam University Press.

Vries, M.J. de (2006). Technological knowledge and artifacts: An analytical view. In Dakers, J.R. (Ed.), Defining technological literacy: Towards an epistemological framework (pp. 17-30). New York: MacMillan.
KEY TERMS

Aspects (of Reality): Different perspectives for analyzing the behavior of entities in reality, each of which has 'laws' (regularities) of its own nature.

Complexity (of Technology): The manifoldness of artifacts' functioning in different aspects.

Multidisciplinarity: Cooperation of experts from different scientific disciplines.

Nanotechnology: The technology of building structures and artifacts by manipulating individual atoms and molecules.

Non-Reductionist Approach: The belief that laws in one aspect cannot be reduced to laws in other aspects.

Normativity (in Knowledge): Reference to what should be rather than to what is.

Reformational Philosophy: An approach in philosophy that was initiated by the Dutch philosopher Herman Dooyeweerd and that is inspired by Christian belief in the tradition of the 16th-century church Reformation.
Chapter III
Technoethics:
An Anthropological Approach

Daniela Cerqui, Université de Lausanne, Switzerland
Kevin Warwick, University of Reading, UK
ABSTRACT

Common ethical issues related to technology are formulated in terms of impact. With an anthropological approach, every technological device is considered as the result of a designing and building process through which social values are transmitted. The impact can be properly assessed only once these values are understood. The question of privacy is used here to illustrate the approach. It is then shown how human beings and machines are defined in reference to each other, the latter being considered as superior. Therefore, human beings try to improve themselves by using technology.
INTRODUCTION

Most of the time it is assumed that the relationship between technology and society can be understood quite simply as the influence of the former on the latter. As a result, social and ethical issues related to science and technology are usually tackled in terms of impact. However, with an anthropological approach, it is important to take into account that technology is not just a starting point for good or bad consequences. It is also the result of a designing
and building process. Anthropology aims at understanding the values that lie behind technology. The goal of this chapter is to show what an anthropological vision can bring to the understanding of the relationship between technology and society. By standing back from common ethical views, such an approach can provide an original framework with which to think about ethical and social issues in a different way. Therefore, by placing technological development back in its broad social and cultural background, this paper proposes a different view of classical ethical issues.
ANTHROPOLOGICAL VERSUS CLASSICAL APPROACHES TO TECHNOLOGY

Social and cultural anthropologists are involved in the study of differences between human cultures, and in the study of what human beings may have in common despite these differences. One common element is the use of technology, as there is absolutely no human culture without it (Leroi-Gourhan 1993). Therefore, the study of the relationship between technology on the one hand, and society—and more fundamentally humankind—on the other, is a very relevant topic for anthropology. Most anthropologists are more interested in other cultures than in their own. Nevertheless, our western society deserves to be studied at different levels. Understanding how technology is designed, produced, and used in our society is fundamental. The main anthropological questions are related to what kind of society we want to live in in the future. This implies that we need to stand back from the classical visions of technology. Broadly speaking there are two different classical approaches. The first one holds that there is a technological determinism. It may be technophile determinism, in which case the implementation of technology appears as necessarily synonymous with welfare, knowledge and prosperity for most people. Conversely, there may also be technophobe determinism, in which case technology is considered as intrinsically dangerous, the fear being that its implementation will lead to a huge disaster. In the second position, technology is neither good nor bad, but simply neutral. According to this standpoint, there is a good use and a bad use of technology, the goal of the good guys being to promote the first one. In this case, it is assumed that the user is responsible for what will happen, good or bad. Those sharing that view frequently use a very simple example: if you take a
hammer to drive a nail, it is good. If you take it to kill someone, it is bad. Moreover, we very often find a mix of neutralism and determinism in common discourse. A good example is the World Summit on the Information Society. Organized by a committee established under the patronage of Kofi Annan, the summit was initially called for in a resolution of the International Telecommunication Union, to be organized by the United Nations. The first phase was held in 2003 in Geneva. Its goal was to obtain a consensual point of view—which was not easy, given the need to reconcile the interests of different states, the business world and civil society—and to develop some operative action plans. The second phase, held in 2005 in Tunis, focused on the evaluation of the results. The World Summit on the Information Society web site1 explained the challenge as follows:

The modern world is undergoing a fundamental transformation as the industrial society that marked the 20th century rapidly gives way to the information society of the 21st century. This dynamic process promises a fundamental change in all aspects of our lives, including knowledge dissemination, social interaction, economic and business practices, political engagement, media, education, health, leisure, and entertainment. We are indeed in the midst of a revolution, perhaps the greatest that humanity has ever experienced. To benefit the world community, the successful and continued growth of this dynamic requires global discussion and harmonization in appropriate areas.

Most positions defended during the meetings assumed that we have no choice (determinism) and at the same time that we have to do the right things if we want to reach the right goal (neutralism). Despite their obvious differences, neutralism and determinism have something in common: they assume implicitly that technology simply exists, as a starting point, and that we just have to assess
its consequences. This is a common view for the man in the street, but also for people who try to identify the social and ethical issues related to technology. From an anthropological viewpoint, this is just half of the task: of course, technology has consequences that need to be accurately studied, but this question, closely linked with how we use technology, is only half of the problem for the anthropologist. When talking about impact, we consider technology as if it were a meteorite, fallen to earth by sheer accident. Then we consider the impact it will have on our lives, because we know it will have consequences even if we have absolutely no responsibility for it. This is certainly true as far as a meteorite is concerned, but technology is not a meteorite! We forget that it does not come from outer space. On the contrary, our main cultural values are embedded in all of our technological devices. As we are not always aware of these values, it is important to formulate them if we want to understand what kind of society, and more fundamentally what kind of humankind, we are building. From an anthropological view, technological devices have themselves to be considered as the result of a process. Behind technologies there are people who design and build them. The devices we produce are the concrete expression of our implicit vision of the world. All of us have an idea of what a human being and a life in society are. We also have an idea of what they should be in an ideal world, and this is closely linked to our cultural, moral and ethical values. Through the designing and production process, these values become embedded in the capabilities of our technological devices. The anthropologist is interested in identifying these values as a starting point. Therefore the first question to be asked is: why, or what for, do we develop the technologies all around us? In a nutshell, we first have to be aware of the society we are promoting through the technology we produce—that is, to have an idea of why. And then we can start wondering
which way we should use technology—that is, to think properly about how. Fundamentally, both good and bad uses are rooted in very deep common values.
METHODOLOGY: HOW CAN WE IDENTIFY THE VALUES?

Accessing the values is not an easy exercise, and the tools the anthropologist uses are mostly qualitative. Moreover, contrary to a deductive approach, in which the questions are deduced from theory, the anthropological approach is inductive: the questions arise from the empirical field and are theorized afterwards (O'Reilly 2005). According to so-called grounded theory, introduced in the 1960s by Anselm Strauss and Barney Glaser (1967), the right questions arise from the actors, and in their view one should always start a programme of research without any preconceived theoretical assumptions. However, it is difficult to follow them to this extent, as the researcher has at least a vague idea of what he or she is looking for—it seems that you never choose your field by chance: you already know something about it, and what you know is interesting enough to make you want to know more. In any case, the idea is to try to understand what the values of the actors are, without imposing any topic a priori, with a bottom-up approach grounded in what people really do and in the way they understand their own practices. First of all, observation of how people behave is a good indicator. If anthropologist researchers can mix with the population they are studying, they have good access not only to the practices, but also to the meaning the actors give to what they do (see Arborio and Fournier, 1999; Pétonnet, 1982; Peretz, 1998; Schwartz, 1993). A second step consists of carrying out comprehensive interviews with the actors themselves, according to the method described by Kaufmann (1997). In his view, the researcher must become
deeply engaged in the interview. The result should be a real discussion, with no imposed direction, rather than a question-and-reply exchange built like a questionnaire. The social and ethical issues people feel really concerned about will, it is felt, appear spontaneously during the interviews. If the interviewer has an idea about something which has not been mentioned, he/she can suggest it at the end of the interview. Nevertheless, whatever the reply, he/she must take into account that this was not a spontaneously mentioned issue, and give it its right place in the scale of the worries expressed by the actors. Hypotheses can then be formulated and confronted with further fieldwork. Later in the research, the researcher may use a quantitative approach to confirm on a large scale, with a questionnaire, what he/she was able to theorize (a top-down approach, with questions inspired by the theory). Considering the topic we are interested in here, the anthropological approach can be applied at different levels. First, at the level of the engineers who design and build technology: such a field allows access to their representations of the world (Cerqui 2002). Secondly, it can also be applied to all the users. Clearly the overall assessment of the reasoning behind technological development must be categorized with as much flexibility as possible. All kinds of social factors can affect the development of a particular branch of research, and it may be extremely difficult to focus on the underlying reasons behind a programme being undertaken. For example, it may be primarily a funding issue, in which case emphasis may need to be placed on the methods employed to obtain the funding and/or the secondary drive of the funders themselves. Conversely, a programme of research may be undertaken for 'purely' investigative reasons, in which case the underlying social motivations may well be absolutely fundamental to the culture of the researcher from an early age and not immediately apparent in any form of questioning.
This is exactly what an anthropological approach aims at identifying.
BEING COMMITTED

When working in a cultural environment that is not his/her original one, the anthropologist faces values that are sometimes very different from his/her own. In such a situation, it may be very difficult to understand why people behave the way they do. On the contrary, when working in his/her own society, the anthropologist has to stand back from his/her values in order to be able to properly understand them. As he/she shares these values, they seem obvious, almost natural, to him/her. In both cases, researchers in the social sciences sometimes feel that their duty is to understand and to describe the world, but not to act on it. Formally speaking, anthropology is an explicative science. However, for some of us, it is also very important to be committed, i.e. to make other people aware of the future they are all collectively building, and to warn about the related risks. This paper results from such a position, and even goes one step further with a collaboration between the anthropologist and the field: the two authors are an anthropologist interested in the study of the relationship between technology and society, and an engineer involved in cutting-edge research, for which he uses himself as a guinea pig. By physically merging with technology, he is actively promoting cyborgs as the next step in evolution. The anthropologist has spent more than two years in his department, trying to understand why he was doing what he was doing. An interesting dialogue was born between them, especially because they share some common values while disagreeing on other points. One of his experiments—in which he merged his nervous system with a computer—will be described in the next section. It will be shown that the values behind such an extreme experiment are in reality those of our
current so-called information society. This means that, to some extent, all of us share these values. Applied to a common issue in technoethics—the question of privacy—the reflection illustrates how the understanding of the deepest values (i.e. why things are done) can provide a new vision of classical matters (i.e. how to use technology in the best way).
PRESERVING PRIVACY IN A CONNECTED WORLD

An ethical issue often discussed in relation to technology is privacy. It is taken for granted that privacy is important, and the question is how to preserve it. This is a never-ending debate, mainly formulated in terms of impact, unless we stand back and try to understand what the problem really is. Privacy and individual freedom are old values in western society. However, we must be aware that they contradict other important values. According to people with power over our political or economic lives, as well as those from the scientific world, we are purported to have recently entered the information era, which is supposed to be synonymous with improvement in all fields. French discourse talks of the information society or the knowledge society, while English speakers also refer to information highways. All these phrases express differently the same idea: we are supposed to live in a radically new kind of society. This so-called information society is often considered as an unquestionable reality linked with the emergence and development of Information and Communication Technologies.2 From such a point of view, globalization—defined as an extension of the Western information society to the entire world—has to become a reality in order to obtain a better quality of life for everybody. Information is described as the most important source of wealth for individuals and for countries (see for example Gates, 1996
and Dertouzos, 1997), and it is expected to bring money and education to the whole world. This means that if, in the past, the industrial society needed efficient bodies to produce more and more, the information society nowadays needs efficient brains to deal with information. The keyword is access. To be successful in such a society, you need to access information, and the quicker you can access it, the better (Cerqui 2005). Computers are nowadays put everywhere in our environment. They are becoming ubiquitous. But, paradoxically, they are at the same time getting less and less visible and much smaller. Information technologies are also getting closer to the human body with each new breakthrough. Therefore, technological implants and direct brain-to-machine and brain-to-brain interfaces appear as the logical last step. If the device is implanted, there is no delay in accessing information. Even if it may still seem a science-fiction scenario to most people, implanting technological devices into the human body is now becoming reality. Pacemakers used for therapy are no longer the only application, and technology can now be implanted to enhance normal abilities. In 2002, the second author was described worldwide as the first cyborg in history, after a series of investigations involving the implantation of an electrode array, consisting of 100 silicon needles, into his nervous system. Linking the human brain in such a way directly with a computer brain opens up a variety of issues. Firstly, it creates a cyborg in the sense that the overall brain of the individual concerned is part biological, part technological—the original human brain will adapt as it sees fit. Secondly, the cyborg is endowed with abilities which are clearly not apparent in regular humans. Indeed, the 2002 experiment was not solely therapy oriented, but was undertaken more for investigative purposes, to assess what enhancements of human beings are possible (Warwick et al. 2003; Gasson et al. 2002). The neural interface allowed for a bi-directional information flow. Hence perceivable stimulation
current enabled information to be sent directly onto the nervous system, while control signals could be decoded from neural activity in the region of the electrodes. In this way signals could be sent from the nervous system to a computer, and also from the computer to be played down onto the nervous system, with a signal transmission range of at least 10 metres. A wide range of experiments was carried out with the implant in place (Warwick et al., 2004; Gasson et al., 2005). Neural signals were directly employed to drive a vehicle around, to operate networked technology in order to switch lights and other artefacts on and off, and to operate a robot hand across the Internet. At the same time, because the implant was bi-directional, it was quite possible for the human involved (the second author) to receive and comprehend incoming ultrasonic signals, to "feel" what the robot hand was feeling, and to communicate in a telegraphic way via the same route. Concretely, with a finger movement, neural signals on the nervous system were transmitted to a computer and out to a robot hand. Sensors on the hand's fingertips were then employed to pick up signals which were transmitted back onto the nervous system. In tests, whilst wearing a blindfold, it was not only possible to move the robot hand with neural signals, but also to discern with high accuracy how much force the robot hand was applying to an object being gripped. This experiment was carried out, at one stage, via the Internet, with KW at Columbia University in New York City but with the hand at Reading University in the United Kingdom. What this means is that when the nervous system of a human is linked directly with the Internet, the Internet effectively becomes an extension of their nervous system. To all intents and purposes the body of the individual does not stop where the human body usually stops, but extends as far as the Internet takes it. In this case, the brain was able to directly control a robot hand on a different continent, across the Atlantic Ocean.3
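To make the structure of this closed loop explicit, here is a minimal sketch in which stub classes stand in for the implant electronics and the remote robot hand. All names, signal conventions, and numbers are our illustrative assumptions, not the actual experimental software used at Reading.

```python
import random

class ElectrodeArray:
    """Stand-in for the 100-needle implant: reads motor activity and
    writes stimulation current back onto the nervous system."""

    def read_motor_signal(self) -> float:
        # Decoded movement intent (e.g. desired grip strength, 0..1).
        return random.uniform(0.0, 1.0)

    def stimulate(self, level: float) -> None:
        # Perceivable stimulation, felt by the subject as pressure.
        print(f"stimulating nerve at level {level:.2f}")

class RemoteRobotHand:
    """Stand-in for the robot hand reached across the Internet."""

    def grip(self, strength: float) -> float:
        # Apply the commanded grip; return the fingertip force reading.
        return min(strength, 1.0) * 0.9

implant, hand = ElectrodeArray(), RemoteRobotHand()
for _ in range(3):
    intent = implant.read_motor_signal()  # nervous system -> computer
    force = hand.grip(intent)             # computer -> Internet -> hand
    implant.stimulate(force)              # force sensor -> nervous system
```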
When using the Internet, all of us merge metaphorically with technology, either in order to exchange data with people around the world or to access information. The second author did it for real when he physically merged his nervous system with the network. According to Virilio (1995), the history of humankind has seen three major revolutions that point towards an ever-increasing speed in getting in touch with the world. The first one—in transportation—allowed humankind to master space by achieving the ability to move through it. The second revolution—that of transmission or communication—permitted a mastery over time, and allowed the elements of mankind's environment to reach him faster than if he were forced to move himself in order to obtain them. And the third revolution—that of transplantation—shortens the process even more by directly incorporating the information into the organism. It is assumed that information technologies must by definition be transparent. Nevertheless, as far as privacy is concerned, it is obvious that transparency contradicts respect for the private sphere. Until we are aware of that, we will never solve the problem, because each coin has two sides. In this case, either information must circulate without any boundaries and everybody can access everything in real time, or we want our privacy to be preserved. But we cannot have both. We must choose between two important values that are no longer compatible.
HUMAN BEINGS AND MACHINES

Human beings and machines, especially robots, are defined in reference to each other: we build machines in our image, and we define ourselves according to them, in a never-ending circle (Cerqui 1997). Neither the idea of building an artificial creature nor the description of the human body as a machine is new, but they remained mainly
metaphorical throughout human history until recent times. Then cybernetics, as defined by Norbert Wiener in the 1940s, gave the necessary theoretical background for such a vision of human beings and machines in reference to each other to be translated into concrete practice. According to cybernetics, identity is not given by the matter, but by the code organizing the material elements. As a result, you can change the matter without modifying identity, as long as you respect the code. In other words, it is assumed that living elements and technological parts can be exchanged without any problem if they are organized in the right way (internally, and in their relationship with the environment). Cybernetic thinking has a strong influence on the current trend to create bio-inspired robots, which can self-organise in order to give birth to a form of intelligence, or even life. In these fields, the researchers assume that life and intelligence are emergent properties in human beings, and they try to make the same sort of phenomenon appear in a machine (i.e., machines considered as living organisms). This philosophy is also behind implant technology, as it is behind genetics and cloning, which imply that one is able to master the code of life (i.e., human beings treated as machines) (Cerqui, forthcoming). Such a view of humankind is considered by some authors as a limit to traditional ethical reflection. For instance, according to Hottois, the anthropological difference, which is the foundation of intrinsic human dignity, is disappearing (1999: 57). He is not sure that we are still allowed to talk about a human ontological specificity now that we know we are made up of an arrangement of atoms like animals, flowers and stones. Even if living and non-living elements can be exchanged with each other, there is nevertheless a hierarchy between them, as was shown in a survey carried out in 2002 at the Swiss Federal Institute of Technology (Arras and Cerqui 2003). A total of 2,000 persons were asked a series of questions related to their
perception of robotics. Robots were associated with the almost "classical" qualities of machines—precision, reliability, rationality—whereas human beings were associated with words such as sensations or feelings. Moreover, robots were also considered as more perfect than humans, whatever perfection meant to the people interviewed. The survey corroborated that human beings are usually associated with warm qualities while robots are related to cold qualities. A closer analysis shows that perfection seems to be related to the possession of these cold qualities. Paradoxically, it may well transpire that humans are assessed as better, particularly in education, when they have more of the cold qualities normally linked to machines. From an anthropological point of view this means that the "warm" qualities are no longer those which are considered best in our society. In such a context, treating human beings as machines does not just allow us to reproduce the elements they are made of in the case of disease or disability. It also opens the way to the improvement of these elements in comparison with standard abilities.4 However, there is a conceptual difference between repairing and improving human beings, and that difference expresses two long-standing conceptions of the perfect or imperfect dimension of human beings: if you just repair, it means that you are in a certain way happy with humankind as it is, that is, when people are not disabled. In Leroi-Gourhan's view, technology is almost biologically linked to the human body (Leroi-Gourhan 1993; Cerqui 1995). Essentially, when the human being stood up, his hands became free for an activity other than walking, and technology appears to be the natural continuation of this. From this particular point of view, it follows that if one compares animals and humans as far as their respective technical abilities are concerned, it can be concluded that in general the animal has morphological specialisations; it has tools in its body. For instance, the crab is very good
at nipping or pinching: it is better at that than a human being. But it is limited to this activity; contrary to the human being, who is able to do a lot more by using technology, the crab is only able to nip. Morphological specialisation places important limits on animal activity. Humans, it is therefore surmised, have no such sharp physical specialisations, but they have a specialised brain, which permits them to think of technology as an external specialisation. They are in a sense physically unspecialised, and technology is a natural extension because humans are planned by nature to think like this, thanks to their brain. For Leroi-Gourhan, as for other authors (see for instance Bergson 1928: 151 or Le Roy 1928: 20), this makes humankind superior to other animals. In such a way of thinking, the human body does not need to be technically improved; at most, such a view can justify a medical process of restoration. On the contrary, some authors consider the human being as a naturally imperfect being. For instance, in Gehlen's (1990) view, humankind needs technology as a prosthesis. Where Leroi-Gourhan sees an unspecialised being that is able to do everything, Gehlen sees, on the contrary, a disabled being. For the former, the human being is free to specialise itself in everything thanks to its indetermination, while for the latter, it has to use technology because of its indetermination. It seems that this second point of view—which considers that humankind has to be (or is at least suited to be) improved—is dominant in our western values.
HOW TO DRAW THE LINE?

Ethics is by definition both prescriptive and relative, which means that it depends on the cultural context, that is to say, on the consensual values at a certain moment and in a certain place. For instance, French ethical committees usually insist on the indivisibility of the body and the person,
while the American tradition considers that the person is the owner of the body (Hottois 1999: 61, 63). Naturally, positions taken on the same subject differ significantly from each other. Furthermore, in our western ideology technology has become the way to realise our aspirations and to give concrete expression to our values (for instance freedom and efficiency), which means that it is difficult to limit it. We could even argue that there is no place for such boundaries in our contemporary western values. Miquel and Ménard (1988) studied how technology was used in different societies. They concluded that we are the first society in which technology is not codified within boundaries defined by social values: technology is itself a social value, and its specificity is that it does not have limits. In such a situation, it is very difficult to define ethical boundaries, because doing so runs counter to our values themselves. Contemporary ethics is not able to set long-term limits, because the idea of boundaries is a totally closed book to our technological system. That is perhaps the greatest problem with our current ethical reflection, and it appears clearly if we think about how ethical limits usually work in our society. They always lag behind technological developments and are only able to set a temporary boundary to something considered dangerous at a certain moment; however, this is only a short-term limit which does not destabilise the general movement. Ethical arguments cannot restrain science and technology in their general progression. That is especially true when we consider information technologies, which are less discussed than biotechnologies. Concerning information technologies, ethical reflections usually focus on subjects such as surveillance and privacy; human, and especially feminine, dignity in pornography on the Internet; authors' rights in the Internet diffusion of texts or pictures; and so on. In the ethical committees dedicated to biotechnologies, it is sometimes argued that a line has to be drawn between therapy and enhancement.
However, if science and technology give us the power to try to reach perfection, its criteria are never absolute. They continuously evolve as a function of what is technically feasible and contribute to a continuous shift in what is considered normal. The definition of normality evolves, depending on scientific and technological thresholds. Consequently, ethical reflection, which is supposed to draw lines, has to take into account that the boundaries of what is acceptable are continuously shifting. Therefore, reactive ethics is useless, as ethics committees accept one day what was unbearable to them the day before. When Dolly the sheep was cloned, the debate was about human cloning versus non-human cloning. A few years later, it was already about therapeutic human cloning versus reproductive human cloning. Human cloning, which had been unacceptable a few years before, had become accepted, at least to some extent. Therefore, we may wonder how long human reproductive cloning will remain unacceptable to us. Cloning is one example of how we try to master life, and there is no reason to think that we will stop. Therefore, we should be able to think about the kind of society, and the type of human beings, we are building with our inability to set social boundaries on our technology. It seems to be taken for granted by most of us that everything feasible ought to be done, whatever the results are. Our faith in technology is strong enough to let us think that all problems can be solved with a new technological solution. It is now very important to develop a long-term ethical reflection. Ethics has for a long time been based on the implicit idea that human nature is one and unchangeable. In this view, the human essence is constant and is not an object for techne (Jonas, 1990: 22). Ethics thus applies to things that we do to human beings and that we consider bad—according to our conscious values and to an implicit and consensual definition of what a human being is. However, the human being has now itself become an object whose essence could be changed.
In such a situation, where and how can we find the values on which to found our ethical arguments? At this stage, should we not base our ethical reflection more on the ontological definition of humankind than on relative values? The implicit definition of humankind has to become explicit, because it is at stake. As Jonas (1990: 13-14) maintains, it is only if and when we are able to predict the deformation of humankind that we know how to protect ourselves from this deformation: in other words, we know what is at stake only when it is at stake. Jonas also suggests that we create the "ethics of the future," whose first principle should no longer be found in an ethic considered as doing-oriented thought, but in metaphysics considered as being-oriented thought (1990, 1998). Such a reflection should not make us forget our present obligations, but should anticipate the future (Lelièpvre-Botton 1997: 13). It is important to think of what we do to people, but it is important too to think about what kind of society—and what kind of human being—we are creating. As a result, we need to develop anticipative ethics. That means being able to understand what our ultimate goals are when developing technology, and to look beyond both good and bad uses. We must foresee plausible scenarios for the future, and assess them, before they happen.
KEY TRENDS

This entire chapter has been written on the basis that technology and its associated ethical values evolve hand in hand, colored by the cultural biases and societal tides that are pertinent in terms of both space and time, that is, where and when. What has been referred to here is the emerging, impulsive trend of machine intelligence, which will, over the forthcoming years, have more and more of a say itself, to the possible ultimate extent of humans being subject to the ethical constraints of machines rather than vice versa.
We have also considered the more immediate merger between humans and machines and the potential effects this will have on our value sets and regulatory decisions. Just who is meant by "our" is, though, the most important aspect—will it be a human value set or will it be the value set of a cyborg, an upgraded form of human? Managing the changes even now is difficult—how do humans deal with the possibilities of some people having extra senses or the ability to communicate by thought alone? Should technological development be stopped (we pointed out that this is not realistically possible)? Should we embrace the changes wholeheartedly and make commercial gain whilst we dramatically change (as if from a cocoon) into more complex, more complete, more powerful individuals? Technological evolution in the form of more intelligent machines, communication by thought signal, extrasensory input and direct neural control (by means of the Internet) over extreme distances is appearing as this is being written. Our ethical basis for survival and growth must necessarily evolve hand in hand with this. Anthropologists need to evolve with their field.
REFERENCES

Arborio, A.-M., & Fournier, P. (1999). L'enquête et ses méthodes : l'observation directe. Paris: Nathan.

Arras, K. & Cerqui, D. (2003). Do we want to share our lives and bodies with robots? Retrieved from http://asl.epfl.ch/index.htm?Content=member.php&SCIPER=112713

Bainbridge, W. & Roco, M. (Eds.) (2002). Converging technologies for improving human performance: Nanotechnology, biotechnology, information technology and cognitive science. Arlington: National Science Foundation.

Bell, D. (1973). The coming of post-industrial society: A venture in social forecasting. New York: Basic Books.

Bell, D. (1999). The axial age of technology. In The coming of post-industrial society. New York: Basic Books.

Bergson, H. (1928). L'évolution créatrice. Paris: Alcan.

Cerqui, D. (1995). L'extériorisation chez Leroi-Gourhan. Lausanne: Institut d'Anthropologie et de Sociologie.

Cerqui, D. (1997). L'ambivalence du développement technique : entre extériorisation et intériorisation. Revue européenne des sciences sociales, 108, 77-91.

Cerqui, D. (2002). The future of humankind in the era of human and computer hybridisation: An anthropological analysis. Ethics and Information Technology, 4(2), 1-8.

Cerqui, D. (2005). La société de l'information, de la médiation à l'immédiat. In G. Berthoud, A. Kündig & B. Sitter-Liver (Eds.), Société de l'information : récits et réalités, actes du colloque 2003 de l'Académie suisse des sciences humaines (pp. 311-321). Fribourg: Academic Press.

Cerqui, D. (forthcoming). Humains, machines, cyborgs : le paradigme informationnel dans l'imaginaire technicien. Genève: Labor et Fides (collection Champs éthique).

Cerqui, D. & Warwick, K. (2005). Can converging technologies bridge the gap? Proceedings of the CEPE 2005 Conference (Computer Ethics: Philosophical Enquiry), University of Twente, Netherlands.

Cerqui, D. & Warwick, K. (to be published). Prospects for thought communication: Brain to brain and brain to machine. In K. Kimppa, P. Duquenoy & C. George (Eds.), Ethical, legal and social issues in medical informatics. Idea Group.

Dertouzos, M. (1997). What will be: How the world of information will change our lives. San Francisco: Harper.

Gasson, M., Hutt, B., Goodhew, I., Kyberd, P. & Warwick, K. (2002). Bi-directional human machine interface via direct neural connection. Proceedings of the IEEE International Conference on Robot and Human Interactive Communication, 25-27 September 2002 (pp. 265-270), Berlin, Germany. New York: IEEE Press.

Gasson, M., Hutt, B., Goodhew, I., Kyberd, P., & Warwick, K. (2005). Invasive neural prosthesis for neural signal detection and nerve stimulation. International Journal of Adaptive Control and Signal Processing, 19(5), 365-375.

Gates, B. (1996). The road ahead. London: Penguin Books.

Glaser, B. & Strauss, A. (1967). The discovery of grounded theory. Chicago: Aldine.

Hottois, G. (1999). Essai de philosophie bioéthique et biopolitique. Paris: Vrin.

Jonas, H. (1990). Le principe responsabilité. Paris: Cerf.

Jonas, H. (1998). Pour une éthique du futur. Paris: Payot.

Kaufmann, J.-C. (1997). L'entretien compréhensif. Paris: Nathan.

Lelièpvre-Botton, S. (1997). L'essor technologique et l'idée de progrès. Paris: Ellipses.

Leroi-Gourhan, A. (1993). Gesture and speech. Cambridge, MA: MIT Press.

Le Roy, E. (1928). Les origines humaines et l'évolution de l'intelligence. Paris: Boivin.

Miquel, C. & Ménard, G. (1988). Les ruses de la technique : le symbolisme des techniques à travers l'histoire. Montréal: Boréal / Méridiens Klincksieck.

O'Reilly, K. (2005). Ethnographic methods. New York: Routledge.

Peretz, H. (1998). Les méthodes en sociologie : l'observation. Paris: La Découverte.

Pétonnet, C. (1982). L'observation flottante : l'exemple d'un cimetière parisien. L'Homme, 22(4), 37-47.

Richta, R. (1969). La civilisation au carrefour. Paris: Anthropos.

Schwartz, O. (1993). L'empirisme irréductible. In N. Anderson (Ed.), Le hobo, sociologie du sans-abri (pp. 265-368). Paris: Nathan.

Virilio, P. (1995). La vitesse de libération. Paris: Galilée.

Warwick, K., Gasson, M., Hutt, B., Goodhew, I., Kyberd, P., Andrews, B., Teddy, P. & Shad, A. (2003). The application of implant technology for cybernetic systems. Archives of Neurology, 60(10), 1369-1373.

Warwick, K., Gasson, M., Hutt, B., Goodhew, I., Kyberd, P., Schulzrinne, H., & Wu, X. (2004). Thought communication and control: A first step using radiotelegraphy. IEE Proceedings on Communications, 151(3), 185-189.
KEY TERMS

Cyborg: Cybernetic organism; an entity that is part human/animal and part machine. In our viewpoint, it is a mixed human/machine brain that is of interest.

Extra Sensory Input: Sensory input beyond the spectrum of the human "norm," including signals such as ultrasound, infrared, and X-rays. Here we are concerned with direct input to the human nervous system and brain, not with conversion to a normal human sense (e.g., an X-ray image can be converted to a picture for visual input).
Implant: Here, the interface between the human brain and a computer; typically a type of electric plug that is fired into the nervous system.

Machine Intelligence: The mental attributes of a machine, as opposed to those of a human; not a copy of human intelligence.

Thought Communication: The ability to communicate directly, possibly electronically, from brain to brain, with no need for laborious conversion to/from pressure waves (i.e., speech).
ENDNOTES

1. http://www.itu.int/wsis/about/about_WhatlsWsis.html

2. Contrary to what might be believed, such ideas are not so new. Some authors (see, for example, Richta, 1969) described the same concept many years ago without naming it, or under another name. Bell was one of the first to theorize about that society while giving it a name: according to him, we are supposed to be in a post-industrial society (Bell, 1973, 1999). In his view, there are five fundamental criteria that define that society: (1) transition from a material goods production system to a service economy (mostly health, teaching, research and administration); (2) employment structures change, with an increase in highly qualified professionals and technicians; (3) centrality of theoretical knowledge capable of generating innovation and economic growth; (4) emergence of new technologies of the mind; (5) an increasing mastery of technological and social developments. In short, Bell describes an extension of the service sector, whose main condition of existence consists in the fact that information must constantly circulate. That explains the importance given to information technologies.

3. It is interesting to point out that such an experiment can be understood in two different ways, opening a philosophical debate about how the boundaries of the body are perceived (see Cerqui and Warwick, to be published). On the one hand, it could be argued that the body is considered an interference, the goal being to connect the brain directly with the environment; on the other hand, it could also be considered that the body is extended by technology.

4. There is, for instance, a series of reports produced in the framework of the American National Science Foundation and the Department of Commerce in which it is clearly claimed that technologies must converge to improve human beings (Bainbridge and Roco, 2002). It is very interesting to analyse their formulated and taken-for-granted criteria for improvement (see Cerqui and Warwick, 2005).
Chapter IV
A Technoethical Approach to the Race Problem in Anthropology

Michael S. Billinger
Edmonton Police Service, Canada
ABSTRACT

Despite the fact that analyses of biological populations within species have become increasingly sophisticated in recent years, the language used to describe such groups has remained static, thereby reinforcing (and reifying) outdated and inadequate models of variation such as race. This problem is further amplified when the element of human culture is introduced. Drawing on Mario Bunge's work on technoethics, in which he asserts that technology should be subject to social and moral codes, this chapter argues that the 'race problem' should compel anthropologists to exploit technology in order to find workable solutions. One solution to this problem may be found in modern approaches to human skeletal variation using advanced computing techniques such as geometric morphometrics, which allows for the comparison of bone morphology in three dimensions. Coupled with more complex theories of social and genetic exchange, technologically advanced methodologies will allow us to better explore the multidimensional nature of these relationships and to understand how group formation occurs, so that a dynamic approach to classification can be developed.
INTRODUCTION

Despite the fact that the race concept has been vigorously critiqued by anthropologists for over a century, it remains both a conceptual and terminological artefact in contemporary studies of human variation. This is commonly known as the
‘race problem.’ Race not only has contentious sociological connotations, but the concept itself has been shown to be inadequate on a biological level (in terms of its classificatory or taxonomic utility), whether applied specifically to humans or to other geographically variable species. Nonetheless, the race concept continues to appear in a consistently
large minority of anthropological studies (Cartmill, 1998, p. 655). Do anthropologists therefore have an ethical obligation to abandon the race concept, or at least strive to find a workable solution? In order to answer this question, this chapter will focus on four subsidiary questions: (1) what is the role of race in anthropology; (2) is race solely an anthropological problem; (3) is there an ethical dimension to the race problem; (4) how can technology be used to solve the race problem? I will argue in this chapter that Bunge’s (1976, 1977) notion of technoethics—that technology has inherent moral codes—compels us to utilize technologically sophisticated methodology for the resolution of ethical dilemmas such as the race problem. The solution I propose is a combination of old theory and new technology, using the example of 3-dimensional (3D) imaging and geometric morphometric analysis of skeletal morphology to explore the multidimensional nature of human biological relationships, moving beyond the outdated race concept.
WHAT IS THE ROLE OF RACE IN ANTHROPOLOGY?

The race concept in general, and the use of racial classification in anthropology in particular, are well researched as theoretical problems and remain popular topics of academic inquiry. The race debate initiated by such esteemed anthropologists as Ashley Montagu and Claude Lévi-Strauss in the 1940s and 1950s1, in response to the rising popularity of eugenics programs worldwide, seems to have reached its climax in the mid-1990s, when much of the scientific world was appalled by the research coming out of the discipline of evolutionary psychology. Evolutionary psychologists such as Herrnstein and Murray (1994) and Rushton (1995) argued that inherent intellectual capabilities could be predicted by racial group membership. Much of the criticism of the race concept at that time was aimed specifically
at this type of research, which drew a direct correlation between race, intelligence, and social achievement. It was presumed that these correlations were demonstrated both by differences in average brain size between racial groups and by scores on intelligence tests. Perhaps the most significant response to this line of argumentation was Gould's The Mismeasure of Man (1996), in which he attacked the fundamental premise of such evolutionary psychologists: that measurable differences in average cranial capacities and/or brain size seen between so-called racial groups were indicative of differences in cognitive and cultural capabilities. Gould collected and analysed craniological data to demonstrate that the racial differences in cranial capacities claimed by the early craniologist Samuel Morton (1839) were created by numerous flaws and errors in his methodology. This struck a huge blow against racial science, as it clearly demonstrated that Morton had purposely manipulated his data in order to promote socially-based theories of racial inequality. Similarly, Gould argued that evolutionary psychology is based on the same pre-conceptions found in Morton's work—misunderstood or misapplied evolutionary theory—ignoring such issues as the relationship between cranial capacity and overall body mass, sex-based differences in cranial capacities, and the cultural and linguistic problems inherent in applying intelligence tests to diverse groups. Unfortunately, Gould's work represents the pinnacle of the anti-race movement in science. The majority of critical perspectives on the science of race have served to shed light on the historical development of thought about human difference and the place of humans in nature while neglecting the development of methodological solutions. Rather than demonstrating the inadequacies of racial classification and proposing solutions for moving beyond the present state of stagnation in the race debate (Billinger, 2006), many contemporary approaches focus too narrowly on particular aspects of racism (as
a sociological problem) as opposed to racialism (as a biological problem). As such, race remains a reality in the evolutionary sciences, including anthropology. Surveys of the current state of theoretically-based racialism in anthropological literature demonstrate that while attitudes have shifted significantly in favour of the rejection of racial categories, the use of race as an explanatory model is persistent. This is a particular problem in physical anthropology, in which analyses of skeletal morphology are used to map relationships between human groups. Cartmill (1998) reported that articles appearing in the American Journal of Physical Anthropology (AJPA) dealing with human variation remained consistent over 30 years in their utilisation of the race concept, with it appearing in 34% of articles in 1965 and 35% in 1996. Lieberman et al. (2003) have challenged Cartmill’s results using a slightly different methodology, explaining that the initial attacks on the racial paradigm by Montagu in 1941 and 1942, Washburn in 1953, Livingstone in 1958, Brace in 1964, and Lieberman in 1968, saw the use of the race concept decline in the AJPA from 78% of articles in 1931 to 36% in 1965 and down to 28% in 1996. Lieberman et al. (2003) also surveyed American Anthropological Association (AAA) members in 1985 and again in 1999, asking respondents to agree or disagree with the statement: “There are biological races in the species Homo sapiens.” In 1985, 41% of physical anthropologists surveyed disagreed with the statement, increasing to 69% in 1999. Interestingly, 53% of cultural anthropologists disagreed with the statement in 1985, with that figure rising dramatically to 80% in 1999. Striking differences were also found in physical anthropology textbooks, which predominantly presented racialist models of human variation between 1932 and 1979, but showed an almost complete abandonment of the race concept as an explanatory tool between 1980 and 1999. These results prompted Lieberman et al. (2003, p. 112) to conclude: “Data indicate that the paradigm of
race has approached the point where its survival is in doubt. Even those who continue to use the concept have serious doubts about its utility." It is also interesting to note that 42% of first authors of the included textbooks published in 1998–99 were AAA members, whereas only 4% of first authors of AJPA articles in 1996 were AAA members. Cartmill and Brown (2003) suggest that this difference in AAA membership indicates that the textbook authors are more representative of American anthropology, whereas the AJPA authorship is international; therefore, this may be more indicative of an American abandonment of the race concept, but a continued international usage. These two studies demonstrate a significant point: a sweeping philosophical trend occurred between 1931 and 1965, in which there was a 42–43% decline in the use of racial models in the AJPA, followed by a period of relative stability or slight decline between 1965 and 1996. I suggest that this represents a paradigm shift in American anthropology, but a shift that has been stunted by methodological stagnancy. While the race concept itself has been questioned or abandoned, methodology has not advanced, and no workable non-racialist models for explaining human population variation yet exist. Thus, while the concept of race has changed through time, the methodologies that utilise the concept remain static. The biological aspects of this problem are further illustrated by the disjunction in results between cultural anthropologists and physical anthropologists who answered Lieberman et al.'s (2003) questionnaires. In 1985, 12% more cultural anthropologists than physical anthropologists rejected the race concept, and 11% more (even after the marked increases in rejections by both groups) in 1999. Following Cartmill (1998), Wang et al. (2002a, 2002b) report that of 324 articles directly related to human variation printed in Acta Anthropologica Sinica, China's only journal dedicated to physical anthropology, none questioned the validity of human
racial classification. Rather, several articles were mainly concerned with the biological differences among or between 'major races.' Wang et al. (2002b, p. 96) suggest that this focus can be considered a continuation of Weidenreich's (1943) work in China, which emphasized regional continuity, suggesting Mongoloid roots extending back to Homo erectus. The irony of this focus on the so-called Mongoloid race is that studies of intra-group variation reveal that subdivision by north and south, and even further down to the ethnic or tribal level, is possible (Wang et al., 2002b, p. 96). However, in China, race has proven to be a powerful political tool for uniting diverse groups within the country, since Chinese physical anthropologists have been portraying the Chinese (Mongoloid) people as a discrete biological group with a long evolutionary history. Results of studies of Polish physical anthropologists, using questionnaires based on those developed by Lieberman et al. (1989), reveal a more encouraging picture of human variation studies in central Europe. The first study, conducted in 1999, revealed that out of 55 respondents, 31% agreed with the statement "There are biological races (meaning subspecies) within the species Homo sapiens," while 62% disagreed, and 7% had no opinion (Kaszycka & Štrkalj, 2002). The authors explain that there are general trends by age: Polish physical anthropologists born before, during, or shortly after World War II had a much stronger sense of race, and racial typology was taught as a major university course in some universities until the 1970s, while younger anthropologists who received more modern training were much more sceptical about the existence of biological races. In a 2001 follow-up study, Kaszycka and Strzałko (2003) found that offering the respondents different racial concepts as options produced significantly different results. 75% of 100 respondents (of whom three-quarters had also responded to the 1999 survey) agreed that there are human races when allowed to choose between
races defined as geographical (17%), typological (13%), populational (35%), subspecies (3%), or a combination of two of these options (7%). The rejection of race by the majority of respondents of the 1999 survey versus the much lower rate of rejection in the 2001 survey suggests that race, when construed only as a subspecies, is more problematic (at least in the Polish context) than when the term is attached to other biological concepts (Kaszycka & Štrkalj, 2002, p. 334). The results of these studies indicate that in North America, the rejection of the race concept on an intellectual basis is more widespread among those who deal with it only as an organizing category (cultural anthropologists) than among those who utilize race as an explanatory model of how human groups vary biologically (physical anthropologists). In the Chinese example, race continues to be uncritically accepted by physical anthropologists, and the authors suggest that this is a result of socio-political context rather than scientific discovery. In Central Europe,2 the Polish studies suggest that there is a general trend among physical anthropologists toward the rejection of race as a biologically meaningful concept in a strict taxonomic sense, but that it remains a persistent organizing principle in general. In terms of anthropological methodology, critical debate over the practical usefulness of racial categorisation was played out most prominently, also in the 1990s, in the forensic anthropological literature (Sauer, 1992; Brace, 1995; Kennedy, 1995). Although many of its practitioners contend that more advanced methods of ancestral determination are necessary (Brues, 1992; Kennedy & Chiment, 1992), and that the race debates have only served to retard such progress (Sauer, 1993), arguments that racial categories are necessary to convey socially-understandable (identifying) information are persistently made. In failing to provide progressive methods of ancestral determination, and continually relying on outdated methods of racial determination, forensic anthropologists are neglecting the scientific questions that led to
increased understanding of human variation and accuracy in their determinations.3
IS RACE SOLELY AN ANTHROPOLOGICAL PROBLEM?

The criteria used in racial classification depend upon the purpose of the classification itself (Molnar, 2002, p. 18), which may differ not only between disciplines but also within them. We can see this in the different approaches to racial classification that are apparent between anthropological subdisciplines. Physical anthropologists are particularly influenced by both theory and method developed by colleagues in the fields of evolutionary biology and genetics, in which classification is argued to have a more objective (or pragmatic) purpose. Dunn (1951, p. 13), writing in the UNESCO publication Race and Biology, explains that "although there has been for some time a considerable measure of agreement amongst biologists about the concept of race in plants, animals and man, the word 'race' as used in common speech has no clear or exact meaning at all, and through frequent misuse has acquired unpleasant and distressing connotations." While race is generally characterized as a contested term in anthropological discourse, the assumption that it has been unproblematically applied to intraspecies variation by evolutionary biologists is not at all accurate (Livingstone, 1962, p. 279; Templeton, 2002). Biologically, the race concept has equal application to plants, animals, and humans, both philosophically and methodologically. Evolutionary biologists typically refer to subspecies when discussing intraspecies variation, with the terms 'subspecies' and 'geographic race' being used interchangeably in the taxonomic literature (Mayr, 2002). In the 1940s, Ernst Mayr, perhaps the best known evolutionary biologist of the 20th century, defined a 'subspecies' or 'geographic race' as a "geographically localized subdivision
of the species, which differs genetically and taxonomically from other subdivisions of the species" (1942, p. 106). However, numerous biologists found this definition to be unsuitably ambiguous when practically applied to intraspecies variation. Individuals within geographically localized populations can easily interbreed with individuals from neighbouring populations, resulting in gradients of the characters used in various classification schemes, created by continuous gene flow. As such, subspecies designations became increasingly problematized in the 1950s, with biologists such as Wilson and Brown (1953; Brown & Wilson, 1954) arguing that the category should be abandoned altogether due to its highly arbitrary nature. The gradations seen in physical and/or genetic characters between geographic populations within species create a major obstacle that taxonomists have not been able to overcome. This is particularly true when attempting to classify human populations. Modern molecular genetics recognizes that when major human populations are classified as (geographic) races, the amount of genetic variation at the level of morphology, karyotype, proteins and DNA within each race is substantially greater than that between races (Griffiths et al., 2000, p. 782; Keita & Kittles, 1997, p. 537; Lewontin, 1972; Mettler et al., 1988, p. 269; Templeton, 1999). This has been demonstrated time and time again in human genetics studies using various loci, from blood groups to proteins and mtDNA (Barbujani et al., 1997; Dean et al., 1994; Excoffier et al., 1992; Latter, 1980; Lewontin, 1972; Nei & Roychoudhury, 1997; Ryman et al., 1983; Seielstad et al., 1998), showing that so-called racial variation accounts for anywhere between 2.8% and 28.0% of human variation, depending upon the method employed, with individual variation accounting for between 74.7% and 97.8%—the vast majority of overall genetic variation (Brown & Armelagos, 2001). In a survey of the geographic distribution of genes throughout historically-situated populations, Cavalli-Sforza et al. (1994) produced nearly
five hundred maps of numerous allele frequencies from genetic samples of individuals from nearly two thousand communities. The results reinforce earlier genetic studies and demonstrate four basic rules of human variation that are important considerations for physical anthropologists in discussing human evolution on a genetic level (Cavalli-Sforza et al., 1994):

1. Individual variation is much larger than group variation.
2. In combination with the fossil record, it can be confirmed that Africa was the birthplace of humanity (i.e., Homo sapiens); gene frequencies indicate a large genetic difference between present-day Africans and non-Africans.
3. All Europeans are thought to be hybrid populations, with approximately 65% Asian and 35% African genes (attesting to the greater antiquity of African and Asian populations).
4. Indigenous North American populations were found to be divisible into three distinct groups by blood type, representing three separate migrations from Asia.
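The first rule, and the apportionment figures quoted earlier, can be made concrete with a toy calculation. The sketch below (in Python, with entirely synthetic values; it is an illustration of the variance-apportionment idea, not a reanalysis of any cited data set) shows how little of the total variance in a trait is explained by group membership when group means differ only slightly relative to the spread of individuals within each group:

```python
# Toy illustration of variance apportionment: how much of the total
# variance in a trait lies within groups versus between group means.
import numpy as np

rng = np.random.default_rng(2)
# Three hypothetical groups of equal size whose means differ only
# slightly relative to the within-group spread of individuals.
groups = [rng.normal(loc=mu, scale=10.0, size=1000) for mu in (99, 100, 101)]

all_values = np.concatenate(groups)
total_var = all_values.var()
within_var = np.mean([g.var() for g in groups])   # mean within-group variance
between_var = total_var - within_var              # variance of group means

print(f"within-group share:  {within_var / total_var:.1%}")
print(f"between-group share: {between_var / total_var:.1%}")
```

With these stand-in numbers the within-group share comes out near 99%, echoing the pattern the genetic studies report at real loci.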
The surprising result of these genetic analyses was that the map of world genetic variation shows Africa on one end of the spectrum and Australian aborigines at the other. What this tells us is that patterns of adaptation follow models of clinal variation,4 with Australian aborigines showing the greatest genetic distance from Africans, but the most similarity in terms of phenotypic constitution (i.e., skeletal morphology, and other superficial traits such as skin colour and hair texture).5 Cavalli-Sforza et al. (1994) suggest that the morphological similarity seen between indigenous African and Australian populations is the simple product of adaptation to generally similar climates in regions of sub-Saharan Africa and Australia. This highlights another fundamental problem with the subspecies concept: where Mayr suggested that subspecies should differ "genetically and taxonomically" from one another, we can see that analyses of genetic and morphological data can yield different results. In this instance, the results of Cavalli-Sforza et al.'s genetic analyses do not match the population distances derived from skeletal data. Craniometric data collected by Howells (1973, 1989, 1995) have been used to test the results of the Cavalli-Sforza et al. genetic study (Cavalli-Sforza et al., 1994, p. 72; Cavalli-Sforza & Cavalli-Sforza, 1995, pp. 116–118), but consistently group Australians (and Papuans) with sub-Saharan Africans as closely cognate populations. What this demonstrates is that subspecific or racial classification fails biologically in its assumption that evolution can be studied as (evolutionarily significant) branching patterns that form phylogenetic trees. This requires that geographic groups be understood as evolutionarily significant ancestor-descendant sequences of breeding populations that share a common origin. This cladistic approach inherently favours the single-origin model of modern human dispersal: that modern humans originated as a racially undifferentiated population in Africa approximately 200,000 years ago,6 migrating out of Africa and forming various (approximately) reproductively isolated breeding populations, which can be represented by branches on a phylogenetic tree. When human evolution is understood as a pattern of evolutionary branching, the terminal nodes of a phylogenetic tree represent extant (monophyletic) racial groups, whose historical lineages can be traced backwards to a common ancestor (see Andreasen, 1998, 2000, 2004; Kitcher, 2003). This can be highly problematic when two or more groups being studied have morphological similarities that are the products of environmental adaptations rather than biological relationships. A hypothetical situation can further illustrate the fundamental problems with this monophyletic assumption, which causes classification at the level of subspecies to fail: group a and group b
are both isolated groups that traditionally lived in areas A and B respectively. Neither group has encountered the other or the other's relatives in the past. Resources become scarce in both areas A and B, and in response, one-quarter of the families in group a decide to migrate in search of better food resources, and one-quarter of the families in group b do the same, at about the same time. Both splinter groups arrive in area C, which is equidistant from areas A and B, at about the same time, and they find adequate food resources. Both groups stay and decide to settle in area C and begin to intermix, becoming group c. Who would be the common ancestor to group c? In this case, group c would be of mixed ancestry. What if food resources were to improve over time and group c were to prosper in area C and eventually spread out and recombine with group a and/or b? Cladistic theory simply cannot adequately deal with this lineage. This example could apply to geographic groups within any biological species, but it becomes significantly more complex when humans are brought into the equation. This evidence should lead us to conclude that racial classifications are problematic in general, but are particularly problematic when the element of human culture is introduced. Genetic evidence has demonstrated that human mobility has resulted in a high degree of genetic mixing within so-called racial groups. Moore (1994, p. 931) uses ethnographic evidence to support the notion that human groups have complex and intertwined historical lineages rather than being unilineal ancestor-descendant sequences:

The criticisms of synthetic theory currently being developed come largely from experienced fieldworkers and are based on the observation that the historical scenarios postulated by synthesists—in which ethnic groups split, evolve homogeneously within ethnic boundaries, and then split again in a cladistic manner—simply do not seem familiar
ethnographically or ethnohistorically. How many tribal societies are there in which all the members are monolingual in the same language, marry only among themselves, and are homogeneous in their traditions and material culture?

Synthetic theories such as cladism, which have been used to explain the relationships and geographical distributions of language, culture, and physical types, are generally weak theoretically and not particularly suitable for the study of modern human groups (Moore, 1994). Moore (1994) contrasts cladistic with rhizotic theories, explaining that where cladism emphasizes historical parent-daughter (or ancestor-descendant) relationships, rhizotic theories emphasize the extent to which each human language, culture, or biological group is derived from or rooted in several ancestral groups or lineages, which he suggests is better characterized as the process of ethnogenesis. In Moore's (1994, p. 927) view, ethnogenesis provides a logical alternative explanation for the global distribution of languages, cultures, physical types, and archaeological traditions, and he makes the important point that ethnogenesis stands in contrast to all hierarchical taxonomies, which, regardless of their aim or theoretical bases, are clearly meant to represent cladistic relationships. Such hierarchical models are based on the presumption, in contrast to the hypothetical example above, that once a daughter species (or subspecies, race, etc.) differentiates from its parent, it will not thereafter incorporate any genetic material from its siblings (Moore, 1994, p. 928). Cladograms and other hierarchical methods of phylogenetic reconstruction require that each terminal unit of a phylogenetic tree have only one parental unit, whereas rhizograms show the convergence of diverse roots (possibly from multiple parental stock) forming hybridized or amalgamated descendant groups (Moore, 1994, p. 930). In a survey of tribes in California, the Great Basin, Sonora, Tierra del Fuego, and Australia, Owen (1965) found that instead of ethnically
discrete bands or tribes, multilingualism, intermarriage across supposedly profound ethnic boundaries, and enormous cultural variation were the norm. Similarly, in a study of aboriginal band types on the Canadian plains, Sharrock (1973) found that there were three common types of interethnic social organization: alliance; intermarriage and polyethnic co-residence; and fused ethnicity (Moore, 1994, p. 936). Moore (1994, p. 938) believes that the essential difficulty between cladistic and ethnogenetic theory lies in the long-term stability of ethnic groups: cladistic theory requires that ethnic groups remain stable for hundreds or thousands of years, whereas ethnogenesis attempts to understand the processes by which ethnic groups are transformed through time. While this contrast might not be as problematic as Moore suggests, the general point of incommensurability is that cladistics focuses on the biological fissioning of groups, whereas ethnogenetic theory deals with transition, which in many cases involves the rearrangement or fusion of groups. Traditional methods of biological classification simply have not accounted for the complexities of population biology at the local level, or looked at how social networks constrain the flow of genes within and between biological groups.
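Moore's contrast between cladograms and rhizograms is, at bottom, a contrast between two data structures: a tree, in which every terminal unit has exactly one parental unit, and a directed acyclic graph, in which a group may be rooted in several lineages at once. A minimal sketch makes the difference concrete; the group names follow the hypothetical example above, and the structures and helper function are purely illustrative:

```python
# A cladogram is a tree: one parental unit per terminal unit. The fused
# group c from the hypothetical example has no single parent, so it
# simply cannot be placed. A rhizogram is a directed acyclic graph, in
# which a group may be rooted in several lineages at once.

cladogram = {
    "a": "ancestor_A",
    "b": "ancestor_B",
    # "c": ?  # no single parent exists for the fused group c
}

rhizogram = {
    "a": ["ancestor_A"],
    "b": ["ancestor_B"],
    "c": ["a", "b"],        # splinter families from a and b fuse in area C
    "a_prime": ["a", "c"],  # group c later recombines with group a
}

def lineages(group, graph):
    """All ancestral lineages reachable from a group in a rhizogram."""
    found = set()
    for parent in graph.get(group, []):
        found |= {parent} | lineages(parent, graph)
    return found

# The recombined group traces back to both original ancestors:
print(lineages("a_prime", rhizogram))
# -> {'a', 'b', 'c', 'ancestor_A', 'ancestor_B'} (set order may vary)
```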
IS THERE AN ETHICAL DIMENSION TO THE RACE PROBLEM?

Although we know that racial classification has limited, if any, biological utility, we have no workable models to replace race-based models, which remain a basic starting point for discussions of intraspecies variation. A vast but underappreciated body of literature critical of biological subspecies classification has demonstrated the pragmatic problems of such classification schemes: that they are arbitrary due to the overlapping of morphological and/or genetic traits, which themselves show incongruent or inconsistent microevolutionary patterns. Nonetheless, the existing voluminous literature on race and/or racism is lacking due to
the almost exclusive focus on the historical development of the race concept rather than on proposing alternative methods or taxonomic schemes. As a result, even in light of new scientific evidence, perspectives on human variation continue to be structured within the racial paradigm. Since race continues to be used as an explanatory model for human evolutionary schemes, the perpetuation of racial language gives the illusion that the race concept itself has real biological meaning. Thus, the race concept remains the basic starting point in discussions of human variation, despite ever-increasing evidence demonstrating the inability of the race concept to account for the patterns of variation seen below the level of species. In this respect, Stoler (1997, p. 198) explains that there are "fixities and fluidities" in racial discourse: despite our demonstration of the complexities of the patterns of human evolution and population biology, simple racial language has remained relatively static in our discussions of such patterns. As such, surveys of literature dealing with the changing perception of race as a biological concept provide significant proof that racial categories are fluid and not fixed, which confirms that "they can and should be undone" (Stoler, 1997, p. 198, emphasis added). This is the ethical aspect of my argument: the more we understand about population biology, the more apparent it becomes that the race concept is fundamentally flawed. The fact that the race concept has been inaccurately perpetuated for decades provides compelling evidence that we have an ethical obligation to find a real solution, since we are well aware of the atrocities committed around the world in the name of racism. While I do not equate racialism with racism, I believe that technological advancement in the analysis of human biological variation is giving us the tools to build workable non-racial models of human variation which, when widely understood, will have profoundly positive results for anti-racism movements. Such a movement will find its legitimacy in the fact that it is rooted in basic biology, and not in an explicit attack on sociological race categories.
In terms of the relationship between technology and the ethical problems inherent in using the traditional race concept, the sparse but important literature on technoethics provides an important basis for moving this discussion forward. For Bunge (1976, p. 155), the term 'technology' has broad and encompassing applications, extending to material, social, conceptual, and general technologies. For the purposes of this discussion, material technology (including physical, chemical and biological science), as well as social technology (including economic, psychological, and sociological science), will be the main reference, since they are implicitly tied to the discipline of anthropology. According to Bunge (1977, p. 100), "the technologist must be held not only technically but also morally responsible for whatever he designs or executes: not only should his artifacts be optimally efficient but, far from being harmful, they should be beneficial, and not only in the short run but also in the long term." This is where ethics compels us to utilise biotechnology as a tool for developing new understandings of population biology. Racial models are neither optimally efficient nor beneficial from a biological perspective, since they simply do not work and only serve to mischaracterize the patterns of biological variation seen within geographically diverse species. In light of this technoethical argument, any researcher who continues to use outdated models despite the availability of new technologies allowing them to explore progressive theories and methods is doing a disservice to their discipline and to the scientific community as a whole. Technology should be subjected to moral and social controls (Bunge, 1977, p. 106). We have seen the negative personal and societal effects of the misuse of biotechnology, which led to eugenic movements in Europe and North America starting in the 1930s, and ultimately to the Nazi Holocaust in the 1940s. In Western Canada, eugenic sterilization laws were in place from 1928 to 1972, resulting in a total of 2,832
adults and children being sterilized by government order (Wahlsten, 1997). Such atrocities were due to the lack of moral and social controls on the use of technology—even though there was a societal belief that technology was being used for the betterment of society, individual rights were widely ignored. The technoethical approach suggests not that we suppress scientific progress because of societal pressures, but that the misuses of good technology be corrected by promoting better technology and rendering it morally and socially sensitive. As such, the technologist must be responsible not only to his or her profession, but to all those who might be affected by his or her work. Furthermore, a technologist who strives to make a contribution to alleviating social ills (including public misunderstandings of science, as in the example of the perpetuation of racial language) or to improving the quality of life is a public benefactor (Bunge, 1977, pp. 106–107). The race problem remains unresolved because it is far too complex to be resolved by any single technologist or group of technologists within a particular discipline, which has allowed simple answers to prevail. Racial models are remarkably simple in their explanatory powers and, as such, have general appeal in terms of relaying conceptual information to wide audiences. We are now well poised, in an age where interdisciplinary research is highly valued academically, to provide a comprehensive response to the race problem. According to Bunge (1977, p. 107), "Because no single specialist can cope with all of the many-sided and complex problems posed by large-scale technological projects, these should be entrusted to teams of experts in various fields, including applied social scientists." Similarly, and specifically in the context of the race problem, Eze (2001, pp. 29–30) asks: "If geneticists have shown that 'race' is not a useful concept, it may be up to the social scientists to invent new ways of making this development available to the general culture." Anthropologists, who deal with both the social and
biological realms, are perfectly placed to make perhaps the greatest contribution to redefining human variation. As a discipline that has actively participated in perpetuating the myth of human races, this should be an obligatory goal. If the ‘race problem’ in anthropology can be redefined rather than simply re-articulated, then new ways of exploring human variation can be developed. Histories of racial thought have filled innumerable volumes, and they have served us well, but such treatments (Billinger, 2000; Brace, 2005; Smedley, 1999) should now simply provide a background from which a new line of argumentation will take shape. The goal, then, is to bring together progressive theory and method in the analysis of human biological variation. This should not be specific to any particular anthropological subdiscipline, but a concerted effort by all anthropologists along with our colleagues in biology, medicine, history, philosophy, sociology, political science and other related disciplines.
FUTURE TRENDS: CAN TECHNOLOGY BE USED TO SOLVE THE RACE PROBLEM?

In terms of anthropologists' approaches to the analysis of human biological relationships, there has been a rapid evolution in the technological sophistication of methodological tools. However, traditional theories of cultural and ethnic interaction tend to be cladistic in nature, the problems with which have already been discussed. As such, the reliance on racial classification has resulted in the relationship between ethnicity and human biology being treated in far too simplistic a manner (see Billinger, 2006, 2007a, 2007b; Chapman, 1993; Crews & Bindon, 1991; Montagu, 1997). The complex system of ethnic relations proposed by Moore, as introduced earlier, is actually derived from a relatively old and underappreciated theory of ethnic biology that can be traced back over a century.
Deniker (1900) first introduced the idea that human biology is ethnically structured, and this notion was more clearly elaborated by Huxley and Haddon (1935). Although this ethnic concept was adapted by Ashley Montagu as a genetical theory7 in 1942 (and further developed by Hulse in 1969), little is understood about how it can be applied to re-building human taxonomy. Moore's critique builds on this body of work and provides many facets from which his cultural approach can be modified into a strong theoretical basis for new methodological strategies. If we can understand how to conceptualize the flow and/or isolation of genes in relation to endogamous or exogamous practices and to human mobility through time, then physical anthropologists can move much closer toward understanding both the patterns and the processes involved in creating the morphological variations of prehistoric, historic, and contemporary populations. This should form one part of the proposed interdisciplinary project. I now want to use the example of how technological progress in the methodology employed by physical anthropologists can fuel this project, adapting multidimensional techniques for exploring biological patterns to fit the multidimensional theory of ethnic biology. Perhaps the most diagnostic area for the study of human morphology and phylogeny is the skull. Anatomically modern humans show considerable geographic variation in the form of the facial skeleton. Howells (1973, 1989, 1995) was a pioneer in this regard, publishing a vast global data set for use in analyses of global and regional variation in craniofacial morphology. Howells was interested in taxonomy, but saw it not as a means to an end but rather as a tool for comparative description: as an exploration of genetic populations rather than racial types.8 As we will see, the primary factors determining the outcome of such analyses are the types of data available and the types of analysis applied to them. For instance, Billinger (2006, Chapter 5) uses Howells' (1973, 1989) craniometric data,
supplemented with further data for North and South American indigenous populations, to test the craniometric relationships of known world populations.9 Initially, a univariate method was employed in which each of the individual cranial measurements was treated by Mayr’s (1969, pp. 187–197) coefficient of difference (CD), a simple calculation that incorporates the mean values and
standard deviations of each trait for each group. Applying this measure to the data sheds light on whether, using the selected craniometric traits, the groups used in the analysis are differentiated enough to form a subspecies according to Mayr's definition.10 Accordingly, a CD value of 1.28 reveals that 90% of individuals in population a are different from 90% of individuals in population b, which is equal to 75 percent of individuals from population a differing from all individuals of population b. Table 1 gives the results of this analysis using group-by-group comparisons, with each result being multiplied by 10. The CD value of 1.28 should also be multiplied by the same factor; therefore, results over 12.80 show a level of differentiation over 75 percent. Based on the traits used, only the comparisons of groups on the extremities of the geographic range of populations give results over 12.80, those being Siberia-Bushman, Buriat-Andaman, Siberia-Andaman, and NWT-Andaman. The remaining group-by-group comparisons show a pattern of intergradation.

Table 1. Average* coefficient of difference data matrix for group-by-group comparisons among 24 male cranial samples (Norse, Zalavar, Berg, Egypt, Teita, Dogon, Zulu, Bushman, Andaman, Australia, Tasmania, Tolai, N. Japan, S. Japan, Hainan, Ainu, Buriat, Siberia, Eskimo, NWT, Arikara, Navajo, Mexico, Peru) (from Billinger, 2006, p. 234). *Average calculated by taking the sum of the correlation coefficients for each group-by-group comparison for each craniometric trait.

The relationships between each of the groups based on the results of the CD were treated by cluster analysis (Figure 1) and multidimensional scaling (Figure 2) for visualisation. The results seen are not unexpected based on 'common sense' racial
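Because the CD is nothing more than a ratio of mean differences to summed standard deviations, the univariate step just described is easy to make concrete. The following Python fragment is offered purely as an illustration of the calculation, not as a reconstruction of Billinger's actual analysis; the trait codes and summary statistics are hypothetical.

```python
# Minimal sketch of the univariate step: Mayr's coefficient of
# difference (CD), averaged over traits and scaled by 10 so that
# 12.80 marks the 75%/90% differentiation threshold quoted above.
# Trait codes and values are hypothetical, for illustration only.

def coefficient_of_difference(mean_a, sd_a, mean_b, sd_b):
    """Mayr's CD for one trait: difference of means over the sum of SDs."""
    return abs(mean_b - mean_a) / (sd_a + sd_b)

def average_cd(group_a, group_b):
    """Average CD over all traits shared by two groups, multiplied by 10.

    Each group is a dict mapping a trait code to a (mean, sd) tuple.
    """
    traits = group_a.keys() & group_b.keys()
    cds = [coefficient_of_difference(*group_a[t], *group_b[t]) for t in traits]
    return 10 * sum(cds) / len(cds)

# Hypothetical craniometric summary statistics (mm) for two samples:
pop_a = {"GOL": (182.1, 6.2), "XCB": (146.8, 5.1), "BBH": (132.5, 5.4)}
pop_b = {"GOL": (171.3, 5.9), "XCB": (131.9, 4.8), "BBH": (122.7, 5.2)}

cd = average_cd(pop_a, pop_b)
print(f"average CD x 10 = {cd:.2f}; exceeds Mayr's threshold: {cd > 12.80}")
```

A full analysis would simply apply the same function to every pair of samples to populate a matrix like Table 1.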
categories derived from skin colour (Andreasen, 2004, pp. 428–430). However, it should be noted that this method calculates the mean differences between each measurement and, as such, represents a single correlation coefficient calculated from average differences in size. For taxonomic purposes, shape is considered to be of significantly higher value than size (Corruccini, 1973, 1987). According to Howells (1973, p. 3):

Populations have been compared (normally two populations at a time) in one measurement at a time. Such estimates of difference, or distance, do not, however, allow consideration of the shape of the skull as a whole, as this is expressed by the relations between measurements (though of course two measurements have been commonly related in a ratio or index). As a result, there had been little consideration as to whether the measurements in use do in fact reflect the total configuration of the skull adequately for taxonomic purposes (although traditional lists naturally have attempted to cover many aspects of the skull and face).

Figure 1. Cluster analysis dendrogram (average linkage method) plotting coefficient of difference results for each group in the global sample (from Billinger, 2006, p. 236)

Figure 2. Multidimensional scaling of coefficient of difference results for each group in the global sample (from Billinger, 2006, p. 238)

Methods of multivariate analysis such as those developed by Penrose (1954) allow for the study of shape variation by accounting for the relations between measurements, and represent a great advancement over univariate analyses. The second step in this study was therefore to apply a multivariate method for analysing shape. Figure 3 gives the dendrogram created by cluster analysis of shape coefficients for each group, obtained using Penrose's shape distance calculations for the same craniometric traits used in the univariate analysis. Figure 4 plots the shape coefficients derived from the Penrose method by multidimensional scaling. Note the discontinuities between the dendrograms and multidimensional scaling plots for the multivariate and univariate analyses—the difference between size-based and shape-based data is readily apparent.

Figure 3. Cluster analysis dendrogram (average linkage method) plotting Penrose's shape distance for each group in the global sample (from Billinger, 2006, p. 249)

Figure 4. Multidimensional scaling of Penrose's shape distance for each group in the global sample (from Billinger, 2006, p. 250)

This picture becomes increasingly complicated when attempting to place an individual into the scheme of world-wide variation for the purposes of forensic ancestral determination. In this instance, the aim was to determine the biological ancestry of the 'Kennewick Man' skull.11 The results show that the Kennewick skull groups with northern (Arctic) North American indigenous populations when compared to globally-distributed populations. However, when the Kennewick skull is compared only to regional American indigenous populations,12 also using Penrose's shape distance, a very different (and more complex) picture of biological relationships appears (Figures 5 and 6).

Figure 5. Cluster analysis dendrogram (average linkage method) plotting Penrose's shape distance for each group in the sample of regional American indigenous populations (from Billinger, 2006, p. 256)

Figure 6. Multidimensional scaling of Penrose's shape distance for each group in the sample of regional American indigenous populations (from Billinger, 2006, p. 257)

It should be kept in mind that biological distance calculated from morphological data does not give an accurate approximation of genetic distance—it gives only the phenetic distance between the average values for each population—which is problematic because biological distance is the main factor in determining taxonomic groups and hierarchical ordering.
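The multivariate step and the two visualisation techniques can be sketched in the same spirit. The fragment below is a simplified illustration under stated assumptions rather than a reconstruction of the original analysis: Penrose's shape distance is implemented as the mean square of the standardized mean differences minus their squared mean (authors differ on whether an m/(m-1) correction is applied to the size term), and the group mean vectors are randomly generated stand-ins.

```python
# Sketch of the multivariate step: Penrose shape distances between group
# mean vectors, then average-linkage clustering and metric MDS of the
# resulting distance matrix, as in Figures 3-6. Group data are random
# stand-ins; requires numpy, scipy, and scikit-learn.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

def penrose_shape(mean_a, mean_b, pooled_sd):
    """Penrose's shape distance between two group mean vectors.

    d_i are trait-mean differences standardized by the pooled SD; total
    distance C_H^2 = mean(d^2), size C_Q^2 = mean(d)^2, and shape is
    taken here as C_H^2 - C_Q^2 (conventions differ on an m/(m-1)
    correction to the size term).
    """
    d = (np.asarray(mean_a) - np.asarray(mean_b)) / np.asarray(pooled_sd)
    return np.mean(d ** 2) - np.mean(d) ** 2

# Hypothetical mean vectors for four groups over five craniometric traits.
rng = np.random.default_rng(0)
groups = {name: rng.normal(100.0, 5.0, size=5) for name in "abcd"}
pooled_sd = np.full(5, 5.0)

names = list(groups)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dij = penrose_shape(groups[names[i]], groups[names[j]], pooled_sd)
        dist[i, j] = dist[j, i] = dij

# Average-linkage cluster analysis (the dendrograms of Figures 3 and 5).
tree = linkage(squareform(dist), method="average")

# Two-dimensional multidimensional scaling (Figures 4 and 6).
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(tree)
print(coords)
```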
The calculation of morphological distance is entirely dependent upon the number and type of morphological features used for the analysis, and the number of populations included. These considerations are often dependent upon the availability of published data, which may be limited. Furthermore, the ordering of the cluster analysis can only be accurately interpreted in the presence of reliable historical data on the groups used in the analysis, and should not be taken as a true representation of the cladistic affinity of populations, only as an idealized model. Standard craniometric methods do provide interesting results for studying the patterns of evolution and speciation—and the transition from univariate to multivariate analyses was a huge leap forward—but greater potential lies in moving beyond simple linear craniometric morphometrics into 'modern morphometrics' (Slice, 2005). Advancements in three-dimensional (3D) digital imaging provide new opportunities for the progressive study of size and shape space through geometric morphometrics. Comparisons of geometric morphometric coordinate data with traditional multivariate morphometrics show that although traditional caliper measurements can provide adequate results (Reig, 1996), 3D data produce more powerful statistics, with higher correlations and lower probabilities of spurious results, providing a clearer picture of variation with more distinct clusters (McKeown & Jantz, 2005). Geometric morphometrics yields highly visual and readily interpretable results (Collard & O'Higgins, 2001; Vidarsdottir et al., 2002), offering multivariate techniques that cover all possible shape measurements of a region separately, exploring the patterns of their correlations with all possible shape measures of other regions in one single computation. Further, such analysis allows for the comparison of two distinct integrative factors—ontogeny and phylogeny—as they
apply to a shared regionalization within samples of modern Homo sapiens or prehistoric hominoid crania (Bookstein et al., 2003). Geometric morphometrics provides insight into the usefulness of various traits for taxonomic purposes through the analysis of allometric (size and shape covariation) and ontogenetic trajectories for determining homology (Bastir & Rosas, 2004; Humphries, 2002; Lieberman, 2000; Mitteroecker et al., 2004). Looking at homologies between species may provide the key to isolating important microevolutionary traits within species. These trajectories also give insight into the effects of age- and sex-related variation in population data. There remain, however, two fundamental questions to be asked regarding morphometric analyses and the relationship between continuously distributed morphological traits and their phylogenetic importance: can continuously distributed variables be used to make phylogenetic inferences, and can morphometric variables be used to test phylogenetic hypotheses (MacLeod & Forey, 2002)? Although the statistical complexity of shape distances led Rohlf and Marcus (1993) to caution against assumptions that they can be safely used as measures of taxonomic distance, numerous recent studies attest to the potential for phylogenetic reconstruction using geometric morphometric data in modern humans and other hominid species. However, no studies have yet provided conclusive results. Bookstein et al. (1999) believe that refinements in methods of morphometric analysis will lead to new perspectives on phylogenetic and functional relationships among and within hominid species. The fact that 3D coordinate data collection has become much more feasible with the development of more economical portable digitizers should result in a rapid increase in the availability of data to answer these phylogenetic questions. Neanderthals provide an especially interesting case for studying inter- and intra-species variation, since it is only recently that it has been demonstrated both (geometric) morphometrically
(Harvati, 2004; Harvati et al., 2004) and through mitochondrial DNA (Currat & Excoffier, 2004; Serre et al., 2004)13 that the Neanderthals (Homo neanderthalensis) represent a distinct species and are not a subspecies of Homo sapiens, following years of bitter debate (Stringer, 2002). Such phylogenetic analysis may result in immediate practical advancements as well. The results of these analyses will be important in the context of forensic anthropology, particularly in dealing with subadult forensic remains. There is, at present, no way of reliably assigning subadult remains to ancestral groups based on skeletal morphology (Vidarsdottir et al., 2002). Bringing forensic anthropological methods into the realm of human variation studies will be a benefit not only for forensic identification purposes, but also for correcting the perceived misrepresentation of human biological patterns by forensic anthropologists.
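To give a sense of what geometric morphometric methods actually compute, the sketch below implements a bare-bones generalized Procrustes analysis: each 3D landmark configuration is centred, scaled to unit centroid size, and rotated onto an iteratively updated consensus, so that the residual differences are differences of shape alone. It is a simplified illustration with randomly generated landmarks (and without the reflection constraint of a production implementation), not a substitute for the specialized tools used in the studies cited above.

```python
# Bare-bones generalized Procrustes analysis (GPA) for 3D landmark data.
# Location, scale, and rotation are removed so that only shape remains.
import numpy as np

def align(shape, ref):
    """Optimal orthogonal rotation of a centred shape onto a reference
    (orthogonal Procrustes; reflections are not excluded here)."""
    u, _, vt = np.linalg.svd(ref.T @ shape)
    return shape @ (u @ vt).T

def gpa(configs, iterations=10):
    """configs: array (n_specimens, n_landmarks, 3) of raw coordinates."""
    # Remove location and scale: centre each configuration and divide
    # by its centroid size (root sum of squared coordinates).
    x = configs - configs.mean(axis=1, keepdims=True)
    x = x / np.linalg.norm(x, axis=(1, 2), keepdims=True)
    mean = x[0]
    for _ in range(iterations):
        x = np.array([align(s, mean) for s in x])
        mean = x.mean(axis=0)
        mean = mean / np.linalg.norm(mean)  # keep the consensus at unit size
    return x, mean

# Hypothetical sample: 20 crania, 15 landmarks digitized in 3D.
rng = np.random.default_rng(1)
base = rng.normal(size=(15, 3))
sample = base + rng.normal(scale=0.05, size=(20, 15, 3))

aligned, consensus = gpa(sample)
# Procrustes (shape) distance of each specimen from the consensus:
print(np.linalg.norm(aligned - consensus, axis=(1, 2)))
```

The aligned coordinates can then be fed to the same ordination and clustering tools sketched earlier, which is what makes the approach a natural successor to linear craniometrics.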
CONCLUSION

Methodology must be developed on a strong theoretical basis, and as such, the methods of phylogenetic reconstruction will continue to suffer if they remain focused on unilineal patterns of evolutionary branching. Humans do not follow these simple theoretical patterns. I have argued elsewhere (Billinger, 2006) that: (a) humans follow ethnogenetic processes, and the study of these processes requires a much higher level of theoretical sophistication to decode the multiple and interlocking causes of the patterns by which humans have grouped together—factors such as language, political divisions, religion, etc., not just geography; (b) humans form ethnic groups, and such groups are fluid and historically contingent; and (c) the degree to which physically or genetically distinguishable groups form is unlikely to be near a level warranting subspecies designations. This chapter has provided a summary of these arguments, intended to reignite the race debate in the context of technoethical considerations. That the
fluid biosocial condition outlined in this chapter exists only in human populations is enough to dismiss the notion of race at least in reference to modern humans, and likely in general biological application. The main challenge this presents is that it is simply not enough to insist on substituting neutral referents for racial terminology; a wholesale re-evaluation of human taxonomy may be necessary to get at the true patterns of variation. Eze (2001) quite correctly suggests that social scientists should assume the role of making genetic refutations of the race concept accessible to the general public. I believe that the key to destroying racialist thought is to dismantle its biological basis, which has served only to reify the race concept and obscure our understanding of the nature of group biology. However, this will not eliminate racism in its social manifestation(s). In order to tackle this problem adequately, we might find a solution in the language of human rights, which is fundamentally tied to the idea of human uniqueness as our unifying feature. This perspective implicitly ties humans' social existence to our basic biology, and provides further evidence that there is an inherent ethical argument against the perpetuation of the race problem. Ignatieff (2000, pp. 39–40) suggests that human rights derive their force in our conscience from the sense that we all belong to a single species, and that we can recognize ourselves in every other human being we meet. In other words, "to recognize and respect a stranger is to recognize and respect ourselves," since having an intense sense of one's own worth is a precondition for recognizing the worth of others. Ignatieff (2000, p. 41) believes that to commit ourselves to this way of thinking about the relationship between human equality and human difference—that human equality actually manifests itself in our differences—is to understand our commonalities as human beings in the very way we differentiate ourselves (as peoples, as communities, and as individuals). As such, we humans are not simply
biologically variable, but we also display astoundingly different ways in which we decorate, adorn, perfume, and costume our bodies in order to assert our identities as individuals and members of tribes or communities. According to Ignatieff (2000, p. 53):
Marx was simply wrong when he claimed, in 1843, that rights talk reduces us all to abstract, equal individuals, held together by our biological sameness. The claim I would make is the reverse. If the supreme value that rights seek to protect is human agency, then the chief expression of human agency is difference, the ceaseless elaboration of disguises, affirmations, identities, and claims, at once individually and collectively. To believe in rights is to believe in defending difference.
It is this kind of inclusive thinking that should push anthropologists to lead the way toward progressive approaches to the study of human cultural and biological relationships. However, race and racism should not be confused with one another, and indictments against the race concept should not be based on a rejection of racist thought, but must be grounded in solid biological fact. The inconsistencies seen in various types of biological data should stimulate us to rethink the ways in which we categorize human groups. Once we have found a way to move beyond the racial paradigm in terms of the ways in which we conceive of human biological relationships, then we can start to rethink the ways in which we treat others not only as members of social or biological groups, but as individuals.
REFERENCES

Andreasen, R.O. (1998). A new perspective on the race debate. British Journal for the Philosophy of Science, 49, 199–225.

Andreasen, R.O. (2000). Race: Biological reality or social construct? Philosophy of Science, 67 (Proceedings), S653–S666.

Andreasen, R.O. (2004). The cladistic race concept: A defense. Biology and Philosophy, 19, 425–442.

Barbujani, G., Magagni, A., Minch, E., & Cavalli-Sforza, L.L. (1997). An apportionment of human DNA diversity. Proceedings of the National Academy of Sciences USA, 94, 4516–4519.

Bastir, M., & Rosas, A. (2004). Geometric morphometrics in paleoanthropology: Mandibular shape variation, allometry, and the evolution of modern human skull morphology. In A.M.T. Elewa (Ed.), Morphometrics: Applications in biology and paleontology (pp. 231–244). New York: Springer.

Billinger, M.S. (2000). Geography, genetics, and generalizations: The abandonment of 'race' in the anthropological study of human biological variation. Unpublished master's thesis, Carleton University, Ottawa.

Billinger, M.S. (2006). Beyond the racial paradigm: New perspective on human biological variation. Unpublished doctoral dissertation, University of Alberta, Edmonton.

Billinger, M.S. (2007a). Another look at ethnicity as a biological concept: Moving anthropology beyond the race concept. Critique of Anthropology, 27(1), 5–35.

Billinger, M.S. (2007b). Gene expression and ethnic differences. Science, 315, 766.

Bookstein, F.L., Gunz, P., Mitteroecker, P., Prossinger, H., Schaefer, K., & Seidler, H. (2003). Cranial integration in Homo reassessed: Morphometrics of the mid-sagittal plane in ontogeny and evolution. Journal of Human Evolution, 44, 167–187.

Bookstein, F.L., Schäfer, K., Prossinger, H., Seidler, H., Fieder, M., Stringer, C., Weber, G.W., Arsuaga, J-L., Slice, D.E., Rohlf, F.J., Recheis, W., Mariam, A.J., & Marcus, L.F. (1999). Comparing frontal cranial profiles in archaic and modern Homo by morphometric analysis. The Anatomical Record (New Anatomist), 257, 217–224.

Brace, C.L. (1964). A nonracial approach towards the understanding of human diversity. In A. Montagu (Ed.), The concept of race (pp. 103–152). New York: Free Press.

Brace, C.L. (1995). Region does not mean race: Reality versus convention in forensic anthropology. Journal of Forensic Sciences, 40(2), 171–175.

Brace, C.L. (1996 [2000]). A four-letter word called 'race.' In C.L. Brace (Ed.), Evolution in an anthropological perspective (pp. 283–322). Walnut Creek: AltaMira Press.

Brace, C.L. (2005). "Race" is a four-letter word: The genesis of the concept. New York: Oxford University Press.

Brown, R.A., & Armelagos, G.J. (2001). Apportionment of racial diversity: A review. Evolutionary Anthropology, 10, 34–40.

Brown, W.L., & Wilson, E.O. (1954). The case against the trinomen. Systematic Zoology, 3(4), 174–176.

Brues, A.M. (1992). Forensic diagnosis of race: General race vs. specific populations. Social Science and Medicine, 34(2), 125–128.

Bunge, M. (1976). The philosophical richness of technology. Proceedings of the Biennial Meeting of the Philosophy of Science Association, Volume 2: Symposia and Invited Papers (pp. 153–172).

Bunge, M. (1977). Towards a technoethics. The Monist, 60, 96–107.

Cartmill, M. (1997). The third man. Discover, 18(9). Electronic document, http://www.discover.com/issues/sep-97/departments/thethirdman1220/, accessed April 4, 2000.

Cartmill, M. (1998). The status of the race concept in physical anthropology. American Anthropologist, 100(3), 651–660.

Cartmill, M., & Brown, K. (2003). Surveying the race concept: A reply to Lieberman, Kirk, and Littlefield. American Anthropologist, 105(1), 114–115.

Cavalli-Sforza, L.L., & Cavalli-Sforza, F. (1995). The great human diasporas: The history of diversity and evolution. Reading: Addison-Wesley.

Cavalli-Sforza, L.L., Menozzi, P., & Piazza, A. (1994). The history and geography of human genes. Princeton: Princeton University Press.

Chapman, M. (Ed.). (1993). Social and biological aspects of ethnicity. Oxford: Oxford University Press.

Chatters, J.C. (2000). The recovery and first analysis of an early Holocene human skeleton from Kennewick, Washington. American Antiquity, 65(2), 291–316.

Collard, M., & O'Higgins, P. (2001). Ontogeny and homoplasy in the papionin monkey face. Evolution and Development, 3, 322–331.

Collard, M., & Wood, B. (2000). How reliable are human phylogenetic hypotheses? Proceedings of the National Academy of Sciences USA, 97(9), 5003–5006.

Corruccini, R.S. (1973). Size and shape in similarity coefficients based on metric characters. American Journal of Physical Anthropology, 38, 743–754.

Corruccini, R.S. (1987). Shape in morphometrics: Comparative analysis. American Journal of Physical Anthropology, 73, 289–303.

Crews, D.E., & Bindon, J.R. (1991). Ethnicity as a taxonomic tool in biomedical and biosocial research. Ethnicity and Disease, 1, 42–49.

Currat, M., & Excoffier, L. (2004). Modern humans did not admix with Neanderthals during their range expansion into Europe. PLoS Biology, 2(12), 2264–2274.

Dean, M., Stephens, C., Winkler, C., Lomb, D.A., Ramsburg, M., Boaze, R., Stewart, C., Charbonneau, L., Goldman, D., Albough, B.J., Goedert, J.J., Beasley, P., Hwang, L-V., Buchbinder, S., Weedon, M., Johnson, P.A., Eichelberger, M., & O'Brien, S.J. (1994). Polymorphic admixture typing in human ethnic populations. American Journal of Human Genetics, 55, 788–808.

Deniker, J. (1900 [1904]). The races of man: An outline of anthropology and ethnography. London: Walter Scott Publishing Co. Ltd.

Dunn, L.C. (1951). Race and biology. Paris: UNESCO.

Excoffier, L., Smouse, P.E., & Quattro, J.M. (1992). Analysis of molecular variance inferred from metric distances among DNA haplotypes: Applications to human mitochondrial DNA restriction data. Genetics, 131, 479–491.

Eze, E.C. (2001). Achieving our humanity: The idea of a postracial future. New York: Routledge.

Gould, S.J. (1996). The mismeasure of man (Revised and expanded edition). New York: W.W. Norton and Co.

Griffiths, A.J.F., Miller, J.H., Suzuki, D.T., Lewontin, R.C., & Gelbart, W.M. (2000). An introduction to genetic analysis (7th edition). New York: W.H. Freeman.

Harvati, K. (2004). 3-D geometric morphometric analysis of temporal bone landmarks in Neanderthals and modern humans. In A.M.T. Elewa (Ed.), Morphometrics: Applications in biology and paleontology (pp. 245–258). New York: Springer.

Harvati, K., Frost, S.R., & McNulty, K.P. (2004). Neanderthal taxonomy reconsidered: Implications of 3D primate models of intra- and interspecific differences. Proceedings of the National Academy of Sciences USA, 101(5), 1147–1152.

Hawks, J., & Wolpoff, M.H. (2003). Sixty years of modern human origins in the American Anthropological Association. American Anthropologist, 105(1), 89–100.

Herrnstein, R.J., & Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York: Free Press.

Howells, W.W. (1973). Cranial variation in man: A study by multivariate analysis of patterns of difference among recent human populations. Papers of the Peabody Museum of Archaeology and Ethnology, Volume 67. Cambridge: Harvard University Press.

Howells, W.W. (1976). Explaining modern man: Evolutionists versus migrationists. Journal of Human Evolution, 6, 477–495.

Howells, W.W. (1989). Skull shapes and the map: Craniometric analyses in the dispersion of modern Homo. Papers of the Peabody Museum of Archaeology and Ethnology, Volume 79. Cambridge: Harvard University.

Howells, W.W. (1995). Who's who in skulls: Ethnic identification of crania from measurements. Papers of the Peabody Museum of Archaeology and Ethnology, Volume 82. Cambridge: Harvard University.

Hulse, F.S. (1969). Ethnic, caste and genetic miscegenation. Journal of Biosocial Science, Supplement No. 1. Oxford: Blackwell Scientific Publications.

Humphries, C.J. (2002). Homology, characters and continuous variables. In N. MacLeod & P.L. Forey (Eds.), Morphology, shape and phylogeny (pp. 8–26). New York: Taylor & Francis.

Huxley, J.S., & Haddon, A.C. (1935). We Europeans: A survey of 'racial' problems. London: J. Cape.

Ignatieff, M. (2000). The rights revolution. Toronto: House of Anansi Press.

Kaszycka, K.A., & Štrkalj, G. (2002). Anthropologists' attitudes towards the concept of race: The Polish sample. Current Anthropology, 43(2), 329–335.

Kaszycka, K.A., & Strzałko, J. (2003). Race: Tradition and convenience, or taxonomic reality? More on the race concept in Polish anthropology. Anthropological Review, 66, 23–37.

Keita, S.O.Y., & Kittles, R. (1997). The persistence of racial thinking and the myth of racial divergence. American Anthropologist, 99(3), 534–544.

Kennedy, K.A.R. (1995). But professor, why teach race identification if races don't exist? Journal of Forensic Sciences, 40(5), 797–800.

Kennedy, K.A.R., & Chiment, J. (1992). Racial identification in the context of prehistoric-historic biological continua: Examples from South Asia. Social Science and Medicine, 34(2), 119–123.

Kitcher, P. (2003). In Mendel's mirror: Philosophical reflections on biology. New York: Oxford University Press.

Latter, B.D.H. (1980). Genetic differences within and between populations of the major human subgroups. American Naturalist, 116, 220–237.

Levi-Strauss, C. (1958). Race and history. Paris: UNESCO.

Lewontin, R.C. (1972). The apportionment of human diversity. Evolutionary Biology, 6, 381–398.

Lieberman, D.E. (2000). Ontogeny, homology, and phylogeny in the hominid craniofacial skeleton: The problem of the browridge. In P. O'Higgins & M.J. Cohn (Eds.), Development, growth and evolution: Implications for the study of the hominid skeleton (pp. 86–115). New York: Academic Press.

Lieberman, L. (1968). The debate over race: A study in the sociology of knowledge. Phylon, 39(2), 127–141.

Lieberman, L., Kirk, R.C., & Littlefield, A. (2003). Perishing paradigm: Race, 1931–99. American Anthropologist, 105(1), 110–113.

Lieberman, L., Stevenson, B.W., & Reynolds, L.T. (1989). Race and anthropology: A core concept without consensus. Anthropology and Education Quarterly, 20, 67–73.

Livingstone, F.B. (1958). Anthropological implications of sickle cell gene distribution in West Africa. American Anthropologist, 60(3), 533–562.

Livingstone, F.B. (1962). On the non-existence of human races. Current Anthropology, 3(3), 279–281.

MacLeod, N., & Forey, P.L. (Eds.). (2002). Morphology, shape and phylogeny. New York: Taylor & Francis.

Marks, J. (1995). Human biodiversity: Genes, race and history. New York: Aldine de Gruyter.

Mayr, E. (1942). Systematics and the origin of species. New York: Columbia University Press.

Mayr, E. (1969). Principles of systematic zoology. New York: McGraw-Hill.

Mayr, E. (2002). The biology of race and the concept of equality. Daedalus (Winter 2002), 89–94.

McKeown, A.H., & Jantz, R.L. (2005). Comparison of coordinate and craniometric data for biological distance studies. In D.E. Slice (Ed.), Modern morphometrics in physical anthropology (pp. 215–246). New York: Kluwer Academic.

McManamon, F.P. (2000). Determination that the Kennewick human skeletal remains are "Native American" for the purposes of the Native American Graves Protection and Repatriation Act (NAGPRA). National Parks Service, United States Department of the Interior. Electronic document, http://www.cr.nps.gov/aad/kennewick/c14memo.htm, accessed April 25, 2005.

Mettler, L.E., Gregg, T.G., & Schaffer, H.G. (1988). Population genetics and evolution (2nd edition). Englewood Cliffs: Prentice Hall.

Mitteroecker, P., Gunz, P., Bernhard, M., Schaefer, K., & Bookstein, F. (2004). Comparison of cranial ontogenetic trajectories among great apes and humans. Journal of Human Evolution, 46, 679–698.

Molnar, S. (2002). Human variation: Races, types, and ethnic groups. Upper Saddle River: Prentice Hall.

Montagu, A. (1941). The concept of race in the human species in the light of genetics. The Journal of Heredity, 32, 243–247.

Montagu, A. (1942). The genetical theory of race, and anthropological method. American Anthropologist, 44(3), 369–375.

Montagu, A. (1997). Man's most dangerous myth: The fallacy of race (6th edition). Walnut Creek: AltaMira Press.

Moore, J.H. (1994). Putting anthropology back together again: The ethnogenetic critique of cladistic theory. American Anthropologist, 96(4), 925–948.

Morell, V. (1998). Kennewick Man's contemporaries. Science, 280(5361), 191.

Morton, S. (1839). Crania Americana. Philadelphia: J. Dobson.

Nei, M., & Roychoudhury, A.K. (1997). Genetic relationship and evolution of human races. In N.E. Gates (Ed.), The concept of 'race' in the natural and social sciences (pp. 29–88). New York: Garland Publishing Inc.

Owen, R. (1965). The patrilocal band: A linguistically and culturally hybrid social unit. American Anthropologist, 67, 675–690.

Penrose, L.S. (1954). Distance, size and shape. Annals of Eugenics, 18, 337–343.

Reig, S. (1996). Correspondence between interlandmark distances and caliper measurements. In L.F. Marcus, M. Corti, A. Loy, G.J.P. Naylor, & D.E. Slice (Eds.), Advances in morphometrics (pp. 371–386). New York: Plenum Press.

Ridley, M. (2004). Evolution. Cambridge: Blackwell Science.

Rohlf, F.J., & Marcus, L.F. (1993). A revolution in morphometrics. Trends in Ecology and Evolution, 8(4), 129–132.

Rushton, J.P. (1995). Race, evolution, and behavior: A life history perspective. New Brunswick: Transaction Publishers.

Ryman, N., Chakraborty, R., & Nei, M. (1983). Differences in the relative distribution of human genetic diversity between electrophoretic and red and white cell antigens. Human Heredity, 33, 93–102.

Sauer, N.J. (1992). Forensic anthropology and the concept of race: If races don't exist, why are forensic anthropologists so good at identifying them? Social Science and Medicine, 34(2), 107–111.

Sauer, N.J. (1993). Applied anthropology and the concept of race: A legacy of Linnaeus. In C.C. Gordon (Ed.), Race, ethnicity, and applied bioanthropology (pp. 79–84). NAPA Bulletin 13. National Association for the Practice of Anthropology: American Anthropological Association.

Seielstad, M.T., Minch, E., & Cavalli-Sforza, L.L. (1998). Genetic evidence for a higher female migration rate in humans. Nature Genetics, 20, 278–280.

Serre, D., Langaney, A., Chech, M., Teschler-Nicola, M., Paunovic, M., Mennecier, P., Hofreiter, M., Possnert, G., & Pääbo, S. (2004). No evidence of Neandertal mtDNA contribution to early modern humans. PLoS Biology, 2(3), 0313–0317.

Sharrock, S.R. (1974). Crees, Cree-Assiniboines and Assiniboines: Interethnic social organization on the far Northern Prairies. Ethnohistory, 21, 95–122.

Slice, D.E. (Ed.). (2005). Modern morphometrics in physical anthropology. New York: Kluwer Academic.

Smedley, A. (1999). Race in North America: Origin and evolution of a worldview. Boulder: Westview Press.

Stoler, A.L. (1997). Racial histories and their regimes of truth. Political Power and Social Theory, 11, 183–206.

Stringer, C. (1996). African exodus: The origins of modern humanity. New York: Henry Holt.

Stringer, C. (2002). New perspectives on the Neanderthals. Evolutionary Anthropology Supplement, 1, 58–59.

Templeton, A.R. (1999). Human races: A genetic and evolutionary perspective. American Anthropologist, 100(3), 632–650.

Templeton, A.R. (2002). Out of Africa again and again. Nature, 416, 45–51.

Templeton, A.R. (2005). Haplotype trees and modern human origins. Yearbook of Physical Anthropology, 48, 33–59.

Teschler-Nicola, M. (2004). The diagnostic eye: On the history of genetic and racial assessment in pre-1938 Austria. Collegium Antropologicum, 28(Supplement 2), 7–29.

Thomas, D.H. (2000). The skull wars: Kennewick Man, archaeology, and the battle for Native American identity. New York: Basic Books.

Vidarsdottir, U.S., O'Higgins, P., & Stringer, C. (2002). A geometric morphometric study of regional differences in the ontogeny of the modern human facial skeleton. Journal of Anatomy, 201(3), 211–229.

Wahlsten, D. (1997). Leilani Muir versus the philosopher king: Eugenics on trial in Alberta. Genetica, 99(2/3), 185–198.

Wang, Q., Štrkalj, G., & Sun, L. (2002a). On the concept of race in Chinese biological anthropology: Alive and well. Current Anthropology, 43(2), 403.

Wang, Q., Štrkalj, G., & Sun, L. (2002b). The status of the race concept in Chinese biological anthropology. Anthropologie, 40(1), 95–98.

Washburn, S.L. (1963). The study of race. American Anthropologist, 65, 521–531.

Weidenreich, F. (1943). The skull of Sinanthropus pekinensis: A comparative study on a primitive hominid skull. Acta Anthropologica Sinica, 5, 243–258.

Wilson, E.O., & Brown, W.L. (1953). The subspecies concept and its taxonomic application. Systematic Zoology, 2, 97–111.

Wolpoff, M., & Caspari, R. (1997). Race and human evolution. New York: Simon & Schuster.

Wolpoff, M., & Caspari, R. (2000). Multiregional, not multiple origins. American Journal of Physical Anthropology, 112(1), 129–136.

Wolpoff, M., Xinzhi, W., & Thorne, A. (1984). Modern Homo sapiens origins: A general theory of hominid evolution involving the fossil evidence from East Asia. In F.H. Smith & F. Spencer (Eds.), The origins of modern humans: A world survey of the fossil evidence (pp. 441–483). New York: Alan R. Liss.
KEY TERMS

Allele Frequencies: A measure of the relative frequency of an allele (one of two or more alternative forms of a gene, which control the same inherited characteristic) at a genetic locus in a population.

Cladistic: A system of biological classification that groups organisms on the basis of observed shared characteristics in order to deduce common ancestors.

Ethnogenesis: The creation of a new ethnic group identity through the separation or combination of existing groups.

Homology: Similar evolutionary characteristics that are a product of descent from a common ancestor rather than a product of a similar environment.

Morphometrics: The measurement of skeletal morphological features, captured using calipers or 3D imaging.

Ontogeny: The development of an individual organism from a fertilized ovum to maturity, as contrasted with the development of a group or species (phylogeny).

Phenetic: A system of biological classification based on the quantification of overall physical similarities between organisms rather than on their genetic or developmental relationships.

Phylogenetic: Relating to the development over time of a species, genus, or group, as contrasted with the development of an individual (ontogeny).

Rhizotic: A system of classification that emphasizes the extent to which each element (e.g., human language, culture, or population) is considered to be derived from or rooted in several different antecedent groups.

Skeletal Morphology: The form and structure of the skeletal system and its individual elements.

Taxonomic: Relating to the practice or principles of systematic classification.
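As a minimal arithmetic illustration of the 'allele frequencies' definition above—my own sketch, with hypothetical genotype counts rather than data from this chapter:

    # Allele frequencies at one locus, computed from hypothetical genotype
    # counts for a two-allele system (A and a).
    n_AA, n_Aa, n_aa = 180, 240, 80          # illustrative genotype counts
    n_alleles = 2 * (n_AA + n_Aa + n_aa)     # each individual carries two alleles
    freq_A = (2 * n_AA + n_Aa) / n_alleles   # relative frequency of allele A
    freq_a = (2 * n_aa + n_Aa) / n_alleles   # relative frequency of allele a
    print(f"f(A) = {freq_A:.3f}, f(a) = {freq_a:.3f}")  # the two sum to 1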
ENDNOTES
1. According to Montagu (1941, p. 243), "The idea of race is one of the most fundamental if not the most fundamental of the concepts with which the anthropologist has habitually worked." Furthermore, Levi-Strauss (1958, p. 8) asserted, "The original sin of anthropology … consists in its confusion of the idea of race, in the purely biological sense (assuming that there is any factual basis for the idea, even in this limited field—which is disputed by modern genetics), with the sociological and psychological production of human civilizations."

2. See also Teschler-Nicola (2004) for a critical analysis of National Socialist race theory in Austria and central Europe, which also demonstrates the changing perceptions of human classification in the post-war period.

3. To add to this problem, popular fictional novels and crime drama television programs have provided a wildly inaccurate image of the techniques used by anthropologists in the analysis of biological variation and forensic identification.

4. Clinal variation is the graded intensity of adaptive traits according to geographic distance. Thus, genetic distance and geographic distance are highly correlated (Templeton, 1999, p. 639).

5. Patterns of clinal variation follow the so-called Bergmann-Allen rules. Bergmann's rule explains that in warm-blooded species, as groups move geographically towards more polar regions, overall body mass is expected to increase. Similarly, Allen's rule explains that as groups move towards warmer (equatorial) geographic areas, the length of the extremities increases. In recent human groups, this clinal variation shows a very strong negative correlation (−0.60) between body mass and mean annual temperature (Marks, 1995; Molnar, 2002, pp. 199–201).
6. Conflicting human evolutionary models remain unresolved in this respect (see Billinger, 2006, Chapter 2; Collard & Wood, 2000; Hawks & Wolpoff, 2003). The cladistic approach inherently favours the branching pattern of the Out-of-Africa hypothesis of human origins (Stringer, 1996; Templeton, 2002) over the Multiregional Continuity Model (Brace, 1996, p. 221; see also Wolpoff et al., 1984; Wolpoff & Caspari, 1997, 2000). For a discussion of the competing models of human evolution and phylogeny and how they relate to race or subspecies, see particularly Cartmill (1997) and Billinger (2000, Chapter 3).

7. Montagu (1942, p. 375) was the only one of these authors to offer a definition of an ethnic group: "[O]ne of a number of populations comprising the single species Homo sapiens, which individually maintain their differences, physical and cultural, by means of isolating mechanisms such as geographic and social barriers. These differences will vary as the power of the geographic and social barriers, acting upon the original genetic differences, vary. Where these barriers are of low power neighboring groups will intergrade, or hybridize, with one another. Where these barriers are of high power such ethnic groups will tend to remain distinct or replace each other geographically or ecologically."

8. The populations chosen by Howells (1973, 1989) represent six major groups: Europe (Norse, Zalavar, Berg), Africa (Egypt, Teita, Dogon, Zulu, Bushman), Australo-Melanesia (Australia, Tasmania, Tolai), Far East (North Japan, South Japan, Hainan Island), America (Arikara, Peru), and Other (Andaman, Ainu, Buriat, Eskimo). Howells' American data is supplemented by data for Navajo and Mexico, as well as Arctic (Siberia, Northwest Territories [NWT]) groups, and the "Kennewick Man" skull (see Billinger, 2006, p. 149). Only male data have been included here; female data show similar patterns. Please refer to Billinger (2006) for the complete analysis.

9. Ten craniofacial traits were used at this stage of the analysis: GOL (maximum cranial length), XCB (maximum cranial breadth), ZYB (bizygomatic diameter), BBH (basion-bregma height), MAB (maxillo-alveolar breadth), NPH (upper facial height), NLH (nasal height), NLB (nasal breadth), OBB (orbital breadth), and OBH (orbital height).

10. Mayr developed the CD as a method to quantify difference in response to critiques of his definition of subspecies as arbitrary.

11. The 'Kennewick Man' skull, found on the shores of the Columbia River in Washington State, is one of the oldest known cranial specimens in North America, at 9500–9000 years old (Chatters, 2000; McManamon, 2000). This specimen is particularly interesting for this study because the assignment of ancestral affinity to this skull has been a highly contentious endeavour (Thomas, 2000; Morell, 1998).

12. American/Arctic data (Arikara, Peru, Navajo, Mexico, Siberia, Northwest Territories [NWT], Ainu, Buriat, Eskimo, Kennewick) have further been supplemented with the addition of Northwest Coast [NWC], Haida, Tsimshian, Greenville, Prince Rupert Harbour [PRH], Namu, Blue Jackets Creek [BJC], and a Paleoindian sample. Only four craniofacial traits were used at this stage of analysis, based on the availability of published data: GOL, XCB, ZYB, and NPH (see Billinger, 2006, p. 253).

13. Templeton (2005, p. 52) finds the designation of Neanderthals as a separate species based on mtDNA evidence to be questionable, arguing that genetic, fossil, and archaeological data should be integrated in order to draw significant conclusions about evolutionary models.
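To make the CD of endnote 10 concrete, here is a minimal sketch—my own illustration, with hypothetical measurements rather than data from Billinger's analysis—of Mayr's coefficient of difference for a single craniometric trait in two samples:

    # Mayr's coefficient of difference (CD) for one trait in two populations:
    # CD = |mean_a - mean_b| / (sd_a + sd_b). Values below are illustrative
    # maximum cranial lengths (GOL, in mm), not real data.
    import statistics

    pop_a = [182.1, 179.4, 185.0, 181.2, 178.9, 183.3]
    pop_b = [171.5, 174.2, 169.8, 172.7, 170.9, 173.1]

    cd = abs(statistics.mean(pop_a) - statistics.mean(pop_b)) / (
        statistics.stdev(pop_a) + statistics.stdev(pop_b))
    print(f"CD = {cd:.2f}")

On the usual reading of Mayr (1969), a CD of roughly 1.28 or more corresponds to about 90% joint non-overlap between the two distributions and has served as a conventional cut-off for recognizing subspecies—precisely the sort of arbitrary threshold this chapter calls into question.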
Chapter V
The Ethics of Human Enhancement in Sport

Andy Miah
University of the West of Scotland, Scotland
ABSTRACT

This chapter outlines a technoethics for sport by addressing the relationship between sport ethics and bioethics. The purpose of this chapter is to establish the conditions in which a technoethics of sport should be approached, taking into account the varieties and forms of technology in sport. It also provides an historical overview of ethics and policy making on sport technologies and contextualises the development of this work within the broader medical ethical sphere. It undertakes a conceptualisation of sport technology by drawing from the World Anti-Doping Code, which specifies three conditions that determine whether any given technology is considered to be a form of doping. In so doing, it scrutinizes the 'spirit of sport', the central mechanism within sport policy that articulates a technoethics of sport. The chapter discusses a range of sport technology examples, focusing on recent cases of hypoxic training and gene doping.
INTRODUCTION

If one examines the history of modern sport, the importance attributed to discussions about the ethics of technological development is unclear. This is surprising since, via the technology of performance enhancement, ethical discussions about sport technologies are among the most politically and culturally visible of topics. Instead, there is evidence of a struggle to implement a
specific ethical view on doping, which functions as an assumed, rather than contested, ethical terrain. This struggle is exhibited through the rhetoric of anti-doping policy and the governmental processes that underpin anti-doping. For instance, in 1998 the World Anti-Doping Agency (WADA) was conceived as a result of growing criticisms that anti-doping work should be separate from the International Olympic Committee. Between 1999 and 2002, one of the major struggles of WADA
was to achieve the signatures and commitments of participating governments and sports federations. In this instance, the ethical struggles were never about the soundness of anti-doping arguments, but about the ethics of implementation and policy-making. The alleged ethical consensus that surrounds this anti-doping work shapes the conditions within which ethical debates about technology in sport have taken place and prescribes the limits of ethical inquiry surrounding the governance of elite sports. As further illustration of this lack of interest in the technoethics of sport, one can observe that nearly no research into the ethics of technology in sport has been funded by sports organizations. Some exceptions include research conducted at the Hastings Center (New York) since the 1980s under the direction of its current President, Thomas H. Murray. Murray's long-standing contribution to various sports-related doping authorities is notable, though it is also exceptional. Despite the projects funded through the Hastings Center, ethical reasoning on this issue appears to be of limited interest to sports policy makers. The evidence suggests that there is nearly no political weight behind efforts to question fundamental ethical issues about performance enhancement. Rather, this kind of ethics functions as a form of rhetoric that seeks to endorse an already assumed ethical stance: that doping is wrong. These circumstances can be contrasted with the academic study of sport ethics and philosophy, which has developed steadily since the 1970s. The Journal of the Philosophy of Sport (Human Kinetics) and the recent addition of the journal Sport, Ethics & Philosophy (Routledge) are evidence of a burgeoning community of ethicists who are interested in sport issues. Some of these authors have written about the ethics of technology (see Miah & Eassom, 2002), though the majority of contributions have focused on doping specifically. In recent years, this community has expanded into two notable areas of applied phi-
losophy—the philosophy of technology and the philosophy of health care, or technoethics and bioethics. In particular, the latter has developed an interest in sport via the doping issue, particularly to inform debates about the ethics of human enhancement. Recent contributions from such prominent bioethicists as Nick Bostrom, Ruth Chadwick, John Harris and Julian Savulescu are some indication of how sport enhancement issues have reached the mainstream readership within bioethics.1 Accompanying these developments is a range of new technologies that promise to raise difficult questions about the ethics of performance in elite sports. For instance, over the last five years, considerable attention has been given to the prospect of 'gene doping' (World Anti-Doping Code, 2004), the application of gene transfer technology to the athlete. Gene doping raises a number of new questions about the morality of (anti)doping and the parameters of the 'drug war' in sports (Miah, 2004; Tamburrini & Tannsjo, 2005). Such technology places demands on sporting authorities that have, hitherto, not been encountered, calling into question the limits of the anti-doping movement. For instance, gene doping presents the possibility of enhancing athletes in a manner that is minimally invasive and sufficiently safe. If such conditions are met, then the rationale for anti-doping diminishes. Separately, in 2006, WADA investigated the use of hypoxic chambers, which have the capacity to enhance an athlete's performance in a manner similar to altitude training, by simulating different levels of altitude. The inquiry provoked a vast amount of criticism from within the science community, which disputed the inclusion of the technology within the World Anti-Doping Code. Arguably, as technology improves and as body and cognitive enhancements play an increasing role within society, the pursuit of anti-doping raises more ethical issues than it resolves. Consider, for instance, the testing of high-school students in the United States for doping substances. One might legitimately ask where such testing should
be limited, at what age, and to what level of intrusion into people's lives. In this context, it is necessary to reconsider the role of ethics in debates about technological enhancement in sport. This chapter discusses this role and the capacity of ethics to inform policy debates on doping specifically and on sport technology issues generally. I will suggest how ethics is beginning to play an increasing role in the doping debate and in the study of science, medicine and technology more broadly, which suggests how much more effective ethical inquiry can become in discussions about emerging technologies, such as gene doping. I begin by considering the political history of the doping debate, which has given rise to a number of limitations and restrictions on the advancement of the ethical contribution to the issue. I then consider the development of the doping debate in the context of the philosophy of sport and medical ethics and argue that their lack of connectedness has limited the advance of the doping issue. Third, I discuss a number of the substantive ethical issues that concern sport technologies. Finally, I argue that the relationship between sport and technoethics is changing via a number of new technologies that now consume the anti-doping movement.
MORAL RHETORIC & ETHICAL CODES

While anti-doping began many decades earlier, the major collaborative efforts in anti-doping emerged in the 1960s from within the International Olympic Committee (IOC). In the 1960s, the IOC created a Medical Commission whose role was to address emerging concerns about the use of chemical substances in sport, and testing began during the 1964 Tokyo Olympic Games. The IOC's pursuit of anti-doping at this time can be understood as a reaction to specific cases where it was believed that athletes were being harmed by substance misuse. Of particu-
lar importance was the death of cyclist Tommy Simpson during the 1967 Tour de France. Arguably, the televised broadcast of Simpson's death played a critical role in the political pressure to do something about doping and in raising the public profile of the concern, for better or worse. At this time, the influence of the IOC, as the guardians of elite sport, was considerable, and the post-war climate, along with emerging reactions to drug abuse within society, overshadowed the ethical debate about performance enhancement in sport. Indeed, this connection between drug use in sport and its use in society remains apparent. For instance, the United States government has recently re-asserted its commitment to fighting the drug war, and there is considerable alarm about the use of doping substances in high-school sport within the United States of America. It has even been suggested that the doping debate should be approached and dealt with as a public health issue, rather than just a problem of elite sport (Murray, cited in Dreifus, 2004). The use of substances such as anabolic steroids for image enhancement rather than performance enhancement is one example of a practice that transcends the ethics of sport. This proposed model for approaching doping would signal a radical change in how it is dealt with on a global scale. Amidst this concern, the presumed harmfulness of various forms of doping remains contested, and such disputes extend even to notorious doping substances such as anabolic steroids. While there are many strong convictions on this subject, there is considerable disagreement about whether many banned substances are inherently detrimental to an athlete's health, or whether their particular application is what leads to the greatest risks. This is necessary to bear in mind, not because I wish to take any particular stance on the merits of these convictions, but because this contested status reinforces the claim that the ethics of anti-doping has relied on political justifications,
rather than moral ones. One might legitimately ask whose interests are served by the ongoing absence of evidence surrounding this subject, and whether those interests are also what prevents us from understanding more. The concern about the risks that doping poses to an athlete's health explains the development of anti-doping much less than the pressure on an aspiring organization like the IOC to demonstrate concern for its members. To date, there is considerable uncertainty about the effects of many methods of doping, and this uncertainty is exacerbated by the fact that many doping technologies are experimental innovations about which there is only limited science. As further support for the importance of politics in the 1960s doping debate, one might also consider the developing ethical conscientiousness of this period. After World War II, the moral concerns about eugenics and the abuse of humans in the pursuit of science were paramount. These conversations led to a series of ethical and legal instruments considered fundamental to any research or practice involving human subjects. The United Nations' Universal Declaration of Human Rights (1948), the Nuremberg Code (1949), and the Declaration of Helsinki (1964) were a significant part of this emerging moral consciousness (see World Medical Association, 2000). Moreover, given the interrelatedness of the medical professions to anti-doping, one might expect the influence of these developments on how anti-doping evolved to be of critical importance. However, there is no evidence that the world of sport was guided by these broader changes or that there was any crossover of discourses. Despite the disagreement over the harms of doping, the argument from harm remains central to the rationale underpinning anti-doping measures. Houlihan (1999) describes the way in which the athlete has been characterized as a subject of medical concern:
Once it is accepted that extreme physical fitness makes an athlete by definition a patient, then there is already in existence a culture, professionally supported and promoted, that encourages the treatment of healthy athletes with drugs. (p. 88)

To this extent, the concern from the IOC was politically aligned with other social concerns about the use and abuse of science. In short, the need for an anti-doping movement seems to have been much less about the ethics of sport and much more about the potential abuse of athletes who might be subjected to state-funded programmes designed to produce 'perfect athletes'. The case of the German Democratic Republic is a particularly good example of why such concerns were needed. Furthermore, it reinforced the inter-relatedness of state and sport, where the national interest in securing sporting victories was a considerable motivation to ensure that athletes had a competitive edge over their opponents. Two insights into the role of technoethics in sport can be drawn from this set of circumstances. First, the incentive to develop an anti-doping policy arose from a concern about how the public profile of the IOC might be prejudiced by a failure to act. The IOC found itself subject to an institutional obligation to address the possible risks faced by its core community, the athletes.2 In particular, its concern was the possible corruption of the Olympic values that doping would entail. This pressure must also be seen in the broader context of governmental concerns about drug abuse more generally and the political interest of sports organizations in working in partnership with governmental priorities on sport. Second, one can argue that the ethics underpinning anti-doping were not directly related to the emerging post-war ethical concerns about the medical and scientific professions. This latter conclusion is of particular relevance to our present discussion, since it assists in explaining the peculiar inconsistencies in how different technologies
have been rationalised in the context of sport. In short, the medical community underpinning the development of anti-doping has been governed largely by a very strict notion of what is medically acceptable, to the exclusion of conceptual developments and policy debates within medical ethics. This has limited the capacity to develop an adequate approach to the doping debate. Yet, more importantly, this is something that can be changed. Indeed, I will provide examples of how this is changing, with particular attention to 'gene doping,' which has become a critical part of this shifting dialogue.
THE CHANGING FACE OF SPORT ETHICS

Despite the growth of sport philosophy, much of its work has been conspicuous by its absence in shaping the policy debate over doping. This is not to say that the publications of sport philosophers are not credible or that their arguments have been irrelevant. Rather, more modestly, the problem has been that the development of ethical debates on doping within the philosophy of sport has been institutionally divorced from critical ethical and policy decisions in the world of anti-doping.3 In defence of sport philosophers, applied debates in anti-doping have not really demonstrated an interest in problematising ethics. Moreover, there is a void between theoretical ethics and applied policy making, the former of which seems, more often than not, to have been the interest of sport philosophers as the discipline evolved to establish itself, first, as a credible subject of philosophical concern. However, some capacity to inform policy through ethical reasoning is provided in other sorts of ethical literature. As some indication of this, it is useful to contrast the sport ethics literature with work in bioethics, which is even more appealing for our present purposes, since the doping debate is closely connected to medical ethics. Medical
ethics and bioethics have a similar historical timeframe to the philosophy of sport. If one takes the long perspective, then it is possible to identify as much philosophy of sport within the works of Aristotle as one might the philosophy of health. More recently, both sub-disciplines matured in the late 1960s and, again, each was closely allied with the post-war concerns about potential abuses of human subjects in clinical research. Yet, given their apparent interrelatedness, one might wonder how it is that the sport technology debates in the philosophy of sport have been largely unconnected to the ethical discussions in medicine. Rather than focus on explaining this, I will focus, instead, on how these circumstances can and should change. I will also provide evidence of such change, which should indicate how the role of ethics in anti-doping discussions could become more substantive. In 1983, Thomas H. Murray wrote an article about drug taking in sport for the Hastings Center Report, one of the leading medical ethics journals in the world. This was followed by Fost (1986), whose controversial pro-doping stance made its way into the philosophy of sport literature. Aside from these articles, nearly no conversations about doping have taken place within the medical ethics literature, despite concurrent debates within the philosophy of sport literature. In contrast, if one examines debates in medicine, there is a clear connection between the ethical community and the applied policy and legal discussions.4 For example, in the field of human fertilization and embryology, both policy and law in the UK rely on the ethical debates of the 1980s concerning the moral status of the embryo. Moreover, if one examines medical journals such as the Lancet, Nature, or Science, one frequently reads commentaries from academic ethicists about policy, legal, or scientific issues (see, for example, Juengst, 2003).5 In contrast, the doping debate has not benefited from such a relationship with philosophers of sport, which is
why there is an opportunity for sport philosophers to make more of a contribution. There are reasons to be optimistic about this relationship. For example, the inclusion of the 'philosophy of sport' within philosophical and ethical encyclopaedias is some indication of the degree to which the contributions of sport philosophers are now being taken more seriously by a range of audiences (notably, by more established philosophical disciplines and medical institutions). The presence of sport philosophers at the World Congress of Philosophy, the IOC World Congress on Sport Science, and the European College of Sport Science, among others, suggests that philosophers have an increasing role to play in the analysis of sport. Indeed, in the last two years, the philosophy of sport community has grown considerably in Europe, through the development of the British Philosophy of Sport Association. With respect to the relationship between sport ethics and bioethics, there are also further indications of growth. In 1998, the Hastings Center published a book resulting from a project about enhancement technologies, in which sport had a presence (Parens, 1998). Additionally, since 2002, the Hastings Center has been funded by the United States Anti-Doping Agency and the World Anti-Doping Agency on two research projects to investigate the ethics of performance enhancement in sport, during which time associations have been made between philosophers of sport and bioethicists. Moreover, in 2004, Thomas H. Murray was appointed Chair of the Ethical Issues Review Panel within WADA, which has included a number of contributors to the philosophy of sport literature, such as Angela Schneider, Gunnar Breivik and Sigmund Loland. Today, it is more common to find sport philosophers working in association with sports organizations and medical (ethical) associations. While there is some uncertainty about the effectiveness of these committees and their specific terms of reference, their existence at all is an advance on previous anti-doping organizations. Nevertheless, it remains to be seen
whether ethics will have a central role in many national anti-doping organizations, where what is ethical remains a straightforward implementation of international policies.6 Even within this context, it is important to clarify the nature of ethical concern among academics, since there appear to be at least two kinds of ethical criticism of anti-doping. First, some ethicists are critical of the way that anti-doping has been handled but, nevertheless, agree that it is a fundamentally desirable project to support. Such is the perspective of Fraleigh (1985), Murray (1983, 1984, 1986, 1987), Loland (2002), Schneider and Butcher (1994), and Simon (1985), among others, who argue that there are good reasons to prohibit the use of doping methods. Other authors have been critical of the ethical foundation of anti-doping and have advocated its re-appraisal, including arguing on behalf of a more permissive environment for performance enhancements (Burke, 1997; Eassom, 1995; Kayser et al., 2005, 2006; Miah, 2004; Tamburrini, 2000). These two critical voices often overlap, and allegiances shift depending on the specific technology under discussion, though they are not two parts of the same opinion. The former considers that there is value in established ethical protocols on the acceptable uses of medicine and seeks to ensure that good practice is maintained in sport. For example, such authors might argue that sports authorities compromise the position of the sports physician, to such an extent that their actions within sport are dubiously described as medicine (McNamee & Edwards, 2005). Alternatively, this perspective might argue that the amount of funding dedicated to tackling the problem of doping in sport has not been sufficient, or that its policies have been skewed too far towards detection and not far enough towards, say, education (Houlihan, 1999). In contrast, the latter view argues that conventional medical ethical principles and protocols cannot be mapped directly onto the world of sport and that athletes should be permitted to use whatever they please to alter
their performances. This latter perspective would dispute the premise that sports medicine should operate under established medical norms. While the former of these views seeks a credible implementation of medical ethical protocols, the latter would argue that such standards are inadequate for the governance of medical interventions in sport. It is important to bear this in mind when trying to find some way of engaging with the medical professions on doping, since arguments on behalf of doping are often not dismissed as a matter of course by those within the world of sport. Indeed, one might again return to the literature in the philosophy of sport and wonder whether the overt philosophical liberalism expressed in many papers on doping has, in some way, alienated sport philosophers from the applied medical conversations. This does not mean that sport philosophers should limit their ethical inquiries, but simply suggests that radical views on doping must be accompanied by pragmatic debates in relevant journals, where questions about the legitimate ends of medicine can be discussed in critical and engaging ways. For example, discussions about creating superhumans must be accompanied by problematising the legitimate use of such substances as human growth hormone.
SUBSTANTIVE ETHICAL ISSUES ON SPORT TECHNOLOGY

While individual sports federations have their own anti-doping guidelines, the vast majority of them are now governed by the World Anti-Doping Code. This code is instrumental in deciding whether or not any given technology—product or process—is to be considered a doping technology. The basis of this decision involves testing the proposed technology against three conditions, at least two of which must be engaged in order for WADA to consider a prohibition. These consist of the following:7
1. Does the technology have the 'potential to enhance', or does it enhance, sport performance?
2. Does the technology present an 'actual or potential health risk'?
3. Does the technology 'violate the spirit of sport' as described by the Code?8

These three conditions are a useful place to begin unravelling the substantive ethical issues arising from the use of technology in sport; a minimal sketch of the resulting decision rule follows below. While their application can be criticised for being limited exclusively to doping technologies, they offer some explanation for why other technologies do not provoke the concern of the anti-doping community. Yet, it is important to bear in mind that these criteria do not constitute the breadth of the ethical foundation of sports, which is more carefully elaborated upon in broader constitutional documentation, such as the Olympic Charter. If one considers how a technological artefact that would not fall under the Code is dealt with, it becomes clear why, nevertheless, the three criteria are a useful articulation of sport's technoethical framework. For instance, what sector within the world of sport should respond to a new, lighter tennis racquet? Would one expect this to be described as a doping technology, or should the ethical issues it provokes be discussed elsewhere? How does it engage the three conditions of the World Anti-Doping Code? Such an innovation might allow for a different kind of swing which, subsequently, could present a different portfolio of likely injuries, many of which might be seen as more severe than those likely to arise with the previous type of tennis racquet. There are many similar examples in sport. For instance, a faster motorcycle could lead to greater risks being taken, or a greater likelihood of serious or even life-threatening injury. In short, an innovation can change the range of harms to health that an athlete experiences during training and competition.
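The following is a minimal sketch of the two-of-three decision rule just described—my own illustration, not WADA's actual procedure, with a hypothetical function name and example values:

    # The Code's two-of-three rule as executable logic: a technology becomes
    # a candidate for prohibition when at least two of the three criteria
    # listed above are engaged. Illustrative only.
    def prohibition_candidate(enhances_performance: bool,
                              health_risk: bool,
                              violates_spirit: bool) -> bool:
        """Return True if at least two of the Code's three criteria are met."""
        return sum([enhances_performance, health_risk, violates_spirit]) >= 2

    # A hypothetical technology that enhances performance and carries a health
    # risk satisfies the rule even if the 'spirit of sport' is untouched.
    print(prohibition_candidate(True, True, False))   # -> True
    print(prohibition_candidate(False, False, True))  # -> False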
A new tennis racquet might also be performance enhancing; it could allow an athlete to achieve a faster speed of serve, or to impart greater spin on a ball. This latter example arose in the late 1970s, when a 'spaghetti-strung' (double-strung) tennis racquet was introduced. Due to its performance enhancing capabilities, it was deemed illegal because it threatened the characteristics of tennis that are tested via its rules. In this case, too much spin, it would seem, diminishes the ability to test the kinds of tennis-related skills that are of interest. Perhaps a useful analogy is a tennis shot that clips the net on its way over. While one might identify such a winning stroke as skill-based, it is generally recognised that athletes cannot be this precise with their strokes and that an element of 'luck' has led to the advantage. At least where serving is concerned, this is partly why tennis offers a 'let' (re-serve) when such clipping takes place. Finally, a new tennis racquet could engage the concern that the technology is against the spirit of sport, though to understand this further, it is necessary to inquire more into this concept, which is, arguably, one of the most contested ethical terms within sport. In our case of the lighter tennis racquet, its use might violate the spirit of sport if it is not available to all competitors. In itself, this might be considered an unfair advantage, though if one conceives of technological innovation as an integral part of the athlete's skill and knowledge base, this is a dubious position to take. In any case, it is useful to probe more extensively the concept of a 'spirit of sport'.
The Spirit of Sport (Technology)
solidarity’. It would be fatuous to point out that the gruelling, commercial world of elite sports rarely demonstrates the experiencing of these characteristics and that, on this basis, the values proposed in the code have no resonance. The Code is not a well-worked through ethical paper designed to withstand the scrutiny of theoretical ethics. Rather, it must function across a range of legal, social, and policy contexts. Nevertheless, it is important to ask further how the spirit of sport is applied via the Code, since one might have concerns about consistency of practice.9 One of the most visible recent tests of the ‘spirit’ is the 2006 debate about hypoxic training technology, which involves the creation of an environment—usually the size of a room—that simulates varying levels of altitude in order to confer a performance advantage.10 In this case, WADA considered the technology’s legitimacy on each of the three criteria and its various sub-committees reported mixed findings. It was not possible to conclude that hypoxic training presented any potential or actual health risk or, indeed, that it was performance enhancing, though each concluded that these were not definitive findings.11 This, alone, would be enough to rule out prohibition, though of particular interest is the approach taken by the Ethical Issues Review Panel on whether the ‘spirit of sport’ was challenged by hypoxia. Specifically, the Panel attempted to grapple with the ‘spirit of sport’ in quite specific terms, arguing that the advantage gained via hypoxic training did not require the ‘virtuous perfection of natural talents’, a moral standard it used to establish whether or not the technology contravened the spirit of sport. Importantly, the argument could not, in itself, allow the Panel to conclude that hypoxic training should be banned, but it did imply an element of moral condemnation that is useful to dwell on when thinking about the contribution of technoethics to such debates. Moreover, this case is one of the most visceral attempts to articulate, in more precise terms, the spirit of sport and so it serves as a
useful route towards greater elaboration. I will not detail the specifics of the Panel's argument any further, though it is useful to note that the scientific community challenged the argument on scientific rather than ethical grounds and that the final recommendation was to maintain hypoxic training as a legal performance technology.12 Of course, this status could change, as it will remain subject to ongoing scientific analysis. Despite the outcome of this case, the 'virtuous perfection of natural talents' alludes to what the spirit of sport might be, as if it is important to ensure that athletes gain their advantages by having to work, rather than simply applying a technology. This is what some authors intend when they argue that the 'means matter' (Cole-Turner 1998). A less contentious articulation of the spirit of sport concerns the concept of cheating. While there are different kinds of rules in sport, different ways in which they can be broken, and different levels of moral condemnation that will arise from such violations, it is generally regarded that cheating is contrary to the spirit of sport, particularly as it relates to performance enhancement.13 Indeed, doping, by definition, is a form of cheating, since it is the utilization of means that the rules prohibit. However, the analysis of cheating can be approached on a number of other levels. For instance, in response to the argument that a laissez-faire approach to doping would eliminate the moral condemnation of doping as cheating—since everyone will be permitted to do whatever they want—it is sometimes argued that cheating will still have occurred, since the conditions of the competition will have been undermined. In this sense, the doped competitor achieves an unfair advantage over the sport, rather than the competitors.14 To the extent that sports are practices that are shaped and defined by their communities' members, one can envisage how such concerns develop moral significance—which again reminds us of the resistance to an 'anything goes' perspective on the ethics of doping. It also reminds us of the limits of ethics when they are
divorced from the practice community that is affected by the rules. Perhaps a final characterisation of the spirit of sport is its aspiration to ensure that sports competitions are tests of athletes rather than technologies. While I would argue that sports are constitutively technological, others would argue that there are types of technological integration that should be resisted—such as biological modification via pharmaceuticals. On this basis, one can observe varying degrees of moral concern that arise from different types of technological apparatus. The subject of concern here is articulated in various forms. Some authors have described it as the 'dehumanizing' thesis, while others write about the 'deskilling' of performance that it implies. In each case, the arguments resist such technological development, seeing it as antithetical to what sports competitions are supposed to be about—a test of human capacities. It is imagined that such technology would reduce the athlete's role in performance and, in so doing, diminish the value of competition. This view of dehumanisation also emerges from a 'mechanisation' thesis that describes the scientification of sport as bringing about feelings of alienation—that is, the manufacturing of athletes, for instance. Such an evaluation of contemporary elite sports describes the athlete as a product of a scientific or technological process, somehow automated in performance.
Human Enhancement Outside of Sport

Accompanying these challenges to the spirit of sport is the additional context offered via broader perspectives on bioethics and the culture of body modification. As I have indicated earlier, perhaps one of the more significant challenges to the current model of anti-doping comes from the general rise in body modification/enhancement practices. Very little is known about whether athletes would utilise elective reconstructive surgery to enable more effective sports performance, though there
seem to be obvious reasons why an athlete might benefit from such modifications. Various anecdotal stories suggest body modifications that could enhance performance, such as LASIK eye surgery to improve vision in sport. Discussions about this technology took place when golfer Tiger Woods underwent the treatment. It is not difficult to imagine other such enhancements that could influence an athlete's capability to perform and, yet, such modifications are rarely forbidden via the World Anti-Doping Code. Moreover, if one talks further of image enhancement, the incentive for athletes to be attractive to sponsors and the entertainment industry generally is considerable.
Practical Technoethics

Transformations to technology in sport are also sometimes needed to accommodate other kinds of changes within any given sport. For instance, in the 1980s, transformations to the javelin were necessary since throwers were beginning to throw dangerously close to spectators. As such, the javelin's transformation was a relatively pragmatic choice—it was considered more practical to change the technical requirements of javelin throwing than it was to change the length of all athletic arenas around the world. Technological changes are also able to elicit new kinds of 'excellence', which are often considered to be a valuable development on previous performances. For instance, also in the 1980s, the introduction of the carbon-fibre pole for pole vaulting enhanced the activity by allowing a more skilled performance and eliminating the debilitating influence of too much rigidity in poles.15 Alternatively, one might think of the Fosbury flop in high jump as a technical innovation that enriched the pursuit of identifying the highest jumper in the world. For each of these cases, it is not obvious that the decision to proceed with or retreat from a particular innovation is arbitrary. Indeed, an alternative example demonstrates how decisions about technological change in sport also engage the political economy of sport.
In the late 1990s the International Tennis Federation endeavoured to address the dominance of the serve in the male pro-game. One of its concerns was that the inability of players to return powerful serves could make the sport less interesting to watch. In turn, this could translate into fewer spectators, less revenue, but perhaps more seriously, less of a grass-roots base of participants that would enable the sport to flourish. Each of these concerns is relevant when thinking about the use of enhancing technologies in sport, though they also raise potential conflicts of interest. For example, consider the influence of television scheduling on sports like marathon running. While marathon runners might prefer to run in the morning or at a time of day where the temperature is moderate, often television companies will expect scheduling to be guided by expected viewing patterns. This raises additional questions about the professional and corporate ethics of the sponsoring organisations of sport. These various aspects of the technoethics of sport reveal the layers of ethical engagement and analysis that operate across the sporting landscape. Resolution of such ethical problems confounds the sports communities, but there have been important developments in how the ethics of performance technology in sports have been addressed. For instance, one can identify the wider range of participants in the conversations as some indication of progress. Further evidence of progress is the World Anti-Doping Agency itself, which has achieved unprecedented participation in working towards the legal harmonization of anti-doping policy in the vast world of elite sports. Nevertheless, one might still raise questions about this process. For instance, it is unclear whether such power should be vested in such a singular and narrowly defined institution, given that it does not function at any inter-governmental level. However, its burgeoning agreements with UNESCO and other relevant authorities strengthen its claim to occupying the shared ground of ethical concern. Yet, WADA relies on effective testing
methods through which it can claim to ensure a level playing field in sport. For some performance enhancing technologies, it is unclear whether developing effective tests is at all realistic, given budgetary limitations, the fast-paced developments within science and the growing consumption of enhancement technologies.
Future Trends

Given what has been said about the relationship between bioethics and sport, future trends within the area of sport technology relate to the broader context of performance technologies within society. A number of emerging examples raise new questions about what sports or societies can do to curb the growth of human enhancements. For instance, the earlier LASIK example offering enhancements to vision can be accompanied by other body and mind modifications. Anecdotal stories surround the use of Tommy John surgery, which is practiced on the elbows of elite baseball pitchers when injured. It is said that the reparative surgery has led to athletes returning to the field throwing harder and faster than before they were injured. In this sense, one can envisage a number of surgical procedures that contort the body into enabling an enhanced performance. In addition, a number of cognitive enhancements are becoming visible within competition. For instance, the drug 'modafinil' (Kaufman and Gerner 2005) is a cognitive enhancer used to treat patients with narcolepsy, yet its prevalence within elite sports far exceeds the proportion of the population that would require such a drug. It is likely that a range of cognitive enhancements will be used increasingly within elite sports to assist with the psychological parameters of competition. The debates about gene doping are now flourishing and it is likely that genetic doping technologies will consume the next twenty years of anti-doping interests (Miah 2004). Currently, tests are underway to detect gene doping, though
some scientists believe that it will never be possible to directly detect all forms of gene doping. This problem is not dissimilar from the challenge of 'designer steroids', such as the 2003 discovery of tetrahydrogestrinone (THG). When a phial of this substance was left at Don Catlin's United States anti-doping laboratory, it was unknown to anyone. It is likely that an increasing number of designer steroids will emerge within competition, reinforcing the problem that, inevitably, testing methods will always be behind what athletes are able to utilise. A further genetic innovation that is already beginning to influence sport is the development of genetic tests for performance (Miah & Rich 2006). In 2004, the first commercial test appeared on the market and it is likely that more will arise. Already, a range of institutions has reacted to this use of genetic information, questioning the scientific credibility of the tests and the legitimacy of using the information that they provide (Australian Law Reform Commission 2003). Finally, the emergence of 'functional foods' or 'nutrigenomics' (Chadwick 2005) that are optimised for performance will have a significant role in negotiating the distinction between legitimate and illegitimate methods of performance enhancement. By optimising the nutritional capacities of food, athletes will be enabled to perform at maximal output without needing to resort to pharmacological substances.
Conclusion

I began this chapter by suggesting that the circumstances of Tommy Simpson's death in the 1967 Tour de France, particularly the fact that it was televised, were of considerable influence in creating a momentum for the anti-doping movement. Thirty-one years later, a similar occurrence arose, once again, at the Tour de France. The scandals of 1998 were instrumental in the establishment of the World Anti-Doping Agency, which was also a consequence of the Lausanne Conference on
Doping and Sport (1999). Yet, despite the changes within the world of anti-doping, it has always been the responsibility of medical professionals to decide how best to protect against the non-therapeutic application of medical technology to sports performances. The principles underlying modifications to the anti-doping code rely on what is considered to be medically acceptable. However, this should be only a partial consideration, since what is medically acceptable varies and the basis on which we decide the legitimate ends of medicine is somewhat cultural. These explanations form the basis of the present analysis, and questions arise about the legitimacy or relevance of the current technoethics within sport. Given the ways in which medicine is now 'purchased' for lifestyle choices, is it still reasonable to prohibit access to enhancing technologies for sport? Is the medical model applied to sport still relevant? What other alternatives exist? For many years, the scientific and medical professions have been discussing questions like these. Today, it is necessary for philosophers of sport to acknowledge the applied nature of their work on doping and engage with the literature on the ethics of science and medicine. Indeed, there are some useful parallels between sport and bioethics. For example, discussions about personhood, dignity, excellence, autonomy, and respect have been central to medical discussions and have also surfaced as reactions to doping (see Miah (2004) for numerous examples). The political explanation of doping and ethics also demands that sport ethicists reach across to medical ethics and philosophy of medicine journals, to ensure that their work is influential in a way that permits the advancement of ethical debate on this issue. However, further conceptual work is necessary when considering performance enhancement. A further criticism of the doping debate—both academically and professionally—is that it has
also misrepresented this matter and, understandably but unfortunately, led to a skewed notion of performance enhancement. If the debate about ethics and doping has anything to do with the distinction between legitimate and illegitimate methods of performance enhancement, then there must be a discussion about other forms of performance enhancement. How, for example, does a running shoe or training technique challenge the technoethics of sports? Alternatively, how do examples such as the fast-skin swimming suit or altitude chambers alter how we make sense of sport? Also, it is necessary to situate such discussions in specific sporting contexts, rather than speak about a general technoethics for all sports. There are clear differences between the technoethics of different sports. For instance, the use of third-eye technology to assist decision-making for umpires and referees takes a variety of forms across different sports. Discussions about doping must also broaden their focus to take into account ethical decisions made in relation to other forms of technology. Institutionally, discussions about these technologies have been separate from doping debates. Again, there is an explanation for this situation based partly on the health argument that gave rise to anti-doping—many technological innovations do not have a direct bearing on the health of an athlete, nor do they require the intervention of a medical professional. Yet, many technologies do have an indirect health impact, as our earlier tennis racquet example indicates. Nevertheless, doping and the issues arising from it are separate from the policy considerations about other technical modifications or enhancements. While there have been some indications of the prospect for change, greater closeness is necessary between sports ethicists, technoethicists and bioethicists to enable a more satisfactory contribution to this complex case.
References

Australian Law Reform Commission. (2003). ALRC 96: Essentially yours.

Burke, M. D. (1997). Drugs in sport: Have they practiced too hard? A response to Schneider and Butcher. Journal of the Philosophy of Sport, XXIV, 47-66.

Chadwick, R. (2005). Nutrigenomics, individualism and sports. In Tamburrini, C. & Tännsjö, T. (Eds.), Genetic technology and sport: Ethical questions (pp. 126-135). Oxon and New York: Routledge.

Cole-Turner, R. (1998). Do means matter? In Parens, E. (Ed.), Enhancing human traits: Ethical and social implications. Washington, DC: Georgetown University Press.

Dreifus, C. (2004). A lament for ancient games in modern world of doping. New York Times. Retrieved from http://www.nytimes.com/2004/08/03/Health/Psychology/03conv.html

Eassom, S. B. (1995). Playing games with prisoners' dilemmas. Journal of the Philosophy of Sport, XXII, 26-47.

Editorial. (2007, 2 August). A sporting chance: Bans on drug enhancement in sport may go the way of earlier prohibitions on women and remuneration. Nature, 448, 512.

Feezell, R. M. (1988). On the wrongness of cheating and why cheaters can't play the game. Journal of the Philosophy of Sport, XV, 57-68.

Fost, N. (1986). Banning drugs in sports: A skeptical view. Hastings Center Report, 16, 5-10.

Fraleigh, W. (1982). Why the good foul is not good. Journal of Physical Education, Recreation and Dance, January, 41-42.

Fraleigh, W. P. (1985). Performance enhancing drugs in sport: The ethical issue. Journal of the Philosophy of Sport, XI, 23-29.

Houlihan, B. (1999). Dying to win: Doping in sport and the development of anti-doping policy. Strasbourg: Council of Europe Publishing.

Juengst, E. T. (2003). Editorial: What's next for human gene therapy. British Medical Journal, 326, 1410-1411.

Kaufman, K. R., & Gerner, R. (2005). Modafinil in sports: Ethical considerations. British Journal of Sports Medicine, 39, 241-244.

Leaman, O. (1988). Cheating and fair play in sport. In Morgan, W. J. & Meier, K. V. (Eds.), Philosophic inquiry in sport. Illinois: Human Kinetics.

Ledley, F. D. (1994). Distinguishing genetics and eugenics on the basis of fairness. Journal of Medical Ethics, 20, 157-164.

Lehman, C. K. (1981). Can cheaters play the game? Journal of the Philosophy of Sport, VII, 41-46.

Levine, B. D. (2006). Editorial: Should 'artificial' high altitude environments be considered doping? Scandinavian Journal of Medicine and Science in Sports, 16, 297-301.

Levine, B. D., & Stray-Gundersen, J. (1997). 'Living high—training low': Effect of moderate-altitude exposure simulated with nitrogen tents. Journal of Applied Physiology, 83, 102-112.

Loland, S. (2002). Fair play in sport: A moral norm system. London & New York: Routledge.

Miah, A. (2004). Genetically modified athletes: Biomedical ethics, gene doping and sport. London and New York: Routledge.

Miah, A. (2006). Rethinking enhancement in sport. In Bainbridge, W. S. & Roco, M. C. (Eds.), Progress in convergence: Technologies to improve human well-being. Annals of the New York Academy of Sciences, 1093, 301-320.

Miah, A., & Eassom, S. B. (Eds.). (2002). Sport technology: History, philosophy & policy. Research in Philosophy & Technology. Oxford: Elsevier Science.

Miah, A., & Rich, E. (2006). Genetic tests for ability? Talent identification and the value of an open future. Sport, Education & Society, 11, 259-273.

Murray, T. H. (1983). The coercive power of drugs in sports. Hastings Center Report, August, 24-30.

Murray, T. H. (1984). Drugs, sports, and ethics. In T. H. Murray, W. Gaylin & R. Macklin (Eds.), Feeling good and doing better. Clifton, New Jersey: Humana Press.

Murray, T. H. (1986). Guest editorial: Drug testing and moral responsibility. The Physician and Sportsmedicine, 14(11), 47-48.

Murray, T. H. (1987). The ethics of drugs in sport. In Drugs and performance in sports. London: W.B. Saunders Company.

Rosenberg, D. (1995). The concept of cheating in sport. International Journal of Physical Education, 32, 4-14.

Schneider, A. J., & Butcher, R. B. (1994). Why Olympic athletes should avoid the use and seek the elimination of performance enhancing substances and practices from the Olympic Games. Journal of the Philosophy of Sport, XXI, 64-81.

Schneider, A. J., & Butcher, R. B. (2000). A philosophical overview of the arguments on banning doping in sport. In Tännsjö, T. & Tamburrini, C. (Eds.), Values in sport: Elitism, nationalism, gender equality, and the scientific manufacture of winners. London: E & FN Spon.

Simon, R. L. (1985). Response to Brown & Fraleigh. Journal of the Philosophy of Sport, XI, 30-32.

Tamburrini, C. M. (2000). What's wrong with doping? In T. Tännsjö & C. Tamburrini (Eds.), Values in sport: Elitism, nationalism, gender equality, and the scientific manufacture of winners. London: E & FN Spon.

The U.S. President's Council on Bioethics. (2002). Session 4: Enhancement 2: Potential for genetic enhancements in sports. Retrieved from http://www.Bioethics.Gov/200207/Session4.html

Wertz, S. K. (1981). The varieties of cheating. Journal of the Philosophy of Sport, VIII, 19-40.

World Anti-Doping Agency. (2003). Prohibited classes of substances and prohibited methods.

World Anti-Doping Agency. (2005). The Stockholm Declaration. World Anti-Doping Agency.

World Medical Association. (2000). The World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. Retrieved 2003 from http://www.wma.net/e/policy/17c.pdf
Key Terms

Doping: Doping is defined by the World Anti-Doping Code as the occurrence of a 'rule violation'. Often, the doping concerns of institutions relate specifically to the abuse of regulated substances, such as anabolic steroids. Notably, doping offences also include the presence of substances that would mask the effects of other enhancing substances. Within the world of sport, a policy of 'strict liability' is employed to remove positive test cases from competitions. Recently, this policy has been expanded to include more circumstantial evidence, such that a non-analytical positive is now a possible route towards disqualification.
Gene Doping: Gene doping has a precise definition within the World Anti-Doping Code as 'the non-therapeutic use of cells, genes, genetic elements, or of the modulation of gene expression, having the capacity to improve athletic performance'. However, the Code does not take into account the possibility of germ-line genetic engineering and how, subsequently, sports would deal with the possibility that people might be born with already genetically enhanced predispositions. Over the years, some athletes have been born with abnormal genetic conditions that have benefited them in competition. There is currently no way of dealing with such cases, unless it is concluded that the abnormality makes an athlete unfit for competition.

Hypoxic Training: The utilization of indoor environments that simulate varying levels of altitude by altering the density of oxygen within the area, as would occur by travelling to locations of varying altitudes. By increasing the endogenous production of erythropoietin, hypoxic training can increase the endurance capacities of athletes or, more properly, the capacity to carry oxygenated red blood cells to muscles. In 2006, the world of sport considered whether such environments should be considered a form of doping and decided that they could not. The formula of 'living high and training low' is regarded as the optimal condition for performance, and hypoxic training allows athletes to capitalize more fully on this possibility.

Nutrigenomics: The study of molecular relationships between nutrients and the genome. The contribution of nutrigenomics to elite athletes could be the growth of 'functional foods', which allow an athlete to optimize performance enhancements without needing to resort to synthetic substances.

The Olympic Charter: The foundation document of the Olympic Movement, which outlines the philosophy of Olympism. This Charter distinguishes the Olympic Movement from other sports-related organisations, revealing its character as an organisation that aspires to the ideals of non-governmental organisations, but which delivers through a commercial model funded by the selling of intellectual property associations.

Spirit of Sport: The third criterion of the World Anti-Doping Code. New technologies are tested against this criterion to determine whether or not they should be permitted within the acceptable means of performance enhancement in elite sport. The spirit of sport is the closest definition of an ethics of human enhancement that is given within international agreements about the ethics of sport technology.

Tetrahydrogestrinone: (THG; 'the clear'). A designer anabolic steroid closely related to the banned steroids trenbolone and gestrinone. In 2003, it was added to the banned substance list after a sample of it was left at the United States Anti-Doping laboratory in California. The United States Anti-Doping Agency linked the substance with the Bay Area Laboratory Co-Operative, which was subsequently linked to the distribution of prohibited substances to numerous leading athletes.

Tommy John Surgery: Technically known as ulnar collateral ligament (UCL) reconstruction, the procedure is named after the Los Angeles Dodgers baseball pitcher who first underwent the surgery. The procedure involves the replacement of a ligament in the medial elbow with a tendon from another part of the body. Today, there are strong chances of recovery, though at the time of John's procedure the probability of success was extremely low (approximately 1%). Anecdotes indicate that athletes throw harder after the surgery, compared with their pre-injury ability, though it is thought that this improvement is more closely linked to the recovery therapy rather than any transformation of the biological structures.
World Anti-Doping Agency: The organization responsible for harmonizing anti-doping policy across all International Sports Federations. WADA began in 1999, taking on the role from the International Olympic Committee, where it was formerly located. The World Anti-Doping Code governs all Olympic events.
Endnotes

1. More generally, the European Union funded ENHANCE Project features many of these authors.

2. Political economists will point out that the stakeholders of the Olympic Movement are more likely to include sponsors and broadcasters than athletes, though the prominence of athletes is clearly visible in the rhetoric of these other stakeholders.

3. It is also likely to be because the interests of sport philosophers extend beyond technoethics or even ethics generally.

4. This is not to say that the arguments of medical ethicists are always received warmly or even taken into account by the medical professions, though one cannot dispute the fact that medical professions remain governed in quite precise ways by principles of medical ethics. The same claim cannot easily be made of sport scientists.

5. As an aside, I draw attention to Nature's editorial (Editorial, 2007), which inquires into whether it would be sensible to legalize doping in elite sports. The editorial arose, in part, as a result of the Tour de France doping scandals of 2007.

6. In defence of this influence, WADA's Stockholm Declaration (2005) on the ethics of gene doping was shaped considerably by such ethical work.

7. Also, in 2007, the British Government published a pioneering report on Human Enhancement Technologies and Sport, which was also informed by a number of ethicists, including Nick Bostrom, Andy Miah, Mike McNamee and Julian Savulescu.

8. This Code also prohibits substances that 'mask' other prohibited substances and methods. Direct quotations are taken from the World Anti-Doping Code (2003).

9. Perhaps the 'spirit of sport' should be seen to function rather like 'reasonableness' in medical law, the definition of which often relies on the standard defined by a reasonable expert in the field.

10. For detailed explanations of the science, see Levine & Stray-Gundersen (1997).

11. Actually, whether or not such training confers a performance advantage seems a matter of scientific opinion. One might argue that it is also part of the knowledge that athletes bring to their performance via their entourage, other examples of which might include nutrition advice, specific technique knowledge or mental preparation.

12. For other articles that dealt with this case, see Levine (2006) and Miah (2006).

13. See Feezell (1988), Fraleigh (1982), Leaman (1988), Lehman (1981), Rosenberg (1995), and Wertz (1981) for more on cheating.

14. For elaborations on this argument, see Schneider & Butcher (2000).

15. Such a criterion is discussed by Perry (1988) as a 'performance inhibitor' that is valuable to eliminate. The challenge arises when one begins to discuss natural biological states as ultimately inhibiting of performance.
Chapter VI
Education of Ethics of Science and Technology Across Cultures
Darryl Macer
Regional Unit for Social and Human Sciences in Asia and the Pacific (RUSHSAP), UNESCO, Thailand
Abstract

This chapter examines some of the cultural variation in the ethical factors associated with the use of science and technology. The issues discussed include access to technology, social justice, professional ethics, and value systems. The appropriate implementation of international standards in ethics of science and technology and bioethics is considered. There is global agreement that persons should be taught the ethics of science and technology, and new materials and methods are discussed. The goals of ethics education as explained in the Action Plan for Bioethics Education developed at the 2006 UNESCO Asia-Pacific Conference on Bioethics Education include knowledge, skills and personal moral development. The International Bioethics Education Network was initiated in 2004, and the creation of networks linking research into policy is a cornerstone of efforts for education of ethics at all levels, from local to regional. In the future, the use of principles as expressed in the UNESCO Universal Declaration on Bioethics and Human Rights (2005) will also be analyzed to broaden the description of bioethical reasoning. The evaluation methods and tools also need to be extended.
Ethics of Science and Technology

At the beginning of this chapter we can ask: is there something unique about the ethics of science and technology as opposed to ethics itself? All societies use technology for clothing, housing, food, energy, health, and most other aspects of
life. The history and development of humankind is interwoven with the use of technology. Access to technology to advance quality of life is a long-standing ethical issue, not distinct from social justice in general. The technical knowledge of a profession does, however, confer professional ethical duties upon its members, and these are recognized in fields such as medical ethics or engineering ethics.
Science, the quest for objective knowledge of our universe through intellectual inquiry, experimentation and falsification, is a more recent phenomenon. Are there some types of knowledge that are dangerous for humankind to learn? The knowledge of gunpowder, dynamite or atomic weapons is not something that we would want everyone to apply, and all have been misused to kill and destroy people and the environment. The knowledge of psychiatry, physiology, chemistry or even educational methodology can also be misused. Therefore there are also scientific ethical issues in the use and consequences of choices regarding science. Thus ethics for scientists again falls into the realm of professional ethics, and the way that professionals relate to those who lack that particular form of knowledge. This chapter will not focus on the deeper questions that remain about whether humans should pursue knowledge of everything, but training in science ethics will make practitioners of science aware of some of these dilemmas to consider in their occupation. If we look at the way that societies have faced ethical dilemmas arising in medicine and technology, we can see several important elements in their evolution. When many countries opened their doors (or had their doors involuntarily opened) to Western society in the 19th century, it led to the introduction of a newly emerging scientific paradigm, which was only one part of the fabric of Western society. Some of the ethical values of Western society were also imported, including through Christian missionaries and democratic institutions; while these value systems met with different receptions, the pursuit of science, technology and economic growth was adopted. However, the ethics of the use of science and technology are not intrinsically different from the ethics of the use of technical knowledge that existed everywhere, in basic life support services such as housing, food, medicines and information sharing. Each country in the world today imports ideas and goods from other countries, and there is evolution of ethical reflection through the growing
involvement of the public in discussion and development of the indigenous diversity of ethical traditions. As cultures evolve, it becomes impossible to separate which aspects were introduced from which sources at what time.
Global Calls for Ethics Education

In addition to the need for professional ethics, citizens of all ages need to make ethical decisions on how they use science and technology and its products. Opinion surveys in every country in which they have been conducted show global agreement that more of the ethical and social issues associated with science and technology should be taught to students. Member states of UNESCO (the United Nations Educational, Scientific and Cultural Organization) declared such an educational need in the Universal Declaration on the Human Genome and Human Rights (1997), and every member country of the United Nations endorsed this in 1998. This call was repeated by all member states when adopting the 2005 Universal Declaration on Bioethics and Human Rights. These calls follow numerous academic works also calling for this (Reiss, 1999; Ratcliffe & Grace, 2003). There is global agreement that persons should be taught the ethics of science and technology, but there are no globally agreed methods. UNESCO has taken up some of the challenges of how to translate this global call for bioethics debate and discussion in culturally appropriate manners. The appropriate implementation of international standards in ethics of science and technology and bioethics is important, and there have been a range of responses by states to the three international declarations on bioethics unanimously accepted by the UNESCO General Conference (Universal Declaration on the Human Genome and Human Rights, 1997; International Declaration on Human Genetic Data, 2003; Universal Declaration on Bioethics
and Human Rights, 2005). Although bioethics education was called for by all states that signed the 1997 Universal Declaration on the Human Genome and Human Rights, in Article 20, it is still to be realized: "20. States should take appropriate measures to promote the principles set out in the Declaration, through education and relevant means, inter alia through the conduct of research and training in interdisciplinary fields and through the promotion of education in bioethics, at all levels, in particular for those responsible for science policies." Freedom of expression is one of the working methods of critical ethical reflection. Article 19 of the 1948 Universal Declaration of Human Rights upholds the "freedom to hold opinions without interference." Article 21 of the Universal Declaration on the Human Genome and Human Rights (1997) reads: "States should … also undertake to facilitate on this subject an open international discussion, ensuring the free expression of various sociocultural, religious and philosophical opinions." We can ask how communities can be involved in discussion of the ethics of science and technology. In all societies there is a transition from paternalism to informed consent to informed choice. Unless we can educate citizens, the choices they make will not be informed. This transition from paternalism to choice creates the space for discussion in communities of what principles they consider important in making choices. We have to build capacity to ensure that the choices are more informed. We need to consider the different life views each of us can have when confronted with moral dilemmas. Some believe that there is a right and a wrong choice to be made for a person's action in each moral dilemma, and that they can also tell others what is morally right or wrong.
Goals of Ethics Education

It is important that different nations develop concrete plans for how education in ethics of science and technology should be incorporated into the long-standing values education that is implicit in every human society. We learn how to relate to others as we grow up in families, schools and society (Rest, 1986). There have been different schemes elaborated for how we could define someone as being morally mature. There is agreement that the aim of teaching ethics is to develop the student's ability to recognize and analyze ethical issues in order to be able to reach decisions on how to act ethically (COMEST, 2004). In discussions that have occurred in the International Bioethics Education Network in Asia and the Pacific there has been a consensus that the theory of moral development developed by Lawrence Kohlberg, and what has come to be called Kohlberg's stages of moral development, does not universally apply when teaching bioethics. The problems are not only with non-Western students; researchers in Australia and New Zealand have also found that it does not serve as a model. Kohlberg's (1969) theory holds that moral reasoning, which he thought to be the basis for ethical behavior, has developmental stages that are universal. He followed the development of moral judgment beyond the ages originally studied by Jean Piaget, looking at moral development throughout life, and created a model based on six identifiable stages of moral development (Scharf, 1978). It is still useful, however, to describe these stages, while recognizing that in different cultures the order of what we would call the most mature values differs. Kohlberg's six stages were grouped into three levels: pre-conventional, conventional, and post-conventional. He claimed that it is not possible to regress backwards in stages nor to 'jump' stages; each stage provides a new perspective and is considered "more comprehensive, differentiated, and integrated than its predecessors." A brief explanation follows.
Level 1: Pre-conventional

The pre-conventional level of moral reasoning is especially common in children, and was said to apply up to the age of 9 in the U.S. children he studied, although adults can also exhibit this level of reasoning. Reasoners in the pre-conventional level judge the morality of an action by its direct consequences. The pre-conventional level consists of the first and second stages of moral development, which are purely concerned with the self (egocentric). In stage one (obedience), individuals focus on the direct consequences that their actions will have for themselves. For example, an action is perceived as morally wrong if the person who commits it gets punished. In addition, there is no recognition that others' points of view are any different from one's own view. Stage two is a self-interest orientation, right behavior being defined by what is in one's own best interest. Stage two reasoning shows a limited interest in the needs of others, but only to a point where it might further one's own interests, such as "you scratch my back, and I'll scratch yours." In stage two, concern for others is not based on loyalty or intrinsic respect. Because reasoning at the pre-conventional level lacks a perspective of society, it should not be confused with stage five (social contract), as all actions are performed to serve one's own needs or interests.
Level 2: Conventional

The conventional level of moral reasoning is typical of adolescents (age 9+ years) and adults. Persons who reason in a conventional way judge the morality of actions by comparing these actions to societal views and expectations. The conventional level consists of the third and fourth stages of moral development. In stage three, the self enters society by filling social roles. Individuals are receptive to approval or disapproval from other people as it reflects society's accordance with the perceived role. They try to be a good boy or
good girl to live up to these expectations, having learned that there is inherent value in doing so. Stage three reasoning may judge the morality of an action by evaluating its consequences in terms of a person's relationships, which now begin to include things like respect, gratitude and the golden rule. Desire to maintain rules and authority exists only to further support these stereotypical social roles. In stage four, it is important to obey laws and social conventions because of their importance in maintaining a functioning society. Moral reasoning in stage four is thus beyond the need for approval exhibited in stage three, because the individual believes that society must transcend individual needs. If one person violates a law, perhaps everyone would; thus there is an obligation and a duty to uphold laws and rules. As a cultural observation, this is a very common attitude in Asian and Pacific communities.
Level 3: Post-conventional

The post-conventional level, also known as the principled level, consists of stages five and six of moral development. The realization that individuals are separate entities from society is important in North American society, where Kohlberg developed his theory, and so he judged it to be a higher level of morality. In that culture, one's own perspective should be viewed before the society's is considered. Interestingly, the post-conventional level, especially stage six, is sometimes mistaken for pre-conventional behaviors. In stage five, individuals are viewed as holding different opinions and values, all of which should be respected and honoured in order to be impartial. However, he considered that some issues, such as life and choice, are not relative. Laws are regarded as social contracts rather than dictums, and those that do not promote general social welfare should be changed when necessary to meet the greatest good for the greatest number of people (a utilitarian view).
In stage six, moral reasoning is based on abstract reasoning using universal ethical principles. Decisions are made in an absolute way rather than in a conditional way. In addition, laws are valid only insofar as they are grounded in justice, and a commitment to justice carries with it an obligation to disobey unjust laws. While Kohlberg insisted that stage six exists, he had difficulty finding participants who used it.
Implications

In the transition from stage four to stage five, Kohlberg said, people become disaffected with the arbitrary nature of law-and-order reasoning and become moral relativists. This transition stage may result in either progress to stage five or in regression to stage four. As has become clear during the bioethics education project, there is such a range of cultural, family and school value systems across the world that students of one age in one country will most likely be in different stages at different times, even if all persons did follow this progression from stage 1 to stage 6 in moral reasoning and did not revert back to other levels. Stage six would correspond to a person who followed the textbook bioethics of Beauchamp and Childress (1995), or the longer list of principles found in the Universal Declaration on Bioethics and Human Rights (UNESCO, 2005). Macer (1998) has argued that bioethics is love of life, and that principlism based on following the standard ethical principles alone is not sufficient as an explanation of why people behave the way they do. The role of religious values is also obviously important, as concepts like karma and removal of oneself from the matters of the world do affect the value systems people use when approaching moral dilemmas.
Bioethical Maturity

The goals are linked to the methods and criteria that will be used to evaluate the materials and
student responses, and evaluation is discussed below. One concept that has been used by Macer is whether students demonstrate "bioethical maturity" in some way. "Bioethical maturity assumes a certain level of recognition of weighing up the different arguments that can be used to discuss an issue, the different ethical frameworks that can be used, and comparisons and balancing of the benefits and risks of the dilemmas" (Macer, 2002). This process also gives an indication as to how many different ideas people have, and the way they understand the dilemmas, and methods to study this are developing in the behaviourome project (Macer, 2002; 2004b). Prior to considering other issues, setting the goals is central. A detailed listing of goals that are common between many educators is found in the Action Plan for Bioethics Education developed at the 2006 UNESCO Asia-Pacific Conference on Bioethics Education (RUSHSAP, 2006). There has been significant research showing that there are a number of goals of ethics education, including those listed here:

a. Knowledge
- Development of trans-disciplinary content knowledge
- Understanding the advanced scientific concepts
- Being able to integrate the use of scientific knowledge, facts and ethical principles and argumentation in discussing cases involving moral dilemmas
- Understanding the breadth of questions that are posed by advanced science and technology
- Understanding cultural values

b. Skills (capacity building in skill acquisition should be multi-faceted, and the goals include)
- Balancing benefits and risks of science and technology
- Being able to undertake a risk/benefit analysis
- Developing critical thinking and decision-making skills and reflective processes
- Developing creative thinking skills
- Developing foresight ability to evade possible risks of science and technology
- Skills for developing "informed choice"
- The required skills to detect bias in scientific method, interpretation and presentation of research results

c. Personal moral development
- Understanding better the diversity of views of different persons
- Increasing respect for all forms of life
- Eliciting a sense of moral obligation and values, including honesty and responsibility
- Being able to take different viewpoints on issues, including both biocentric and ecocentric worldviews rather than only anthropocentric perspectives
- Increasing respect for different people and cultures, and their values
- Developing scientific attitudes, reflective processes, and an ability for holistic appraisal, while not ignoring the value of reductionist analysis
- Knowledge about bias in the interpretation and presentation of research results, benefits and risks of technology and bioethical issues, and how to detect bias
- Exploration of morals/values (values clarification)
- Values analysis and value-based utilization of our scarce natural resources. (RUSHSAP, 2006)
Many of these goals apply to ethics education and the development of critical thinking in general. Descriptive ethics describes the way people view life, their moral interactions and responsibilities. If we attempt to understand the way we as human beings think, then we must look at the views of all in a society, not just an elite of "philosophers" or "politicians", to have ethics for the people by the people. The evolution
of considerations of ethics has witnessed increasing importance being placed on descriptive ethics approaches, as societies become more democratic. As persons realize that ethical concepts have ancient roots in all cultures of the world, and that many persons have interesting views on the questions, the field has become richer, and there is still a lot of human knowledge that can be applied to assist in discussing modern technology. Interactive ethics is discussion and debate between people about descriptive and prescriptive/normative ethics. Consensus is possible after recognition of the relationships between different persons, to try to preserve social harmony. This consensus building is seen even in countries that have structured paternalism affecting relationships between persons. Public discussion of the ethics of science and technology takes place in many societies, aided by the media. Participation of the public in the societal decision-making process regarding new technology is essential. Community engagement is not only a question of knowing what is going on; for a new technology to be accepted by the public, it is crucial that people perceive that they have choice and influence. How can ethics be central in a dialogue between the common cultures of technophiles and technophobes? A person's ethic is developed based on their own and other people's opinions, and grows as we face various dilemmas through our lives. To have a balanced opinion from the community, it is important to hear from persons in a range of positions with different occupations. This common social goal has developed hand in hand with the emergence of increased media attention in pluralistic democracies to display the divergent views on science and technology.
Empowering Educators

The appropriate response to the call for ethics education in science and technology, and to the common goals, demands education of teachers
to apply these culturally. There is a common goal to develop decision-making ability at all levels of society, to develop sound original research appropriate to each culture, and to enable more informed policy-makers, so our society can evolve ethically with the demands of the times. All sectors of society are faced with ethical issues in the pursuit of their duties. Critical to building the capacity of society for this open reflection on bioethics are educators. The task of an educator includes empowering their students/learners to develop their maturity as individuals as well as to be cooperative members of changing societies. Learners, as we all should be, need to be prepared so they are able to apply knowledge to make good decisions during their life. How can we train educators and sustain their motivation to take up this task? How can we create communities that are able to consider all sides of ethical debates? Practicality is essential if teachers are expected to continue teaching and students are to continue their interest in the matter. The turbulent times of today have challenged some of the traditional structures in the relationships between human beings within their society, with nature and with God. How can we empower citizens to make a special contribution in the wider context of constructing a mature society? Mature here means a person, or a society, that can balance the benefits and risks of alternative options, make well-considered decisions, and talk about them. A mature society is one that has developed some of the social and behavioural tools to balance these bioethical principles, and apply them to new situations raised by technology. Despite a growing interest in the education of ethics, one of the major concerns that teachers have is the lack of suitable teaching materials for ethics education. Integration of scientific facts is also important in moral reasoning. Science educators discovered during the last few decades that the most efficient way to teach science is to discuss the science together with examples of technology and put the facts into the social con-
text. The science, technology and society (STS) approach to education was developed based on research which found that students learn more science when the science is placed in its social context (Yager, 1990; Ramsey, 1993). Advances in biology and medicine have led to another pressure upon educators, namely how students can be prepared to face the ethical dilemmas that the technology often raises. The ethical issues associated with biology are generally grouped under the phrase "bioethics." Bioethics is one part of the STS approach, and a survey of bioethics teaching is also one method to measure the extent to which societal issues are included (Macer et al., 1996; Macer, 1999). In general there are fewer teachers using STS approaches in Asia than in the USA (Kumano, 1991) and Australasia (Asada et al., 1996), but the number is still growing. Even within one country, such as the USA, there is a diversity of views on how to deliver efficient education on social issues and even on the science itself (Waks & Barchi, 1992). STS approaches are integrated into a broad participatory paradigm of education across all subjects. UNESCO is attempting to generate sustainable ethics teaching and promotion programmes, supported by developing comprehensive databases of experts, existing professional networks, international legal instruments, national legislation, codes of ethics, institutions, and current teaching curriculum and research activities in bioethics (ten Have, 2006). The networking of partners in the development of ethics teaching in the region is ongoing, and the UNESCO Asia-Pacific School of Ethics was founded in 2006 to bring together many active institutions and individuals who are collaborating to develop research on the ethics of science and technologies from a wide range of cultures. Half of the members are actively involved in bioethics education, including ethics of science and technology, environmental ethics and medical ethics. The assembly and maintenance of on-line free-access teaching resources, adaptations and translations into different languages, and links to all regional
laws and guidelines related to professional ethics, including environmental ethics, ethics of sustainable development, bioethics, science ethics and cyber ethics, have been undertaken at RUSHSAP. The lessons learnt provide some key areas for future attention and priority setting. There have been different materials produced for teaching ethics (Jarvis et al., 1998; Levinson and Reiss, 2003). A growing compilation of open-access teaching materials in different languages is available (Macer, 2004a; Macer, 2006). There is a wide range of materials with which to teach ethics, and diversity is to be applauded from a cultural perspective. Even before the calls for inclusion of ethics of science and technology made by UNESCO declarations, numerous ethical issues were included in a variety of textbooks in India and Japan; however, there was little depth to the materials (Bhardwaj and Macer, 1999). The World Commission on the Ethics of Science and Technology (COMEST) (2004) suggested that there be some core essentials, including making students familiar with the structure of normative argumentation, basic ethical norms and principles, types of ethical theories, and ethical issues arising in different fields of science and technology, especially those related to the expected profession of the students in cases where they are pursuing a profession such as engineering (Chachra, 2005) or medicine. It is also important to teach about research ethics, which in some countries is compulsory for graduate students (Eisen & Parker, 2004), or for students using animals, for example. There are also a range of levels at which ethics can be taught, with many countries requiring some ethics in high school science classes. For example, already in 1993 it was found that ethics of science was being widely taught in science classes in Australia, Japan and New Zealand (Asada et al., 1996; Macer et al., 1996) and India (Pandian and Macer, 1998), and in 1997 in Singapore chemistry classes (Macer and Ong, 1999). Universities are also introducing ethics subjects to varying degrees, from general courses to specialized
courses (Zaikowski & Garrett, 2004). COMEST (2004) recommended that all universities introduce ethics teaching: elementary ethics for all students, advanced courses for specific subjects in postgraduate education, and courses that lead to postgraduate degrees in ethics. In the future we may also reach consensus on the core values to be included in courses for professional ethics.
PARTICIPATION

Moral dilemmas face every one of us. Numerous books have been written to explain moral theories and how these can be applied to the dilemmas we face in medicine, daily life and a range of professions (Scharf, 1978). Interactive ethics classes with experts can be useful (Sass, 1999). Critical thinking capacity is essential for empowering persons to cope with changing times. Participation can promote the creation of ideas and individuality, which we all need in the era of globalization. Bioethics is not about thinking that we can always find one correct solution to ethical problems. A range of solutions to a moral dilemma is often possible, although some responses are inappropriate, such as always believing that you are right and others are wrong. Ethical principles and issues need to be balanced, and many people already attempt to do so unconsciously. The balance often varies more between two persons within any one culture than between any two cultures. We often hear complaints from teachers that there are too many students in a class, so there is no way to let the students talk. While student participation takes different forms in a lecture of 800 students than in a class of 32 or 10, class size is not an insurmountable barrier to participatory learning. In the case of large classes there are methods that can be used to improve the participation of students, such as talking in pairs while sitting in
the class, or working in small groups of three or more persons to discuss particular questions from the text. While, all else being equal, we would prefer fewer students in a class, a student will probably learn more in a class with other students than alone. Interactive responses between students and teachers are important in learning, not only for those asking questions but for all those listening. Some moral exercises provide an opportunity for each person to clarify their thinking on the question being asked; at the end of the exercise the teacher can ask students how their views developed over its course, and they will also have been able to listen to others' views. There are many interactive discussion methods that can be used in classes with many persons. One participatory method is to have students stand in a line between two extremes along a moral continuum, at the point that represents their view. After some students explain why they are standing at a given point, students may move to a new point on the continuum. Then, after some time, a modified question can be given and the students asked to move along the continuum to their new positions. This can include a transition from an abstract question, such as whether they support the use of reproductive human cloning, to a personal question, such as whether they would use reproductive cloning if that were the only way for them to have a genetically related child. The line can be straight or in a U shape; the U shape allows all students to see (and listen to) each other more easily in the case of a larger number of students. A series of examples can also be described on cards, which are given to groups of students to discuss and sort into a paper card continuum on their desk, and then explain the rationale to others. Student debates and presentation of reports can allow more in-depth analysis of issues by students,
whether as individuals or in small groups; the debates can then occur within the same class, or between different classes, institutions or even countries through the use of video conferencing.
EVALUATION

Researchers and educators need to work together to research appropriate teaching methods for different target groups, and to assess the effectiveness and impact (both positive and negative) of ethics education. Generating sustainable ethics teaching and promotion programmes is itself a method required of education planners. Developing evaluation methods for the effectiveness of education of ethics of science and technology is urgently required in many dimensions, such as knowledge, skills, and personal values. There needs to be continued research into appropriate assessment methods for the curriculum, as well as research into assessment methods for student learning outcomes, and research on assessment of practices, including student, professional and public attitudes towards bioethical issues (Van Rooy & Pollard, 2002a, 2002b). Evaluation should be authentic, comparative and ongoing to give a better estimate of the way bioethics is received in each group. It is better to use essays and creative writing, or oral debate, as an examination, rather than multiple choice questions that merely test memory. Moral dilemmas also often have more than one correct answer, making it difficult to judge one answer as correct and others as incorrect. For more than 60 years it has been recorded that both quantitative and qualitative data are important in social science research, as was said by Merton and Kendall (1946): "Social scientists have come to abandon the spurious choice between qualitative and quantitative data: they are concerned rather with the combination of both which makes use of the most valuable features of each. The problem becomes one of determining at
which points they should adopt the one, and at which the other, approach". Thus an appropriate methodological tool should contain methods to utilize and assess both types of data. One important goal of teaching about bioethical issues is to get students to critically evaluate the issues (Conner, 2003). In a Mexican case (Rodriguez, 2005), bioethics classes were used as a way to improve the general behaviour and study aptitude of students. Each institution is likely to put a different amount of emphasis on each goal, and different activities are likely to enable some goals to be met and not others (Macer, 2004c). Therefore we do not need to assess all the institutional objectives when evaluating the success of the trials; instead, case studies of how students and teachers responded were also sought, to give a wider descriptive account of various approaches. Kohlberg used moral dilemmas to determine which stage of moral reasoning a person uses. The dilemmas were short stories describing situations in which a person has to make a moral decision, yet they provide no solution. The participant is asked what the right course of action is, and to explain why. This style is still commonly used in case-based ethics teaching. There is a need to develop more cases for dialogues between different cultures, and cases on broader issues of technology ethics, although some have been compiled. A dilemma that Kohlberg used in his original research was the druggist's dilemma: Heinz Steals the Drug in Europe. A woman was near death from a special kind of cancer. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to produce. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman's husband, Heinz, went to everyone he knew to borrow
the money, but he could only get together about $1,000, which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: "No, I discovered the drug and I'm going to make money from it." So Heinz got desperate and broke into the man's store to steal the drug for his wife. (Kohlberg, 1969) Should Heinz break into the laboratory to steal the drug for his wife? Why or why not? As in many cases in bioethics, from a theoretical point of view it is not important what the participant thinks Heinz should do. The above case is quite relevant to ongoing global debates over the interpretation of the Doha Declaration on the compulsory licensing of generic copies of patented medicines, which raises conflicts between countries and the pharmaceutical industry. The point of interest is the justification that the participant offers. Below are examples of possible arguments that belong to the six stages. It is important to keep in mind that these arguments are only examples; it is possible that a participant reaches a completely different conclusion using the same stage of reasoning:
• Stage one (obedience): Heinz should not steal the medicine, because he will consequently be put in prison.
• Stage two (self-interest): Heinz should steal the medicine, because he will be much happier if he saves his wife, even if he will have to serve a prison sentence.
• Stage three (conformity): Heinz should steal the medicine, because his wife expects it.
• Stage four (law-and-order): Heinz should not steal the medicine, because the law prohibits stealing.
• Stage five (human rights): Heinz should steal the medicine, because everyone has a right to live, regardless of the law. Or: Heinz should not steal the medicine, because the scientist has a right to fair compensation.
• Stage six (universal human ethics): Heinz should steal the medicine, because saving a human life is a more fundamental value than the property rights of another person. Or: Heinz should not steal the medicine, because that violates the golden rule of honesty and respect.
One criticism of Kohlberg’s theory is that it emphasizes justice to the exclusion of other values. As a consequence of this, it may not adequately address the arguments of people who value other moral aspects of actions more highly. His theory was the result of empirical research using only male participants (aged 10, 13, and 16 in Chicago in the 1960s). Gilligan (1993) argued that Kohlberg’s theory therefore did not adequately describe the concerns of women. She developed an alternative theory of moral reasoning that is based on the value of care. Among studies of ethics there is a tendency in some studies to find females have higher regard for ethics theories (Ford and Richardson, 1994). Gilligan’s theory illustrates that theories on moral development do not need to focus on the value of justice. Other psychologists have challenged the assumption that moral action is primarily reached by formal reasoning. People often make moral judgments without weighing concerns such as fairness, law, human rights and abstract ethical values. If this is true, the arguments that Kohlberg and other rationalist psychologists have analyzed are often no more than post hoc rationalizations of intuitive decisions. This would mean that moral reasoning is less relevant to moral action than it seems (Crain, 1985). In current assessment of students there is a trend from merely making lists of many examples, or listing the positive and negative sides of an argument towards making students exhibit their reasoning as well. One of the common goals of school education is that students can produce a good argument. Stephen Toulmin’s model has become popular in development of students’
argumentation skills (Toulmin et al., 1984). To create an argument a person needs to state their claim, then support it with facts (data) that are arranged logically. For each fact, they should give the evidence for the fact (warrant), and for each warrant, state the quality of its validity (backing). Then, for each warrant and its backing, they should think of an opposing point of view (rebuttal), and consider further possible warrants and backing for the rebuttals. At the end they review: having argued the rebuttals, do they need to qualify their original claim? A simple representation of this structure is sketched after this paragraph.
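To make the structure of such an argument concrete, the following minimal sketch (in Python) represents a Toulmin-style argument as a data structure. The class and field names, and the Heinz example, are illustrative assumptions of ours, not part of Toulmin's notation or any published teaching material.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Warrant:
    """Links a datum to the claim."""
    evidence: str                  # the warrant: why the datum supports the claim
    backing: str                   # support for the warrant's own validity
    rebuttals: List[str] = field(default_factory=list)  # opposing views to be argued

@dataclass
class ToulminArgument:
    claim: str                     # the statement being defended
    qualifier: str = "presumably"  # modal strength of the claim
    data: List[str] = field(default_factory=list)       # facts supporting the claim
    warrants: List[Warrant] = field(default_factory=list)

    def needs_qualifying(self) -> bool:
        """After arguing the rebuttals, check whether any remain to hedge the claim."""
        return any(w.rebuttals for w in self.warrants)

arg = ToulminArgument(
    claim="Heinz was justified in stealing the drug",
    data=["his wife was dying", "the druggist refused a fair price"],
    warrants=[Warrant(
        evidence="a life outweighs property",
        backing="a widely shared moral intuition",
        rebuttals=["the druggist also has rights"])],
)
print(arg.needs_qualifying())  # True: a rebuttal remains, so the claim should be qualified
```

A teacher could then check mechanically whether a claim is stated, whether data is present, whether each datum is warranted, and whether rebuttals are anticipated.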
The mental mapping project, or human behaviourome project (Macer, 1992), identified nine classes of ideas, and attempts to explain the linkages between ideas in the construction of moral choices by different persons (Macer, 2002). The practical applications of that model have yet to reach a stage at which teachers could simply assess the moral development of their students. The Ideas, Evidence and Argument in Science Education (IDEAS) project of Osborne et al. in the UK [http://www.kcl.ac.uk/depsta/education/ideas.html] has as its goal assisting teachers in developing their skills to teach about ideas, evidence and argument in science. The materials they wish to develop include worksheets and video clips to enable teachers to teach children to develop and evidence scientific arguments. The IDEAS project suggests the following criteria for evaluating students' arguments: Is there a claim? Does the argument have data to support the claim? Does the argument link the data to the claim? Are there further justifications to support the case? Is there any anticipation of a counter argument and how it could be opposed? Case studies have long been used in medical ethics teaching (Doyal et al., 1987). Ratcliffe and Grace (2003) outline the knowledge, understanding and skills that students studying ethical issues in science acquire, and that can be used to design assessment questions. They listed several different levels of knowledge:

• Conceptual knowledge: Learners can demonstrate understanding of underpinning science concepts and the nature of scientific endeavour; probability and risk; the scope of the issue (personal, local, national, global, political and societal context); and environmental sustainability.
• Procedural knowledge: Learners can engage successfully in processes of opinion forming/decision making using a partial and possibly biased information base; cost-benefit analysis; evidence evaluation, including media reporting; and ethical reasoning.
• Attitudes and beliefs: Learners can clarify personal and societal values and ideas of responsibility, and recognize how values and beliefs are brought to bear, alongside other factors, in considering socio-scientific issues.
There is a consensus among many Western scholars that the balancing of four main bioethical principles, namely autonomy, justice, beneficence and non-maleficence, is central to making better decisions (Beauchamp and Childress, 1994). Autonomy includes ideas such as respect for privacy and respect for personal choice. Justice is to respect the autonomy of others and to treat persons equally. Beneficence is to try to do good, and non-maleficence is to avoid doing harm. When solving, or trying to reach a consensus on, bioethical problems, these four principles can be a good guide in balancing which considerations should be weighed most. One measure of bioethics education could then be whether students are able to use these principles in decision-making, which was examined by the presence of these keywords in discourse (oral or written). Reaching a good decision is often difficult, and the decision may not be the same if made at different times or in different situations. Another approach that is common in education is to teach learners to break down ethical dilemmas into manageable problems, for example the separation of the action,
consequences and motives connected to a moral decision. This separation is reflected in the different bioethical theories. Utilitarianism is an example of a bioethical theory which looks at the consequences of an action, and is based on the work of Jeremy Bentham and John Stuart Mill. This principle asserts that we ought always to produce the maximal balance of happiness or pleasure over pain, or good over harm, or positive value over disvalue. Utilitarianism can then be broken down into rule utilitarianism and act utilitarianism. "A rule utilitarian may use moral rules as authoritative instrumental rules, so the morally right action is conformity to a system of rules, and the criterion of the rightness of the rule is the production of as much general happiness as possible" (Macer, 1998a). Act utilitarians, on the other hand, look only at the particular act, and regard moral rules as merely approximate guides, which could be broken if the maximal good is not otherwise obtained. Another example of a bioethical theory is the rights-based theories of Immanuel Kant, and human rights law (Beauchamp and Childress, 1994; Macer, 1998a). The use of utilitarian-style logic and rights arguments was also examined in the discourse.
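As a concrete illustration of this style of keyword analysis, the sketch below (in Python) codes a piece of discourse against a small keyword frame. The frame shown is a deliberately simplified assumption of ours, not the published coding frame; a real study would extend and validate the categories and keyword lists.

```python
# A minimal keyword coding frame for bioethical discourse analysis.
# Categories follow the principles and theories named in the text;
# the keyword lists themselves are illustrative assumptions.
CODING_FRAME = {
    "autonomy":        ["autonomy", "privacy", "personal choice", "consent"],
    "justice":         ["justice", "fairness", "equal treatment", "equality"],
    "beneficence":     ["benefit", "do good", "well-being"],
    "non-maleficence": ["harm", "avoid harm", "risk"],
    "utilitarian":     ["consequence", "greatest happiness", "outcome"],
    "rights":          ["right to", "human rights", "duty", "dignity"],
}

def code_discourse(text: str) -> dict:
    """Return, for each category, the keywords found in the text."""
    lowered = text.lower()
    return {category: [kw for kw in keywords if kw in lowered]
            for category, keywords in CODING_FRAME.items()}

sample = ("Heinz should steal the drug because his wife has a right to life, "
          "but he must also weigh the harm done to the druggist.")
print(code_discourse(sample))
# Non-empty matches here: 'non-maleficence': ['harm'], 'rights': ['right to']
```

Such a frame can be extended case-by-case with new keywords and concepts, though keyword presence is at best a rough proxy for the quality of the underlying reasoning.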
NETWORKING

The International Bioethics Education Network was initiated in 2004, and the creation of networks linking research into policy is a cornerstone of efforts at all levels, from local to regional. Listservs function in English for educators and students, and persons from a wide range of countries have tried these resources and contributed to this project over the past years:

• Education listserv:
• Student listserv:
Networking among teachers interested in the ethics of science and technology is essential for improving the quality of the way that ethics is taught, and also for strengthening the motivation of teachers, who are still often isolated. There are many teachers who will teach ethics of science at early levels, and successful methods for 12-year-olds have also been described (Macer et al., 1997). There are considerable challenges in teaching even topics with a long tradition of debate, such as the ethics of biotechnology (Hendrix, 1993; Lock and Miles, 1993; Bryce, 2004).
FUTURE NEEDS

In the future, the use of principles as expressed in the UNESCO Universal Declaration on Bioethics and Human Rights (2005) will also be analyzed to broaden the description of bioethical reasoning. Other goals will also be identified, from not only modern Western society but other value traditions. As with the above examples of the questions Kohlberg used to link student arguments to moral stages of development, there are a number of approaches that could be developed into evaluation tools for the assessment of bioethics education. We need to repeatedly examine the criteria that could be used to measure the success of ethics education, and the effectiveness of different forms of education for making mature citizens. The evaluation methods and tools need to be extended to look for the presence of other concepts, such as virtue ethics, for example. Classroom observations, audio and video recordings, and written essays and homework done by the students can all be collected, but new methods for discourse analysis need to be researched. Text analysis of the reports for keywords was undertaken, extending categorization methods that have been developed (Maekawa & Macer, 2005). This feedback should be used to continually modify the texts and accompanying
questions and materials for teachers. Another way to assess the usefulness of the materials for developing ethical principles in making ethical decisions was to look for key words and concepts in the answers students give to oral questions. The knowledge that educators want to impart includes knowledge of the science/technology content, knowledge of reflective processes (individual views), exploration of morals/values (values clarification), knowledge about bias and how to detect it (values analysis), and knowledge about political agendas, for example (Conner, 2004). Recently, Sadler and Zeidler (2005) showed that tertiary students frequently relied on combinations of rationalistic, emotive and intuitive knowledge as they worked to resolve scenarios about genetic engineering. Persons at all levels do mix ideas in different ways (Macer, 2002), and this was shown in the evaluation report that is an output of this project. Evaluation must be done ethically (Alderson & Morrow, 2004), and there is a variety of methods in research which can be applied for evaluation, depending on the style of class and purpose (Cohen et al., 2003). It is very important to examine the future direction of bioethics education, how it might enable people to question scientific endeavours, and what impact their moral decisions will have on them as individuals and upon their societies. The skills required to do this involve the ability to identify existing ideas and beliefs, listen to others, be aware of multiple perspectives, find out relevant information and communicate the findings to others. These skills cannot be 'given' to students through a didactic approach to teaching, where the teacher imparts the knowledge. Instead, students need to experience situations that allow them to develop these skills through interacting with the teacher and with each other. This project allows the sharing of cases and experience across a range of cultures as well. When bioethics is applied to professional behaviour, such as in medical ethics, methods
to evaluate have included the way students conduct a patient examination (http://wings.buffalo.edu/faculty/research/bioethics/eval.html). In the Buffalo University bioethics program (Singer et al., 1993), the technology of the objective structured clinical examination (OSCE) (Cohen et al., 1991), using standardized patients, was applied to the evaluation of bioethics. Methods that have been used to evaluate the clinical-ethical abilities of medical students, post-graduate trainees, and practising physicians include multiple-choice and true/false questions (Howe and Jones, 1984), case write-ups (Siegler et al., 1982; Doyal et al., 1987; Redmon, 1989; Hebert et al., 1990), audio-taped interviews with standardized patients (Miles et al., 1990), and instruments based on Kohlberg's cognitive moral development theory (Self et al., 1989). Pre- and post-teaching intervention comparisons are another method that can be used (Oka & Macer, 2000). These can be applied to other professional ethics and research ethics guidelines. There can be monitoring of the behaviour of scientists who have completed courses, and examination of cases of misconduct to assess whether the persons felt they lacked some ethics education. The reliability and validity of evaluation methods have seldom been examined, and research into these elements must also be developed. Auvinen et al. (2004) applied Kohlberg's stages of moral development to assess ethics teaching among nursing students in Finland, and found significantly higher ethical maturity when nurses actually had to deal with ethical dilemmas in their practical training in clinics.
CONCLUSION

There is a range of goals for ethics education in science and technology. The goals include increasing respect for life; balancing the benefits and risks of science and technology; understanding better the diversity of views of different persons; understanding the breadth of questions
that are posed by advanced science and technology; being able to integrate the use of scientific facts, ethical principles and argumentation in discussing cases involving moral dilemmas; and being able to take different viewpoints, such as biocentric and ecocentric perspectives. We do not need to achieve all goals to consider a class successful, and different persons, professions and communities put a different amount of emphasis on each goal. At all levels we need to research how to evaluate whether the teaching is having any impact. Because investigating ethical issues is complex, educators need to consider what knowledge needs to be developed in order for students to make sense of moral issues, to be able to critically evaluate them, and to take more ethical action based on this knowledge. In experience with trying to get evaluation and feedback, however, despite positive comments that teachers may provide in person, very few evaluation reports from student and teacher feedback are returned (Macer, 2006). There is still a need for analysis of reports and discourse in order to gain a greater impression of how student values changed, and a suggested coding frame has been made. It can be extended case-by-case to add new keywords and concepts which are important for the specific research goals of the evaluation, as well as topic-specific goals. Pre- and post-questionnaire surveys about specific topics relating to the content of the lecture or teaching intervention (Maekawa and Macer, 2004) can be useful to measure change; however, report and discourse analysis may provide a more reliable judgment, because the object is to see the use of ethical principles and moral reasoning all the time, not just in what students write for tests. There are several different ways to assess learning in bioethics, and assessment methods are needed that map to the different goals. A mix of qualitative and quantitative methodology can help in the monitoring of ethical maturity, and qualitative discourse analysis will assist this (Dawson and Taylor, 1997).
The Action Plan for Bioethics Education developed in 2006 in the Asia-Pacific region (RUSHSAP, 2006) addressed recommendations to educators, researchers, universities and governments. There is room at all levels to develop a practical climate for greater education in the ethics of science and technology. In conclusion, we can say that despite the overwhelming consensus that education in professional ethics is essential, we still have a long way to go before we can be confident that ethics education achieves the goals for which it is conducted, and more research and trials are necessary in every culture and field of science and technology.
REFERENCES

Alderson, P. & Morrow, V. (2004). Ethics, social research and consulting with children and young people. London: Barnardo's.

Asada, Y., Akiyama, S., Tsuzuki, M., Macer, N. Y. & Macer, D. R. J. (1996). High school teaching of bioethics in New Zealand, Australia, and Japan. Journal of Moral Education, 25, 401-420.

Auvinen, J. et al. (2004). The development of moral judgment during nursing education in Finland. Nurse Education Today, 24, 538-46.

Beauchamp, T. L. & Childress, J. F. (1994). Principles of biomedical ethics (4th ed.). New York: Oxford University Press.

Bhardwaj, M. & Macer, D. (1999). A comparison of bioethics in school textbooks in India and Japan. Eubios Journal of Asian & International Bioethics, 9, 56-9.

Bryce, T. (2004). Tough acts to follow: The challenges to science teachers presented by biotechnological progress. Int. J. Science Education, 26, 717-733.

Chachra, D. (2005). Beyond course-based engineering ethics instruction: Commentary on "Topics and cases for online education in engineering". Science & Engineering Ethics, 11(3), 459-62.

Cohen, L., Manion, L. & Morrison, K. (2003). Research methods in education (5th ed.). London: Routledge Falmer.

Cohen, R., Singer, P. A., Rothman, A. I., & Robb, A. (1991). Assessing competency to address ethical issues in medicine. Academic Medicine, 66, 14-5.

COMEST (The World Commission on the Ethics of Scientific Knowledge and Technology). (2004). The teaching of ethics. Paris: UNESCO.

Conner, L. (2003). The importance of developing critical thinking in issues education. New Zealand Biotechnology Association Journal, 56, 58-71.

Conner, L. (2004). Assessing learning about social and ethical issues in a biology class. School Science Review, 86(315), 45-51.

Crain, W.C. (1985). Theories of development. New York: Prentice-Hall.

Dawson, V. & Taylor, P. (1997). The inclusion of bioethics education in biotechnology courses. Eubios Journal of Asian & International Bioethics, 7(6), 171-175.

Doyal, L., Hurwitz, B. & Yudkin, J. S. (1987). Teaching medical ethics symposium: Medical ethics and the clinical curriculum: A case study. Journal of Medical Ethics, 13, 144-149.

Eisen, A. & Parker, K.P. (2004). A model for teaching research ethics. Science & Engineering Ethics, 10(4), 693-704.

Ford, R. C. & Richardson, W. D. (1994). Ethical decision making: A review of the empirical literature. J. Business Ethics, 13, 205-21.

Gilligan, C. (1993). In a different voice: Psychological theory and women's development. Cambridge, MA: Harvard University Press.
Hebert, P., Meslin, E. M., Dunn, E. V., Byrne, N. & Reid, S.R. (1990). Evaluating ethical sensitivity in medical students: Using vignettes as an instrument. J. Medical Ethics, 16, 141-145.

Hendrix, J. R. (1993). The continuum: A teaching strategy for science and society issues. American Biology Teacher, 55, 178-81.

Jamieson, D. (1995). Teaching ethics in science and engineering: Animals in research. Science & Engineering Ethics, 1, 185-6.

Jarvis, S., Hickford, J. & Conner, L. (1998). Biodecisions. Lincoln: Crop & Food Research Institute.

Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. Chicago: Rand-McNally.

Kumano, Y. (1991). Why does Japan need STS: A comparative study of secondary science education between Japan and the U.S. focusing on an STS approach. Bull. Sci. Tech. Soc., 11, 322-30.

Levinson, R. & Reiss, M. J. (Eds.). (2003). Key issues in bioethics: A guide for teachers. London: Routledge-Falmer.

Lock, R. & Miles, C. (1993). Biotechnology and genetic engineering: Students' knowledge and attitudes. J. Biological Education, 27, 267-72.

Macer, D. R. J. (1994). Bioethics for the people by the people. Christchurch: Eubios Ethics Institute.

Macer, D. R. J., Asada, Y., Tsuzuki, M., Akiyama, S., & Macer, N. Y. (1996). Bioethics in high schools in Australia, New Zealand and Japan. Christchurch: Eubios Ethics Institute.

Macer, D., Obata, H., Levitt, M., Bezar, H. & Daniels, K. (1997). Biotechnology and young citizens: Biocult in New Zealand and Japan. Eubios Journal of Asian & International Bioethics, 7, 111-114.
Macer, D. R. J. (1998). Bioethics is love of life: An alternative textbook. Christchurch: Eubios Ethics Institute.

Macer, D. & Ong, C.C. (1999). Bioethics education among Singapore high school science teachers. Eubios Journal of Asian & International Bioethics, 9, 138-144.

Macer, D. R. J. (2002). The next challenge is to map the human mind. Nature, 420, 121.

Macer, D. R. J. (Ed.). (2004a). Bioethics for informed citizens across cultures. Christchurch: Eubios Ethics Institute.

Macer, D. R. J. (Ed.). (2004b). Challenges for bioethics from Asia. Christchurch: Eubios Ethics Institute.

Macer, D. R. J. (2004c). Bioethics education for informed citizens across cultures. School Science Review, 86(315), 83-86.

Macer, D. R. J. (Ed.). (2006). A cross-cultural introduction to bioethics. Christchurch: Eubios Ethics Institute. Retrieved from http://eubios.info/ccib.htm, http://www.unescobkk.org/index.php?id=2508

Maekawa, F. & Macer, D. R. J. (2005). How Japanese students reason about agricultural biotechnology. Science & Engineering Ethics, 10(4), 705-716.

Merton, R. K. & Kendall, P. L. (1946). The focused interview. American J. Sociology, 51, 541-7.

Miles, S. H., Bannick-Mohrland, S. & Lurie, N. (1990). Advance-treatment planning discussions with nursing home residents: Pilot experience with simulated interviews. Journal of Clinical Ethics, 2, 108-112.

Oka, T. & Macer, D. R. J. (2000). Change in high school student attitudes to biotechnology in response to teaching materials. Eubios Journal of Asian & International Bioethics, 10(6), 174-9.
Pandian, C. & Macer, D. R. J. (1998). An investigation in Tamil Nadu with comparisons to Australia, Japan and New Zealand. In Azariah, J., Azariah, H., & Macer, D.R.J. (Eds.), Bioethics in India (pp. 390-400). Christchurch: Eubios Ethics Institute.

Ramsey, J. (1993). The science education reform movement: Implications for social responsibility. Science Education, 77, 235-58.

Ratcliffe, M. & Grace, M. (2003). Science for citizenship: Teaching socio-scientific issues. Maidenhead: Open University Press.

Reiss, M.J. (1999). Teaching ethics in science. Studies in Science Education, 34, 115-140.

RUSHSAP (Regional Unit for Social & Human Sciences in Asia & the Pacific, UNESCO). (2006). Action Plan for Bioethics Education. Developed at the 2006 UNESCO Asia-Pacific Conference on Bioethics Education. Retrieved from http://www.unescobkk.org/index.php?id=apse

Rest, J. R. (1986). Moral development: Advances in research and theory. New York: Praeger.

Sadler, T. D., & Zeidler, D. L. (2005). Patterns of informal reasoning in the context of socioscientific decision making. Journal of Research in Science Teaching, 42(1), 112-138.

Sass, H. M. (1999). Educating and sensitizing health professionals on human rights and ethical considerations: The interactive role of ethics and expertise. International J. Bioethics, 10(3), 69-81.

Self, D., Wolinsky, F.D. & Baldwin, D.C. (1989). The effect of teaching medical ethics on medical students' moral reasoning. Academic Medicine, 64, 755-9.

Scharf, P. (1978). Moral education. Davis, CA: Responsible Action.

Siegler, M., Rezler, A. G., & Connell, K. J. (1982). Using simulated case studies to evaluate a clinical ethics course for junior students. Journal of Medical Education, 57, 380-385.

Singer, P. A., Cohen, R., Robb, A., & Rothman, A. I. (1993). The ethics objective structured clinical examination (OSCE). J Gen Intern Med, 8, 23-8.

Ten Have, H. (2006). The activities of UNESCO in the area of ethics. Kennedy Institute of Ethics Journal, 16(4), 333-352.

Toulmin, S., Rieke, R. & Janik, A. (1984). An introduction to reasoning (Second edition). New York: Macmillan.

UNESCO (1997). Universal Declaration on the Human Genome and Human Rights. UNESCO.

UNESCO (2005). Universal Declaration on Bioethics and Human Rights. UNESCO.

UNESCO. Ethics home page. http://www.unesco.org/ethics

Van Rooy, W. & Pollard, I. (2002a). Teaching and learning about bioscience ethics with undergraduates. Education & Health, 15(3), 381-385.

Van Rooy, W. & Pollard, I. (2002b). Stories from the bioscience ethics classroom: Exploring undergraduate students' perceptions of their learning. Eubios Journal of Asian & International Bioethics, 12, 26-30.

Waks, L. J. & Barchi, B. A. (1992). STS in U.S. school science: Perceptions of selected leaders and their implications for STS education. Science Education, 76, 79-90.

Yager, R. (1990). Science/technology/society movement in the United States: Its origin, evolution, and rationale. Social Education, 54, 198-201.

Zaikowski, L. A. & Garrett, J. M. (2004). A three-tiered approach to enhance undergraduate education in bioethics. BioScience, 54, 942-9.
KEY TERMS

Bioethics: This is a field concerned with ethical implications within medicine and medical research.

Education: This describes the process and act of acquiring knowledge.

Ethics of Science and Technology: This is a field concerned with the ethical study of science and technology.

Evaluation: This is a set of procedures designed to measure or account for changes in learning or performance.
Medical Ethics: This is a field of applied ethics concerned with moral and ethical values in medicine.

Moral Development: This concerns changes in individual values that occur during development.

UNESCO: UNESCO is a specialised technical intergovernmental agency of the United Nations, focusing on the promotion of education, culture, social and natural sciences, and communication and information.
Chapter VII
Planning, Interests, and Argumentation

Seppo Visala
University of Tampere, Finland
ABSTRACT

Within organisational development, people's arguments arise from their personal or group interests, which in turn are based on the systemic differentiation of society and technology at a given time. We face a crucial issue: must we accept separate group interests as inescapable circumstances, or can we reach out for universal human interests? This chapter addresses the issue by combining Rawls' idea of an original position behind a veil of ignorance with Habermas' concepts of communicative rationality and discourse.
INTRODUCTION

Planners and decision makers encounter competing interests that emerge from the division of labour and from our economic system, but these interests do not provide any rationally motivated legitimation basis for planning. People's arguments arise from their personal or group interests, which in turn are based on the systemic differentiation of society and technology at a given time. The group interests and the division of labour reproduce each other all the time, with technology often being the major driving force behind a new division of labour. The choice between technological alternatives is an ethical issue because it affects
people’s rights and position in the organization in question, as well as through its products and side effects external society and, in the long run, also future generations. The focus of this chapter is inside organizations, but we briefly touch upon the broader perspective in the discussion on future trends. The theoretical background of rational planning has two main sources, the economists’ notion of rational decision making and the systems approach (March, 1982, Simon, 1982, Churchman, 1968). Planning theorists with a more practical stance have been looking for a theoretical basis for planning professionals. Planning theorists take into account the multi-agency view of decision
making, holding that the planner should bring the different political and technical aspects of relevant alternatives into the open (Faludi, 1973), or even demanding that the planner take an active political role so as to defend the interests of the oppressed (Forester, 1989). We face a crucial issue: must we accept separate group interests as inescapable circumstances, or can we reach out for universal human interests? The situation is a challenge for rational argumentation since, if there is a possibility of a generalized interest, it is only rational argumentation that can lead us out of the dilemma. By means of the accounts of two outstanding thinkers of the last century we can address the problem of the universalisation of interest: Habermas and Rawls.
RAWLS

Rawls (1973) derives his theory of justice, justice as fairness, through a very simple but powerful concept of rational choice in an ideal 'original position behind a veil of ignorance'. His aim is to derive the principles of justice that equal, rational persons would agree on when they do not know their share of the utilities ensuing from the principles, nor their social circumstances or personal characteristics. The veil of ignorance guarantees the universalisation of the principles. When the participants do not know their social position or any personal characteristics, they are in a position to think of the principles from the generalised position of any rational decision maker. They can only make their decision with regard to the principles of justice, not their contingent natural fortune (p. 18). Rawls sees the original position as a procedural interpretation of Kant's categorical imperative (p. 256). The participants' rational choice will then be to define justice as fairness. Rawls (1973) derives two basic principles of justice. (1) The principle of liberty says: "Each person is to have an equal right to the most extensive basic liberty compatible with
a similar liberty for others" (p. 60). (2) The difference principle states: "Social and economic inequalities are to be arranged so that they are both (a) to the greatest benefit of the least advantaged and (b) attached to offices and positions open to all under conditions of fair equality of opportunity" (p. 83, also p. 302). Accordingly, the optimum configuration of the economy is achieved by maximising the position of its least advantaged members. The principles are ordered by two priority rules: (1) According to 'the priority of liberty', liberty can only be restricted for the sake of liberty. (2) According to 'the priority of justice over efficiency and welfare', the equality of opportunities is prior to the welfare of the least advantaged (p. 302). Partly due to the criticism of his Theory of Justice, Rawls gave up the central role of the above two principles in his work Political Liberalism (1993), without abandoning the idea of the original position. (For discussion on Rawls, see Freeman, 2003.) Rawls' later work expanded the view from the rules of a democratic state to the rules of nations (Rawls, 1993) and between nations (Rawls, 1999), so his views have hardly been discussed at all in a limited organisational context, which is our aim in this chapter. In his later work Rawls (1993) accepted it as a fact that people can adhere to different notions of freedom, due to, for instance, their religion. In this context we can leave aside the detailed debate concerning the above principles, although they are most interesting from the point of view of ethics in general. The second principle also addresses technology, as it replaces Pareto optimality as the notion of efficiency. (An economic situation is Pareto optimal if it cannot be changed for the benefit of anyone without worsening it for someone else.) Efficiency as a driving force of social development will be discussed briefly below, but we focus on Rawls' idea of 'the original position behind a veil of ignorance'. The notion of the original position was challenged by Habermas' communicative rationality.
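Before turning to Habermas, we note one standard formalisation of the contrast just drawn (our illustration, not Rawls' own notation), where $u_i(x)$ denotes the utility of member $i$ under social arrangement $x$:

```latex
% Pareto optimality: no feasible change helps someone without hurting someone else.
x \text{ is Pareto optimal} \iff
  \neg\exists y : \big( \forall i\; u_i(y) \ge u_i(x) \big)
                  \wedge \big( \exists j\; u_j(y) > u_j(x) \big)

% The difference principle instead selects the arrangement that is best
% for the least advantaged (the maximin rule):
x^{*} = \arg\max_{x} \min_{i} u_i(x)
```

Many arrangements, including highly unequal ones, are Pareto optimal; the maximin rule singles out those that are best for the worst off, which is why the difference principle can replace Pareto optimality as the notion of efficiency.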
HabErMas During the 1970’s Habermas worked out a theory of communicative action which was summed up in two volumes published in German in 1981, and in translations into English in 1984 and 1987 (Habermas, 1984, 1987). One of the basic notions of his theory is communicative rationality. Habermas (1984, 75) characterises communicative rationality in the following way: We can begin with the claim that the concept of communicative rationality has to be analysed in connection with achieving understanding in language. The concept of reaching an understanding suggests a rationally motivated agreement among participants that is measured against criticisable validity claims. The validity claims (propositional truth, normative rightness, and subjective truthfulness) characterise different categories of a knowledge embodied in symbolic expressions. The idea of discourse put forward by Habermas (1973 and 1984) can be summed up in the notions of an ideal speech situation, levels of discourse and validity claims which stand up with rationally motivated agreement. The ideal speech situation presupposes a certain kind of process and procedure according to which discussants conduct argumentative speech (1984, 25). The process guarantees the participants’ symmetrical and equal chances to express their opinions and raise questions about any issue (Habermas, 1973). Argumentation excludes all force but the force of the better argument. In a discourse “participants […] test with reasons, and only with reasons, whether the claim defended by the proponents rightfully stands or not” (1984, 25). Habermas’ idea of discourse comes close to Argyris and Schön’s Model II theory of action (Schön 1983, 230 ff.). Habermas’ theory partly emerges from the criticism of instrumental reason in the spirit of the Frankfurt school of thought (Habermas, 1968 and 1981, cf. Marcuse, 1968). Habermas’
theory of communicative action has raised discussion within one field of technology, information systems (Lyytinen & Hirschheim, 1989; Klein & Hirschheim, 1991). In accordance with the idea of communicative rationality, Habermas (1983) replaces Kant's monological universalisation principle (which does not require a discourse to back it up), as expressed in the categorical imperative, with a principle of universality based on the formal conditions of discourse. His universalisation principle says (1983, 75) that any valid norm must satisfy the condition that the consequences which follow when individuals generally keep the norm while striving for their particular interests must be acceptable to all those affected. Habermas sees this universalisation principle as compelling each participant to take others into account as well, i.e. it leads to what Mead has called 'ideal role-taking'. Each participant in the discourse must have an opportunity to express his or her ideas, so that consensus is the result of real argumentation. A discourse will bring about a consensus through the force of the better argument. We may ask, however, what it is that can provide moral discourse with better arguments. Rawls has given a plausible, at least partial, answer.
UNIVERSALISATION IN A REAL DISCOURSE

Habermas (1983, 76) explicitly contrasts his notion of ethics with that of Rawls, and criticises the latter's view of a fictitious original position and the monological basis of his ethics. Habermas has himself been criticised for assuming a fictitious situation in which all those affected could take part in an ideal speech situation (Ulrich, 1983). Not all people can be expected to meet the formal requirements of rational argumentation, and future generations cannot in principle take part in decisions that influence their lives. It is obvious
that the ideal process in which everyone can take part (also an ideal assumption) does not guarantee rational motivation as such, nor does it serve as a better argument. In practice, a dialogue naturally has a greater chance of achieving a just solution than monological, theoretical contemplation. Rawls (1995) also directly denies Habermas' criticism of monological ethics: "The point of view of civil society includes all citizens. Like Habermas' ideal discourse situation, it is a dialogue, indeed, an omnilogue" (p. 140). Rawls explains this further in a footnote: Habermas sometimes says that the original position is monological and not dialogical; that is because all the parties have, in effect, the same reasons and so they select the same principles. This is said to have the serious fault of leaving it to "the philosopher" as an expert, and not to citizens of an ongoing society, to determine the political conception of justice. [Rawls refers to Habermas, 1983]. The reply I make to his objection is that it is you and I – and so all citizens over time, one by one and in associations here and there – who judge the merits of the original position as a device of representation and the principles it yields. I deny that the original position is monological in a way that puts in doubt its soundness as a device of representation. (Rawls, 1995, fn. p. 140). Already in his Theory of Justice, Rawls (1973) says that "one or more persons can at any time enter this position or […] simulate the deliberations of this hypothetical situation, simply by reasoning in accordance with the appropriate restrictions" (p. 138). So Habermas' criticism misses the point of generalisation, but Rawls does not directly answer the question of real discourse versus private contemplation. For Rawls, universalisation is a consequence of freely choosing any citizen (untied to any quantifier), which leads to the conclusion that the principles decided by that citizen are valid for all citizens, whereas for Habermas the universalisation is more
straightforward, because he explicitly refers to all citizens ('ideal role-taking'). From the logical point of view their expressions amount to the same result, as the note below spells out: Rawls chooses 'any citizen' and Habermas uses the quantifier 'all'. We can think of the possibility of combining these seemingly opposing views within a given organisational context. From Habermas' point of view the participants are known beforehand, although they are not necessarily all those affected by the decisions, and the decisions can be reached through rational argumentation. From Rawls' point of view we can recognise here a special case of the original position: the participants do not necessarily know their position in the reorganised division of labour. This leads us to formulate the following principle of the universalisation of moral-practical discourse (modified from Visala, 1993 and 1996): Arguments assume universal validity only to the extent that they do not appeal to personal interests; decisions are only backed up by arguments for the optimal total interest of the organisation, and pass the tests for general human needs. The combining of Habermas' and Rawls' ideas strengthens them both. Rawls' notion of the original position provides Habermas' communicative rationality with a principle of what counts as a better argument. The conditions of the original position are conditions of an ideal speech situation, even to the extent that they can be realised through the argument that biased personal needs do not count as a rational argument. Habermas' idea of rational argumentation, with the above supplementary principle, provides Rawls' notion of the veil of ignorance with a practical implementation. The above principle works in the manner of Popper's refutation principle: a universal proposition cannot be verified as true, but it can be refuted by a single counter-example.
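The logical point can be stated as the standard rule of universal generalisation (our notation, not either author's):

```latex
% If P(c) can be derived for an arbitrary individual c, about which nothing
% particular is assumed, then the universally quantified claim follows:
\frac{\vdash P(c) \qquad c \text{ arbitrary}}{\vdash \forall x\, P(x)}
```

Rawls' 'any citizen' plays the role of the arbitrary individual $c$, while Habermas quantifies over all those affected directly; hence the two formulations coincide logically.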
How should we organise a discourse in order to reach a rationally motivated consensus? A necessary precondition is that the participants share a common view of validity claims, i.e. of what counts as a valid argument. Habermas (1984) identifies four types of validity claims: truth, rightness, truthfulness (sincerity) and authenticity. The first three of them are relevant in the organisational context of this chapter. Rightness refers to the normative and ethical value of arguments, and truthfulness to the sincerity of a speaker who expresses his or her emotions or opinions. Authenticity refers to the aesthetic value of a piece of art. Truthfulness is a precondition of any serious talk; there is no way of checking the validity of subjective expression through argumentation (Habermas, 1984, 303). As regards rightness, the above principle provides a means to test the universality of a claim. Truth is at first sight the most obvious validity claim, but as the long history of philosophy shows, it is a controversial one. Toulmin (1958) has given a concise formulation of how to use arguments: a claim is a statement expressed with conditional modal qualifiers, based on data, warrant and backing (p. 104). The ultimate ground for any rational discourse is a life-world shared by the members of society. They must have some common 'preunderstanding' of what counts as an undisputable fact, for example. What is meant by this is best described by Wittgenstein (1979). We have discussed the philosophical foundation of the rationality of argumentation elsewhere at some length (Visala, 1993); this discussion will not be repeated here.
TECHNOLOGY AND PLANNING

The above universalisation principle can be used as a rule of argumentation when there is a plan to re-engineer an organisation by means of new technology. The guiding principle for arranging a discourse is that participants first agree on those issues that will be used as backing for other claims and are thus more general in nature. A new technical production concept should be
evaluated against efficiency objectives and environmental constraints, for example. The process view of an organisation makes it possible to encounter a situation in which the division of labour can be discussed without referring to any existing organisational positions. There are no particular interest groups yet, as the new tasks have not been allocated to anyone. The participants are in a kind of real original position, and they can divide the tasks and the benefits attached to them under a veil of ignorance. A planner could serve as a facilitator of discourse and, in a manner of speaking, hold the veil of ignorance in front of the participants. In accordance with Rawls' difference principle, the participants may end up with unequal shares of the outcome of the firm, if unequal shares are a way to get better total returns, i.e. more to share between stakeholders. However, the participants may come to some other conclusion as well, due to their shared values, which have been agreed upon during the discourse, or (what may be a more realistic assumption) due to regulations imposed on them. For example, a new technology may not improve the returns of the firm (in the short run), but it is chosen because it is less polluting. The purpose of this speculation is only to stress the point that Rawls' original principles need not be assumed as necessary starting points for a discourse. (This debate could be continued further with the argument that non-polluting technology is chosen for the benefit of future generations, etc.)
DISCUSSION

The above principle still remains an ideal objective. There are two obvious reasons for doubting whether it can be implemented directly in modern organisations. The first follows from the prevalent organisational culture of Model I (control and competition; Argyris, 1990). The second reason is the existing power
positions embedded in organisations, which presume a certain inequality as a precondition for their existence. Argyris (1990, 13) sums up the Model I theory: "Model I theory-in-use instructs individuals to seek to be in unilateral control, to win, and not to upset people. It recommends action strategies that are primarily selling and persuading." However, Model I cannot find its justification in economic efficiency. It leads to skilled incompetence that inhibits everybody from utilising all their energy, and leads to malfunctioning in the organisation and to defensive routines, which rule out the questioning of existing norms and values. Model I theory is not rational in an economic sense, and hence executives should direct the organisation towards open discourse, i.e. Model II theory-in-use. The prevalent unilateral control ought to be replaced by bilateral control, and power by competence and expertise (Argyris, 1990, 127). The second obstacle leads to the same irrational consequences as the first, although it is based on the legal rules of the market economy rather than on any organisational tradition. Owners could also take into account how business organisations can benefit from rational argumentation: the success of the organisation is a common interest, and it can best be achieved when all relevant opinions can be expressed freely. However, given the present globalization and the accelerating pace of the (quarterly) economy, this hope for a rational discourse between workers and stock owners seems utopian. Firms calculate the economic optimum on the basis of the marginal utility of labour. There is not much use for the better argument when agents step out of the sphere of discourse into the world of sellers and buyers. Money then serves as a disembedding token and a medium of transactions between people (Giddens, 1990), but it is a substitute for language that does not allow rationally motivated consensus through agreement based on reason (Habermas, 1987).
0
FUTURE TRENDS

We may never reach sufficient control over the world by means of rational decisions, as there will always be hidden social and natural interactions and unexpected consequences. This only means a longer learning cycle: some day nature will hit back, and then we will learn from experience. We will realize that we have had too tight a system boundary. Conflicts between rival interest groups may have to be put to one side when mankind encounters a common threat such as climate change. There is a long road to such a consensus, because of the present unequal division of wealth. Can technology provide a solution, or is it a threat as such? At least the global information network gives us a forum for discourse, and a chance to evaluate our own values against those of others. Senge's influential work (2006) opens up yet another universalisation perspective: systems thinking in learning organisations and in society at large.
CONCLUSION

This chapter has proposed a universalisation principle that combines Rawls' notion of justice as fairness, derived 'behind a veil of ignorance', with Habermas' ethical doctrine based on discourse in an ideal speech situation. Technological development provides a special motivation for this principle, because it may give an opportunity for a new division of labour. The members of the organization are then, in a manner of speaking, behind the veil of ignorance, since they do not know their future position. Habermas' theory of communicative action sets the rules for organising a rational discourse which gives all members an opportunity to express their opinion on the matter, but without the possibility of referring to their own special interests. Modern network communities provide another link to technology, and one that may help extend the discourse
over organisational borders. The issue will be discussed elsewhere in this volume.
REFERENCES

Argyris, C. (1990). Overcoming organizational defenses: Facilitating organizational learning. Boston, MA: Allyn and Bacon.

Churchman, C.W. (1968). The systems approach. New York: Dell.

Faludi, A. (1973). Planning theory. Oxford: Pergamon Press.

Forester, J. (1989). Planning in the face of power. Berkeley, CA: University of California Press.

Freeman, S. (Ed.) (2003). The Cambridge companion to Rawls. Cambridge, UK: Cambridge University Press.

Giddens, A. (1990). The consequences of modernity. Cambridge, UK: Polity Press.

Habermas, J. (1968). Technik und Wissenschaft als "Ideologie". Frankfurt am Main: Suhrkamp.

Habermas, J. (1981). Erkenntnis und Interesse. Frankfurt am Main: Suhrkamp.

Habermas, J. (1983). Moralbewusstsein und kommunikatives Handeln. Frankfurt am Main: Suhrkamp.

Habermas, J. (1984). The theory of communicative action, Vol. 1: Reason and the rationalization of society. Cambridge, UK: Polity Press.

Habermas, J. (1987). The theory of communicative action, Vol. 2: The critique of functionalist reason. Cambridge, UK: Polity Press.

Klein, H.K., & Hirschheim, R. (1991). Rationality concepts in information system development methodologies. Accounting, Management and Information Technology, 1(2), 157–187.
Lyytinen, K., & Hirschheim, R. (1989). Information systems and emancipation: Promise or threat? In H.K. Klein & K. Kumar (Eds.), Systems development for human progress (pp. 115–139). Amsterdam: North-Holland.

March, J.G. (1982). Theories of choice and making decisions. Society, 20(Nov–Dec), 29–39.

Marcuse, H. (1968). One-dimensional man. London: Sphere Books Ltd.

Rawls, J. (1973). A theory of justice. Oxford: Oxford University Press.

Rawls, J. (1993). Political liberalism. New York: Columbia University Press.

Rawls, J. (1995). Political liberalism: Reply to Habermas. The Journal of Philosophy, 92(3), 132–180.

Rawls, J. (1999). The law of peoples. Cambridge, MA: Harvard University Press.

Schön, D. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.

Senge, P.M. (2006). The fifth discipline: The art & practice of the learning organisation. London: Random House.

Simon, H. (1982). Models of bounded rationality, Vol. 2: Behavioral economics and business organization. Cambridge, MA: The MIT Press.

Toulmin, S. (1958). The uses of argument. Cambridge, UK: Cambridge University Press.

Ulrich, W. (1983). Critical heuristics of social planning. Bern: Verlag Paul Haupt.

Visala, S. (1993). An exploration into the rationality of argumentation in information systems research and development. Unpublished doctoral dissertation, University of Oulu.
Visala, S. (1996). Interests and rationality of information systems development. Computers and Society, 26(3), 19–22.

Wittgenstein, L. (1979). On certainty. Oxford: Basil Blackwell.
KEY TERMS

Categorical Imperative (Kant): "So act that the maxim of your will could always hold at the
same time as the principle of universal legislation" (Critique of Practical Reason, A 54).

Veil of Ignorance: A person makes his or her ethical choice without knowing his or her share of the utilities that ensue from the chosen principles, or his or her social circumstances and personal characteristics.

Discourse in an Ideal Speech Situation: The participants have symmetrical and equal chances to express their opinions and to raise questions about any issue.
Section II
Research Areas of Technoethics
Chapter VIII
Ethics Review on Externally-Sponsored Research in Developing Countries

Alireza Bagheri
University of Toronto, Canada
ABSTRACT

This chapter elaborates on some of the existing concerns and ethical issues that may arise when biomedical research protocols are proposed or funded by research institutes (private or public) in developed countries but human subjects are recruited from resource-poor countries. Over the last two decades, clinical research conducted by sponsors and researchers from developed countries and carried out in developing countries has increased dramatically. The chapter examines the situations in which vulnerable populations in developing countries are likely to be exploited and/or there is no guarantee that the local community will receive any benefit from the research product, if it proves successful. By examining the structure and functions of ethics committees in developing countries, the chapter focuses on the issues which a local ethics committee should take into account when reviewing externally-sponsored research. In conclusion, by emphasizing capacity building for local research ethics committees, the chapter suggests that assigning the review of externally-sponsored proposals to the national ethics committee (if one exists), or to an ethics committee specifically charged with that task, would bring better results in protecting human subjects as well as ensuring benefit-sharing with the local community.
INTRODUCTION

In developed and developing countries alike, biomedical research is an essential component of improving human health and welfare. However, many developing countries are unable to address their
research-based health needs due to inadequate research capacity and healthcare infrastructure. These needs, along with other factors such as the accessibility of human subjects, lower research costs, and the lack of a clear research policy and
ethics review, cause them to be targeted as host countries for clinical trials. Early debates about ethics in medical research were sparked primarily in response to the ways in which medical research was conducted in developed countries, and they had limited relevance to developing countries. However, HIV/AIDS-related clinical trials in Africa and South East Asia can be cited as a hallmark in debates on the morality of conducting externally-sponsored research in developing countries. These studies, sponsored by the National Institutes of Health (NIH) and the Centers for Disease Control, have fueled controversies and disagreements (The participants 2004). For instance, should a standard of care be provided to participants in the control groups of clinical trials? This question was thrown into the international spotlight in 1997, when US-funded research into the prevention of mother-to-child transmission of HIV in Thailand was criticized in The New England Journal of Medicine (Lurie and Wolfe 1997) and The Lancet (The Lancet 1997). Participants in the control group were given a placebo rather than a long course of antiretroviral treatment, which had been demonstrated to be effective in developed countries. The critics argued that all participants should be provided with the best available treatment anywhere in the world, regardless of where the research is conducted, in order to prevent the exploitation of those in control groups. However, other researchers and research sponsors argued that it was not always possible, affordable, or appropriate to supply a "universal" standard of care in developing countries, and that the difficulties of meeting such a requirement would prevent beneficial medical research from being conducted. Another issue which became a focus of attention, particularly within the context of access to antiretroviral treatments for HIV/AIDS, is what should happen once a research project in a developing country is completed. Should the products of the clinical trials (new therapeutic methods, drugs, etc.) be made available (and affordable)
to the local community in which the trial was conducted, or not? To address these issues and concerns, several ethical guidelines have been developed by international organizations, e.g. the Council for International Organizations of Medical Sciences (CIOMS 2002), the World Medical Association (Helsinki Declaration 2000), the Nuffield Council on Bioethics (2002) and the UNESCO Declaration on Bioethics and Human Rights (2005). In practice, applying these guidelines is often fraught with difficulty and sometimes produces conflicting advice (Nuffield Council on Bioethics 2002). In addition, there is no documentation on the application of these guidelines, or on how ethics committees in developing countries deal with externally-sponsored proposals (Bagheri and Macer 2005).
HISTORICAL BACKGROUND

The long history of medicine is glorified by the fight against disease through new innovations, methods, and drugs which cure millions of patients. However, the short history of research ethics started with scandals of abuse and exploitation of human subjects. The bitter history of clinical research is full of vulnerable individuals who participated in medical experiments unwittingly. The atrocities of Nazi physicians are widely known, and abuse in US government-sponsored research is also well documented (Moreno 2001). There are also less publicized scandals, such as the abuse of human subjects in Japanese physicians' biological warfare research on Chinese prisoners during World War II (Tsuchiya 2000; Powell 2006). Despite the existence of international ethical guidelines such as the Nuremberg Code, some such clinical research continued, e.g. the Tuskegee study, which went on for 40 years (1932-1972), and the radiation experiments on children at the Fernald and Wrentham schools (1943-1973), which violated ethical codes in research (West 1998).
Since the Nuremberg Code, policies on human subject research have undergone a progressive transformation. Ethics is nowadays considered an integral part of strategies for scientific and technological progress (UNESCO 1998). As Jonathan Moreno observes, the evolution of human research ethics is marked by the construction of ever stronger defenses to protect subjects. He divides the history of research ethics into three eras. The first is "weak protectionism", in which researchers were largely trusted to oversee their own research. In the early 1980s "weak protectionism" was replaced by "modest protectionism", characterized by modest external oversight of research. He characterizes the current situation as "strong protectionism", which tries to minimize clinical researchers' discretion in governing their conduct with regard to human subjects (Moreno 2001). Despite all the sad experiences of exploitation, externally-sponsored research is welcomed by resource-poor countries. Why is that so? Owing to inadequate research capacity and healthcare infrastructure for conducting research relevant to their own health needs, these countries are dependent on the interests of external sponsors. In such circumstances, developing countries may find it difficult to refuse offers from foreign sponsors, even if the research is unlikely to benefit their populations at large. There are also accompanying incentives, in terms of improved healthcare facilities, training, joint publications, etc. Health authorities and research institutes in resource-poor countries may therefore find enough justification to compromise the direct benefit to research participants for the sake of the collateral incentives that come with collaboration with sponsoring institutes. It is noteworthy that in some developing countries a lack of appropriate governance mechanisms creates grounds for exploitation as well.
GLOBAL INEQUITY IN HEALTH RESEARCH

Different parts of the world have different kinds of health problems and needs. What people in Asia or sub-Saharan Africa suffer from is different from the health problems of North America. While people in the developing world suffer from infectious diseases which can be cured with inexpensive medicine, medical research in industrialized nations has its own priorities. Mainstream research activities in developed countries do not reflect the illnesses from which people in Asia are suffering. The so-called 10/90 gap draws attention to the fact that of all the funds invested in global health research by the public and private sectors, less than 10% is devoted to research into the health problems that account for 90% of the global disease burden (Global Forum for Health Research 2004). It has also been emphasized by the WHO Ad Hoc Committee on Health Research that the central problem in health research is the 10/90 disequilibrium: of the US$ 50-60 billion spent world-wide each year on health research by the private and public sectors together, only 10% is devoted to the health problems of 90% of the world's population (Nuffield Council on Bioethics 2002). It is ethically problematic, from the perspective of justice, that 90% of medical research is focused on health problems of industrialized countries, such as obesity, mood enhancement and neurological disorders, while only 10% of research addresses the needs of poor people in developing countries. Making a difference would require striking a balance between the health needs of those countries and the research activities related to those needs. Let us assume that by implementing the principle of "benefit sharing", which has been emphasized in the recent UNESCO Declaration on Bioethics and Human Rights (UNESCO Declaration 2005), less developed countries will enjoy the benefits of biomedical research. But to change the global situation, the urgent health needs
of less developed countries should be addressed too. This could be remedied by giving special consideration to the health problems of poor nations and ensuring that fairness prevails in setting research priorities, in addition to ensuring that all people benefit from the outcomes of biomedical research.
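The 10/90 disequilibrium described above is, at bottom, simple arithmetic, and working it through once makes the scale of the inequity concrete. The following sketch uses the approximate figures cited in this section (a US$ 50-60 billion annual global spend, 10% of it devoted to conditions carrying 90% of the disease burden); the numbers are rough estimates used purely for illustration.

```python
# Illustrative arithmetic for the 10/90 gap; all figures are the rough
# estimates cited in the text, not exact data.
total_spend = 55e9   # ~US$50-60 billion spent world-wide each year
share_poor = 0.10    # fraction spent on diseases of the developing world
burden_poor = 0.90   # fraction of global disease burden those diseases carry

spend_poor = total_spend * share_poor        # ~US$5.5 billion
spend_rich = total_spend * (1 - share_poor)  # ~US$49.5 billion

# Research funding per unit of disease burden for each group.
intensity_poor = spend_poor / burden_poor        # ~US$6.1 billion per unit
intensity_rich = spend_rich / (1 - burden_poor)  # ~US$495 billion per unit

print(f"Funding intensity ratio: {intensity_rich / intensity_poor:.0f}x")
# -> 81x: on these assumptions, each unit of disease burden concentrated
#    in industrialized countries attracts roughly 81 times more funding.
```

On these assumptions the disequilibrium is not 9 to 1 but closer to 80 to 1 once spending is normalized by burden, which is precisely what makes the gap so striking from the perspective of justice.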
EXTERNALLY-SPONSORED RESEARCH

Externally-sponsored research refers to research conducted, in whole or in part, by a foreign sponsor. It can also be part of a collaborative multinational or bilateral research project to be carried out in a developing country. The common feature of these projects is that the research protocol has been designed or funded in a developed country while human subjects are recruited from a developing country. The ethical issues raised in externally-sponsored research involving human subjects are certainly not new or unique. The abuse of vulnerable individuals in the name of science, of expanding the frontiers of human knowledge, or of the good of the larger community has been a concern since the very beginning of human experimentation. As Claude Bernard, the pioneer of the experimental method, put it: "The principle of medical and surgical morality consists in never performing on man an experiment which might be harmful to him to any extent, even though the result might be highly advantageous to science, i.e. to the health of others" (Greenwald et al 1982). As in developed countries, a very wide range of healthcare-related research is conducted in developing countries, but the majority of it is externally-sponsored. The US Food and Drug Administration (FDA) alone recorded a 16-fold increase in the number of foreign clinical investigators conducting research on new medicines in the decade 1990-2000 (Nuffield Council on Bioethics 2002).
The spectrum of sponsored research ranges from clinical trials on human subjects and laboratory research to determine the safety and efficacy of novel interventions, to behavioral research and beyond. External agencies, including other national governments, research councils, private sponsors, non-governmental institutions or agencies, and pharmaceutical companies, sponsor the majority of healthcare-related research in developing countries. In externally-sponsored research, one concern is whether there is real collaboration between researchers in the sponsoring and host institutes. The nature of externally-sponsored research often undermines the notion of "research collaboration": in any collaboration, the fear of exploitation will undermine constructive effort, particularly in less powerful developing countries. Usually, in externally-sponsored research, the sponsoring institute has already drafted and finalized the proposal and is only looking for a local body to assist with some practical issues. This can hardly be called "research collaboration". In fact, many researchers in developing countries act as mere "sample collectors", because everything has been designed in the sponsoring institute and researchers in the host institutes are responsible for collecting the biological samples and sending them to the reference laboratory in the sponsoring country, e.g. the Harvard-Anhui case (Alliance for Human Research Protection 2000).
DEVELOPING COUNTRIES: A SOURCE FOR SUBJECT RECRUITMENT

As clinical research has increased dramatically, developing countries have become an "attractive and preferred" field for conducting clinical trials. It is important to elaborate the reasons why developing countries have been targeted as hosts for clinical research. Several factors have contributed to this trend.
GlaxoSmithKline (GSK), one of the multinational pharmaceutical companies, explains why, in its sponsored trials, it has extended patient recruitment into Eastern Europe, Asia and South America (outside the traditional centers of Western Europe and North America). The reasons for conducting clinical trials in developing countries are (GlaxoSmithKline Position Paper 2006):

• Clinical trial capabilities in certain other parts of the world have improved; therefore, the wider the geographic scope, the faster patients can be recruited, and the faster the recruitment, the sooner new medicines will be available.
• The recruitment costs per patient in these [developing] countries can often be lower.
• Due to changes in living standards, many diseases of the developed world (e.g. hypertension, diabetes) are now global diseases.
• Patients in Central and Eastern Europe, Latin America and parts of Asia have often used fewer medicines than those in Western Europe and the US. This makes them good candidates for a clinical trial, as it is easier to assess the effect of the products being tested.
• The overall amount of clinical trial activity in North America and Western Europe in many therapy areas makes it increasingly difficult to find experienced investigators who are able to start trials quickly and recruit suitable patients rapidly.
• To assess the relevance and applicability of some therapies within the local system of healthcare in developing countries (phase IV studies).
• To target diseases that disproportionately affect developing countries, including HIV/AIDS, TB and malaria, clinical trials have to be carried out in those areas. For example, the incidence of malaria in the developed world is too low to design a scientifically
robust study to evaluate the efficacy of an investigational compound. As explained in GSK's position paper, "there are also some scientific and regulatory reasons why clinical trials are conducted in developing countries." For instance, some developing countries, such as China, Nigeria, South Korea and India, insist on local clinical trial data as a prerequisite for product registration and require significant data in local populations.
INTERNATIONAL ETHICAL GUIDELINES

In response to the increase in sponsored clinical trials in developing countries, along with the experience of abuse of vulnerable populations, some international organizations have developed and revised ethical guidelines for conducting biomedical research involving human subjects in developing countries. However, these international guidelines and declarations have been criticized for focusing on one issue while eliding other important issues (Emanuel et al. 2000). The Council for International Organizations of Medical Sciences (CIOMS 2002), the World Medical Association (Helsinki Declaration 2000), the Nuffield Council on Bioethics (2002) and the UNESCO Declaration on Bioethics and Human Rights (2005) provide international guidance for researchers conducting clinical research, and some of them specifically address externally-sponsored research in developing countries. The following are some of the relevant provisions issued by these international bodies for the conduct of international research.

World Medical Association, Declaration of Helsinki (2000):

The benefits, risks, burdens and effectiveness of a new method should be tested against those
of the best current prophylactic, diagnostic and therapeutic methods. (Paragraph 29) Even the best proven prophylactic, diagnostic and therapeutic methods must continuously be challenged through research for their effectiveness, efficiency, accessibility and quality. (Paragraph 6) The particular needs of the economically and medically disadvantaged must be recognized. (Paragraph 8) Council for International Organizations of Medical Sciences: International ethical guidelines for biomedical research involving human subjects (CIOMS 2002): As a general rule, research subjects in the control group of a trial of a diagnostic, therapeutic, or preventive intervention should receive an established effective intervention. In some circumstances it may be ethically acceptable to use an alternative comparator, such as placebo or “no treatment”. (Guideline 11: Choice of control in clinical trials) Placebo may be used: (i) when there is no established effective intervention, (ii) when withholding an established effective intervention would expose subjects to, at most, temporary discomfort or delay in relief of symptoms, (iii) when use of an established effective intervention as comparator would not yield scientifically reliable results and use of placebo would not add any risk of serious or irreversible harm to the subjects. Ethical Considerations in HIV Preventive Vaccine Research (UNAIDS 2000): As long as there is no known effective HIV preventive vaccine, a placebo control arm should be considered ethically acceptable in a phase III HIV preventive vaccine trial. (Guidance Point 11: Control group)
Care and treatment for HIV/AIDS and its associated complications should be provided to participants in HIV preventive vaccine trials, with the ideal being to provide the best proven therapy, and the minimum being to provide the highest level of care attainable in the host country in light of…circumstances listed. (Guidance Point 16: Care and treatment)

However, despite consensus on the major issues, there are controversies and disagreements on some other issues, for example the use of placebo controls in new drug trials. The policy of whether or not to approve placebo controls has a great impact on pharmaceutical companies testing new drugs. Weijer and Anderson call this situation the ethics wars over international research among international organizations. They write: "These disputes have become wars. Neither one could reasonably be described as an open and thorough examination of the thorny ethical issues at stake. Rather each side has joined a political struggle to rewrite the Declaration of Helsinki and the CIOMS guidelines in accordance with its own views" (Weijer and Anderson 2001).
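One way to see the structure of the CIOMS placebo provision quoted above is to treat its three conditions as an explicit decision rule. The sketch below is a hypothetical encoding of Guideline 11, written only to make the logic visible; the function and parameter names are invented, and it is not an official review tool.

```python
# Hypothetical encoding of the three CIOMS conditions under which a
# placebo comparator may be acceptable; all names are invented.

def placebo_may_be_used(established_intervention_exists: bool,
                        withholding_only_minor_harm: bool,
                        active_comparator_unreliable: bool,
                        placebo_adds_serious_risk: bool) -> bool:
    # (i) there is no established effective intervention
    if not established_intervention_exists:
        return True
    # (ii) withholding the established intervention would expose
    #      subjects to, at most, temporary discomfort or a delay
    #      in relief of symptoms
    if withholding_only_minor_harm:
        return True
    # (iii) an established comparator would not yield scientifically
    #       reliable results, AND the placebo adds no risk of serious
    #       or irreversible harm
    return active_comparator_unreliable and not placebo_adds_serious_risk

# The 1997 perinatal HIV controversy, read schematically: an effective
# treatment existed and withholding it risked serious harm, so a
# placebo arm fails all three conditions under this encoding.
print(placebo_may_be_used(True, False, False, True))  # -> False
```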
THE ROLE OF RESEARCH ETHICS COMMITTEES

The main goal of research ethics, in all countries, developed or developing, is to protect human subjects from unreasonable risks and exploitation. However, the situation in developing countries and the historical experience of abuse of research participants raise real concerns about the safety of participants in externally-sponsored research (Bagheri and Macer 2005). In addition, inequality in resources and imbalances in power between the stakeholders in research, e.g. multinational pharmaceutical companies and vulnerable participants, increase the likelihood of exploitation of the more vulnerable party.
In international clinical research, concerns about the standard of care, the relevance of the clinical research to the community, the procedure for obtaining informed consent from research participants, the appropriate implementation of international and national guidelines on research ethics, benefit-sharing, and the care provided to both research participants and the wider community once the research is over have been raised by many commentators (Emanuel et al 2000; Hyder and Wali 2006; Onvomaha Tindana, Kass, & Akweongo 2006). In fact, research ethics committees in many resource-poor countries have insufficient experience to address these issues. For instance, a survey of fifteen research ethics committees showed that more than 70% reported moderate, limited or no capacity to review HIV vaccine trial protocols (Milford et al 2006). It should be noted that in the United States, all federally funded international research is required to be reviewed by ethics committees in both the United States and the host country. In contrast, research conducted by private (pharmaceutical) companies is regulated by the US Food and Drug Administration; such research is required to be reviewed by only a single ethics committee, and the regulation does not specify where that ethics committee must be located (Kass et al 2003). Concern about the safety and protection of human subjects who take part in clinical trials is not only an issue in externally-sponsored research; it is also a focus of research ethics regulatory frameworks and an obligation of ethics committees in developed countries when reviewing proposals. This is reflected in several regulatory frameworks established by major sponsors such as the NIH (Wichman 2006). However, when it comes to sponsoring research to be conducted among vulnerable people in developing countries, research ethics is more than mere "protection" against potential harms. The issues of benefit-sharing, respect for local socio-cultural sensitivities, standard of care, and availability of the research product, if proven
effective, are among those which research ethics committees should examine carefully. In the US, research ethics is dominated by the principle of autonomy, and given that US research institutes and drug companies sponsor a large number of clinical research projects in developing countries, it is not surprising that they try to apply the same ethical frameworks as exist within their own country. This may explain why the provision of informed consent has been considered necessary and sufficient for conducting clinical trials by sponsors of externally-sponsored research. In other words, the presumption is that if the participants are informed and have consented to take part in a clinical trial, this is sufficient to address individual autonomy. In contrast, in non-Western societies the role of family and community is very strong. For example, in a society with a practice of family-centered decision making, such as China, family members should also be involved in the consent process. As Li observes, in the Chinese community consent from the research participant alone is not enough, and it is necessary to obtain consent from family members (Li 2004). In a study in Ghana, Onvomaha Tindana and her colleagues show how community leaders serve an important role in the consent process. In their observation, community leaders should be involved in the consent process as gatekeepers. Their study indicated that in a rural African setting, researchers may invite individuals in a community to participate in their study only with the permission of the chief and elders of that community. When leaders give permission for a study to go forward, community members may view this as an endorsement of the study, which in turn may significantly influence research participation (Onvomaha Tindana et al 2006). Posing the question "what makes research involving human subjects ethical?", Emanuel and his colleagues suggest that, because of the preponderance of existing guidance on the ethical conduct of research and the near obsession with
autonomy in US bioethics, "informed consent" is the answer most US researchers, bioethicists, and institutional review board members would probably offer to this question. They propose seven requirements which are necessary and sufficient to make clinical research ethical: (i) value, (ii) scientific validity, (iii) fair subject selection, (iv) favorable risk-benefit ratio, (v) independent review, (vi) informed consent, and (vii) respect for enrolled subjects. They believe that these requirements are universal, although they must be adapted to the health, economic, cultural, and technological conditions in which clinical research is conducted (Emanuel et al 2000). However, the question remains to what degree these requirements can be lifted off the shelf and applied in developing countries. The issue is complicated by the governance frameworks existing, or not existing, within developing countries. Regarding the procedure of ethics review, two approaches can be taken by research ethics committees (Bagheri 2001). In the first approach, the committee aims to implement ethical codes in the form of a checklist; in the second, the committee seeks to fulfill its obligations not only by protecting the subjects but also by trying to make sure that research subjects will benefit from their participation. In the latter approach, the committee is aware that in reality all the boxes of the ethical codes might be ticked while research subjects are still at risk. There is no doubt that skilful and experienced research ethics committees are vital to ensuring the rights and safety of persons and communities participating in clinical research. As Peter Singer points out, revision of international ethical guidelines or any other research ethics code is unlikely to make research more ethical throughout the world without some means of strengthening the capacity to promote and implement such standards. He adds that strengthened capacity in research ethics is needed in both developed and developing countries,
though the need is particularly acute in developing countries (Singer 2001). The following paragraphs discuss some of the challenges to which research ethics committees should pay attention when reviewing externally-sponsored research.
Fairness in Subject Selection

It has been stated by many commentators that the selection of subjects for research must be fair. Ethical codes and guidelines for clinical research involving human subjects aim to minimize the possibility of exploitation by ensuring that research subjects are not merely used as means for the good of others. However, the bitter history of clinical trials witnesses breaches of the well-established ethical norm "never simply use as means" (Beauchamp and Childress 2001), and shows that in some clinical research human subjects were selected merely because researchers could take advantage of their vulnerability, e.g. the Tuskegee study and HIV/AIDS patients in sub-Saharan Africa. If we define vulnerability in the context of biomedical research, vulnerable people are those who, for any reason, would be quite easy to take unfair advantage of, and who would not know it, would not be able to resist, or would not have any better options. At the micro level, fairness of subject selection concerns the inclusion and exclusion criteria in the research methodology; at the macro level, ethics review should be concerned with why this particular community or country has been approached for participation in a clinical trial. Vulnerability raises a red flag as far as morality is concerned; however, it does not in itself indicate any wrong or unfair practice. The research ethics committee in the sponsoring country should outline under what circumstances it is appropriate for research institutes to conduct research in resource-poor settings. Local research ethics committees should look at the primary basis for determining the subject group and the reasons why this particular (vulnerable) community
has been approached for this particular clinical trial. They should determine why this particular population has been chosen to provide research participants (especially if the research is not related to its health needs). They should make sure that the potential participants are not chosen merely because of their availability, vulnerability, or disadvantage. In other words, they have to make sure that the "community" has not been chosen as a source of "preferred research subjects."
The Scientific Merit of the Clinical Research

This is another very important issue to which local ethics committees should pay careful attention. It is, however, controversial whether scientific review should be done by the ethics review committee or whether a separate committee should be responsible for it. For instance, in response to the EU directive on clinical trials, recent policy in the UK has distinguished between two types of review, scientific and ethical (Dawson and Yentis 2007). However, the science/ethics distinction is not required by international guidelines such as the Helsinki Declaration (B.13) and CIOMS (Guideline 2), which do not suggest that a study's scientific aspects should be reviewed separately. If one of the major responsibilities of research ethics committees is to assess the risks and benefits of the proposed research, it is necessary for those committees to evaluate the proposed research for scientific validity as well. This does not require the committee to undertake a peer scientific review. However, the committee, either through its own expertise or through outside consultants, should understand the background, aims, and research methods sufficiently to address the ethical issues. Given the research infrastructure in developing countries, local ethics committees should be responsible for reviewing the scientific merit of externally-sponsored research. Obviously, if needed, they should seek external expertise on the research topic to make sure that the protocol is
scientifically sound. For instance, a critical question would be under what circumstances randomization and the use of placebos are ethical, and how to balance scientific and public health needs with ethical requirements in choosing a study design.
Post-Trial Access to Effective Interventions

Another controversial issue in the debate is the post-trial responsibility of researchers and sponsors. The question of what happens when the research is over, and of the post-trial obligations of researchers and sponsors, was the topic of the sixth annual meeting of the Global Forum on Bioethics in Research, held in Africa in 2005 (GFBR 2005). Lessons from the experience of the AZT trials in Africa and Asia in the 1990s (Macklin 2001) brought out the critical issue of ensuring post-trial access to interventions proven effective for the local community. As international guidelines have stressed, researchers should endeavor to secure post-trial access to effective interventions for all the participants in the trial who could benefit (Nuffield Council on Bioethics, 2002). Moreover, paragraph 19 of the World Medical Association (WMA) Declaration of Helsinki reads: "Medical research is only justified if there is a reasonable likelihood that the populations in which the research is carried out stand to benefit from the results of the research" (Declaration of Helsinki, Paragraph 19, 2000). In a note of clarification on paragraph 30 of the Declaration of Helsinki, the WMA gives the ethics review committee the responsibility to ensure that post-trial access is described in the study protocol. This paragraph lays down that at the conclusion of a study, every patient who is part of the study should be assured of access to the best proven prophylactic, diagnostic and therapeutic methods identified by the study (Declaration of Helsinki, Paragraph 30, 2004).
Although there is a general consensus that participants should benefit from taking part in research, there is controversy over for how long, to whom, and in what manner post-trial access to the research outcome should be provided. Pharmaceutical companies have raised this concern in response to the suggestion that research sponsors should be routinely obliged to provide treatments to participants when the trial is over (GSK Position Paper 2006). Local research ethics committees should determine what types of promises of future care or relationships ought to be in place, and what types of benefits must be offered to a relatively disadvantaged population, in order for the research to be ethical. As the products of clinical trials will be patented, ethics committees should also pay special attention to intellectual property rights regimes. It is important that this be negotiated with the sponsors at the outset.
The Relevance of Sponsored Research to National Research Priorities

There is a general consensus that research should be relevant to the population in which it is conducted. The relevance of the research to the health needs of the society is thus another important issue which has to be determined by the research ethics committee. However, many developing countries have a limited capacity to determine national health research priorities. One concern is that externally-sponsored research does not necessarily reflect the national research priorities of developing countries. In developed countries, priority setting in clinical research has been left to the funding agencies and their interests. Obviously, there are compelling reasons for a pharmaceutical company to initiate a clinical trial in a resource-rich country. It falls to the ethics committee, however, to find out whether an externally-sponsored research protocol addresses the urgent health needs of its society.
Most of the collaborative research undertaken by pharmaceutical companies in developing countries involves clinical trials. National research priorities may be of little relevance to a company that wishes to test a new medicine. In such circumstances, questions arise about the extent to which external sponsors are guided by national priorities when making decisions about research sponsorship. As many external sponsors fund individual researchers rather than national institutions, it is important to raise awareness of national research priorities at the local level. Rather than the research priorities of host countries, pharmaceutical companies are concerned with the availability of suitable participants, the availability of high-quality collaborators, and the research infrastructure needed to conduct their clinical trials. The national priorities for healthcare-related research identified by a host country may have little bearing on whether and where a company decides to locate its clinical trials. Many funding agencies have their own approaches to identifying areas which merit support (Nuffield Council on Bioethics 2005). It should be noted that the limited resources available within developing countries may exacerbate these problems. Research ethics committees should therefore give priority to collaboration on projects that align with the national health policies of their respective countries.
Risk/Benefit Assessment

As the aim of clinical research is to find new therapeutic methods, drugs and so on, it is not free from risks, and subjects are exposed to potential harms. The assessment of risks and benefits, based on the principle of beneficence, has been emphasized by international ethical guidelines (Belmont Report 1979). The critical point here is whether a research ethics committee gives more weight to "more benefits" or to "avoiding harm" in its risk-benefit assessment. In other words,
if the committee takes "more benefits" to the larger population (society) as the standard in its assessment, then it becomes readily justifiable to expose the subjects to harm under the pretext of greater benefits for society. This approach may expose research participants to greater risk, especially in vaccine trials. As De Castro points out, by putting some people at risk of harm for the good of others, clinical research has the potential to exploit human subjects (De Castro 1995). However, if a research ethics committee gives more weight to avoiding harm to the subjects than to greater benefits to the larger population in its risk-benefit assessment, then it will do its best to protect the subjects against impending risks, search for strategies to minimize the harm, and try to identify alternative methods (Bagheri 2001). As has been suggested, research ethics committees should try to minimize risks within the context of standard clinical practice, and also try to enhance potential benefits (Emanuel et al 2000).
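A minimal sketch can make the contrast between the two weightings concrete. The two functions below encode the two committee stances described above as decision rules; the names, numbers, and the harm ceiling are all hypothetical, chosen only to illustrate how the stances diverge on the same protocol.

```python
# Two hypothetical committee stances encoded as decision rules.

def approve_benefit_weighted(societal_benefit: float,
                             subject_harm: float) -> bool:
    # "More benefits": any level of expected harm to subjects can be
    # offset by a sufficiently large expected benefit to society.
    return societal_benefit > subject_harm

def approve_harm_averse(societal_benefit: float,
                        subject_harm: float,
                        harm_ceiling: float = 1.0) -> bool:
    # "Avoiding harm": expected harm to subjects must first stay below
    # a fixed ceiling, no matter how large the societal benefit is.
    if subject_harm > harm_ceiling:
        return False
    return societal_benefit > subject_harm

# A hypothetical vaccine trial: large societal benefit, non-trivial
# expected harm to participants (arbitrary units).
benefit, harm = 100.0, 5.0
print(approve_benefit_weighted(benefit, harm))  # True: harm "bought" by benefit
print(approve_harm_averse(benefit, harm))       # False: harm exceeds the ceiling
```

Under the first rule the protocol sails through; under the second, the committee is pushed back toward the strategies the text describes: minimizing harm and searching for alternative methods before approval can be considered.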
CONCLUSION

Research ethics, especially as it deals with externally-sponsored research, is a relatively new subject and is still developing in many countries. There is, however, a lack of empirical data showing how ethics committees actually handle externally-sponsored research. In order to protect individual human subjects and local communities, capacity building in ethics review, especially for research collaborations with other countries, is vital, and it has to be included in research ethics programs. Any initiative in this regard should be based on an understanding of the current situation in the ethics review of externally-sponsored research, gained through empirical data. Capacity building is just as crucial for ethics committees in sponsoring (developed) countries, in order to deepen their understanding and analysis of the ethical issues of research conducted in international settings.
Along with other, more general issues in ethics review, research ethics committees reviewing externally-sponsored research should pay special attention to fairness in subject selection, scientific validity, post-trial access, the relevance of the research question to the needs of the local community, and risk/benefit assessment. Given the complexity of the issues, assigning the review of externally-sponsored proposals to the national ethics committee, or to an ethics committee specifically charged with that task, would bring better results in protecting human subjects as well as ensuring benefit-sharing with the local community.
REFERENCES

Alliance for Human Research Protection. (2000). Harvard-Anhui case. Retrieved June 12, 2007, from http://www.ahrp.org/infomail/1200/20.php

Bagheri, A. (2001). Ethical codes in medical research and the role of ethics committees in the protection of human subjects. Eubios Journal of Asian and International Bioethics, 11, 8-10.

Bagheri, A., & Macer, D. (2005). Ethics review on externally-sponsored research in Japan. Eubios Journal of Asian and International Bioethics, 15, 138-140.

Bagheri, A. (2004). Ethical issues in collaborative international medical research. Iranian Journal of Diabetes and Lipid Disorders, Supplement: Ethics in Clinical Research, 4, 59-70.

Beauchamp, T.L., & Childress, J.F. (2001). Principles of biomedical ethics (5th ed.). New York: Oxford University Press.

Belmont Report. (1979). Ethical principles and guidelines for the protection of human subjects of research. Retrieved June 12, 2007, from http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.htm
Dawson, A.J., & Yentis, S.M. (2007). Contesting the science/ethics distinction in the review of clinical research. Journal of Medical Ethics, 33, 165-167.
Kass, N., Dawson, L., & Loyo-Berrios, N. (2003). Ethical oversight of research in developing countries. IRB: Ethics & Human Research, 25(2), 1-10.
De Castro, L. (1995). Exploitation in the use of human subjects for medical experimentation. Bioethics, 9, 259-268.
Li, B.F. (2004). Informed consent in research involving human subjects. The Journal of Clinical Ethics, 15(1), 35-37.
Declaration of Helsinki. (2000). World Medical Association. Retrieved March 10, 2007, from http://www.wma.net/e/policy/b3.htm
Lurie, P., & Wolfe, S.M. (1997). Unethical trials of interventions to reduce perinatal transmission of the human immunodeficiency virus in developing countries. New England Journal of Medicine, 337(12), 853-856.
Declaration of Helsinki, Paragraph 30. (2004). Retrieved June 12, 2007, from http://www.wma.net/e/ethicsunit/helsinki.htm

Emanuel, E.J., Wendler, D., & Grady, C. (2000). What makes clinical research ethical? Journal of the American Medical Association, 283, 2701-2711.

Global Forum for Health Research. (2004). The 10/90 report on health research (fourth report). Retrieved March 1, 2007, from http://www.globalforumhealth.org/Site/000_Home.php

Greenwald, R.A., Ryan, M.K., & Mulvihill, J.E. (1982). Human subjects research: A handbook for institutional review boards. New York: Plenum Press.

GlaxoSmithKline. (2006). Position paper on global public policy issues. A publication of GlaxoSmithKline Government Affairs, Europe & Corporate. Retrieved June 13, 2007, from http://www.gsk.com/responsibility/Downloads/clinical_trials_in_the_developing_world.pdf
Macklin, R. (2001). After Helsinki: Unresolved issues in international research. Kennedy Institute of Ethics Journal, 11(1), 17-36.

Milford, C., Wassenaar, D., & Slack, C. (2006). Resources and needs of research ethics committees in Africa: Preparation for HIV vaccine trials. IRB: Ethics & Human Research, 28(2), 1-9.

Moreno, J.D. (2001). Goodbye to all that: The end of moderate protectionism in human subject research. The Hastings Center Report, 31(3), 9-17.

Nuffield Council on Bioethics. (2002). The ethics of research related to healthcare in developing countries. London: Nuffield Council on Bioethics.

Nuffield Council on Bioethics. (2005). A follow-up discussion paper: The ethics of research related to healthcare in developing countries. London: Nuffield Council on Bioethics.
Global Forum on Bioethics in Research. (2005). What happens when the research is over? Post-trial obligations of researchers and sponsors. Sixth annual meeting. Retrieved May 30, 2007, from http://www.gfbronline.com/
Onvomaha Tindana, P., Kass, N., & Akweongo, P. (2006). The informed consent process in a rural African setting: A case study of the Kasena-Nankana District of Northern Ghana. IRB: Ethics and Human Research, 28(3), 1-6.
Hyder, A.A., & Wali, S.A. (2006). Informed consent and collaborative research: Perspectives from the developing world. Developing World Bioethics, 6(1), 33-40.
Moreno, J.D. (2001). Undue risk: Secret state experiments on humans. New York: Routledge.
Powell, T. (2006). Cultural context in medical ethics: Lessons from Japan. Philosophy, Ethics, and Humanities in Medicine, 1(4).
Weijer, C., & Anderson, J.A. (2001). The ethics wars: Disputes over international research. The Hastings Center Report, 31(3), 18-20.
Singer, P. (2001). Beyond Helsinki: A vision for global health ethics: Improving ethical behavior depends on strengthening capacity. British Medical Journal, 322, 747-748.
Wichman, A., Kalyan, D.N., Abbott, L.J., Wasley, R., & Sandler, A.L. (2006). Protecting human subjects in the NIH's intramural research program: A draft instrument to evaluate convened meetings of its IRBs. IRB: Ethics and Human Research, 28(3), 7-10.
Tsuchiya, T. (2000). Why Japanese doctors performed human experiments in China 1933-1945. Eubios Journal of Asian and International Bioethics, 10, 179-180.

The Council for International Organizations of Medical Sciences (CIOMS) in collaboration with the World Health Organization. (2002). International ethical guidelines for biomedical research involving human subjects. Retrieved March 10, 2007, from http://www.cioms.ch/frame_guidelines_nov_2002.htm

The Lancet. (1997). The ethics industry (editorial). The Lancet, 350(9082).

The National Bioethics Committee. (1998). UNESCO, Division of the Ethics of Science and Technology, Paris, 20 May.

The participants in the 2001 Conference on Ethical Aspects of Research in Developing Countries. (2004). Moral standards for research in developing countries: From reasonable availability to fair benefits. The Hastings Center Report, 34(3), 17-27.

UNAIDS. (2000). Ethical considerations in HIV preventive vaccine research. Retrieved June 12, 2007, from http://data.unaids.org/Publications/IRC-pub01/JC072-EthicalCons_en.pdf

Universal Declaration on Bioethics and Human Rights. (2005). Retrieved March 10, 2007, from http://portal.unesco.org/en

West, D. (1998). Radiation experiments on children at the Fernald and Wrentham schools: Lessons for protocols in human subject research. Accountability in Research, 6(1-2), 103-125.
KEY TERMS

Clinical Trial: A prospective biomedical or behavioral research study of human subjects that is designed to answer specific questions about biomedical or behavioral interventions (drugs, treatments, devices, or new ways of using known drugs, treatments, or devices). Clinical trials are used to determine whether new biomedical or behavioral interventions are safe, efficacious, and effective.

Control Group: The standard by which experimental observations are evaluated. In many clinical trials, one group of patients will be given an experimental drug or treatment, while the control group is given either a standard treatment for the illness or a placebo.

Externally-Sponsored Research: Research conducted, in whole or in part, by a foreign sponsor. It can also be part of a collaborative multinational or bilateral research project to be carried out in a developing country. The common feature of these projects is that the research protocol has been designed or funded in a developed country while human subjects are recruited from a developing country.

Human Subject: A living individual about whom an investigator conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information.
Informed Consent: The process of learning the key facts about a clinical trial before deciding whether or not to participate. It is also a continuing process throughout the study to provide information for participants. Placebo: A placebo is an inactive pill, liquid, or powder that has no treatment value. In
clinical trials, experimental treatments are often compared with placebos to assess the treatment's effectiveness.

Standard of Care: Treatment regimen or medical management based on state-of-the-art participant care.
Chapter IX
Social and Ethical Aspects of Biomedical Research

Gerhard Fortwengel
University for Health Sciences, Medical Informatics and Technology, Austria

Herwig Ostermann
University for Health Sciences, Medical Informatics and Technology, Austria

Verena Stuehlinger
University for Health Sciences, Medical Informatics and Technology, Austria

Roland Staudinger
University for Health Sciences, Medical Informatics and Technology, Austria
ABSTRACT

At the beginning of this section the authors provide a definition of biomedical research and an interpretation of the meaning of ethics and the social values of research. They continue by introducing the risk-benefit approach as a basic requirement for any biomedical research involving human subjects, and illustrate the need for uniformity with respect to social and ethical issues. The differences and similarities between social and ethical aspects of research are described in the core section; social and ethical aspects are presented according to central and peripheral dimensions. With reference to specific areas of research in biomedical science, it is shown by example that general principles are not sufficient to cover all types of research, and that, depending on the research characteristics, the techniques used and the purpose of the research, other specific aspects may need to be considered as well. The chapter ends with a short conclusion calling for continued reflection on and review of social and ethical issues in an age of fast changes in science and technologies, to thereby ensure proper protection of the individual and the best future for society.
DEFINITION OF SOCIAL AND ETHICAL ASPECTS OF BIOMEDICAL RESEARCH

This section focuses on the social and ethical dimensions of biomedical research and development. Biotechnology, including genetic engineering, as well as advances in areas such as, but not limited to, assisted reproductive technologies, organ transplantation, human genome analysis, gene therapy, and new recombinant DNA products, no longer belong to science fiction; they are our everyday reality and raise questions of social implications and ethics. All of these areas of biomedical research have added new dimensions to social and ethical issues, which must be taken into consideration before evaluating their efficacy and safety, and finally their benefit for the community. It is only possible here to provide a basic introduction to the social and ethical aspects of biomedical research and to some of its central issues. It is primarily intended to give an appreciation of the need for ongoing reflection on social and ethical aspects in this field of research.
Biomedical Research

In general, the term "research" refers to activities that are designed to gain knowledge which can be generalized. The adjective "biomedical" indicates its relation to health. In the present context, biomedical research should be understood as research involving human subjects, excluding "pre-clinical" and "post-marketing" research activities. Biomedical research is taken to mean specific actions conducted in a directed manner in order to gain knowledge in the field of biomedical science. Biomedical research comprises any study of specific diseases and conditions, whether mental or physical, including the detection of causes, prophylaxis, treatment, and the rehabilitation of subjects/patients. Biomedical research also means the design of methods, drugs and devices that are used for diagnosis and
to support the subject/patient after completion of study treatment for specific diseases and/or conditions. In addition, any necessary medical or scientific investigation undertaken to further understand the underlying life processes which might have an impact on disease and the well-being of the subject/patient, such as the cellular and molecular bases of diseases, genetics, and immunology, as well as laboratory investigations and/or exposure to environmental agents[a], should be understood as part of biomedical research. Human individuality, the culture a subject belongs to, religion, and/or the rights and responsibilities of the individual subject are the natural boundaries of biomedical research. These issues form the essential basis for the need to continuously reflect on social and ethical aspects whenever biomedical research is being discussed.
What Exactly is Ethics?

Simply stated, ethics concerns the nature of morality, moral reasoning, and behavior, whereas morality is defined as the value dimension of human decision making and behavior. From a conceptual point of view, a distinction is commonly made between meta-ethics, normative ethics, and applied ethics. Meta-ethics explores whether moral standards are relative and attempts to understand how values, reasons for action, and individual motivation are connected. Normative ethics attempts to determine the content of morality and to provide guidance on moral behavior. Applied ethics deals with specific areas and attempts to identify criteria to be used as a basis for discussing "ethical" questions arising in those realms. Topics such as business ethics, engineering ethics, and bioethics are examples of applied ethics initiatives[b, c].
Ethics in Biomedical Research and Moral Pluralism

Bioethics covers a broad spectrum of moral issues associated with (new) developments in biomedical
science. It is closely related to medical ethics; however, the scope of medical ethics is generally narrowed down to moral issues in medical practice. A philosophical perspective on the ethical implications of biomedical research has to consider the nature of moral pluralism. Moral pluralism is the concept that moral values or moral judgments are not universal. It exists in a number of different forms. The most commonly known form is cultural relativism, which holds that moral values and moral judgments differ between cultures. It further claims that no deeper or higher level of moral thinking exists by which the moral values of a specific culture could be shown to be wrong or less correct, "which entails a rejection of foundationalism in ethics" [Holm, S. 2002]. A specific and viable form of moral pluralism is so-called "negotiated universalism": whenever people from different cultures work together, shared values for the given context have to be defined as a basis for the collaboration/cooperation[d]. With reference to biomedical research, different types of documents exist, such as ethical codes or best practice guidelines, which are based on a negotiated-universalism approach. These guidance documents and/or guidelines do not always fully satisfy every moral-philosophical requirement, but they constitute a set of shared values that most parties involved can agree to. Establishing bioethics models with global applicability is a demanding task, which requires, aside from nuanced thinking, continuous empirical investigation to observe real situations and to identify existing dilemmas.
The Social Value of Biomedical Research

Discussions of the social value of any biomedical research project are based on its contribution to the well-being of society in general. Although it is widely agreed that scientific knowledge is valuable in itself and needs no further justification, social value becomes a critical criterion for judging whether a biomedical research project should be funded whenever public resources are requested. The Declaration of Helsinki, §§ 18 and 19, clearly requires that social value be considered when evaluating biomedical research projects and that the population in which the research is carried out benefit from its results. It further states that the research objectives have to outweigh the risks and burdens to research participants, whereby the objectives should be understood to include both scientific and social significance. The latter can be of particular importance in countries where clinical research is conducted and participating subjects undergo additional risks and discomfort associated with the research, but where, for example, the investigated drug is intended to be used elsewhere and is not aimed at benefiting the researched population[e, f].
The Concept of Risk-Benefit

Any risks to subjects participating in research must be justified by anticipated benefits to the subjects or, under specific circumstances, to society as a whole: the possibility of harm must not be greater than the prospect that individuals or society will benefit from the research. This requirement is clearly demanded in the ethical codes associated with biomedical research and is central to regulations and local laws[g, h, i]. Even when the scientific merit and social worth of the research are beyond question, it must be demonstrated that the risks to the subjects/population participating in the research are not disproportionate to its expected benefit, even if the research may not benefit the subjects directly. It should be noted that the term risk is subject to two different interpretations. One simply refers to a numerical or quasi-numerical probability that harm may occur (e.g., 1 in 100, or "likely"/"less likely"). The second interpretation refers to the character of the possible harm. However, expressions like small or high risk usually refer both to the
probability and to the magnitude and severity of the harm. The risk-benefit relationship is described in different terms, such as "risk-benefit ratio" or "risk-benefit relationship"; the latter seems more appropriate, as the relation is not strictly mathematical but is rather an assessment based on value judgment and on general comparison categories commonly agreed upon[j]. Those categories comprise the risks (potential harm) and the potential benefits to be considered when assessing a research project. In general, benefit is defined as direct medical benefit to the research subject and/or the gain of further knowledge that contributes substantially to human health and well-being. Potential harm includes short-term as well as long-term physical harm, psychological harm, social harm, and economic harm. Thus, anxiety following a diagnosis of a genetic predisposition to an incurable disease may be considered psychological harm; the loss of confidentiality with regard to health conditions could lead to social harm; and the loss of insurance and/or employment based on research results is generally categorized as economic harm[k]. Ethical codes require a research project to be assessed between two extremes: a highly unlikely risk of small harm and a likely risk of serious harm. A likely risk of small harm normally does not hinder the progress of a scientifically sound research project. A likely risk of serious harm is usually accepted only if the research project is a last resort for terminally ill participants. When a vulnerable population is involved, the nature and degree of risk is of particular importance, aside from other variables which can demonstrate the appropriateness of involvement[l, m, n]. Finally, it must be mentioned that a research project in humans should not be conducted when the risks associated with the project are entirely unknown. The availability of reliable data on potential risks, from laboratory studies and/or studies in animals, is essential prior to the initiation of research in humans.
PRINCIPLES OF SOCIAL AND ETHICAL DIMENSIONS OF BIOMEDICAL RESEARCH

From a social and ethical perspective, any biomedical research using human beings as subjects must be based on three key principles: purpose, conduct, and evaluation. The purpose must be to increase knowledge about the human condition in relation to health. Such research should be conducted in a manner that treats human subjects in a way conducive to and consistent with their dignity and well-being. Any such research must further be subject to oversight at all stages, beginning with a careful independent review of the research proposal and extending to the declaration of the research outcome after completion and any follow-up considerations thereof. These principles have equal moral force, even when they are given different moral weight in particular circumstances. This section on social and ethical dimensions is directed at the application of these principles.
Differences and Similarities between Social and Ethical Aspects

The terms social and ethical aspects are often used to describe a variety of non-scientific issues in the evaluation of biomedical research. There is no clear-cut demarcation between the two areas; rather, a third, legal aspect needs to be added, and one has to be aware of the interaction between these three categories. As a matter of fact, legal issues are relevant to ethics, whereas ethical requirements are not always defined and controlled by legislation; just as many ethical issues have social relevance, but social necessities are not always morally relevant. In the present context, social aspects are those that have an impact on community or cultural values and are likely to have an effect on the structure and relationships within a community. Ethical aspects refer to autonomy, respect for the subject, and beneficence, which is
the obligation to maximize benefits and to minimize harm; non-maleficence refers to preventing harm to the research participants in particular and to the population in general; and justice is defined as the obligation to treat each person in accordance with current, effective moral standards.
Core and Peripheral Dimensions of Social and Ethical Aspects in Biomedical Research

Based on the key principles, social and ethical aspects can be differentiated into core and peripheral dimensions. In doing so, it must be kept in mind that, depending on categorization and circumstance, their application may lead to different courses of action. Tables 1 and 2 present the core and peripheral dimensions, as determined by the authors, and their relationship to social and/or ethical concerns, taking into account potential interactions between both areas.
Core Dimensions

The following sections provide a brief background for each of the social and ethical dimensions defined as core and peripheral dimensions by the authors, their use in practical and controversial discussion, and their direct application in biomedical research.

Essentiality/Scientific Validity

Any biomedical research entailing the use of human subjects must be absolutely essential and scientifically sound. Absolute essentiality and scientific value can only be established after all alternatives based on existing knowledge in the area of research have been considered and after the biomedical research has been approved by an adequate authority as necessary for the advancement of knowledge as well as for the benefit of the community[o]. The primary motive of research, a social activity generally carried out for the benefit of society, must always be to maximize public interest and social justice. Generally accepted scientific principles form the basis of any biomedical research project involving humans. Those principles have much in common but may differ depending on the specific topic of research. A thorough knowledge of the scientific literature and other sources of information is essential to understand the scientific requirements and to comply with the scientific principles applicable to research projects.
Table 1. Social and ethical core dimensions of biomedical research

Core Dimension                           | Social | Ethical
Essentiality/scientific validity         |   X    |   X
Ethical review                           |        |   X
Informed consent / voluntariness         |        |   X
Non-exploitation / special populations   |   X    |   X
Benefit & precaution                     |   X    |   X
Table 2. Social and ethical peripheral dimensions of biomedical research

Peripheral Dimension                                      | Social | Ethical
Privacy                                                   |        |   X
Right of injured subjects to treatment and compensation   |   X    |   X
Responsibility, compliance and public interest            |   X    |   X
As stated in The Nuremberg Code, "The experiment should be such as to yield fruitful result … unprocurable by other methods or means of study, and not random and unnecessary in nature." [The Nuremberg Code, 1949]. The Declaration of Helsinki (2004), section B 13, clearly requires that an experimental protocol include detailed information on the design and performance of each research procedure. The question of how to verify essentiality and scientific validity cannot be answered universally. A detailed protocol describes the objectives, design, methodology, statistical considerations, and organization of the biomedical project[p]. A research project must be designed so that it can produce statistically sound answers to the scientific questions underlying the objectives of the research. An experimental protocol should furthermore provide background information and a rationale for the research, enabling the scientific and ethical reviewers to understand its purpose and necessity. Research involving human beings by its nature raises questions of law and ethics, and decision making regarding essentiality and scientific justification is a narrow path of weighing legitimate public concerns against possible scientific innovations for the benefit of society.

Ethical Review

For some decades, Research Ethics Committees, or their North American equivalent, Institutional Review Boards (EC/IRB), have played an important role in the assessment of biomedical research projects involving human subjects. Ethical standards have been developed and established in several international guidelines, including the Declaration of Helsinki, the CIOMS International Ethical Guidelines for Biomedical Research Involving Human Subjects, and the WHO and ICH Guidelines for Good Clinical Practice. Adherence to these guidelines, as well as to national legislation, is mandatory to protect the dignity, rights, safety, and well-being of research participants and to ensure
that the results of the research are credible. Ethics committees must be multi-disciplinary and multi-sectorial in order to provide a comprehensive and complete review of the research proposal. They should include not only physicians, scientists, and other professionals, such as lawyers and ethicists, but also lay persons, and both men and women, in order to better represent the cultural and moral values of society. Whenever an ethics committee has to evaluate a particular research proposal (e.g., on a very rare disease, or one involving specific populations such as children or the elderly), the EC might consider inviting relevant representatives or advocates in order to hear their points of view. Independence and competence are the two cornerstones of any ethical review of a biomedical research project. In order to maintain independence from the research team and avoid conflicts of interest, any member with a direct or indirect interest in the research project must not be involved in the assessment. Although it is not yet common practice, members of the EC should follow standards of disclosure with regard to financial or other interests that could lead to a conflict of interest. Based on the disclosure statement to be signed by each member of the EC, appropriate action could then be taken if needed. The methods used for an ethical evaluation must be appropriate to the objectives of the research, and the procedure outlining the steps of the review process must be available in writing. It should be noted that scientific and ethical review cannot be separated, since scientifically unsound research exposes human beings to risk without purpose and is thus de facto unethical. It is therefore the role of the EC to consider both the scientific and the ethical aspects. If the composition of the EC does not include the necessary expertise, it must be ensured that a scientific review by a competent body of experts precedes the ethical scrutiny. The EC must then evaluate whether known or possible risks are justified when weighed against the expected direct or indirect benefits, and whether the research methods as proposed in the research protocol will minimize harm and maximize benefit. After
verification of these requirements, the proposed procedures for the selection of subjects and for obtaining their consent must be reviewed and evaluated for equitability. In addition, the qualifications of the research team and the conditions of the research site must be assessed in order to ensure the safe conduct of the research project. The findings of this review prior to initiation of the research project must be provided in writing to the research applicant, and the EC is obliged to keep appropriate documentation of the entire review process for a certain period of time, as outlined in best practice guidelines or in accordance with local regulations. From the approval of the project until its completion, the EC has a continuing responsibility to regularly monitor the project for ethical compliance. Such ongoing reviews, called for by the Declaration of Helsinki and other international guidelines for biomedical research, are so far not routinely or sufficiently implemented, but they are of great importance in order to safeguard the welfare and rights of research participants and the community. In this context it must be mentioned that ECs do not have the authority to sanction violations of ethical standards. However, the EC is empowered to withdraw ethical approval for a project when appropriate and thereby stop or interrupt a research project[q, r, s, t, u, v, w, x, y].

Informed Consent/Voluntariness

The principles of informed consent and voluntariness are the basis of any biomedical research. Participating subjects must be fully informed about the research project and the risks involved. There is also an obligation to keep subjects informed about any new developments, as these might have an impact on their decision whether to continue or to withdraw from the study. The nature and form of the consent must follow international standards and local regulations[z]. Consent is valid only when freely given, assuming that the subject understands the nature and possible consequences of the research. The subject's consent must be obtained prior to any research-related activity or
procedure. ICH GCP, section 4.8.10, details the obligations of the researcher in obtaining consent and clearly defines the areas to be covered in the information provided to the potential research subject, both in writing and in verbal communication. Data privacy regulations, such as which data relating to the subject may be processed and provided to other locations, are detailed in the European Data Protection Directive[aa] and in HIPAA (Protecting Personal Health Information in Research)[ab]. The manner and context in which information about a study is provided is as important as the information itself. The written informed consent should be clearly worded, and the language used in oral discussion should be as non-technical as practical. The research project should be explained in an organized fashion, rather than through a ritual recitation of the written document, allowing the subject time for questions and concerns. The researcher must convey the information in language suited to the subject's capacity of understanding, bearing in mind the subject's age, maturity, and level of education, as well as other factors such as temporary confusion, panic, fatigue, pain, underlying illness, and possible effects of medication. It is critical to the consent process that the researcher not only responds to questions but also asks them. Posing questions can help the subject think carefully about the study, and can help the investigator decide whether the person has adequately understood it. It is most important that the study subject understands his or her right to withdraw consent at any time during the course of the study without having to provide a reason, without penalty, and without losing medical benefit. The consent form must be dated and signed by the subject and by the person conducting the interview. A witness may be required to date and sign the consent (e.g., in case the subject cannot read or write), thereby attesting that the requirements for informed consent have been satisfied and that consent is voluntary and freely given without any element of force, coercion, or undue
influence. A witness must be an adult who is not a member of the research team and not a family member of the subject. The informed consent process is outlined in standard documents, such as the Declaration of Helsinki, The Belmont Report[ac], and Good Clinical Practice guidelines, and is subject to local regulations. From a legal perspective, three prerequisites must be met before a consent can be declared valid:

• Does the person consenting have legal capacity?
• Was the consent given freely?
• Was the person sufficiently informed about possible risks?
The third element, whether sufficient information was provided to the subject prior to consent, will not be discussed in this context, as any information given to a potential research participant is subject to ethical review, which ensures that a proper information sheet has been developed. Legal capacity is based on respect for individual autonomy: the subject must be perceived as a reasonable, rational, or sensible person who may make a decision based on his or her personal values and beliefs. Legal capacity is not an all-or-none faculty. The more difficulty the subject has in comprehending, the less likely it is that he or she has the necessary capacity to decide. Depending on the complexity and gravity of a decision, different levels of capacity may be required. From a legal perspective, it is the researcher's responsibility to ensure that each potential research subject is provided with adequate, understandable, and relevant information for her/his decision making. The barriers to gaining informed consent can be grouped into two categories: subject centered and process centered. According to Taylor[ad], subject-centered barriers include such issues as age, level of education, and the subject's illness. These characteristics may diminish a subject's capacity to understand, evaluate, and decide, thus undermining
their ability to consent. Just as the importance of age to autonomy and competence in decision making is obvious, so is that of the level of education to the ability to understand and recall information[ae]. Patients who are ill may be less able to provide adequate informed consent, as illness is often accompanied by other factors, such as fear, stress, and pain, or by the effects of current medical treatment, all of which may reduce the capacity to give consent[af]. Process-centered barriers refer to the timing of consent, the way information is provided, and the type and complexity of the information made available to the potential research participant. The time allocated for discussion and for subjects to ask questions, the time allocated to make a final decision, as well as the readability, content, and quantity of the information provided may all create obstacles in the process. Having a choice and other options available is essential and forms the basis for autonomy and voluntariness. Consent is no longer valid if coercion, threats, fear, force, misrepresentation, and/or deception were involved in obtaining it. Voluntariness might be questioned when the potential study subject feels obliged to his/her physician, when there is a dependent relationship with the researcher (e.g., student-teacher, nurse-physician), or when the subject is a family member of the researcher. Such cases may be acceptable if the informed consent is obtained by a well-informed researcher or physician who is completely independent and not himself engaged in the project. Although payment to study subjects is generally undesirable, payments legally invalidate a subject's consent only if they are obviously disproportionate to the time and effort involved. Special attention must be paid to the consent process involving a legal representative of the subject. The term "legally authorized representative" is not applicable in all countries, and whenever "vulnerable" subjects are to be included in research, the research protocol should detail how consent will be obtained. The proposed method
is evaluated during the ethical review process, and any modifications needed/recommended by the EC/IRB are documented in writing as part of the approval/refusal letter[ag].

Non-Exploitation / Special Populations

Involvement in biomedical research must be irrespective of the social and economic status of the potential research participant, including literacy and level of education. As a cardinal principle, biomedical research must exclude arbitrariness, discrimination, and caprice. Besides the aspects already mentioned, the following subsections concentrate on special subject groups.

Pregnant or Nursing Women

In general, pregnant and nursing women should be excluded from any kind of biomedical research, except when the object of the research is to gain further knowledge on how to better protect the health of pregnant or nursing women or of the fetus. The justification for the participation of pregnant women in biomedical research is more complex, as two lives might be affected by risks and benefits. Whenever research is designed to potentially affect the fetus, the informed consent of the mother should desirably be accompanied by the father's consent to accept any possible risks. Any research carrying a realistic potential that fetal abnormality may occur as a consequence of the research is unethical per se and can only be justified by strong medical arguments, to be decided upon by authorized bodies in a case-by-case approach. Special attention must be paid to the woman's health in societies where, based on cultural and religious beliefs, the woman's life is valued less than the well-being of the fetus[ah]. Special ethical and legal safeguards might be needed to prevent any ethically unacceptable procedure being applied to the expectant mother to enroll her in this type of research. Research with regard to pre-natal diagnostic techniques and treatments is ethically limited to certain purposes (e.g., the detection of fetal abnormalities or genetic disorders) and must exclude others, such
as sex determination. Whenever a research protocol is likely by design to include pregnant women, the protocol should specify how, and for how long, the pregnancy and the health of mother and child will be monitored. Biomedical research involving the termination of a pregnancy could be ethically acceptable under very limited circumstances, as long as it meets predefined and ethically approved requirements and on the condition that the pregnant woman herself wishes to undergo a termination of her pregnancy. Timing and procedure must be based solely on consideration of the mother's safety and must not be influenced by the potential use of embryonic or fetal tissue. With respect to nursing women, a research project should not request that women stop breast-feeding, unless the harm to the child has been medically and ethically properly assessed and is adequately documented.

Embryo / Fetus

Any research related to human embryos has been and will remain a controversial topic. Individuals belonging to different segments of society may have different opinions on this issue, based on their own value systems and strong beliefs[ai]. Ethical concerns include the acquisition of embryos and the question of at what stage of development an embryo should be considered a human being. There are two options for acquiring an embryo: the use of embryonic tissue following termination of an unwanted pregnancy, and the creation of embryos explicitly for research. Even if the first option seems ethically less problematic, there are specific compliance requirements which might be problematic, such as the donor's informed consent and the assurance of the donor's privacy and confidentiality. With regard to the second option, the embryonic stage refers to the period from day 15 to 8 weeks post-conception. Before day 15 the embryonic cells have not yet differentiated, and this period is defined as the pre-embryo phase. Embryos of 15 days (after fertilization) or older must not be used for research purposes by law[aj], excluding the period in which
an embryo might have been frozen. Despite the fact that embryo research, particularly embryonic stem cell research, might have benefits and lead to new therapeutic options, the use of human embryos is under strict regulation; however, discussions on the topic and calls for adapting current regulations continue. Research related to the human fetus is mainly directed at prenatal testing to detect abnormalities in the fetus. The acceptance of this type of research is self-evident if it is relevant to the fetus or the mother. From a social point of view, pre-natal diagnostic research may help prepare parents for a disabled child; however, human and financial resources must be considered as well. An aborted fetus which is alive is considered a person by law, independent of gestational age, and consequently must not be used in biomedical research of any kind, and must not be kept on artificial life support for the purpose of research.

Children / Minors

The ethical and moral implications of the involvement of children in biomedical research projects have long been controversial[ak, al, am, an]. In the present context the needs of children or minors must be specifically defined, as age-related developmental phases might have a significant impact on the results of biomedical research. Phases of child development are generally defined as follows[ao]:

• Premature baby: prior to completion of the 37th week of pregnancy
• Newborn: from birth until day 28
• Infant: day 29 until age 2
• Child: age > 2 until age 11
• Adolescent: age > 11 until age 17
Biomedical research must consider the characteristic developmental physiology, pharmacology, and toxicology of each of these phases. Of major concern are a child's competence and ability to protect his/her own interests[ap]. Children have only rarely been involved in biomedical research,
particularly in drug development; therefore, marketed drugs are often administered to children off-label[aq], without sufficient safety and efficacy data and without information on dosage regimens suitable to a child's physiology. Today there is wide agreement that children must not be excluded from research and that, after carefully weighing risks and benefits, exposure to investigational drugs and/or various types of investigational biomedical procedures may be justified[ar], provided such research has the potential to gain further knowledge relevant to the health and/or needs of children in the age groups concerned. Children, as vulnerable subjects, are unable to give consent; therefore, a research protocol designed to enrol children should clearly outline the role and responsibilities of legal representatives and the conditions and principles which apply. Functioning as a legal representative implies acting in the best interest of the child, and this certainly includes protection from potential risks. The potential risks of participating in a research project must be weighed against the risks of taking medication that has not been sufficiently tested in the paediatric population. In the context of children participating in research, a distinction must furthermore be made between consent and assent as demonstrations of willingness to participate in a project. While consent implies that the subject has given a fully valid declaration of intent, assent implies that the declaration was given by someone with limited cognitive and/or emotional capacities. This leads to the question of the age at which a child is regarded as competent to give consent; the answer differs considerably from one jurisdiction to another. A deliberate objection from an adolescent might be interpreted differently from the opinion of a 5-year-old child. However, from an ethical perspective, the refusal of a child of any age to participate in research should be respected, where appropriate. Performing research in children dictates further requirements, such as an appropriate setting with psychological and medical support for child and parents, paediatric formulations to allow accurate
dosing, facilitate consumption, and enhance compliance, and protection of confidentiality (also from parents), particularly where adolescents are concerned or when social information is obtained. Compared to adults, children take the highest risk but also have the most to gain; any intervention can cause long-lasting harm or long-lasting benefit. An open dialogue directed by paediatric health care needs is required between all parties involved, including authorities, ECs, researchers, and subjects' advocacy groups, to further develop a secure environment for children participating in research.

Other Vulnerable Groups

The term "vulnerable subjects" in the present context refers to individuals who are incapable of protecting their own interests and who are unable, by virtue of physical or mental capacity, to give informed consent[as]; this incapability may be of a temporary nature. From a broader perspective, a vulnerable community includes persons or populations that are less economically developed and have limited access to health care and treatment options, persons discriminated against due to their health status, and any person in a dependent relationship, such as employees, patients, health care professionals, prisoners, students, and service personnel, who might have reduced autonomy when asked to participate in a research program[at, au]. The language and content of the information provided and the procedure used to obtain informed consent must be particularly sensitive to subjects belonging to a vulnerable sub-population. As a general principle, subjects who, by reason of cognitive or mental impairment, are not capable of giving adequate consent must not be included in any research that could equally be carried out on individuals not so impaired. Following the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine (Article 17 [2] in conjunction with
Article 16 of the additional protocol), research on incapable subjects requires that the results of the research have the potential to produce a real and direct benefit to the subjects' health (therapeutic research) and that the person concerned does not object. If the research does NOT have the potential to produce results of direct benefit (non-therapeutic research), inclusion is permitted only if:

• The research may benefit other persons in the same category or afflicted with the same disease or disorder.
• The research entails only minimal risk and minimal burden for the individual concerned.
Only minimal risk is involved if it can be expected that the research will result, at most, in a very slight and temporary negative impact on the health of the person concerned. Minimal burden refers to the expectation that the discomfort will be, at most, temporary and very slight for the person concerned[av]. If a subject is unconscious, or if the ability to make an objective choice might be affected or influenced by social, cultural, economic, psychological, or medical factors, particular attention must be paid to ensuring autonomy and voluntariness as the foundations of a legally valid consent. This stresses the importance of ongoing monitoring by the EC, which allows the earliest possible detection of any violation of these principles and the implementation of corrective actions as needed, thereby protecting the rights and well-being of the research subjects.

Benefit and Precaution

As a matter of principle, it must be ensured that any research subject, and anyone else affected by the research, is put at minimum risk. Due care and circumspection at all stages and ongoing ethical review are the minimum precautions necessary. Matters become even more complex considering that the line between (bio)medical practice and
biomedical research is fine and not always clear. The well-being of the subject should be the primary objective; on the other hand stands the importance of gaining further scientific knowledge. In most situations both goals are satisfied at the same time. As a matter of fact, this situation led to specific definitions of "therapeutic" versus "non-therapeutic" research. Therapeutic research refers by definition to an experiment with the potential for a direct benefit to the research subject. Non-therapeutic research is defined as research for the benefit of others. A further distinction within non-therapeutic research has been introduced, so-called "group-benefit research". These research projects do not provide any benefit to the researched subject but have a potential benefit for an entire group to which the test person might belong, e.g., a group of persons with a specific disease or of a specific age. The concept of group benefit is ethically difficult to accept. Critics point out that a test person cannot automatically be assumed to take on a group-based sense of solidarity with other research beneficiaries [Heinrichs, B., 2006]. Even though this topic is of particular relevance when discussing biomedical research involving minors, some general issues must be addressed to come to a common level of understanding. Healthy volunteers participating in a biomedical research project are by definition participating in non-therapeutic research, i.e., they will not benefit directly from the research. From an ethical perspective, the risk to which healthy volunteers might be exposed must be only minimal, and the use of volunteers should not be excessive. Volunteers are prompted to enroll in biomedical research programs for various reasons, including scientific and idealistic motives. To comply with the basics of ethical conduct, no overt or covert coercion may be involved. Some medical interventions performed as part of research projects must be defined as non-therapeutic, such as surgery on live donors for organ transplantation, or aesthetic surgery, which may be classified as therapeutic in the case of psychological indications. In conclusion, how the difficulties associated
with non-therapeutic research and the notion of solidarity are handled depends on the legal framework. Changes in public policies allow such research under strict conditions, as defined in local regulations and ethical standards[aw, ax, ay].
Peripheral Dimensions

Peripheral dimensions should be seen as complementary to the core dimensions and as secondary criteria of the social and ethical aspects which together form the foundation for a comprehensive perspective.

Privacy

Participation in research must ensure confidentiality at every stage. Neither the identity nor the personal data of a subject may be disclosed to any third party without the written consent of the subject concerned (or his/her legal representative), even if disclosure is deemed essential. Personal data includes the medical history of the subject provided during the initial interview, as well as information obtained through medical procedures, investigations, and the examination of body tissues. In the context of biomedical research, the collection of such data is subject to approval by an EC and to the consent of the research subject. Under specific circumstances, data which have been routinely collected by public registries, such as data on death, could be of value to biomedical research, especially during follow-up programs. Disclosure of such data is generally governed by national legislation[az, ba]. In the case of large epidemiological studies, it may be impracticable to obtain consent from each identifiable subject for the planned data collection from medical or other records, and an ethics committee may waive the requirement for informed consent as long as the collection of data is in accordance with applicable laws and confidentiality is safeguarded. Any research using biological samples requires either the subject's consent or full anonymization of the sample. However, the subject's consent is essential whenever the result of a sample analysis
must be linked with the subject, even if encryption and restricted access are used.

Right to Treatment and Compensation for Injured Subjects

This topic concerns two distinct but related issues. The first is the entitlement to free medical treatment for any accidental injury caused by procedures or interventions explicitly executed as part of the research. The second is material compensation for death or disability as a direct outcome of research participation. Free medical treatment and compensation are usually not provided for subjects who have suffered expected or foreseen adverse events/reactions in the course of standard medical care with an investigational product or with established diagnostic or preventive interventions. Implementing a compensation system for injuries or death related to research is certainly no easy task, and the involvement of the ethical review committee is essential. It should be the EC's responsibility to identify the injuries, disabilities, or handicaps to be compensated with free medical treatment and, respectively, to define those for which no compensation is justified. Those decisions are project specific and must be individually tailored. Ideally, these determinations should be made in advance, but by nature there will also be unexpected or unforeseen adverse reactions. Such adverse reactions should be defined per se as compensatable, but they should be referred at onset to the ethical board for review and confirmation. The subject must be informed of the detailed process and entitlement regarding free treatment and compensation (without any legal action having to be taken) as part of the informed consent procedure. By no means should subjects be asked to waive their right to compensation. The sponsor of the research project must obtain appropriate insurance to cover compensation for such risks. The insurance coverage must be carefully reviewed by the ethical board before it gives the green light for the start of the project[bb].
Responsibility, Compliance and Public Interest

Scientific and moral responsibility, based on internationally accepted guidelines and local regulations, applies to all who are directly or indirectly involved in a research project. This group includes the researchers themselves, who are operationally in charge, but also those responsible for funding, the institution where the research is being conducted, and any involved persons who might benefit from the research, such as individual sponsors[bc]. Given the fact that human subjects are "used" as "research tools" and that the findings could have a wide impact on the provision of health services, social acceptance, not only by the research community but by society as a whole, is of high value when planning biomedical research. Social acceptance can be increased through clearly defined roles and responsibilities for the operators involved and through clearly specified regulatory and ethical review mechanisms that ensure the efficiency and integrity of the research project. To gain greater public confidence, appropriate information channels are important and can promote understanding of the need for biomedical research. Publications of research findings must not raise false expectations in society or any sub-population thereof; premature reports and advertising stunts are out of place. As important as free access to the findings of research is, the maintenance of the privacy and confidentiality of subjects participating in the research is just as crucial. If the identification of a subject is essential to the publication, for example in slides or photographs, prior consent from the subject must be obtained[bd, be]. Ongoing supervision by competent bodies after initial approval is essential to ensure that the risk-benefit balance is maintained and that the research is performed as defined in the research protocol approved by the EC and the health authorities. Changes to any aspect of research activity or conduct must be approved by and/or submitted to the EC prior to their implementation. Any non-compliance with this requirement must be reported to the EC and may lead to further
action by the ethical review board. Depending on the nature of the violation, the ethical approval may be withdrawn and the project stopped. The interests of science and society should never take precedence over considerations for the well-being of the subjects[bf].
Areas in Biomedical Research with a Need for Specific Considerations

Aside from being subject to the more general social and ethical considerations outlined above, some areas in the field of biomedical research require a specific approach. The following section points out that the social and/or ethical perception of biomedical research can be neither exhaustive nor static. While the principles discussed so far are applicable to the whole field of biomedical research, there are concerns associated with specific types of research that must be more closely observed and controlled by health authorities and ECs in the best interest of the individual and of society as a whole.
Genetics Research

Certainly, no other area of biomedical research has raised greater ethical concerns than human genetics. What are the additional considerations from a social and ethical standpoint? Harm is not limited to physical harm: genetic information may directly or indirectly cause psychological harm to the individual and to society, such as anxiety and depression and their associated sequelae. Genetic manipulation may have consequences and ramifications for the community which cannot be foreseen at the time. Genetic information about individual subjects may also lead to a conflict of interest between the subject of research and society, and it therefore requires strict and carefully prepared guidelines. ECs are faced with additional challenges where special expertise is needed for adequate evaluation. This leads to the question of whether there needs to be
a ‘central’ approval for human genetic research by an ethics committee specifically qualified to assess this type of research.
Assisted Reproductive Techniques

Research in this area covers any fertilization with gametes manipulated outside the human body and the subsequent transfer into the uterus, as well as research on embryos. From a social and ethical standpoint, additional considerations must be given to research in assisted reproductive techniques. The ownership and use of any spare embryo not used for transfer to the uterus must be addressed as part of the informed consent process. After consent has been obtained from the biological mother, the EC must decide whether, or under which conditions, these embryos may be used for research purposes, and whether they should be preserved or "destroyed." From an ethical as well as a social perspective, it is a challenge to decide who should make decisions about reproductive options: the future mother, as caregiver of the child, alone? Does her partner or her treating physician have a say? Or are laws and regulations that leave no room for individual decision preferable? Nuclear transplantation and embryo splitting, so-called cloning, are already regulated by law, which in most countries strictly prohibits this technical option when the intention is to reproduce a human being.
CONCLUSION AND OUTLOOK

This chapter was not meant to provide answers to all questions, but to provide a starting point for further investigation and to increase awareness of all perspectives on the social and ethical dimensions of biomedical research. All of these social and ethical issues are of everyday concern and clearly central to biomedical research. They complement each other and make biomedical research
what we know it to be. New advances in science and medicine must be followed by continuous, careful evaluation of risk and benefit in consideration of new and changing social and ethical dimensions. Given the nature and complexity of the guidelines and regulations for biomedical research, their treatment of social and ethical issues needs to be updated consistently in line with changes in science and technology. Further needs for social control mechanisms, that is, methods of control over biomedical science by insiders and by external professionals, are under ongoing discussion. Insiders debate controls such as adequate formal training and disciplinary boards, while outside professionals debate controls such as government regulation, ethics boards, and judicial and state law. As biomedical science continues to advance, as in the areas of genetics and neuroscience, new issues may arise with regard to the ethical acceptability of procedures and treatments and the social responsibilities of the research itself; no ready-made answers exist to date. Some publicized issues are still subject to ethical and social controversy. Even if an ethical and social environment can be established in 'developed' countries, researchers may be confronted with ethical and social problems in resource-poor areas of the world, creating a conflict between their personal ethical beliefs and the specific needs of those countries. The balance between scientific and economic/political arguments combined with technical feasibility in biomedical research on the one hand, and individual and corporate morality combined with public needs on the other, requires open-mindedness from both sides and human beings who take their ethical and social responsibilities seriously.
REFERENCES

American Academy of Pediatrics (2001). Human embryo research. Pediatrics, 108, 813-816.

Amos-Hatch, J. (1995). Ethical conflicts in classroom research: Examples from a study of peer stigmatization in kindergarten. In J. Amos-Hatch (Ed.), Qualitative research in early childhood settings. London: Praeger/Greenwood.

Califf, R., Morse, M., & Wittes, J. (2003). Toward protecting the safety of participants in clinical trials. Controlled Clinical Trials, 24, 256-271.

Cannistra, S. A. (2004). The ethics of early stopping rules: Who is protecting whom? Journal of Clinical Oncology, 22(9), 1542-1545.

Choonara, I. et al. (Eds.) (2003). Introduction to paediatric and perinatal drug therapy. Nottingham: University Press.

Council for International Organizations of Medical Sciences (CIOMS) (1991). International guidelines for ethical review of epidemiological studies. Geneva: CIOMS.

Council for International Organizations of Medical Sciences (CIOMS), in collaboration with the World Health Organization (WHO) (1993). International ethical guidelines for biomedical research involving human subjects. Geneva.

Council for International Organizations of Medical Sciences (CIOMS), in collaboration with the World Health Organization (WHO) (2002). International ethical guidelines for biomedical research involving human subjects. Geneva.

Council of Europe (1997). Convention for the protection of human rights and dignity of the human being with regard to the application of biology and medicine: Convention on human rights and biomedicine. European Treaty Series, No. 164. Oviedo.

Council of Europe (2005). Additional protocol to the Convention on human rights and biomedicine concerning biomedical research. European Treaty Series, No. 195. Strasbourg.

Data Protection Directive 95/46/EC (1995). Of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

Daugherty, C. (1995). Perceptions of cancer patients and their physicians involved in Phase I trials. Journal of Clinical Oncology, 13, 1062-1072.

Department of Health, Education, and Welfare (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved from http://history.nih.gov/laws/pdf/belmont.pdf

Directive 2001/20/EC (2001). Of the European Parliament and the Council of April 2001 on the approximation of the laws, regulations and administrative provisions of the member states relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use, Section 3.

Drazen, J. M., & Curfman, M. D. (2004). Public access to biomedical research. The New England Journal of Medicine, 351, 1343.

European Forum for Good Clinical Practice (1997). Guidelines and recommendations for European ethics committees. Brussels: EFGCP.

Fauriel, I., & Moutel, G. (2003). Protection des personnes et recherche biomedicale en France. La Presse Medicale, 20, 1887-1891.

Fortwengel, G. (2005). Issues to be considered in the informed consent process. Paper/presentation and conference summary, Clinical Trial Summit, Munich.

Fortwengel, G., Ostermann, H., & Staudinger, R. (2007). Informed consent and the incapable patient. Good Clinical Practice Journal, 14(8), 18-21.

Gefenas, E. (2005). The concept of risk: Linking research ethics and research integrity. Presentation at the Responsible Conduct of Basic and Clinical Research Conference, Warsaw.

Goldfarb, N. (2006). The two dimensions of subject vulnerability. Journal of Clinical Research Best Practices, 2.

Heinrichs, B. (2006). Medical research involving minors. Retrieved from http://www.drze.de/themen/blickpunkt/kinder-en

HIPAA - Privacy Rule at 45 CFR Parts 160 and 164 and guidance. Retrieved from http://www.hhs.gov/ocr/hipaa

Holm, S. (2002). Moral pluralism. In The ethical aspects of biomedical research in developing countries. Proceedings of the Round Table Debate, European Group on Ethics, Brussels.

Hood, S. et al. (1996). Children as research subjects: A risky enterprise. Children & Society, 2, 117-128.

Howell, T., & Sack, R. L. (1991). The ethics of human experimentation in psychiatry: Toward a more informed consensus. Psychiatry, 44(2), 113-132.

Iltis, A. S. (2005). Stopping trials early for commercial reasons: The risk-benefit relationship as moral compass. Journal of Medical Ethics, 31, 410-414.

Indian Council of Medical Research, New Delhi (2000). Ethical guidelines for biomedical research on human subjects, 58.

International Conference on Harmonization (ICH) (1996). ICH harmonized tripartite guideline for good clinical practice. ICH.

Kermani, F., & Bonacossa, P. (2003). New ways to recruit trial subjects. Applied Clinical Trials, 38-42.

Levine, R. (1988). Uncertainty in clinical research. Law, Medicine and Healthcare, 16, 174-182.

Mahon, A. et al. (1996). Researching children: Methods and ethics. Children and Society, 2, 145-154.

Matthews, H. et al. (1998). The geography of children: Some ethical and methodological considerations for project and dissertation work. Journal of Geography in Higher Education, 3, 311-324.

The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved from http://ohsr.od.nih.gov/guidelines/belmont.html

The Nuremberg Code (1949). Trials of war criminals before the Nuremberg military tribunals under Control Council Law No. 10. US Government Printing Office.

OECD (2001). OECD health data 2001: A comparative analysis of 30 countries. Paris: OECD.

Parrish, D. M. (1999). Scientific misconduct and correcting the scientific literature. Academic Medicine, 74(3), 221-230.

Protecting personal health information in research: Understanding the HIPAA privacy rule. Retrieved from http://privacyruleandresearch.nih.gov/pr_02.asp

Rose, K. (2005). Better medicines for children - where we are now, and where do we want to be? British Journal of Clinical Pharmacology, 6, 657-659.

Seiberth, H. W. (2005). Pharmakologische Besonderheiten im Kindes-Jugendalter. In Brochhausen, C., & Seibert, H. W. (Eds.), Kinder in klinischen Studien - Grenzen medizinischer Machbarkeit? Lit-Publ.

Solomon, R. C. (1970). Normative and meta-ethics. Philosophy and Phenomenological Research, 31(1), 97-107.

Stanford encyclopedia of philosophy. Retrieved from http://plato.stanford.edu/entries/metaethics/

Taylor, H. A. (1999). Barriers to informed consent. Seminars in Oncology Nursing, 15, 89-95.

The World Medical Association (2004). Declaration of Helsinki: Ethical principles for medical research involving human subjects. Tokyo: WMA.

The World Medical Association (2005). Medical ethics manual: Ethics in medical research, Chapter 5. WMA.

World Health Organization (1995). Guidelines for good clinical practice for trials on pharmaceutical products. Annex 3 of The use of essential drugs: Sixth report of the WHO Expert Committee. Geneva: WHO.

World Health Organization (2002). Guidance for implementation. In Handbook for good clinical practice (GCP). Geneva: WHO.
KEY TERMS

Compliance (in relation to biomedical research): Adherence to all research-related requirements, such as the research protocol and best practice guidelines, and to the applicable regulatory requirements.

Confidentiality (in relation to biomedical research): Prevention of disclosure, to other than authorized individuals, of a research unit's proprietary information or of a subject's identity.

Independent Ethics Committee: An independent body (a review board or a committee, institutional, regional, national, or supranational), constituted of medical/scientific professionals and non-medical/non-scientific members, whose responsibility it is to ensure the protection of the
Social and Ethical Aspects of Biomedical Research
rights, safety, and well-being of human subjects involved in biomedical research and to provide public assurance of that protection, by, among other things, reviewing and approving/providing favourable opinion on the research protocol, the suitability of the investigator(s)/researchers, facilities, and the methods and material to be used in obtaining and documenting informed consent of the research subjects. Informed Consent: A process by which a subject voluntarily confirms his or her willingness to participate in a particular research project, after having been informed of all aspects of the research that are relevant to the subject´s decision to participate. Informed consent is documented by means of a written, signed and dated informed consent form. Opinion (in relation to Ethics Committees and biomedical research): The judgment and/or the advice provided by an Independent Ethics Committee. Research Protocol: A document that describes the objective(s), design, methodology, statistical considerations, and organization of a research project. The protocol usually also gives the background and rationale for the research, but these could be provided in other protocol referenced documents. Vulnerable Subjects: Individuals whose willingness to volunteer in a biomedical research may be unduly influenced by the expectation, whether justified or not, of benefits associated with participation, or of a retaliatory response from senior members of a hierarchy in case of refusal to participate. Examples are members of a group with a hierarchical structure, such as medical, pharmacy, dental, and nursing students, subordinate hospital and laboratory personnel, employees of the pharmaceutical industry, members of the armed forces, and persons kept in detention. Other vulnerable subjects include patients with incurable diseases, persons in nursing homes,
unemployed or impoverished persons, patients in emergency situations, ethnic minority groups, homeless persons, nomads, refugees, minors, and those incapable of giving consent. Well-Being (of research subjects): The physical and mental integrity of subjects participating in a research project.
ENDNOTES
a. OECD (2001)
b. Stanford Encyclopedia of Philosophy
c. Solomon, R. C. (1970)
d. Holm, S. (2002)
e. The World Medical Association (2005)
f. The World Medical Association (2004)
g. The World Medical Association (2005)
h. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979)
i. Council of Europe (2005)
j. Iltis, A. S. (2005)
k. Iltis, A. S. (2005)
l. Gefenas, E. (2005)
m. Levine, R. (1988)
n. Department of Health, Education, and Welfare (1979)
o. World Medical Association (2004)
p. World Health Organization (2002)
q. European Forum for Good Clinical Practice (1997)
r. International Conference on Harmonization (1996)
s. Council for International Organizations of Medical Sciences (CIOMS) (1993)
t. Council for International Organizations of Medical Sciences (CIOMS) (1991)
u. Council of Europe (1997)
v. World Health Organization (1995)
w. World Medical Association (1994)
x. Califf, R., Morse, M., & Wittes, J. (2003)
y. Fauriel, I., & Moutel, G. (2003)
z. Fortwengel, G. (2005)
aa. Data Protection Directive 95/46/EC (1995)
ab. HIPAA - Privacy Rule at 45 CFR Parts 160 and 164 and guidance
ac. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979)
ad. Taylor, H. A. (1999)
ae. Daugherty, C. (1995)
af. Kermani, F., & Bonacossa, P. (2003)
ag. Fortwengel, G. (2007)
ah. Council for International Organizations of Medical Sciences (CIOMS) (2002)
ai. American Academy of Pediatrics (2001)
aj. Indian Council of Medical Research, New Delhi (2000)
ak. Amos Hatch, J. (1995)
al. Mahon, A., et al. (1996)
am. Hood, S., et al. (1996)
an. Matthews, H., et al. (1998)
ao. Seiberth, H. W. (2005)
ap. Mahon, A., et al. (1996)
aq. Choonara, I., et al. (2003)
ar. Rose, K. (2005)
as. Minors are seen differently in the context of this section and are not included in the following.
at. Howell, T., & Sack, R. L. (1991)
au. Goldfarb, N. (2006)
av. Gefenas, E. (2005)
aw. Cannistra, S. A. (2004)
ax. Department of Health, Education, and Welfare (1979), Article 27
ay. Directive 2001/20/EC (2001)
az. Directive 95/46/EC (1995)
ba. Protecting Personal Health Information in Research
bb. Council for International Organizations of Medical Sciences (CIOMS) (2002)
bc. The World Medical Association (2004)
bd. Drazen, J. M., & Curfman, M. D. (2004)
be. Parrish, D. M. (1999)
bf. The World Medical Association (2004)
Chapter X
Ethical Aspects of Genetic Engineering and Biotechnology
Stefano Fait
University of St. Andrews, Scotland
ABSTRACT
In assessing the ethical implications of genomics and biotechnology, it is important to acknowledge that science, technology, and bioethics do not exist in a vacuum and are not socially, politically, and ethically neutral. Certain technologies have a greater social impact, may require the State to intervene in the private sphere, and may be differentially accessible to users. Also, science and technology can change our relationship with other people and with our environment. Hence the importance of ethnographic, historical, and cross-cultural studies for the analysis of today’s thorniest bioethical controversies. This chapter discusses some of the most controversial issues surrounding the use of genetic technology in human procreation and gene patenting, including eugenics, genetic consumerism, animal-human hybrids (chimeras), the commodification of life, disability, and genetic testing.
A breeder of people should possess a supermanly foresight. But it is precisely those persons who are ethically and spiritually superior that are conscious of their weaknesses, and would not volunteer for such a tribunal, much the same as earlier on it was certainly not the best people who pressed for the office of Grand Inquisitor (Oscar Hertwig, German cell biologist, 1849 – 1922).
What is the ape to man? A laughing-stock, a thing of shame. And just the same shall man be to the Superman: a laughing-stock, a thing of shame. (F. Nietzsche, Zarathustra’s Prologue, 3)
INTRODUCTION
Even a casual observer would not fail to notice the pervasiveness of bioethics in contemporary
society. How did bioethics come to take on such significance in Western societies? This is a rather puzzling phenomenon given that, in a pluralist society, philosophy cannot deliver incontrovertible moral verdicts and the philosophers’ views are no more binding than those of the man in the street (Maclean, 1993). As logician Charles S. Peirce noted long ago, absolute certainty, absolute exactitude, and absolute universality cannot be attained by reasoning and, in a world in which human reason and knowledge are socially, culturally, and historically embedded, it would be misguided to expect bioethicists to provide objective and rigorously codified precepts and indications. Their speculations can only tell us what they believe is right and fair, and their logical demonstrations must first be evaluated against the empirical evidence. Accordingly, this chapter only provides one among many possible interpretations of the ethical issues involved in genetic technology, one that is rooted in a specific tradition (Continental/Mediterranean Europe), period of time (early twenty-first century), and discipline (political anthropology).
Following an account of the history of the trans-national movement known as eugenics in the opening section, the chapter then proceeds to examine the future of eugenics as a consumer purchase (designer babies) and the limits of parental decision-making, epitomised by the upbringing of Francis Galton, the founder of modern eugenics. The third section, entitled “Human nature and speciation,” provides a brief outline of some of the issues arising from the Human Genome Project and also covers the debate, which is still in its infancy, on the possible redefinition of personhood and human nature that might be required by future applications of genetic engineering. Questions concerning the commodification of body parts are discussed in the fourth section. In the fifth section, entitled “Disabilities and genetic testing,” I draw the reader’s attention to the impact that biotechnologies are likely to have on the life of people with non-standard bodies and minds. In
the concluding remarks I engage with libertarian bioethics, seek to identify some of its most glaring shortcomings, and urge bioethicists in general to pay greater attention to social, cultural, and political factors in their ethical deliberations.
A BRIEF HISTORY OF EUGENICS
The term “eugenics” was coined in 1883 by Sir Francis Galton (1822–1911), after the Greek εύγενής, meaning “wellborn”. The logo of the Third International Congress of Eugenics, held in New York in 1932, defined eugenics as “the self direction of human evolution.” Negative eugenics was concerned with the elimination of inheritable diseases and malformations and involved prenuptial certificates, birth control, selective abortion, sterilization, castration, immigration restriction and, in Nazi-occupied Europe, involuntary “euthanasia.” Positive eugenics would instead encourage the propagation of desirable characteristics via tax incentives for “fit parents”, assortative mating and, in the years to come, cloning and germline engineering. A combination of Eternal Recurrence (human beings as expressions of an immortal germplasm) and a natural teleology of history (biology as destiny) stamped the arguments of early eugenicists and genealogy researchers, who linked folk hereditarian beliefs about the transmission of patrimonial and biological inheritance with the religious notion of the inheritability of sins. They fostered notions of evolutionary throwbacks and of populations as bundles of lineages, and arbitrarily equated genealogical perpetuation with social distinction. When these deterministic explanations of human behaviour were finally challenged, eugenics did not lose its appeal. Mainline eugenics gave way to ‘reform eugenics’, family planning and population control, characterized by a greater emphasis on environmental factors, birth control, the rational management of human resources, and the repudiation of an overtly racist language. This
tactic made eugenics far more palatable and effective: if the impact of nurture was so important, then children should be raised in healthy home environments. In order to redress nature’s essential randomness and synchronize biological and socioeconomic processes, irresponsible citizens unable to meet the challenges of modern society would be forced, blackmailed, or cajoled into accepting sterilization or castration. Consequently, by the early 1930s, sterilisation programmes were in full swing. Following the moral panic generated by the Great Depression, few families were prepared to put up with the social protection of what was perceived to be a disproportionate number of dependent people (Paul, 1995). Some argued that, under exceptional circumstances, basic rights could be withheld and that social services should only be granted to those whose social usefulness and biological capability were certain. The theoretical foundations of constitutional rights were undermined by prominent legal scholars in North America and Northern Europe, who argued that the state was the source of a morality more in line with the demands of modernity, and therefore was not necessarily bound by constitutional principles and norms. Radically realist and functionalist jurists submitted that personal rights were not inalienable, for they really were culturally and historically relative legal fictions or superstitions, their existence being, to a large extent, contingent on the majority’s willingness to uphold them, that is, on considerations of general welfare and public utility. Enlightened governments, like good shepherds, would foster virtues and restrict personal rights for the sake of communal rights and civic responsibility (Alschuler, 2001; Bouquet & Voilley, 2000). This led to the paradoxical result that involuntary sterilizations and confinements were almost exclusively carried out in the most advanced and progressive democracies, the only exception being Nazi Germany. The following states or provinces adopted laws permitting the eugenic sterilisations
of their citizens: Tasmania (1920), the Swiss canton of Vaud (1928), Alberta (1928 and 1933), Denmark (1929 and 1935), the Mexican state of Veracruz (1932), British Columbia (1933), Sweden (1934 and 1941), Norway (1934), Finland (1935), Estonia (1937), Latvia (1937), Iceland (1938), Japan (1940), and thirty-one American states. In 1936, the ‘Lebensborn e. V.’ (‘Spring of Life, registered association’) was launched by the Nazis, which involved the selective breeding of ‘racially superior’ children and the kidnapping of ‘racially valuable’ children across occupied Europe. By 1914, in the United States, marriage restriction laws targeting “feeble-minded” citizens had been enacted in more than half the states and, by 1917, 15 states had passed sterilization laws. But “only” a few thousand sterilizations had actually been performed, mainly because nearly half of such laws had been struck down on the ground that they violated due process, freedom from cruel and unusual punishment, and the equal protection clause. A second wave of eugenics laws followed the Immigration Restriction Act (1924) and Virginia’s Act to Preserve Racial Integrity (1924). In 1924, Virginia also passed a law authorizing the involuntary sterilization of alleged mental defectives. This law was upheld, 8-1, by the Supreme Court in Buck v. Bell 274 U.S. 200 (1927). As a result of this decision, taken in a country that prided itself on its commitment to individual freedom but favoured scientifically unverifiable notions of social progress over clear constitutional principles, nearly half the U.S. states passed eugenics laws authorizing compulsory and non-voluntary sterilization. The ostensibly progressive civic religion of eugenics was seen by many as essentially fair and morally unassailable. Various representatives of the judicial branch became self-appointed guardians of the public morality and urged state governments to intrude in people’s private lives “for their own good”. Wayward citizens, namely those who could not be converted to an acceptable lifestyle, and whose behaviour remained
unpredictable, were liable to be sterilized or institutionalized. This kind of society, at once ready to embrace an abstract notion of humankind and reluctant to put up with certain categories of human beings, was so insecure, apprehensive, and self-doubting that it was willing to carry out self-mutilation in order to become risk-free, while refusing to consider the motives of the offenders and “miscreants.” In the United States, as in Sweden or Alberta, this Machiavellian interpretation of public law made ethics the handmaid of politics: rights could only be granted by law, and social utility overruled the “untenable notion” of human rights. Virtues, rather than rights, were the defining attribute of citizenship. Instead of protecting the citizens, law legitimized the persecution of certain categories of people, purportedly unable to enjoy freedom and to pursue happiness, by gradually stripping them of their rights and legal protections. Such policies were described as politically necessary and ethically indisputable. In a tragic reversal of roles, according to the dominant “discourse of truth,” those who violated the physical integrity of other citizens were fulfilling a constitutionally sanctioned civic duty, while the victims of involuntary sterilization and confinement were a social threat and, as such, subject to legally mandated sterilization or confinement “for the good of society” (Colla, 2000; Morone, 2003). Eugenicists were persuaded that what stood in the way of the modernizing process was the result of ignorance, parochialism, and backwardness. Those who questioned their ostensibly sophisticated and rational arguments were labelled as uncooperative or reactionary. In a burst of self-serving enthusiasm, they regarded themselves as modern, progressive and boldly experimentalist. This made resistance to ethical self-scrutiny particularly strong, because the project of a rationalist utopia was inextricably bound up with social systems that many believed were a model of humanitarian and enlightened administration, the embodiment of intrinsic benevolence and
farsightedness, and therefore eminently fair and morally unassailable. Explicit coercion was often unnecessary, as thousands of people genuinely believed, or were led to believe, that eugenics measures were desirable, and they had themselves or their family members sterilized or confined. This should remind us that informed consent is not just a signature on a form but a two-way process involving information exchange, education and counselling. Most North American and Scandinavian laws were only repealed in the late 1960s and 1970s, even though the Supreme Court ruling in Skinner v. Oklahoma 316 U.S. 535 (1942) defined procreation as “one of the basic civil rights of man” and sterilization as an invasion of fundamental interests which, according to Justice William O. Douglas, “in evil or reckless hands,” could have genocidal consequences. As late as the 1980s, 44 percent of the American public was still in favour of compulsory sterilization for “habitual criminals and the hopelessly insane” (Singer et al., 1998). By contrast, in those same years, law-makers in Holland, Britain, and in Latin American and Latin Rim countries1 objected to selective breeding, involuntary sterilization, the assault on the notion of free will, the spurious conflation of modernization and liberation, and the linear extension of natural laws into the social sphere.2 Eugenics, genetic fatalism, and the marriage between bureaucratic rationality and scientism did not resonate with every Western repertoire of values and symbols (Baud, 2001). This finding is of signal importance for the analysis of current trends in bioethics, social policy and biotech regulation.
EUGENICS AS A CONSUMER CHOICE
Western societies are today on the verge of a eugenics revival in the form of reprogenetics, germline engineering, and cloning, a trend which is indirectly reinforced by courts’ recognition of
wrongful birth and wrongful life claims, by the commodification of healthcare, by the diffusion of testing for genetic predispositions, and by the rhetoric of genetic responsibility, involving new forms of discrimination and exclusion. Medical, cosmetic, and enhancing technologies are being pursued to meet our needs, such as the self-imposed obligation to be fit, active, self-sufficient and responsible citizens, and an almost universal desire to control our own lives and possibly improve them. What measure of genetic and personality enhancement are we going to tolerate? In this section I explore continuities and discontinuities between past and future eugenics. Opponents of genetic engineering of the human germline and human cloning point out that a society in which parents can avail themselves of preimplantation genetic diagnosis (PGD) tests has no use for these technologies. If embryos are affected by serious genetic disorders, they can be discarded and only the healthy ones will be implanted in the womb. Therefore, critics argue, the advocates of germline engineering and cloning do not have therapy in mind, or the noble goal of redressing genetic injustice, but species enhancement. Promoters of human genetic enhancement counter that it would be more sensible and economical to try and eradicate genetic conditions instead of treating them each generation. Their detractors respond that “simple”, single-gene disorders are very rare, and that most severe genetic conditions are complex, involving a combination of genetic and non-genetic factors. The risk of unanticipated inheritable negative consequences that the reconfiguration of human biology entails is simply unacceptable, even if germline manipulation could be made reversible, because the more complex the trait that is changed, the less simple it will be to undo the change. I will not object to these procedures on philosophical or religious grounds, nor will I dwell on the inevitable widening of the ontological and social gap between the rich and the poor that
they are likely to cause, including the prospect of a future caste of uninsurable and unemployable. These arguments have already been addressed “ad nauseam.” A different case against designer babies and the medicalization of childhood can be made, which draws on the life of Francis Galton himself, the father of modern eugenics. Galton (1822-1911), half-cousin of Charles Darwin, was destined to a life of fame and academic prestige, to fulfil his father ambitions (Sweeney, 2001). Persuaded that heredity was destiny, and given the outstanding pedigree of the Galton-Darwin-Wedgwood family-stock, his parents decided that he would be taught how to realize his full potential and become the genius he was meant to be. As a result, in the family diaries Francis is only mentioned for his educational achievements and intellectual exploits. Such were the forces at work in the shaping of the character of the proponent of the theory of hereditary genius: destiny was implanted like a programme into Francis, who would grow into a man “suffering considerable angst as a result of seldom achieving the heights of intellectual acclaim to which his parents had encouraged him to aspire and for which he had worked assiduously hard.” (Fancher, 1983). At the age of four, he was already saving pennies for his university honours and four years later he was encouraged to study French, Latin and Greek. But when he confronted the highly selective environment of Cambridge, he crumbled under the pressure of harsh competition and constant mental strain: dozens of exceptionally-gifted students made it exceedingly hard for him to excel and a sudden and severe nervous breakdown ensued (Sweeney, 2001). Little by little, Galton drifted away from his family and devoted himself to those fields of knowledge in which he felt he could stand out. He tried his hand at poetry, soon to realise that he had no literary talent, then he turned his attention to mechanics and devised a number of contrivances that were never patented or manufactured. Even his statistical calculation of the relative efficiency of
sailing came to naught when the steam engine was invented (Forrest, 1974). This series of failures brought him to a second and more severe mental breakdown in 1866. It is easy to see where his unhappiness and frustration came from: not from state coercion, but from parental despotism (Forrest, 1974). Authorizing the creation of designer babies may have dire consequences, because embryo selection carried out for the sake of ‘quality control’ in reproduction is far more restrictive of a child’s freedom and places much more pressure on the offspring. Intuitively, one would expect that it would be more difficult for these children to develop into autonomous beings and to be free to choose not to fulfil the wishes and aspirations of their parents, irreversibly inscribed into their DNA, at least at a symbolic level. Even assuming that most of us would justifiably reject the fallacy of “Genes ‘r’ Us” determinism, “made-to-order” children would be hard put to do the same. As American sociologist W.I. Thomas once said, “if men define situations as real, they are real in their consequences.” What would the consequences be, in terms of their development as nominally independent and responsible moral agents, if the alterations were of a non-medical nature? Would they take pride in their achievements in the same way as ordinary people do, even if their talents are inherited? Why, in a meritocratic society, should they feel they owe anything to the less fortunate? What could be their response to personal failure: would they assume that they are entitled to the best of everything? Finally, and more importantly, how are those parents, so preoccupied with the uncertainties of life that they would rather have their children genetically engineered, going to deal with the unavoidable challenges of parenting and the realization that control can never be complete? Are we going to rear children who cannot face life’s challenges without the help of chemical and genetic enhancers of mood, memory, cognition, sex life and athletic performances? If the goal
posts are constantly being pushed forward, how are we going to prevent what was once regarded as unnecessary from becoming imperative? Victoria University ethicist Nicholas Agar (1998) has argued that if a 6-year-old Mozart had mixed with children of his own age instead of performing in the courts of Europe, today we would not be enjoying The Marriage of Figaro or Don Giovanni. But we could counter that perhaps Wolfi might have preferred to play with his peers and live a longer and less tormented life instead of complying with the requests of his authoritarian and manipulative father. Even a strictly utilitarian perspective should not contemplate a scenario in which kids are sacrificed for the greater good and in the parents’ pursuit of reflected fame and status, to the point of transforming them into biological artefacts designed by others. Even though past government-sponsored coercive eugenics programmes have been discredited, the mere defence of reproductive freedom is not sufficient in itself to protect citizens from abuses and harm. Unfortunately, the history of libertarianism is replete with examples of citizens claiming liberties for themselves while demanding restrictions for other “less deserving” citizens. Also, there is no such thing as a government stubbornly refusing to cater to the demands of powerful lobbies. Apart from the fact that, under more strained socio-economic circumstances, democratic states may at some point be forced to recommend compulsory screening for certain genetic conditions, we might also want to consider the historical evidence pointing to a growing presence of the State in the family sphere in Western democracies, motivated by the imperative to better protect the children’s rights (Cavina, 2007). In point of fact, a proposal has been made in Texas, to the effect that the state should embark on mass presymptomatic diagnosis of Attention Deficit Hyperactivity Disorder in schoolchildren, followed by widespread prescription of psychoactive drugs (Rose, 2005). This should be a sufficient warning
that the so-called consumerist eugenics will not be a democratic panacea: treating shyness and liveliness as biochemical imbalances, and medicalizing our children to make them well-behaved and cooperative, as though they were faulty devices – regardless of the unforeseeable long-term side-effects of taking drugs at such an early age – is, for all intents and purposes, an experiment in social engineering on an unprecedented scale, and one which can only disempower parents and children and suppress human diversity. In sum, the language of autonomy, empowerment, choice, and rights ought not to obscure the fact that: (a) it is a rather apposite way for medical professionals and the State to be released from their responsibilities vis-à-vis patients and citizens; (b) the randomness of sexual fertilization is, alas, the closest thing to freedom (Sandel, 2007) in societies where choices are constrained by legal restrictions, social and gender-related expectations, obligations and imperatives, as well as by prejudices, ignorance, practical impediments, and huge economic and social disparities, which translate into a dramatic differential distribution of power and authority. A society where individuals are expected to responsibly monitor their health and lifestyle and to act on the available knowledge – “free choice under pressure” is a fitting definition of life in advanced democracies – will look on those who do not fulfil that obligation as reckless and uncaring. This is also what we gather from Nancy Smithers, 36, an American lawyer, and from her first-hand experience of how the line between care and desire is becoming blurred and how the range of human variability that is deemed socially acceptable is being inexorably narrowed: “I was hoping I’d never have to make this choice, to become responsible for choosing the kind of baby I’d get, the kind of baby we’d accept. But everyone – my doctor, my parents, my friends – everyone urged me to come for genetic counselling and have amniocentesis. Now, I guess I’m having a modern baby. And they all told me I’d feel more in control. But in some
ways, I feel less in control. Oh, it’s still my baby, but only if it’s good enough to be our baby, if you see what I mean.” (Rapp, 1988: p. 152).
HUMAN NATURE AND SPECIATION
While Sophocles thought that “there are many wonderful things, and nothing is more wonderful than man,” Nietzsche famously defined man das noch nicht festgestellte Tier, “the animal which is yet undefined.” The Human Genome Project, an international collaboration to code the information contained in the human genome through DNA sequencing and store the resulting information in databases, was heralded as the means whereby we would attain a more precise definition of human nature. The first working draft of a human genome sequence was published in 2001, but it is important to stress that the genome sequenced by the publicly funded Human Genome Project does not represent the genetic make-up of the human species. Based on blood and sperm samples submitted by several anonymous donors, it really is a statistical artefact, the abstraction of a non-existent species-being standing for all of us, individually and collectively, without being us. Therefore, genomes are benchmarks against which individual genotypes can be examined and described. This is because the genome is not a fixed essence that we all share in common. Each one of us possesses a unique combination of nucleotides and genes coding for proteins (genotype) and even the same genes shared by identical twins express different phenotypes under different environmental circumstances. In other words, we are at once very similar and very different from one another. Suffice it to say that while we differ from each other by 0.1 percent, humans are reportedly 98 percent genetically identical to chimpanzees, proving that such seemingly slight discrepancies have far-reaching consequences when combined with environmental factors. In
human beings, variation is the norm and, strictly speaking, “human genome” should refer to the sum of all genotypes in the human species, a goal that is currently beyond our reach. One of the outcomes of the Human Genome Project has been the recognition that genetic determinism is incompatible with the evidence provided by the preliminary analysis of the base pair sequence of the “human genome.” From a strictly deterministic point of view, defined by the single-gene, single-biological-function paradigm, our 30,000 genes, approximately twice the number of genes of a fruit fly and far fewer than most geneticists expected, are simply not enough to make us the way we are. We have not found the “secret of life” and are not anywhere near to being able to explain human nature, let alone control it. However, the finding that the human genome is a dynamic landscape has important ramifications. Assuming that the “genome” is, to some extent, malleable and adaptable without apparent adverse effects, those who still assume that a common genome plays an important part in the definition of human nature (and human rights) will be inclined to regard human nature as infinitely malleable and its characterization as too fluid to serve any meaningful legal, scientific, ethical, and political purpose. They might ask: if the social order reflects a society’s conception of human nature, and there is no fixed human nature, then who is to decide what is just and moral, and on what grounds? Traditionally, personhood has only been attributed to human beings: then, what would a valid criterion for species differentiation be if we are going to grant personhood to great apes and to create human chimeras, cyborgs, or a new posthuman species/race? The problem really comes down to what makes us human: if patients in permanent vegetative states and severely mentally impaired persons are human, then some commentators would argue that it would be fair to grant human chimeras the same status. In other words, we
need to clarify the defining criterion that we use to self-identify as humans: what we can do, what we are, or something else? In 1974, Joseph Fletcher, Episcopal minister, academician, and one of the founders of bioethics, published a controversial treatise in which he argued that certain “retarded children” should not be viewed as persons, that procreation was a privilege, not a right, and that devising ways to obtain chimeras and cyborgs to be put in the service of humankind would be a morally legitimate enterprise (Fletcher, 1974). In much the same way, in the early Seventies, a Rand Corporation panel agreed that genetic engineering would also be used to create parahumans, namely humanlike animals, or chimeras: these beings would be more efficient than robots, and would be trained to perform low-grade domestic and industrial work or else provide a supply of transplantable organs (Rorvik, 1971). Given the relative genetic proximity of chimpanzees and human beings, and the fact that the evolutionary split between the two species may have occurred fairly recently, it is conceivable that the first human hybridization would generate a humanzee, that is, a cross between a human and a chimpanzee. But there remains the problem of the unpredictable consequences of interspecies transplantations at the embryonic stage, when bodies and brains are highly malleable and every new insertion of non-human stem cells is likely to cause random alterations in the development of the organism and, as a result, in the identity of the individual. Nobody can really anticipate the dynamic interactions of animal mitochondrial DNA and the nuclear DNA of a human embryo. It may even be the case that a chimera might look like a member of one species and behave like the members of the other species. Supposing that the procedure is successfully and safely applied and repeated, we should then broach various topics related to artificial hominization, and discuss the moral and legal status of these beings. If law only recognizes people (including
juridical persons) and property, to which category will they belong? Will they be patentable, that is, could they be owned by a company? Will researchers need their informed consent prior to their inclusion in a medical study? Is personhood coterminous with humanity? Are we going to establish a juridical and moral continuum from inanimate things, to animals, semi-human beings (e.g. chimeras, replicant-like androids), and fully autonomous persons? The idea of a seamless gradient is reminiscent of the medieval notion of the Scala Naturae, or Great Chain of Being, a linear hierarchy for the zoological classification of living beings which generated the visual metaphor behind the theory of evolution. Yet this model is not without its problems, for it was also employed to establish a pecking order of social worth and, simultaneously, thwart the extension of civil rights to certain categories of “diminished” human beings like women, workers, children, minorities, etc. (Groce & Marks, 2001). Modern advanced democracies will be compelled to blur the boundaries along the above-mentioned continuum and make it as inclusive as possible. But this can only mean that many human chimeras and artificial intelligences with highly developed cognitive skills and self-consciousness will be allowed to become our moral equals and, as such, enjoy the attendant legal protection. Ideally, the “borderlines of status” of “artificial human beings” will be removed, with no detriment to the senile, foetuses, and the vegetative: human rights will no longer be the monopoly of Homo sapiens.
THE COMMODIFICATION OF LIFE
“Does it uplift or degrade the unique human persona to treat human tissue as a fungible article of commerce?” was Justice Arabian’s rhetorical question in his concurring opinion in Moore v. Regents (1990).
For centuries, millions of people were enslaved on the ground that certain human beings could be assimilated to Aristotle’s “natural slaves.” Chattel slavery, that is, the extension of market relations to the human person as a marketable property and human commodity (res commerciabilis) or legal tender, outside the domain of mutual obligations, was officially abolished only in the nineteenth century. In the United States, the Thirteenth Amendment, prohibiting slavery and involuntary servitude, was ratified in 1865: it meant that no human being could be owned by another human being and, by extension, that people’s genotype cannot be patented. But isolated genes and partial gene sequences from human tissue samples can be patented for research purposes, provided that the applicant can “prove” that a “natural” object has been transformed into an “invention.” Ironically, mathematical formulas are not patentable, because they are assumed to be already out there, like laws of nature or natural phenomena, whereas genes, which are verifiably part of nature, can be patented when they are discovered, regardless of the fact that the assignees may have failed to demonstrate a use for their discoveries. As a result of the 5 to 4 U.S. Supreme Court ruling in Diamond v. Chakrabarty, 447 U.S. 303 (1980), which determined that “anything under the sun made by man” is patentable, today more than 6,000 human genes from all around the world are covered by U.S. patents (Lovgren, 2005), on the ground that the mere isolation and purification of genes from their natural state, by now a routine operation, allows an applicant to be issued a patent. This ruling and the Senate’s refusal to ratify the UN Convention on Biological Diversity (CBD),3 which had been designed to protect the interests of indigenous peoples, have paved the way for “bioprospecting” (gene hunting), also known as “biopiracy,” by pharmaceutical companies in the developing world, so that U.S. private firms and public agencies are granted exclusive rights to decide who can use those cell lines that are profitable
for pharma-business, and how much they will have to pay to do so. There are a number of remarkable analogies that can be drawn between today’s biopiracy and the nineteenth-century westward expansion of the United States, when Native Americans were thought to be an inferior race incapable of fulfilling the moral mission of harnessing natural resources. The Doctrine of Discovery meant that the land inhabited by the indigenous peoples was terra nullius (no man’s land). The aboriginal occupants had no legal title to the land because they had no fixed residence and did not till the soil according to European standards. They could only exercise a right of occupancy, under the American “protection and pupilage.” White people had “discovered” the land – and nowadays the genes – and therefore they owned it. In a multibillion dollar market, enormous economic interests are involved in the search for “biovalue” (Waldby, 2002) and ethical considerations are not binding rules and are not always high on governments’ agendas. In the United States, where individual autonomy is oftentimes equated with the privilege of disposing of one’s body as one sees fit, there is a trend to extend market relations to DNA and body parts. Thousands of patents have been granted, mostly to private companies, on human genes whose function is still unknown. This process of parcelization allows companies to gradually take control of the human genome in the form of immortalized cell lines, and put a price on them, without violating the constitutional principle of the non-patentability of human beings. Today, advances in biotechnology raise new questions about the treatment of individuals and their bodies, which can now be seen – a vestige of Cartesian dualism? – as collections of separable, interchangeable, and commercially transferable parts. Bioscientists will, unwittingly or intentionally, play an increasingly important role in this process by introducing technologies that will facilitate the
exchange of body parts and DNA – now endowed with a social life of their own – in commercial transactions, and by selecting (PGD) or cloning donor babies, namely babies who can supply compatible tissues to treat sick siblings. This will raise a host of new questions such as: who is the owner of someone’s separated human tissue? If people are the owners of their cell-lines, should they not be entitled to share in the profits from scientific findings and commercialization? Are patent claims for intellectual property of DNA from indigenous peoples morally and legally justifiable? In general, who can demand a share of the profit from the commercial exploitation of human DNA? Most jurists and legislators of Continental Europe, where the intellectual establishment is generally opposed to the idea of the market as a civilizing and liberating force, will presumably continue to favour social cohesiveness, altruism, and an ethics of the good life (eudaimonia) (Gracia, 1995). The primacy of autonomy will most likely be underplayed for the sake of social justice and equality (Braun, 2000). Reasoning that it would be unrealistic to expect societies to be able to protect all citizens, especially the destitute and disenfranchised, from coercion and exploitation, it is to be expected that most will refuse in principle to regard the human body as a repository of economic value, a marketable property and a source of spare parts. They will stress that Western civilization, from habeas corpus to the abolition of slavery as commerce of “human commodities”, and to the emancipation of women, has developed in opposition to the objectification of the human body and to the idea that anything can be converted into a commodity and into an object of contractual relationships: the argument that the human body was a res extra commercium4 was at the heart of the abolitionist movement, for there is no person without a body, and a subject cannot be the object of commercial transactions. Some will further argue in favour of the Kantian normative
position, whereby one should always treat human beings as ends in themselves, that is, as having intrinsic value or worth (non-use goods), and therefore as the source of our valuation process, and not as the means to satisfy our values (use goods). They will point out that bodies, babies, and life are gifts, and money is no substitute for them (Crignon-De Oliveira & Nikodimov, 2004), mostly because allowing market forces to define a scale to measure the value of all things would be degrading to our sense of personhood and to our values (Gold, 1996). Accordingly, Article 18 of the European Human Rights and Biomedicine Convention, signed in Oviedo in 1997, forbids the “creation of human embryos for research purposes”, while Article 21 states that “organs and tissues proper, including blood, should not be bought or sold or give rise to financial gain for the person from whom they have been removed or for a third party, whether an individual or a corporate entity such as, for example, a hospital.” Its proponents were concerned, among other things, that relaxing the restrictions on ownership of human body parts would lead to the proverbial slippery slope with companies legally owning potential human beings (or chimeras) from embryo to birth. In Europe, people cannot make their bodies a source of financial gain.5 While excised body parts, like hair or placentas, are usually treated as res nullius, that is, free to be owned by the first taker, like an abandoned property, European civil codes prohibit the removal of a body part when it causes permanent impairments, unless it is done within a formalized system of transplant donations. The principle of market-inalienability has gradually replaced the former principle of absolute inalienability. It is now argued that people do own and control their bodies (and tissues) but have no right to sell them, for they cannot exist without them and their rights as human beings and as consumers cannot trump the right of society to attempt to stave off the process of commodification of human life.
Negotiating the economic value of bodies and body parts is therefore out of the question, as it used to be in the late 1890s, when American insurance companies reassured their clients that “the term life insurance is a misnomer . . . it implies a value put on human life. But that is not our province. We recognize that life is intrinsically sacred and immeasurable, that it stands socially, morally and religiously above all possible evaluation” (Zelizer, 1978).
DISABILITIES AND GENETIC TESTING
In most societies, and especially those with a greying population, people with disabilities constitute the single largest minority group. Depending on how disability is defined, there are currently between 3 and 5 million Canadians, 50 million Americans, and 40 million Western Europeans with disabilities. In half the cases it is a severe impairment. Disability is therefore a fact of life, and the boundary between ability and disability is permeable. It is reasonable to assume that, at some point during one’s life, everybody is going to have to deal personally with a disability or to look after a disabled person, and that it is therefore in everyone’s interest that societies should strive to accommodate disabled people, instead of viewing them as “damaged goods” (Asch, 2001). This is all the more important now that biotechnologies are poised to make the boundary between “abled” and disabled even more porous: the notion of disability will presumably be extended to more individuals (e.g. alcoholism, obesity, predispositions to chronic diseases, etc.). How is this going to affect the social model of disability and the issue of status recognition? Are further adjustments necessary to accommodate the needs of the “asymptomatic ill”, that is, people with an “abnormal” genetic constitution? Is it possible that this will magnify the
problem in unpredictable ways (viz. proliferation of identity groups)? In everyday life, there remains an enduring tendency to view human beings as worthwhile not for who they are but for what they do and to confuse facts and values, is and ought. The status of physically and cognitively impaired persons – that is, people with non-standard bodies and minds – best illustrates one of the most glaring antinomies of advanced democracies: they classify their citizens by making up new social categories, labels, and group identities, and they attempt to maximise their potential in order to better include them but, in doing so, they cause the already marginalised to become even more vulnerable and less visible, and they also affect the common perception of what is normal, and therefore acceptable and appropriate, that is, normative (Hoedemaekers & Ten Have, 1999). Nevertheless, the boundary between “normal variation” and “genetic disease” is in part a social construction, because the notions of “normalcy” and “deviance” are historically and culturally relative. What is more, the consequences of physical impairments can be mitigated by the provision of personal and technical support. It follows that this classificatory exercise is completely out of place (Lewontin, 2001). No one, not even the State, can arbitrate normality (Rapp, 2000) and, it should be added, it is not at all clear that disability is a kind of harm that is qualitatively different from other socially constructed “harms”, such as poverty or race (Shakespeare, 1999). Historically, there is no such thing as a linear transition from discrimination to acceptance, as far as people judged to be abnormal and pathological are concerned. Instead, economic, social and political determinants (ideologies, cultural trends, and societal arrangements) have changed the experience of disability along an erratic course (O’Brien, 1999). Their dependence on an artificial environment has been a constant reminder of human imperfections and frailty, and of medical and scientific powerlessness. Furthermore, economic
setbacks have often resulted in growing concerns over the financial burden of social spending on people with disabilities. In the Twenties, when the German economy was in a state of collapse after WWI but prior to Hitler’s rise to power, it was revealed that a majority of parents with handicapped children would consent to their “euthanasia” if the medical authorities decided on this course of action (Burleigh, 2002). They sincerely believed that, under those circumstances, it would be best for their children. Unfortunately, too much insistence on the values of autonomy and self-sufficiency, coupled with cost-benefit considerations on how people might best contribute to production and bolster the economy, are likely to devalue people who are not self-sufficient. If detected abnormalities cannot be treated, prenatal diagnosis and subsequent selective pregnancy termination could still be regarded by many as a quick fix to an intractable social problem, namely society’s unfair treatment of the disabled. The mixed message that society is sending to people with disabilities is that they are mistakes that will hopefully be redressed by technological progress; yet they are still welcome. The two goals of trying to eradicate disability while steering society towards a more embracing and supportive attitude to diversity may well prove incompatible. We must also consider that prenatal genetic testing and, in the future, fetal therapy, that is, the medical treatment of babies in the womb, will not guarantee a “normal” baby. Even the systematic screening of foetuses cannot prevent all “defective children” from being born. This raises the issue of how they will be treated in a society which tends to value competence and intelligence more than anything else, that is to say, one where they would be “better-not-born” (Baroff, 2000). We are already witnessing the clash between constitutional equality and the inequality of bodies (Davis, 2002). It is a situation in which women’s rights are pitted against the civil rights of people with disabilities and of unborn children, while
individual rights, values, and interests are played against those of the larger society. Today, prenatal screenings are largely performed by midwives and obstetricians. In some countries, these techniques have already reduced the prevalence of spina bifida and Down syndrome by 30 percent or more, and the incidence of neural tube defects, Tay-Sachs, and beta thalassemia (Cooley’s Anemia) by 90 percent (Asch et al., 2003). Genetic testing, which can analyse thousands of mutations, is instead usually carried out by genetic counsellors and clinical geneticists. We might want to speculate about the range of possible short-term and long-term effects of the application of new technologies in the field of genetic medicine, although they cannot be predicted with certainty. Following the adoption of gene diagnostics (when there is an indication that someone might be affected by a genetic condition) and genetic testing in carrier screening (when no such indication is present), a new class of citizens could arise, which will include people who have been diagnosed as “asymptomatic ill”, that is, at a higher risk of contracting certain illnesses. Passing specific legislation to prevent discrimination against them would paradoxically make them seem more different from other people than they really are. It has been suggested (Macintyre, 1997) that discrimination in insurance, employment, and healthcare provision – to contain the costs of healthcare benefits – could be the logical consequence of a double bind: you will be treated differently whether you agree to genetic testing for yourself and your children and test positive, or you refuse, since in the latter case it will be assumed that you might be concealing some inheritable condition. The bottom line seems to be that an epistemological shift has taken place, so that whenever human life does not conform to increasingly high standards of health and quality, it is likely to be deemed a grievous miscalculation. Human dignity, a notion that law cannot define unequivocally, lies at the foundation of the human
rights doctrine, but it is also indissolubly bound up with the concept of quality of life, which is relative. Because of this, quality and equality are pitted against each other and developments in biotechnology could undermine the constitutional principle of equality of all human lives, which is the glue that holds society together. Finally, because prenatal screening is an expensive procedure, it is even possible that, in countries without universal healthcare, more and more people with disabilities will be born into poverty (Asch et al., 2003).
CONCLUSION
Studying the ethical implications of the new biomedical technologies involves much more than simply assuming that totally rational agents, altogether free from social and cultural strictures and contingencies, and from their physicality, would arrive at the same conclusions, following a single, completely reliable deductive mode of reasoning or, alternatively, starting from some unverifiable articles of faith. A reality made of moral flexibility, discrimination, inequality, differential power relations and access to healthcare cannot be wished away for the sake of conceptual clarity and simplicity. Yet historical, political, and social issues – including the discussion of the common good, the unfairness of healthcare in the United States and elsewhere, and the sheer nonsense of applying the ethical standards of affluent societies in developing countries – are seldom the object of critical investigation on the part of mainstream bioethicists. These “secular moral experts” understandably prefer to rely on hard logic rather than on the disputable evidence, multiple constraints, relative values, nagging contradictions, and subjective feelings of everyday reality. But that untidy reality, with its attendant uncertainty, is the only one there is, at least for most of us, and this is why there can be no univocal, logically necessary solution to our moral quandaries. Condemning the tendency of ordinary
people to cling to their beliefs as a matter of course seems unwarranted. On various important ethical issues people trust their own judgment because they see that their views are widely shared and because they have strong reasons to believe that such a consensus is not going to vanish into thin air any time soon. Indeed, most of us generally subscribe to those moral precepts that have stood the test of time.6 It is our appreciation of the practical insights and moral expertise of those who came before us which, for instance, leads many to maintain that human dignity is important even though it is hard to define. Unfortunately, the haste with which common sense is waved aside as an inconsequential distraction, together with a rather strong measure of technological determinism, can only reinforce the impression that bioethics has the justificatory function of bringing the public around to the way of thinking of the most enlightened and glamorous elite and, by extension, of the bio-pharmaceutical industry. The fact of the matter is that a thin bioethics confronting the market and powerful professional and corporate interests is bound either to be crushed or to lend itself to the endorsement of an ideology of unbridled competition and rampant consumerism. Bioethicists would therefore be well advised to pay heed to the words of Jean-Baptiste Henri Lacordaire, who once said that “between the weak and the strong, it is freedom which oppresses and the law which sets free.”7
REFERENCES

Agar, N. (1998). Liberal eugenics. Public Affairs Quarterly, 12(2), 137-155.

Alschuler, A. W. (2001). Law without values: The life, work, and legacy of Justice Holmes. Chicago and London: University of Chicago Press.

Asch, A. (2001). Disability, bioethics and human rights. In G. L. Albrecht et al. (Eds.), Handbook of disability studies (pp. 297-326). Thousand Oaks, CA: Sage Publications.

Asch, A., et al. (2003). Respecting persons with disabilities and preventing disability: Is there a conflict? In S. S. Herr et al. (Eds.), The human rights of persons with intellectual disabilities (pp. 319-346). Oxford: Oxford University Press.

Bachelard-Jobard, C. (2001). L'eugénisme, la science et le droit. Paris: Presses Universitaires de France.

Baroff, G. S. (2000). Eugenics, "Baby Doe", and Peter Singer: Toward a more "perfect" society. Mental Retardation, 38(11), 73-77.

Bouquet, B., & Voilley, P. (Eds.). (2000). Droit et littérature dans le contexte suédois. Paris: Flies.

Braun, K. (2000). Menschenwürde und Biomedizin. Zum philosophischen Diskurs der Bioethik. Frankfurt/New York: Campus.

Burleigh, M. (2002). Death and deliverance: 'Euthanasia' in Germany, c. 1900-1945. Cambridge: Cambridge University Press.

Cavina, M. (2007). Il padre spodestato. L'autorità paterna dall'antichità a oggi. Roma-Bari: Laterza.

Colla, P. S. (2000). Per la nazione e per la razza. Cittadini ed esclusi nel "modello svedese". Roma: Carocci.

Crignon-De Oliveira, C., & Nikodimov, M. G. (2004). À qui appartient le corps humain? Médecine, politique et droit. Paris: Les Belles Lettres.

Davis, L. J. (2002). Bending over backwards: Disability, dismodernism & other difficult positions. New York & London: New York University Press.

Fancher, R. (1983). Biographical origins of Francis Galton's psychology. Isis, 74, 227-233.
Fletcher, J. (1974). The ethics of genetic control: Ending reproductive roulette. New York: Anchor Books.

Forrest, D. W. (1974). Francis Galton: The life and work of a Victorian genius. London: Paul Elek.

Gold, E. R. (1996). Body parts: Property rights and the ownership of human biological materials. Washington, DC: Georgetown University Press.

Gracia Guillén, D. (1995). Medical ethics: History of Europe, Southern Europe. In W. T. Reich (Ed.), Encyclopedia of bioethics (Vol. 3, pp. 1556-1563). New York: Simon and Schuster Macmillan.

Groce, N. E., & Marks, J. (2001). The Great Ape Project and disability rights: Ominous undercurrents of eugenics in action. American Anthropologist, 102(4), 818-822.

Hoedemaekers, R., & Ten Have, H. (1999). The concept of abnormality in medical genetics. Theoretical Medicine and Bioethics, 20(6), 537-561.

Lewontin, R. C. (2001). It ain't necessarily so. New York: New York Review Books.

Lovgren, S. (2005, October 13). One-fifth of human genes have been patented, study reveals. National Geographic News. Retrieved May 5, 2007, from http://news.nationalgeographic.com/news/2005/10/1013_051013_gene_patent.html

Macintyre, S. (1997). Social and psychological issues associated with the new genetics. Philosophical Transactions: Biological Sciences, 352(1357), 1095-1101.

Maclean, A. (1993). The elimination of morality: Reflections on utilitarianism and bioethics. London & New York: Routledge.

Morone, J. A. (2003). Hellfire nation: The politics of sin in American history. New Haven and London: Yale University Press.

O'Brien, G. V. (1999). Protecting the social body: Use of the organism metaphor in fighting the "menace of the feeble-minded". Mental Retardation, 37(3), 188-200.

Paul, D. (1995). Controlling human heredity: 1865 to the present. Atlantic Highlands, NJ: Humanities Press.

Rapp, R. (1988). Chromosomes and communication: The discourse of genetic counselling. Medical Anthropology Quarterly, 2(2), 143-157.

Rapp, R. (2000). Testing women, testing the fetus: The social impact of amniocentesis in America. New York and London: Routledge.

Rorvik, D. M. (1971). Brave new baby: Promise and peril of the biological revolution. Garden City, NY: Doubleday & Co.

Rose, N. (2005). Will biomedicine transform society? The political, economic, social and personal impact of medical advances in the twenty first century. Clifford Barclay lecture. Retrieved May 24, 2007, from http://www.lse.ac.uk/collections/LSEPublicLecturesAndEvents/pdf/20050202WillBiomedicine-NikRose.pdf

Sandel, M. J. (2007). The case against perfection: Ethics in the age of genetic engineering. Cambridge, MA: The Belknap Press of Harvard University Press.

Shakespeare, T. (1999). Losing the plot? Medical and activist discourses of the contemporary genetics and disability. In P. Conrad & J. Gabe (Eds.), Sociological perspectives on the new genetics (pp. 171-190). Oxford: Blackwell Publishers.

Singer, E., et al. (1998). Trends: Genetic testing, engineering, and therapy: Awareness and attitudes. Public Opinion Quarterly, 52(4), 633-664.

Sweeney, G. (2001). "Fighting for the good cause": Reflections on Francis Galton's legacy to American hereditarian psychology. Independence Square, PA: American Philosophical Society.
Waldby, C. (2002). Stem cells, tissue cultures and the production of biovalue. Health, 6(3), 305-323.

Zelizer, V. A. (1978). Human values and the market: The case of life insurance and death in 19th-century America. The American Journal of Sociology, 84(3), 591-610.
KEY TERMS

Attention Deficit Hyperactivity Disorder (ADHD): A mental condition affecting children and adults, typified by inattention, hyperactivity, and impulsivity. Hundreds of scientists and medical professionals in both North America and Europe claim that there is no clear evidence to support the existence of ADHD and contend that most cases fall within the normal range of variation in human behaviour.

Base Pair: A structure made of two complementary nucleotides (the building blocks of DNA strands) joined by weak hydrogen bonding. The base pairs are adenine (A) with thymine (T) and guanine (G) with cytosine (C) for DNA, and adenine with uracil and guanine with cytosine for RNA (the pairing rule is illustrated in the short sketch following these definitions). This is where genetic information is stored. It is estimated that the human genome contains around 3 billion base pairs which, joined across the two strands, hold together DNA's double helix.

Chimera: A legendary creature with a lion's head and chest, the belly and a second head of a goat, and a serpent for a tail. In biology and genetics, a distinction is drawn between mosaics, that is, plants and animals that contain sets of genetically distinct cells (e.g., humans with mismatched eyes, but also serious genetic conditions such as Turner's syndrome) deriving from a single zygote, and chimeras, whose cell populations originated from more than one zygote. Animal chimeras are routinely produced experimentally, whereas the creation of part-human, part-animal hybrids (parahumans) is currently unfeasible and illegal.

Germline Engineering: The genetic modification of individuals whose alterations will be passed on to their progeny. It involves altering genes in eggs, sperm, or early embryos, by insertion (e.g., of artificial chromosomes), gene deletion, or gene transposition.

Germplasm: The hereditary material (chromosomes and DNA) of living organisms. Sometimes it is also the name given to a species' "genome," namely the entire repertoire of that species' genotypes.

Human Cloning: If it were legal, reproductive cloning would be used to create children who are genetically identical to a cell donor. At present, it would be a very expensive procedure with a staggering rate of failure (about 90%). Therapeutic cloning refers to the creation of identical embryos and tissues in order to harvest stem cells for research and transplantation purposes. There are two main cloning techniques: (a) embryo splitting (also known as artificial twinning, because it occurs naturally with identical twins), in which an embryo is split into individual cells or groups of cells that are then artificially prompted to grow as individual embryos; and (b) somatic cell nuclear transfer (SCNT), which is done by transferring genetic material from the nucleus of an adult cell into an enucleated egg, that is, an ovum whose genetic material has been taken away. This is the technique used to generate Dolly the sheep.

Hyperparenting: A form of child-rearing in which parents become too involved in the management of their children's lives.

In Vitro Fertilization (IVF): An assisted reproductive procedure in which a woman's ova (eggs) are removed and fertilized with a man's sperm in a laboratory dish (the Petri dish). Each IVF cycle is very expensive and has a success rate of no more than 30 percent. It is estimated that there may currently be about half a million IVF babies worldwide.

Mitochondrial DNA (mtDNA): The portion of the maternally inherited cell DNA which is contained in the mitochondria, tiny organelles that generate energy for the cell by converting carbohydrates into usable chemical energy.

Preimplantation Genetic Diagnosis (PGD): Cells taken from embryos created through in vitro fertilization (IVF) are examined in a Petri dish. Embryos carrying harmful and lethal mutations are discarded, and only "healthy" ones are subsequently implanted in the mother's uterus.

Reprogenetics: The combination of reproductive medicine and biology with genetic technologies. Embryonic stem cell research, the alteration of select genes, as in germline therapy and in the genetic manipulation of early embryos, cosmetic gene insertion, human embryo cloning, and embryonic preimplantation genetic diagnosis (PGD and IVF) are reprogenetic techniques.
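The pairing rule described under Base Pair above amounts, in computational terms, to a simple complementary mapping. The following minimal sketch is an illustrative editorial addition, not part of the original chapter; the function name and the example sequence are arbitrary, chosen only to show how one strand of the double helix determines the other.

# Watson-Crick pairing rule for DNA: A pairs with T, and G pairs with C.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    # Return the complementary strand implied by the base-pairing rule.
    return "".join(PAIRING[base] for base in strand)

print(complement("ATGCCGTA"))  # -> TACGGCAT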
ENDNOTES

1. Spain, Portugal, Italy, and France.

2. In those countries, most scientists and social analysts correctly understood that Charles Darwin had historicized nature without closing the gap between nature, human history, and society. Elsewhere, Social Darwinists, who held that the Darwinian revolution had paved the way to the naturalization of history, found a more receptive audience.

3. In 2007, the United States, Andorra, Brunei, Iraq, and Somalia were the only countries that had not ratified this treaty.

4. Meaning "beyond commercial appropriation."

5. This position finds expression in the principe de non patrimonialité du corps humain of the French civil code, in the Principio di gratuità (principle of gratuitousness) of the Italian civil code, and in Article 3 of the European Charter of Fundamental Rights. As an aside, Article 25 of the Civil Code of Québec states that "the alienation by a person of a part or product of his body shall be gratuitous; it may not be repeated if it involves a risk to his health."

6. In the words of Spanish bioethicist Diego Gracia Guillén: "la historia es también el gran tribunal de la moralidad" ("history is also the great tribunal of morality"), that is, as it were, "ethics is the daughter of time."

7. In the original French: « Entre le fort et le faible, c'est la liberté qui opprime et la loi qui affranchit. »
Chapter XI
Nanoscale Research, Ethics, and the Military Timothy F. Murphy University of Illinois College of Medicine, USA
ABSTRACT

Military researchers are working to exploit advances in nanoscale research for military uniforms, medical diagnosis and treatment, enhanced soldier performance, information and surveillance systems, and weaponry and guidance systems. These domains of research pose ethical questions in regard to the motives for this research, the way in which it is carried out, and its social effects, especially in regard to its medical aspects. Much of this research can be defended in the name of soldier protection and national defense, but close attention to the practice of research involving human subjects and nanoscale devices is nevertheless warranted because the military is governed in ways that sometimes put its overarching goals ahead of protecting the rights and welfare of individual soldiers. Moreover, the contribution of nanoscale interventions to a new kind of arms race should not be underestimated.
INTRODUCTION

Military theorists have not failed to appreciate the significance of nanoscale research when it comes to protecting soldiers and giving them advantages in military operations. Researchers working for and with the military seek to identify ways in which emerging technologies can be put to advantage for personnel, weapons, and operations. Specifically, militaries around the world anticipate that this research might lead to new information systems, improved protective gear, means to improve the performance of military personnel, and innovations in medical diagnosis and treatment. Nanoscale research does not occur in a scientific vacuum, and this research goes forward alongside other domains of research, including neuroscience research that works to describe and gain measures of control over sensation, neurological function, and human behavior. Some commentators expect that these domains of research might come together in ways that fuse biological function and nanoscale mechanical interventions. For example, it might
be possible to develop biochips that could "read" the sensory processes of neurons directly, in other words independently of the person whose neurons are involved. Or, these interventions might enable officers to control the emotional affect of personnel under their command. Even if there were no wars and no military operations, nanoscale research conducted for entirely civilian purposes would be of considerable ethical interest because of the way it stands ready to amplify powers of information and to extend control over human behavior. That this research also carries potential military applications makes these ethical issues all the more pressing, because those issues unfold in the context of the larger political and strategic purposes of military function, purposes that sometimes subordinate individual interests. For this reason, it becomes important to try to identify ethical standards by which to evaluate research and applications of nanoscale and neuroscience technologies and to guide decision-making. The discussion here first identifies key domains in which nanoscale research is of interest to military theory and practice. It then offers some suggestions about principles by which to judge the value of interventions.
MILITARY INTEREST IN NANOSCALE RESEARCH
The military is looking to nanoscale research and technology in five main areas: (1) better information, (2) better weapons, (3) better uniforms, (4) better performance, and (5) better medical diagnoses and treatments.

Better Information

The military has an abiding interest in sensory mechanisms that can collect intelligence about the enemy and confirm the whereabouts and status of its own personnel during operations. Nanoscale technology might enable the creation of biochip implants that could 'read' sensory input directly, without the possibility of error introduced by a soldier who misunderstands or misinterprets exactly what he is seeing (for example, what type of missile or jet is approaching). The military and intelligence agencies also have an interest in knowing whether and to what extent nanoscale technologies could enable them to 'read' people's minds, for example, those of captured enemy soldiers. Some technologies currently exist that are able to predict roughly what someone is thinking. On a more developed scale, this kind of technology would be extremely useful in interrogating captured soldiers to learn the status of current operations. These technologies could even bypass contentious debates about whether or not torture is permissible in order to gain information that might be needed to avert imminent loss of life. They could also:

• Use biochips to track soldier movements in real time via sensors.
• Insert microcomputers to relay sensory input: 'read' raw sensory data via biochips or other technologies. For example: read the visual input of a pilot, transmit that input to a remote location, check it against profiles of enemy jets, and make decisions about how to respond.
• Use microcomputers or biochips that evaluate the health status of soldiers in real time (heart rate, temperature, secretions, etc.). Knowing the medical status of its personnel enables command to understand its strengths at any given time, or to record and relay medical aspects of a soldier's status. Desirable technologies of this kind are discussed in Better Medicine.
• Use biochip implants to distinguish the certainty of statements made by people under interrogation, which could eliminate the need for harsh interrogations and torture of any kind.
• Create animal cell / computer hybrids for detection purposes. For example, connect animal olfactory cells to biochips to detect biochemical agents.
Better Weapons

Current research initiatives include robotics, targeting capacity, non-lethal weapons, and weapons that may fall outside biochemical weapons agreements:

• Technologies that make weapons 'smart': that is, systems to identify targets by various kinds of artificial intelligence. For example, robotic weapons that could scan a crowd and fix on an intended target.
• Build and deploy 'smart' ammunition: namely, weapons capable of identifying their target (a building, a specific person) without human evaluation or confirmation. These would be robotic in nature and governed by artificial intelligence.
• Genetic bombs: release a genetically engineered agent that works against specific human genotypes. This 'bomb' would work slowly and be hard to identify, depending on its outcome (for example, increasing the rate of spontaneous abortion in a people or causing cancer in that population over time).

The military currently researches a variety of non-lethal weapons. These involve:

• Sound: to induce nausea, vomiting, incapacitation
• Optics: blinding lights that disorient and even blind
• Biological: induce respiratory and intestinal disorders
• Chemical: induce paralysis, disorientation, incapacity

Neuroscience and nanoscale research could augment this research considerably with the goal of targeting and incapacitation, hopefully with temporary effects.

Better Uniforms

Soldiers carry considerable amounts of equipment, including bulky uniforms. Researchers are working to identify ways of using nanoscale devices in order to lessen physical burdens in the field. Moreover, these same kinds of technologies might be able to enhance physical performance. Among other things, researchers hope to:

• Magnify existing capacities to run, jump, and react through exomuscular amplification.
• Use nanoscale technology to 'weave' uniforms from resistant materials that protect against bullets, flame, radiation, etc.
• Use nanoscale technology to create body-climate controls capable of detecting excess heat or cold and permitting either the release or the retention of body heat, as appropriate.
• Develop mechanisms for the capture of body heat and water for redistribution or later use.
• Build medical diagnostics and treatment modalities into uniforms (see Better Medicine for details).
Better Performance

The way in which soldiers behave in combat is critical to their missions. It may be that research will identify mechanisms that offer greater control over the function and performance of soldiers. One avenue is to enhance the physical powers of service members through 'exomuscle,' the structural amplification of existing capabilities of lifting, running, jumping, and the like. Researchers also foresee the day when technologies might be available that enhance performance through control of emotions, behavior, and states of consciousness, and sometimes through remote decision-making. For example, stress often interrupts the sleep that is necessary to optimally performing soldiers. If biochips could be used to control sleep patterns – put people to sleep during lulls in conflict and wake them up when needed again – this would contribute significantly to their strength and vigor in the field. Moreover, these techniques could be used to control sleep patterns during long jet flights. Some researchers predict that even more fine-grained control over soldiers might be possible: controlling their specific behavior via biochips to, for example, reduce fear and anxiety during combat operations. Soldiers who are fearful may be less aggressive than desirable when it comes to a particular military objective. Soldiers who are not distracted by their own fears during hand-to-hand field combat would presumably be better at defending themselves and simultaneously working toward the specific military objective at hand. These same kinds of controls could perhaps even work the other way: to heighten anxiety in order to make soldiers especially alert to hidden dangers around them when they might otherwise drift off into complacency. Biochips might also be useful over the long haul of extended operations, if they work to induce degrees of compensatory pleasure and sleep, for example. Some commentators have also speculated that commanders might be able to – from a remote location – direct soldiers to pull gun triggers or fire rocket launchers when key targets are in sight. There can be several advantages to this kind of control: soldiers who are reluctant for one reason or another to fire on a particular target might participate more wholeheartedly in search-and-kill operations if they believed that the ultimate responsibility for killing rested not with themselves but with the commander who directed their behavior from a distance.
More speculatively, the military might use gene transfer techniques to insert copies of genes associated with intelligence, in order to increase soldier performance.
Better Medicine

Researchers hope to develop technologies that could – perhaps in concert with better uniforms – improve diagnostic techniques:

• Build diagnostic modalities into uniforms. This might include diagnosis of exposure to biochemical weapons in real time. Biochips could identify and confirm any agent whose molecular characterization is known.
• Build treatment modalities into uniforms. This might include (automatic) inoculations following exposure to pathogens in the environment. It might also include pressure applications (tourniquets) following detected blood loss.
• Remote medicine: the use of nanoscale technologies may be expected to increase the powers of remote treatment.
Some combination of neurological and nanoscale technology might also be able to block the consolidation of traumatic memories, selectively removing painful events. Remote treatment is already increasing in use and accuracy: for example, in 2001, surgeons in New York operated on a patient in France – entirely via robotics – to remove his gallbladder.
ETHICAL ISSUES IN MILITARY NANOSCALE RESEARCH AND TECHNOLOGIES

The ethical issues in nanoscale research with military applications can be broken out into three categories: those involved in the motives for the
research, the process of the research, and the social effects of the research.
Motives

In any ethical analysis, the logically first question is whether and to what extent researchers and the society that supports them are justified in initiating and carrying out a particular line of research. What justifies choosing this kind of research from all possible domains of research? In one sense, there is nothing inherently objectionable about the basic research that underlies the drive for nanoscale innovation. In some cases, this science is merely doing what biology, chemistry, and physics have always done: trying to understand mechanisms at the lowest level of causality and manipulating them for useful human purposes. Moreover, national defense and security are a core function of any government, and the military is one way to contribute to both. If this kind of research can contribute in meaningful ways to the military, then it is justified not only in itself but also in its effects. Researchers share with other citizens the burdens of national defense, including the costs of armed conflict, and to the extent their research can shoulder a portion of that burden, the efforts are morally justified.
Process of the Research

Once there is a clear and convincing rationale for a particular line of research, a variety of ethical questions arise related to the actual conduct of the research. How should military research involving nanoscale technologies and neuroscience proceed? At the present time, there is a structure for the review of research involving human beings (I am not concerned with animal research here), whose primary purpose is to identify and deter objectionable research, namely research that ends in harm to the subjects involved. This system of oversight has seen some controversies
in military research, when there is debate about whether or not a particular intervention amounts to an experiment requiring, for example, informed consent from the soldiers involved. There can be disagreement about whether a particular intervention is experimental or not. This situation is likely to continue as emerging technologies are introduced into the field, especially in combat theaters, where a great deal of novel medical technology is going to be experimental. In general, medicine in the military should observe as far as possible the moral standards important to civilian medicine. The core principles of military medical ethics do not obstruct research and application of nanoscale research in these areas, but they do create cautions that are important to observe for moral reasons. These include undue loss of control over body and psychic functions, undue exposure to risks of unproven technologies, exposure to risk of unknown health hazards, and 'dual agency,' namely the extension of technologies from an acceptable use to an objectionable purpose. It may be that some research might need to bend to military interests. For structural and strategic reasons, the military can sometimes subordinate the healthcare standards that apply in civilian life to its own needs. One interesting development that might emerge from nanoscale technology is that it offers a way to bypass torture and harsh interrogations. Of course, countries signing the Geneva Conventions have agreed to give up torture, but the threshold of torture is sometimes a matter of disagreement, with advocates defending harsh interrogations they say do not rise to the level of torture. In any case, this debate might be bypassed if military and intelligence agents had available to them technological mechanisms that could 'read' answers to questions directly from a subject's neurological system. This scenario presumes a fairly advanced level of science, of course, but the idea would be that this kind of 'interrogation' could be entirely harmless to a subject. The kind of technology
would also make involvement of physicians less problematic, because existing ethical advisories forbid physicians from participating in torture and harsh interrogations (AMA, 2.067). An 'interrogation' that consisted of implanting certain biochips might not be seen, by some, as ethically objectionable, thus opening the door for more direct physician involvement in the interrogation of captured hostile combatants. What is certain is that, for the foreseeable future, a great deal of the interventions described here will be experimental in nature, and will therefore raise important questions about what degree of oversight should be involved, what calculation of risk and benefit should be required before allowing an experiment to go forward, what kind of informed consent should be required, and what kind of monitoring for safety should be in place. These are some of the hardest and as yet unanswered questions involving military applications of nanoscale technologies. It may be that existing standards are insufficient to oversee some of these interventions, for example, neurological interventions that attempt to secure for military commanders some degree of control over their soldiers' emotional and cognitive states.
Social Effects

A great deal of U.S. discussion of military innovations involving nanoscale technologies suggests that the primary use of these technologies would be to the benefit of U.S. soldiers. But there is no way to know this in advance. It is unwise to assume that all technological developments in weaponry, uniforms, or healthcare will remain forever in the hands of the people who initially develop them. Indeed, there is likely to be strong demand for these technologies as soon as they are perceived as conferring a benefit on any one military force. In some cases, this demand will be strong indeed, driven as it will be by nationalism and other kinds of political fervor. One should not assume either that this demand will be confined to national militaries, since a great deal of armed conflict in the world today is carried out by subnational groups. Therefore, one key question will be to determine to what extent this research and its associated technologies should be available – if at all – outside the original research community. How should these technologies be controlled, if they can be controlled at all? The military frequently classifies information, and that might be one way to achieve some degree of control, but it may be that certain technologies cannot be hidden forever, especially if they are recovered by hostile forces during armed conflicts. In 2002, the World Medical Association adopted a professional advisory that puts it in opposition to all armed conflict. The merits of this advisory are a matter of debate, especially because it draws no distinction between defensive and offensive military operations. That issue aside, what is unresolved by that advisory is the extent to which participation in research that supports armed conflict is also objectionable. Some commentators do object to the participation of healthcare personnel in raising and sustaining armies, but these objections are overbroad.
CONCLUSION

Thanks to the widely influential model of the Human Genome Project, a great deal of contemporary research tries to pay attention to the ethical, legal, and social implications of research as it unfolds, especially research expected to have profoundly transformative effects in one way or another. In ethics as elsewhere in life, however, it is unwise to put the cart too far before the horse. There surely are reasons to try to anticipate possible objectionable effects of research and blunt them through modified law and policy. However, in some instances, it may be difficult to foresee the effects that a research field may actually have. For this reason, one should be cautious about
expecting that every effect of nanoscale research and technologies can be predicted and coped with beforehand: some effects will become evident only over time and some expected outcomes may never materialize. Nevertheless, it is clear from what we already know that nanoscale research may offer significant new tools to the military, and it is important to bring those tools into line with existing norms or to develop new norms to cover dimensions of the technologies that existing norms leave unaddressed. A good deal of the discussion of the military implications of nanoscale research – better armor, better information, better medical tests and treatments – is couched in the language of protecting military personnel, about which there can be little argument. Nevertheless, the cumulative effect of these tools across the globe may incite military conflict rather than tame it. Certainly, the simple possession of these better tools will not guarantee sound strategic decisions about the use of troops and weapons. In fact, a nation or any armed group that is overconfident of its powers is in a position to blunder into decisions that elicit armed conflict. It is unwise to foresee a global future in which only The Good Guys have access to these tools, and The Bad Guys never do. Military theorists and people concerned with the ethics of weapons research would do well to expect that at some point or another, a substantial number of nations and subnational political groups may gain access to the very systems emerging from nanoscale research. Moreover, one should not assume that these innovations will remain only and forever in the hands of benevolent nations who use them only in morally justified armed conflicts, and then only with the degree of force necessary to secure a benevolent objective. At the close of World War II, the handful of nations with nuclear weapons expected to keep them within that closed circle. Political interests around the world nevertheless worked to expand the number
of nations with nuclear weapons, in ways that are profoundly destabilizing to this very day. One way to frame ethical questions when initiating lines of research having significant military potential, then, is to ask: what would happen if this research became common property? The answer to this question might force countries to reconsider whether or not they will engage in the research at all. More likely, however, these countries will try to ensure that any research that poses more risk to international security than benefit remains contained. Whether or not that approach can succeed, it is certainly worth paying attention to.
REFERENCES

Altmann, J. (2006). Military nanotechnology: Potential applications and preventive arms control. New York: Routledge.

Altmann, J., & Gubrud, M. (2004). Anticipating military nanotechnology. IEEE Technology and Society Magazine, (Winter), 33-41.

Beam, T. A., & Howe, E. G. (2003). A look toward the future. In T. Beam & L. Sparacino (Eds.), Military medical ethics (Vol. 2, pp. 831-850). Falls Church, VA: Office of the Surgeon General.

Gross, M. L. (2006). Bioethics and armed conflict: Moral dilemmas of medicine and war. Cambridge, MA: The MIT Press.

Jablonski, N. (2006). Skin: A natural history. Berkeley: University of California Press.

Military uses of nanotechnology [summary of a UN committee report]. Retrieved from http://ubertv/envisioning/clippings/2005/03/005882.html

Moreno, J. D. (2006). The role of brain research in national defense. Chronicle of Higher Education Review, November 10, B6-B7.
Moreno, J. D. (2000). Undue risk: Secret state experiments on humans. New York: Freeman & Co.

Web site of the Massachusetts Institute of Technology Institute for Soldier Nanotechnologies: www.mit.edu/isn
KEY TERMS

Nanoscale Technology: An area of nanotechnology in which standard-size tools are used to create simple structures and devices at the nanometer scale.
Medical Diagnostics: Methods for identifying a medical condition or disease.

Military Necessity: A legal notion used in international humanitarian law to govern decisions about the use of military power.

Uniforms: A set of standard clothing worn by members of an organization or group.

Weapons: Techniques or tools used to defeat an opponent or to defend against one.
Chapter XII
Healthcare Ethics in the Information Age Keith Bauer Marquette University, USA
ABSTRACT

This chapter reviews key debates about the meaning of telehealth and considers how new and emerging telehealth systems work to protect patient confidentiality, improve healthcare relationships, and diminish instances of compromised access and equity in the healthcare system. It also looks at how these same telehealth systems could undermine those goals, making it important to assess the way in which these emerging technologies are implemented. Various technologies are examined to show how their implementation can ensure that their benefits outweigh their risks.
INTRODUCTION

The growing use of information and communication technology (ICT) is producing widespread changes in society. One area in particular that is quickly being transformed by ICT is the field of healthcare. This is evident in the relatively new field of telehealth, which utilizes the Internet, electronic patient record systems, and hand-held computers, among other types of ICT. Telehealth has great potential to improve the quality and provision of healthcare services, but there are a number of subtle ethical issues that should be considered as society moves forward with its use.
The aim of this chapter is, therefore, to provide an ethical assessment of telehealth. The specific questions this chapter addresses are as follows:

1. What are the distributive justice implications of telehealth? Will medically underserved populations gain greater access to healthcare services? If so, what sorts of tradeoffs, if any, between access and quality will be required?
2. What are the implications of telehealth for provider-patient relationships? For example, will an increase in the quantity of provider-patient interactions lead to a corresponding increase or reduction in the quality of those interactions?
3. What are the implications of telehealth for medical privacy and patient confidentiality?
4. What are the future trends in telehealth and how will they affect patient care and the healthcare system in general?
BACKGROUND

In order to understand what telehealth is, it is necessary to understand its history and its meanings. The literal meaning of the word telehealth is health from a distance. Combining the word health with the Greek prefix tele, which means far off or at a distance, produces this definition. We see similar combinations in the words telephone, which literally means sound from a distance, and telegraph, which literally means writing from a distance. Various definitions of telehealth are currently in circulation within the healthcare community. One common view of telehealth makes it synonymous with two-way audio-video systems that allow for interactive consults between patients and healthcare professionals. However, other definitions are equally common and may include the use of ICTs (e.g., computers) that capture, store, manipulate, and display medical data but not include the use of interactive communications between patients and healthcare providers. Consequently, a fax machine used to transmit patient medical information or the telemonitoring of a cardiac patient would not count as telehealth under the first definition but would under the second definition (Denton, 1993; Preston, 1994). Although no universally accepted definition of telehealth exists, there is agreement that any definition of it must include at least three elements: (1) the use of ICT, (2) geographic distance between the participants, and (3) health or medical uses. On the basis of these three characteristics, the Institute of
Medicine (IOM) defines telehealth/telemedicine in the following manner:

Telemedicine [telehealth] is the use of telecommunications and information technologies to share and to maintain patient health information and to provide clinical care and health education to patients and professionals when distance separates the participants. (Field, 1996, p. 27)

The IOM's definition can be made more specific, depending on whether (a) emphasis is given to a particular technology (e.g., video conferencing or the Internet), (b) a distinction is made between clinical and non-clinical applications, and (c) telehealth is conceived of as an integrated system of healthcare delivery rather than a mere collection of electronic tools. Non-clinical applications of telehealth typically include professional education, healthcare administrative duties, management meetings, research, and the aggregation of health data, but usually exclude medical treatments and decisions for specific patients. Clinical applications of telehealth involve patient care and include medical decisions, diagnostics, and treatments for particular patients. This distinction, however, can be difficult to maintain because some ICT allow for the convergence of non-clinical and clinical activities, for example, when e-mail communications between patients and providers are automatically stored in a computerized record system. In addition, there are a number of ways in which clinical telehealth can be subdivided. One way is to classify clinical applications by the point of service or the patient's location, for example, rural, correctional, and home. Another classificatory scheme common to clinical telehealth is to organize services by specialization, for example, telepsychiatry and telepathology (Peredina & Allen, 1995; Peredina & Brown, 1995). A third approach is simply to categorize telehealth services in terms of present and future healthcare reimbursement policies, for example,
emergency care, follow-up care, consultation, and the transmission of diagnostic images (Grigsby & Kaehny, 1993). The IOM's tripartite definition of telehealth (geography, ICT, and medicine) can be expanded upon when telehealth is conceived as a system of healthcare rather than as the use of a particular ICT in a healthcare setting. According to one view, a telehealth or telemedicine system can be defined as follows:

A telemedicine system is an integrated, typically regional, health care network offering comprehensive health service to a defined population through the use of telecommunications and computer technology. (Bashshur & Sanders, 1997, p. 9)

When telehealth is defined as a system of healthcare, the distributive and integrative strengths of ICT to form a seamless healthcare network are highlighted. This definition, because it highlights the systemic nature of telehealth, also helps to illuminate many of its social and ethical aspects not readily visible in other definitions.
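The contrast between the two working definitions discussed in this section can also be stated schematically. The short sketch below is an illustrative editorial addition, not part of the original analysis; the predicate names and the encoding of the fax-machine example as boolean features are assumptions made purely for illustration.

# Two working definitions of telehealth, modeled as simple predicates.
# An activity is described by the three elements noted above: use of ICT,
# geographic distance between participants, and a health or medical purpose.

def telehealth_narrow(uses_ict, distant, medical, interactive_av):
    # Narrow definition: only interactive two-way audio-video consults count.
    return uses_ict and distant and medical and interactive_av

def telehealth_broad(uses_ict, distant, medical, interactive_av):
    # Broad definition: any ICT that captures, stores, or transmits medical
    # data at a distance counts, whether or not it is interactive.
    return uses_ict and distant and medical

# The fax-machine example: ICT, distance, and a medical purpose,
# but no interactive audio-video link.
fax = dict(uses_ict=True, distant=True, medical=True, interactive_av=False)
print(telehealth_narrow(**fax))  # False: not telehealth on the first definition
print(telehealth_broad(**fax))   # True: telehealth on the second definition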
KEY ETHICAL ISSUES

Most evaluations of telehealth have not centered on its ethical dimensions. In what follows, I explore three key ethical concerns surrounding the growth of telehealth. I begin with an examination of the potential distributive justice ramifications of telehealth.
Distributive Justice

Much of the debate over healthcare justice in the United States remains focused on the lack of access to healthcare services and inadequate health insurance coverage. The reason for this is that there are approximately 42 million persons who lack health insurance coverage (Schroeder, 2001). At this time, there are about 700,000 physicians
practicing in the United States, which means there are approximately 275 physicians for every 100,000 persons. However, even with this doctor-to-patient ratio, many citizens still lack access to adequate healthcare services (Marwick, 2000). From a public health perspective, this is a serious problem. For, as numerous empirical studies have demonstrated, there is a strong correlation between positive health outcomes and access to healthcare services (Davis, 1991). But the problem is even more serious for health profession shortage areas, as they tend to have higher percentages of poverty, elderly people, people lacking health insurance coverage, and people with chronic diseases. From a distributive justice perspective, this means that one of the least fortunate and most medically needy populations in the United States faces the greatest burdens in gaining access to healthcare services. As a way of ameliorating these healthcare disparities, telemedical initiatives are being devised to meet the healthcare needs of underserved populations. The good news is that telehealth services have the potential to produce a more equitable or fair distribution of healthcare resources. The bad news is that the Internet and other telecommunication technologies are not yet universally available, which in turn raises an entirely different but related set of distributive justice concerns about the digital divide—the unequal distribution of ICT. However, before looking at the distributive justice pros and cons of telehealth, it will be useful to be more precise about what distributive justice in healthcare means. The general answer is that it pertains to questions about access and the fair allocation of healthcare benefits and burdens among populations. More specifically, distributive justice in healthcare requires the application of fair standards that make quality healthcare available and accessible to persons in an efficient manner (President's Commission, 1982). It is these five elements—fairness, quality, availability, accessibility, and efficiency—that are typically involved when distributive justice in healthcare is at stake.
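As a back-of-the-envelope check, the two figures just cited are linked by simple arithmetic. The sketch below is an illustrative editorial addition; the "implied population" it computes is simply the population at which 700,000 physicians would yield a ratio of 275 per 100,000, not a figure taken from the chapter.

# Figures cited in the text: about 700,000 practicing physicians and
# roughly 275 physicians per 100,000 persons. The two are linked by:
#   ratio = physicians / (population / 100,000)
physicians = 700_000
ratio_per_100k = 275
implied_population = physicians / ratio_per_100k * 100_000
print(round(implied_population))  # about 254,545,000 persons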
But what do these elements of distributive justice mean and how should we understand them in the context of telehealth? A healthcare system is typically considered fair when (1) persons are not denied healthcare services on the basis of prima facie morally irrelevant criteria, such as class, race, and gender; and when (2) persons can secure an adequate level of care without excessive burdens (IOM, 2001). Because geography can create burdens for medically needy populations, geographic location, the place where one is born and lives, should not be used as a criterion for deciding who gets healthcare services. Rather, medical need should be the decisive factor. Next, quality in the distribution of healthcare means not only that services are delivered with an eye towards avoiding errors, but also that they are provided in a competent, compassionate, and respectful manner (IOM, 2001). The problem for many telehealth services at this time is that it is not fully known under what clinical conditions ICT should be employed and when, if at all, they should supplement or replace face-to-face interactions between patients and providers, for example, online psychiatry with severely depressed or suicidal patients. Consequently, standards of care and questions of whether telehealth improves or decreases the quality of healthcare have to be better understood. The importance of availability and accessibility for distributive justice is that persons should receive services when they need them without undue burden. In some cases, services may be available within a geographical region but remain inaccessible because of inadequate transportation. The Internet, along with advances in digital technology, for example, now makes it possible to deliver some healthcare services to persons in underserved areas who would otherwise need to travel long distances. Telehealth technology, especially when located in the home, can make it easier to provide healthcare information and services over long distances and minimize the physical and financial
burdens and lost time of patients who must travel long distances to meet with healthcare providers (Ostbye & Hurlen, 1997; Bauer, 2001). Finally, efficiency is also an important variable in the distributive justice equation for telehealth. Because of limited healthcare resources and the high demand for them, the healthcare system should minimize inefficiencies such as duplicated services and errors to the greatest extent possible. For example, rather than replicating expensive services and technologies at different sites, a telehealth network could electronically link multiple sites to a centralized healthcare center where patients' records are stored. Thus, efficiency, along with fairness, quality, accessibility, and availability, are the core elements of a general conception of distributive justice in healthcare and important to an ethical assessment of telehealth. If healthcare justice is to be a reality, no one of the aforementioned elements should be pursued to the exclusion of the others. A problem with this goal is that the five elements of distributive justice frequently come into conflict with each other. For example, improvements in the quality of some services may require placing limitations on the availability of other services because they lead to an unacceptable increase in the aggregate cost of healthcare and produce an unfair allocation of limited healthcare dollars. Also, ICT is likely to increase access to medical services, but the quality and confidentiality of those same services might not meet current standards of care established for traditional face-to-face medical encounters. Nevertheless, given the option of no healthcare services, telehealth, even if of lower quality, may be preferable. As more healthcare information goes online and as more telemedical services are made available to the public, the digital divide (i.e., the unavailability or inaccessibility of ICT) should also be viewed as a healthcare justice problem. The digital divide is relevant because persons who lack access to information technology or the skills needed to operate the same technology
may have greater burdens obtaining telemedical services compared to persons who have training and access to the Internet and computers (AMIA, 1997). Therefore, those who are the least well off and have the greatest medical needs—disabled elderly persons who live in health profession shortage areas—will have greater burdens in obtaining online health information and telehealth services that rely on the Internet. Until this gap in digital services is filled, telehealth services will remain limited for this medically needy population (Borberg, 1995). But digital inclusion may not be the panacea that it appears to be for telehealth initiatives. If, as discussed above, digital inclusion becomes a reality in the near future for health profession shortage areas, there is the possibility that electronically mediated healthcare services will be of a lower quality compared to face-to-face healthcare services. If so, we will need to answer some important questions: When, if at all, should telehealth services either replace or supplement in-person services? What sorts of tradeoffs between access and quality are ethically acceptable, and who gets to decide? Or, on the other hand, instead of pushing for digital inclusion and the establishment of telehealth systems, perhaps a just healthcare system should require that more effort and financial resources be devoted to enticing physicians and other healthcare professionals to practice in medically underserved regions.
Provider-Patient Relationships

Before looking at how telehealth is reshaping the provider-patient relationship, it is important to first identify the core goals and values that should be used to evaluate this relationship. For the purpose of this chapter, there are at least three major and overlapping healthcare goals and values that are central to an ethical evaluation of provider-patient relationships. These are (1) to achieve the best quality of patient care, (2) to balance the art of healthcare with the science of healthcare, and (3)
to balance patient autonomy with professional autonomy (Hanson & Callahan, 1999; Kaplan, 2000). As will be discussed subsequently, various telehealth services may advance or retard any one of the three aforementioned healthcare goals and values. First, quality of patient care refers to the ability of a new medical technology to improve patient care and health outcomes. Sometimes, however, acceptance of a new medical technology by providers has more to do with their belief in it than with whether it can be demonstrated to improve the quality of patient care. For some, telehealth is a threat to the provider-patient relationship; to others, it is no threat at all. The reality is that until telehealth services have been adequately evaluated, we are left with competing speculations. Nevertheless, the belief that a new medical technology either harms or benefits the quality of patient care and the provider-patient relationship will play a large role in whether telehealth is accepted. Second, healthcare as both an art and science has an extensive history and is closely connected to the quality of patient care. The science of healthcare refers to standardized clinical practice guidelines, automated procedures, scientific evidence, and the employment of medical technology. As is the case with science generally, the science of healthcare is always changing as new discoveries are made and better techniques emerge. The art of healthcare refers to the individual clinical judgements and intuitions of healthcare providers. The art of healthcare also refers to the emotional dimension of the provider-patient relationship. When the art of healthcare is practiced well, providers are able to genuinely feel and express empathy and compassion for their sick and vulnerable patients. Unlike the science of healthcare, the art of healthcare encompasses what is likely a universal and unchanging aspect of the human condition—the experience of being ill, being vulnerable, being dependent, and being healed. The art of healthcare requires a deep
moral sensitivity to the experience of illness. Concerning this experience, Edmund Pellegrino states the following: It [illness] is only in part defined medically as a concrete organic or psychological aberration. It is the perception of the change in existential states that forms the central experience of illness---the perception of impairment and the need to be made whole again---to be cured, healed, or cared for. (Pellegrino, 1981, p. 71). If Pellegrino is correct, then both the art and science of healthcare are desirable and necessary for the provision of technically sound and ethically appropriate healthcare services. Since Hippocrates’ day, however, there have been tensions between the science and art of healthcare. According to Pellegrino and Thomasma, modern medicine is characterized by an imbalance in which technology and the science of healthcare dominate the provider-patient relationship: The temptation to employ technology rather than to give oneself as a person in the process of healing is a “technological fix.” The technological fix is much easier to conceptualize and to implement than the more difficult process of a truly human engagement. The training and the skills of modern health professionals overwhelmingly foster the use of technological fixes. (Pellegrino & Thomasma, 1993, p. 124). At this time, some applications of telehealth have proven to be easier and cheaper. Although the verdict is out on whether telehealth is simply an instance of modern man’s proclivity for easy technical solutions to complex human problems, Pellegrino and Thomasma think, I believe correctly, that an overemphasis on technology and technical competence at the expense of compassion in medical education gives us good reasons to be concerned.
Autonomy is the final healthcare value basic to the acceptance of new medical technology and the provider-patient relationship. Modern healthcare, with its use of ICT, clinical practice guidelines, and research protocols, tends to give more weight to the science of healthcare and less weight to the expertise and judgments of individual providers. This is already the case in most healthcare settings where teams of providers, rather than individual providers, are more likely to care for a single patient. As such, the professional autonomy of individual providers is diluted because unilateral decision-making has given way to consensus building and shared decision-making. The rise of telehealth has intensified this trend. According to Douglas Walton: The Hippocratic treatises are quite right to cite excellence of craftsmanship as a central ethic of competence for medical treatment. But in modern terms this competence must be understood to entail a sharing of scientific knowledge. Hence a corporate and institutional notion of technology as the coordination of a team effort is necessary. It is futile to try to go back altogether to the model of the caring family doctor as the bringer of treatment. (Walton, 1985, p.60). Moreover, as telehealth evolves, it is likely that patients will take on more responsibilities for administering and regulating their own healthcare, thereby, further limiting the role providers have in direct patient care. As more patients self care with the aid of telehealth technology, providers will not only work in teams, they will work in virtual teams that are geographically and temporally decentralized, lacking, in many instances, any face-to-face interactions with their patients and colleagues. Consequently, more effort at coordinating patient care will need to be made. This, in turn, is likely to increase the responsibilities and autonomy of patients, but place new restrictions on the professional autonomy of providers.
Changes in the provider-patient relationship are not new. A myriad of social, economic, and technological forces have continuously reshaped the provider-patient relationship since the dawn of medicine. Until relatively recently in medicine’s history, the provider-patient relationship has been characterized by a substantial imbalance of power between patients and healthcare providers. Within the traditional provider-patient relationship, providers, especially doctors, have had more control and authority than their patients have. In simple terms, providers have had a dominant and active role while patients have had a subordinate and passive role in healthcare decision-making. This unequal distribution of power is predominantly a consequence of the medical expertise that providers have but patients lack, yet need in order to get well. Of course, imbalances in medical knowledge and power still exist within contemporary provider-patient relationships, but they are considerably less pronounced than they once were due to ICT. For some theorists, the gradual realignment of power within the traditional paternal provider-patient relationship is, and continues to be, the result of modernity, which consists of convergent social, economic, and technological forces associated with the process of industrialization. Cockerham has the following to say: Modernity promotes social relations that span the globe, moves social life away from traditional practices, and features the progressive use of knowledge to organize and transform society. In this context, medical science becomes increasingly accessible to laypersons. This situation, along with the desire of modern individuals to be in control of their lives, points towards a modification in the physician-patient [provider-patient] relationship in the direction of greater equality between the two parties. (Cockerham, 1993, p. 48). Cockerham does not explicitly mention telehealth, but the expanded use of ICT in healthcare
can also be seen as an elaboration of modernity. Telehealth, as a manifestation of modernity, raises a variety of ethical concerns. However, concerns about medical technology are not new. In fact, alarm over the increasing use of technology within patient-provider interactions also has a long history that includes the introduction of low-tech medical instruments such as the now commonly used stethoscope. When it was introduced, many physicians considered this device controversial because they believed it dehumanized the provider-patient relationship by putting physical distance between providers and patients. On this point, Evans has this to say: Many chroniclers claim that high-tech medicine has evolved at the expense of the doctor-patient relationship, that machines have created a cold and impersonal chasm between the healer and the patient. In their minds the doctor has become a mere technician, a “body mechanic,” who can treat disease but not the person. (Evans, 1993, p. 82). “High-tech” healthcare, however, does not have to be synonymous with an impersonal provider-patient relationship; it can also be associated with a personal and “high-touch” provider-patient relationship. This is possible because we need not accept, on the one hand, the view that medical technology is singularly responsible (i.e., a technological fix) for the changes, good or bad, which have occurred within the provider-patient relationship. Nor need we accept the view, on the other hand, that the consequences of medical technology for the provider-patient relationship are tantamount to an endless stream of interpretations and rootless meanings. Rather, it is possible to adopt a middle course in which the meanings assigned to a new medical technology are as important as the technology itself in altering the provider-patient relationship. From this standpoint, technology, culture, institutional contexts, and the values
and goals held by providers and patients all play a significant role in either the adoption or rejection of a new medical device. This was the case in the changed attitudes toward the stethoscope, which is now taken for granted as a basic and reliable medical tool. Thus, it would be premature to conclude that the high-tech of telehealth is inherently incompatible with a compassionate or high-touch provider-patient relationship (Turkle, 1984; Schement & Curtis, 1995). Although modernity and the introduction of new medical technologies have had the overall effect of reducing the authority of providers, they have also given providers greater power in the relationship in other respects. First, unlike the vast majority of their patients, providers understand how sophisticated medical tools function. Second, many medical devices have modified provider-patient interactions by facilitating the creation of an objective scientific nosology (i.e., classificatory scheme) that allows providers to diagnose a patient’s disease independent of the patient’s subjective reports. Third, because of an objective nosology and the relatively simple and universal manner in which medical devices are used, doctors are now able to delegate time-consuming activities to nurses and support staff, making them less directly involved in basic patient care. According to Evans: With medical instruments, doctors [healthcare providers] could subject patients and their symptoms to objective scrutiny. As doctors gained more data from instruments, the quality of the information related by the patient seemed less important. Doctor and patient shared less knowledge; there was less common ground between them. A medical instrument acted as a lens through which the doctor could see the disease unfiltered by the patient’s interpretations. Instruments thus altered the doctor-patient relationship, making the patient’s experience of illness less important. (Evans, 1993, p. 90).
Concerns about the impact of ICT on provider-patient relationships also have a long history. As early as the 1880s, some physicians were lamenting the use of the telephone as a means of communicating with their patients. They were concerned that geographical distance and the lack of a hands-on approach with patients would undermine their ability to care for their patients. Before the telephone, the telegraph was subjected to similar criticism. Now, with the advent of modern-day telehealth, the same worries have emerged. Like their predecessors, some contemporary healthcare providers and patients are apprehensive about the possible consequences ICT will have on the balance between the art and science of healthcare, professional autonomy, and the quality of patient care (Sanders & Bashshur, 1995; Wooton & Darkins, 1997). First, some speculate that as telehealth services become more commonplace, providers will be less adept at understanding their patients’ experiences of living with and dying from disease. One particular concern is that physical separation and electronically mediated communication may make the establishment of emotional connections between patients and healthcare providers more difficult. On the patient side, confidence, trust and dependence on providers may be diminished as patients increasingly obtain their medical information from websites, receive emotional support from on-line support groups, and electronically communicate with their providers by means of e-mail and interactive video (Bero & Jadad, 1997; Eng & Gustafson, 1999). If this happens, there is concern that telehealth may deleteriously affect the quality of patient care. In support of these worries, a number of studies using randomized controlled trials have demonstrated that the quality of clinical communication is related to positive health outcomes. In other words, the more cumbersome and awkward the provider-patient communication, the less likely patients are to get well (Kaplan & Greenfield, 1989).
Second, although some telehealth applications have the capacity to enhance patient autonomy and well-being, they also have the capacity to undermine patient autonomy and well-being, especially when, for example, telemedical tools are limited to automated telemetry-capable medical devices and computerized patient records (Beasly & Graber, 1984; Howe, 2001). The reason for this is that providers will have little or no physical contact with their patients, interacting only with abstract patient data sets that have been transmitted through electronic networks and stored as computerized patient records. On this point, Georg Marckmann argues that under these conditions healthcare providers may (1) fail to include patients in decision-making about the patient’s care and (2) inadvertently dehumanize their patients. He writes the following: Without the physical presence of the patient there will be an increasing probability of unilateral decisions by physicians, thus conflicting with the ideal of a shared decision-making between physician and patient. (Marckmann, 1999, p. 60). And later: If the personal consultation of specialists is replaced by teleconsultations, there will be an increasing risk that not the individual patient but just the digital data set---the gnostic analogue of the patient---becomes the object of diagnosis and treatment. Electronic patient records must be considered as a highly abstract, possibly erroneous “artifact” which should not get a life of its own: not the data set but the patient needs treatment. (Marckmann, 1999, p. 60). If Marckmann is correct, then telehealth may modify the level of interconnectedness that exists between patients and providers and, thereby, detrimentally transform the provider-patient relationship. In brief, the concept of interconnectedness refers to the effects of ICT on social relationships.
Interconnectedness at the individual level has particular relevance to provider-patient relationships that take place within telehealth: At the micro level, individuals experience interconnectedness as a change in the nature of their social relationships. For most people, this means an increase in the number of relationships, but a decrease in their depth. That is, we are in regular, if not frequent, contact with more people, but we don’t know many of them very well. (Schement & Curtis, 1995, p. 47). Similar notions about the quality of social relationships are expressed in the theory of social presence. According to this theory, social presence is the feeling one has that other persons are involved in a communication exchange. The degree of social presence in an interaction is hypothesized to be determined by the communication medium: the fewer channels or codes available within a medium, the less attention the user will give to the presence of other social participants. As social presence declines, messages become more impersonal and task oriented (Walther, 1995). Assuming that the concept of interconnectedness and the theory of social presence are accurate, the primary goal of many electronically mediated relationships may turn out to be neither the person nor the relationship, but the information. Of course, the accurate and timely exchange of information between patients and providers has great benefits in the diagnosing and treatment of patients and in the cost-effective management of healthcare organizations. Moreover, easy access to health information and healthcare workers via telehealth technology may enhance the autonomy of patients, reduce their anxiety, and provide for an overall better quality of life for them. However, an effective and ethically appropriate provider-patient relationship will most likely require more than the efficient accumulation of patient data; it will also require a patient-centered relationship
infused with empathetic communication and an awareness of the patient’s existential state in the midst of illness.
Privacy and Confidentiality

Defining privacy and privacy-related concepts such as confidentiality is not a simple task, as there is no universally accepted definition, theory, or justification for privacy within the philosophical, legal, and public policy literature. Because of this lack of agreement on the scope of privacy, important privacy issues within telehealth can be difficult to identify and analyze, if not entirely overlooked. In what follows, distinctions among physical privacy, informational privacy, and confidentiality will be made and their relevance to telehealth discussed. Physical privacy generally refers to the restricted access that others have to our bodies, relationships, and living spaces. Physical privacy is ethically significant because it allows for intimacy, solitude, personal control, and peace of mind (Allen, 1995). Telehealth, especially when used in the homes of patients, is significant because it has the potential to reduce the number of unwanted in-person intrusions by healthcare workers. As teleconsultation and telemonitoring increasingly substitute for in-home visits, it may be possible for patients to gain more control over their homes, personal relationships, and daily schedules. On the other hand, these same patients may want to have more in-person visits than televisits, willingly sacrificing a measure of physical privacy for greater in-person social interaction. Whatever patients decide, the point is that telehealth services will give them options that do not widely exist today. Informational privacy refers to the confidentiality and security of identifiable patient health information and clinical data found in patient records and in communications among healthcare professionals and patients (Allen, 1995). Confidentiality is the protection of private
information, once it has been disclosed by a patient to a healthcare professional (e.g., during a medical examination or the taking of a medical history). In short, confidentiality requires patients to give up their informational privacy. Once the patient discloses medical information, it becomes confidential and is no longer private. Although there are exceptions to the maintenance of confidentiality, providers are legally and ethically prohibited from sharing patient information with others who are not directly involved in the patient’s care. Telehealth’s use of computerized patient records, electronic mail, medical websites, online support groups, and video conferencing tools creates new threats to, and opportunities for, the physical and informational privacy of patients. On one hand, patients can gain greater physical privacy; on the other hand, patients’ informational privacy may be at greater risk, especially when the security of socially stigmatizing health information is breached by hacking or the accidental transmission of patient information to unintended recipients. In such scenarios, patients may not only lose their informational privacy, they may also be subject to social ostracism, job discrimination, loss of insurance, and social control in the form of blackmail (Shea, 1994). Total informational and physical privacy is not realistic in telehealth or in healthcare generally. First, other goods like medical research and public health require that limits be placed on the privacy of health information. Second, in order to treat and cure their patients, healthcare professionals must sometimes compromise the informational and physical privacy of their patients. Healthcare professionals must be able to touch their patients and obtain information about the intimate details of their patients’ lifestyles and personal habits. Hence, patients must give up some informational and physical privacy to achieve the benefits of medical expertise. Depending on the site or point of care (e.g., hospitals, ambulatory clinics, and patients’ homes), patients will have more or less informational and physical privacy.
Unfortunately, much of the telehealth literature on privacy simply fails to distinguish between informational and physical privacy. Furthermore, even when distinctions between physical and informational privacy are acknowledged in the telehealth literature, the focus is more often than not on informational privacy and the confidentiality of identifiable health information (Field, 1996). When these distinctions are not recognized, many of the privacy issues of telehealth that should be considered will be overlooked. For example, electronic mail and video conferencing can enhance the physical privacy of patients by reducing the number of in-person visits from healthcare professionals. Yet these same patients may face increased risks to their informational privacy as their physiological data and electronic communications stream through standard phone lines and over wireless networks. In short, if the distinctions between physical and informational privacy are minimized, then ethically significant conflicts between these kinds of privacy, and the need for possible compromises, will be missed and remain unarticulated in policies, laws, and procedures affecting telehealth services.
FUTURE TRENDS

Before concluding, I want to discuss two future trends in the evolution of telehealth: the creation and use of smart homes and implantable biosensor technology. Smart homes refer to the use of ICT to augment the range of services that homes and other buildings can provide for their occupants, for example, using computers to turn lights on and off without human assistance (Bauer, 2007). More advanced forms of ambient intelligence and ubiquitous computing technology can even monitor the time, frequency, and variety of a person’s activities, including, for example, how often a person is waking up and walking, using the toilet, or opening his medicine cabinet to
take medication. Moreover, software is presently being tested that analyzes the various activities detected by sensors embedded in patients’ homes. This information is then either used to assist the inhabitants of the house directly or passed on to others, for example, relatives and healthcare providers. The benefit of smart home technology is that it gives patients greater physical privacy and simultaneously allows providers to obtain real-time and comprehensive information about their patients’ activities and living environments without being physically present, and to do so in a non-intrusive manner. Moreover, even though smart home technology is ubiquitous in most cases, it is invisible by being architecturally integrated into patients’ homes. One type of smart home technology gaining use with dementia patients is the object recognition system, which tracks certain objects when they are put down. The system works by taking a picture of any object; then, using cameras placed throughout the house, it searches for that object. A patient uses the system by asking the computer where a specific object, for example, a pair of reading glasses, is located, and the computer reports where the requested object is in the home. Another smart home technology being developed is the health detector. Like the recognition system above, this system is also made up of multiple cameras. These cameras, however, regularly take pictures of the patient’s face and body and compare those pictures to others taken previously. The aim of the health detector is to identify any changes in physical appearance that may indicate a decline in function or, for example, the presence of skin cancer or weight loss. As this system is part of a larger telehealth network, the collected data can be simultaneously transmitted to the patient’s healthcare providers to be analyzed (Coye, 2007).
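To make the health detector’s mode of operation concrete, the following is a minimal illustrative sketch in Python. It is not drawn from Coye (2007) or from any actual telehealth product: the comparison metric, the alert threshold, and all function names are assumptions made purely for the purpose of the example.

# Hypothetical sketch of a smart-home "health detector": compare a new
# snapshot of the patient against a stored baseline image and flag large
# changes in appearance for provider review. The metric, threshold, and
# names are illustrative assumptions, not a real telehealth API.

def mean_abs_difference(baseline, snapshot):
    # Average per-pixel difference between two equal-sized grayscale
    # images, each given here as a flat list of 0-255 intensity values.
    assert len(baseline) == len(snapshot)
    return sum(abs(b - s) for b, s in zip(baseline, snapshot)) / len(baseline)

def assess_snapshot(baseline, snapshot, threshold=30.0):
    # Return a finding to transmit to providers; only large changes in
    # appearance (score above the illustrative threshold) are flagged.
    score = mean_abs_difference(baseline, snapshot)
    flag = "appearance-change" if score > threshold else "no-change"
    return {"flag": flag, "score": round(score, 1)}

# Toy 4-pixel "images": a stable reading, then a marked change.
baseline = [120, 118, 122, 119]
print(assess_snapshot(baseline, [121, 117, 123, 120]))  # no-change
print(assess_snapshot(baseline, [180, 175, 182, 178]))  # appearance-change

A real detector would compare rich image features rather than raw pixel values, but the pattern is the same: a stored baseline, a periodic comparison, and a finding transmitted to providers only when the change is large enough to matter.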
Although still in limited use at this time, implantable biosensors are increasingly being adopted (Bauer, 2007; Viseu, 2003). Unlike smart home ICT, these sensors go one step farther by embedding ICT directly into the patient’s body. First, in conjunction with smart homes, implantable biosensors are likely to facilitate independent living and a continuum of care. Second, increased use of implantable biosensors is likely to make healthcare more proactive and preventative rather than reactive and episodic. These trends in telehealth are likely to help move healthcare delivery from institutional settings to non-institutional settings such as the home, giving patients more autonomy and a greater role in managing their own healthcare. Two specific uses of implantable biosensors that are on the rise in telehealth are prosthetic and monitoring functions. First, neurotrophic brain implants are now being tested as mental prostheses to compensate for a loss of normal function in persons unable to speak, for example, because of stroke, spinal cord injuries, or ALS (McGee, 1999; Kennedy & Bakay, 1998). As recently as 2004, the Food and Drug Administration gave approval to begin systematic clinical trials to implant microchips in the brains of paralyzed patients (CNN.com, 2004). A neurotrophic brain implant works by implanting an electrode into the motor cortex of the patient’s brain. Neurons in the brain transmit electrical signals to the electrode, which, in turn, transmits the same signals to a receiver placed on the patient’s scalp. These recorded signals are then fed to a computer, where they serve as a substitute for a cursor or mouse. As patients learn to control the strength and pattern of the electrical impulses being produced in their brains, they are able to direct the cursor to a specific point on the screen as they wish. In doing so, patients are able to communicate and can even send email.
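The signal path just described can be caricatured in a few lines of code. The sketch below is deliberately schematic and should not be read as the method of Kennedy and Bakay (1998): real implants require extensive decoding of neural activity, and the two-channel layout, the gain parameter, and the function name here are assumptions made for illustration only.

# Schematic sketch of the implant-to-cursor path: recorded motor-cortex
# signal amplitudes are mapped to cursor movement. The channel layout
# and gain are illustrative assumptions, not a real decoding algorithm.

def signals_to_cursor_step(x_channel_uv, y_channel_uv, gain=0.5):
    # Map two recorded amplitudes (in microvolts) to a cursor step in
    # pixels; stronger firing moves the cursor farther.
    return (gain * x_channel_uv, gain * y_channel_uv)

cursor = [0.0, 0.0]
recorded = [(4.0, 0.0), (6.0, 2.0), (0.0, 8.0)]  # toy amplitude readings
for x_uv, y_uv in recorded:
    dx, dy = signals_to_cursor_step(x_uv, y_uv)
    cursor[0] += dx
    cursor[1] += dy
print(cursor)  # final cursor position after the three readings: [5.0, 5.0]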
Second, implantable biosensors are being used to monitor patients. For example, implantable cardiac biosensors that use wireless technology are being linked to sophisticated Internet-based monitoring networks that allow patients to transmit device and physiologic data to their providers without leaving their homes. Providers can remotely monitor the condition of their patients by logging into a secure website. Patients may also have access to the same website, where they can obtain health-related information and personalized device data. In some locations, providers can access patient data by means of a handheld computer or personal digital assistant (PDA) (DeVille, 2003). What makes smart homes as well as prosthetic and monitoring bioimplants revolutionary is that they have the potential to create a continuum of care that is seamless and more proactive. How will these future trends in telehealth achieve this goal? The general answer is that these technologies will better enable the integration of the patient’s body with its immediate environment and the larger healthcare community. First, since smart homes and implantable biosensors, like many other kinds of ICT, are interactive, they can help restore damaged or less than optimal person-environment interactions that are due to illness or environmental barriers (e.g., lack of transportation). Medicine has traditionally viewed the purpose of technology as a way to fix persons, not environments. The problem with this view is that it construes persons as being distinct from their environments and overlooks the essential reality of person-environment interaction. As implants, smart homes, and other telehealth services become more commonplace in the provision of healthcare, this traditional view will and should dissipate. Second, as smart homes and implantable biosensors more fully integrate patient bodies with their environments, patient care is likely to become mobile and migrate from institutional to non-institutional settings such as the home (Medical News Today, 2005). Home sensors in concert with implantable biosensors will likely exhibit a collective, synergistic intelligence that not only monitors, stores, and transmits biometric data to healthcare providers, but also allows patients to more easily regulate their home environments and to travel anywhere at any time with the peace of mind that they are under continuous
medical supervision. By giving patients more control over their environments and lifestyles, implantable biosensors and smart homes have the capacity to enhance the autonomy and well-being of patients. Third, much of the healthcare system today can be characterized as reactive and episodic, rather than proactive and preventative. As such, it is expensive and does a poor job of detecting medical conditions and preventing and responding to medical emergencies. Consequently, the present model of healthcare is less likely to maximize both the quality of patient care and patient health outcomes. In conjunction with external ICT, how might smart homes and implantable biosensors help us transition from a reactive to a preventative healthcare system? In answering this question, take, for example, the cardiac biosensors discussed earlier. These biosensors, which allow for the continuous real-time monitoring and transmission of a patient’s cardiac functions, can be coupled with desktop telehealth units and the Internet, which, in turn, can automatically alert an emergency call center in case of a cardiac event. Unlike a reactive and episodic approach that responds after a cardiac event is in progress, an automated telehealth system that incorporates implants and smart home technology is preventative because it can detect and report a cardiac event even before the patient knows what is happening. In doing so, not only are opportunities to prevent serious patient harms or death increased, but the costs of treating and managing cardiac patients are likely to decrease. In concrete terms, a proactive healthcare system that can prevent emergencies is one that is likely to be more efficient and to lead to better health outcomes for patients.
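The proactive pattern described in this paragraph can be illustrated with a minimal sketch in Python. The safe range, the function names, and the stand-in alert routine are all assumptions made for illustration; they are not drawn from any actual monitoring network.

# Minimal sketch of proactive cardiac monitoring: a home telehealth
# unit screens a continuous stream of heart-rate readings and alerts
# an emergency call center when a reading leaves an (illustrative)
# safe range, potentially before the patient notices any symptoms.

def notify_call_center(reading_bpm):
    # Stand-in for transmission over the telehealth network.
    print(f"ALERT: abnormal heart rate of {reading_bpm} bpm reported")

def screen_readings(readings_bpm, low=40, high=150):
    # Flag any reading outside the assumed safe range.
    for bpm in readings_bpm:
        if bpm < low or bpm > high:
            notify_call_center(bpm)

# Toy stream: normal sinus rhythm followed by a tachycardic event.
screen_readings([72, 75, 71, 168, 74])  # alerts once, on the 168 bpm reading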
CONCLUSION

Telehealth has its risks, but this author believes that the overall impact of telehealth is likely to
be positive for patients and healthcare providers alike. In closing, this chapter has explored how telehealth is substantially transforming our healthcare system, arguing that three key ethical issues should be examined as telehealth services are implemented: 1) distributive justice, 2) provider-patient relationships, and 3) privacy. This chapter also identified two overlapping and developing trends in telehealth—smart homes and implantable biosensors—that are likely to improve the continuum of patient care, facilitate independent living, and make the healthcare system less reactive and more proactive in the future.
REFERENCES

Allen, A. (1995). Privacy in healthcare. Encyclopedia of Bioethics (pp. 2064-2073). New York, NY: Simon & Schuster Macmillan.

AMIA (1997). A proposal to improve quality, increase efficiency, and expand access in the U.S. healthcare system. Journal of the American Medical Informatics Association, 4, 340-341.

Bashshur, R., & Sanders, J. (Eds.) (1997). Telemedicine: Theory and practice. Springfield, IL: Charles C. Thomas Publisher, LTD.

Bauer, K. (2001). Home-based telemedicine: A survey of ethical issues. Cambridge Quarterly of Healthcare Ethics, 10(2), 137-146.

Bauer, K. (2007). Wired patients: Implantable microchips and biosensors in patient care. Cambridge Quarterly of Healthcare Ethics, 16(3), 281-290.

Beasly, A., & Graber, G. (1984). The range of autonomy: Informed consent in medicine. Theoretical Medicine, 5, 31-41.

Bero, L., & Jadad, A. (1997). How consumers and policy makers can use systematic reviews for decision making. Annals of Internal Medicine, 127, 37-42.
Borberg, E. (1995). Development, acceptance, and use patterns of computer-based education and support systems for people living with AIDS/HIV infection. Computers in Human Behavior, 11(2), 289-311.

CNN.com. (2004). Brain implant devices approved for trials. Retrieved July 2, 2007, from http://www.webmd.com/stroke/news/20040415/Brain-Implants

Cockerham, W. (1993). The changing pattern of physician-patient interaction. In M. Clair & R. Allman (Eds.), Sociomedical perspectives on patient care (pp. 47-57). Lexington, KY: University Press of Kentucky.

Coye, M. (2007). Jogging into the sunset. Retrieved July 2, 2007, from http://www.healthtechcenter.org/Common_site/news/docs/Molly_MostWired112906.pdf

Davis, K. (1991). Inequality and access to health care. The Milbank Quarterly, 69(2), 253-273.

Denton, I. (1993). Telemedicine: A new paradigm. Healthcare Informatics, 10(11), 44-46, 48, 50.

DeVille, K. (2003). The ethical implications of handheld medical computers in medicine. Ethics & Health Care, 6(1), 1-4.

Eng, T., & Gustafson, D. (1999). Wired for health and well-being: The emergence of interactive health communication. Washington, DC: Science Panel on Interactive Communication, U.S. Department of Health and Human Services.

Evans, H. (1993). High tech vs. high touch: The impact of technology on patient care. In M. Clair & R. Allman (Eds.), Sociomedical perspectives on patient care (pp. 82-95). Lexington, KY: University Press of Kentucky.

Field, M. (Ed.) (1996). Telemedicine: A guide to assessing telecommunications in health care. Washington, DC: National Academy Press.

Grigsby, J., & Kaehny, M. (1993). Analysis of expansion of access through use of telemedicine and mobile health services. Denver, CO: University of Colorado Health Science Center.

Hanson, M., & Callahan, D. (Eds.). (1999). The goals of medicine: The forgotten issue in healthcare reform. Washington, DC: Georgetown University Press.

Howe, E. (2001). Should ethics consultants use telemedicine? A comment on Pronovost and Williams. The Journal of Clinical Ethics, 12(1), 73-79.

IOM (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.

Kaplan, B. (2000). Culture counts: How institutional values affect computer use. MD Computing, 17(1), 23-26.

Kaplan, S., & Greenfield, S. (1989). Assessing the effects of physician-patient interactions on the outcomes of chronic disease. Medical Care, 27, S100-S127.

Kennedy, P., & Bakay, R. (1998). Restoration of neural output from a paralyzed patient by direct brain connection. NeuroReport, 9(8), 1707-1711.

Marckmann, G. (1999). Telemedicine and ethics. Biomedical Ethics: Newsletter for the European Network for Biomedical Ethics, 4(2), 59-62.

Marwick, C. (2000). National health service corps faces reauthorization during risky time. Journal of the American Medical Association, 283(20), 2641-2642.

McGee, E. (1999). Implantable brain chips? Time for debate. Hastings Center Report, 29(1), 7-13.

Medical News Today (2005). Smart fabric to keep patients healthy. Medical News Today. Retrieved July 1, 2007, from http://www.medicalnewstoday.com/medicalnews.php?newsid=21338

Ostbye, T., & Hurlen, P. (1997). The electronic house call: Consequences of telemedicine consultation for physicians, patients, and society. Archives of Family Medicine, 6(3), 266-271.

Pellegrino, E. (1981). Being ill and being healed: Some reflections on the grounding of medical morality. Bulletin of the New York Academy of Medicine, 57(1), 70-79.

Pellegrino, E., & Thomasma, D. (1993). The virtues in medical practice. New York: Oxford University Press.

Peredina, D., & Allen, A. (1995). Telemedicine technology and clinical applications. The Journal of the American Medical Association, 273(6), 483-488.

Peredina, D., & Brown, N. (1995). Teledermatology: One application of telemedicine. Bulletin of the Medical Library Association, 83(1), 42-47.

President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. (1982). Making health care decisions: A report on the ethical and legal implications of informed consent in the patient-practitioner relationship. Washington, DC: U.S. Government Printing Office.

Preston, J. (1994). The telemedical handbook: Improving care with interactive video. Austin, TX: Telemedical Interactive Services, Inc.

Schement, J., & Curtis, T. (1995). Tendencies and tensions of the information age: The production and distribution of information in the United States. New Jersey: Transaction Publishers.

Schroeder, S. (2001). Prospects for expanding health insurance coverage. The New England Journal of Medicine, 344(11), 847-852.

Shea, S. (1994). Security versus access: Trade-offs are only part of the story. Journal of the American Medical Informatics Association, 1(4), 314-315.

Turkle, S. (1984). The second self: Computers and the human spirit. New York: Simon & Schuster, Inc.

Viseu, A. (2003). Simulation and augmentation: Issues of wearable computers. Ethics and Information Technology, 5, 17-26.

Walther, J. (1995). Relational aspects of computer-mediated communication: Experimental observations over time. Organization Science, 6(2), 186-203.

Walton, D. (1985). Physician-patient decision-making: A study in medical ethics. Westport, CT: Greenwood Press.

Wooton, R., & Darkins, A. (1997). Telemedicine and the doctor-patient relationship. Journal of the Royal College of Physicians, 31(6), 598-599.
KEY TERMS

Art of Healthcare: Individual clinical judgements and intuitions of healthcare providers.

Confidentiality: The protection of private information, once it has been disclosed by a patient to a healthcare professional (e.g., during a medical examination or taking of a medical history). Confidentiality requires patients to give up their informational privacy.

Distributive Justice: A sub-field of ethics that deals with questions about access and the fair allocation of healthcare benefits and burdens among populations. More specifically, distributive justice in healthcare requires the application of fair standards that make quality healthcare available and accessible to persons in an efficient manner.

Ethics: The descriptive and prescriptive study of what is right, wrong, good, and bad, and of what ought and ought not be done.
Implantable Biosensors: Sensors that are directly embedded into the human body to monitor vital signs and to provide prosthetic functions, often in concert with smart home technology and larger telehealth networks.

Informational Privacy: Refers to the security of identifiable patient health information and clinical data found in patient records and in communications among healthcare professionals and patients.

Interconnectedness and Social Presence: The quality and feeling of communication exchange with other persons, with or without ICT.

Modernity: The social, economic, and technological forces that have shaped the contemporary provider-patient relationship.

Physical Privacy: Refers to the restricted access that others have to our bodies, relationships, and living spaces.
Science of Healthcare: Standardized clinical practice guidelines, automated procedures, scientific evidence, and the employment of medical technology.

Smart Homes: The use of ICT to augment the range of services that homes can provide for their occupants without human assistance, for example, monitoring the time, frequency, and variety of a person’s activities, including how often a person is waking up and walking, using the toilet, or opening his medicine cabinet to take medication.

Telehealth/Telemedicine: The use of ICT to share and to maintain patient health information and to provide clinical care and health education to patients and professionals when distance separates the participants.

Technological Fix: The temptation to employ technology as a panacea rather than to give oneself as a person in the process of healing patients.
Chapter XIII
Ethical Theories and Computer Ethics

Matthew Charlesworth
The Jesuit Institute, South Africa

David Sewry
Rhodes University, South Africa
ABSTRACT

The development of cybernetics and digital computers prompted the need for a greater exploration of computer ethics. Information ethics, as described by Floridi and Sanders (2003), offers a conceptual basis for such an exploration. This chapter provides an historical perspective on the development of a foundation for the study of computer ethics. A brief explanation is provided of a number of ethical theories (Divine Command; Ethics of Conscience; Ethical Egoism; Ethics of Duty; Ethics of Respect; Ethics of Rights; Utilitarianism; Ethics of Justice; Virtue Ethics), followed by a number of perspectives on the development of computer ethics. The Innovative Approach proposed by Floridi et al. concludes the chapter.
INTRODUCTION

The origins of computer ethics can be traced to the 1940s, when cybernetics and digital computers were first developed. These developments prompted Wiener (1948) to recognise both the good and evil inherent in these artificial machines. Since then, attempts have progressively
been made to explore computer ethics from a variety of perspectives including that of computer ethics as not a real discipline, as a pedagogical methodology, as a unique discipline, as applied ethics, and as employing information ethics as the foundation of computer ethics. The increasing integration of information and communication technology (ICT) into society
has driven the need to understand and develop foundations for computer ethics. This chapter provides an historical perspective on the development of a foundation for the study of computer ethics. A simple case study (software piracy) is used throughout the chapter to illustrate points.
ETHICAL THEORIES IN BRIEF

Often we have to make decisions when all the facts cannot be known with certainty. In such cases we have no choice but to rely on the best information we have, and when we are not experts ourselves, this means deciding which experts to trust. (The Elements of Moral Philosophy, p. 9)

Lawrence Hinman, Director of the Values Institute and Professor of Philosophy at the University of San Diego, provides nine bases upon which moral or ethical decisions are made (Hinman, 2002, pp. 3-11).2
Divine Command Theories

Divine Command Theory is an ethical theory that states that to be good one must do what God commands. Teachings from the Bible, the Qur’an or other sacred texts are considered to present authoritatively that which leads to what is right. The problem of the Divine Command Theory is summed up in the Euthyphro Dilemma – in short, is it right because God commands it, or does God command it because it is right? With regard to the issue of piracy, one might say that in terms of the Judaeo-Christian commandment ‘thou shalt not steal’, piracy is proscribed.
The Ethics of Conscience

In this theory, what is right is defined by one’s ‘inner voice’. Whilst this can often have a religious source and operate out of a religious context, it may
also be founded solely on human nature. However, in both cases the conscience must be properly formed. In its negative dimension, conscience tells us what is not right and makes individuals feel guilty, facilitating the possibility of atonement. With regard to piracy, our conscience would compel us to feel guilty for doing something that is immoral, provided we recognise that piracy is illegal and a form of theft, and that we accept that violation of this illegality does not serve a higher good.
Ethical Egoism

In this theory, each person ought to do whatever will best promote his or her own interests. Ethical egoism is often argued to be self-defeating in that a society of egoists does worse for itself than a society of altruists (see, for example, the classical philosophical game, the Prisoner’s Dilemma, sketched below). Another fundamental objection is that it is inconsistent with the nature of trust and friendship that each party should be motivated solely by self-interest. With regard to piracy, an ethical egoist might pirate software because it would be in their own interests to acquire the software in the most expedient and efficient way (that is, without paying for it). However, it could be argued that, in the long term, should one be caught, the consequences of pirating software are not in an individual’s own interests; indeed, if pirating undermines the business, it may undermine the egoist’s own interest in new up-to-date software.
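To make this self-defeating quality concrete, the following minimal sketch in Python works through the Prisoner’s Dilemma payoffs. The specific payoff values are the standard textbook ones, chosen here for illustration; they are not taken from Hinman (2002).

# Standard Prisoner's Dilemma payoffs: payoffs[(my_move, other_move)]
# gives my payoff (higher is better). Values are the usual textbook
# illustration, not figures from the chapter.
payoffs = {
    ("cooperate", "cooperate"): 3,  # mutual altruism
    ("cooperate", "defect"): 0,     # exploited altruist
    ("defect", "cooperate"): 5,     # successful egoist
    ("defect", "defect"): 1,        # mutual egoism
}

def total_welfare(move_a, move_b):
    # Combined payoff for a two-person "society".
    return payoffs[(move_a, move_b)] + payoffs[(move_b, move_a)]

print(total_welfare("cooperate", "cooperate"))  # 6: altruists do best overall
print(total_welfare("defect", "defect"))        # 2: egoists do worse

Each player, reasoning egoistically, prefers to defect whatever the other does (5 beats 3, and 1 beats 0), yet a society of two defectors ends up with a combined payoff of 2 rather than the 6 available to two cooperators.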
The Ethics of Duty

The ethics of duty begin with the conviction that ethics is about doing what is right, about doing one’s duty. Duty can be defined by a classical Kantian appeal to universal reason (our duty is to follow rules that we could consistently will to be universal laws – that is, rules that we would be willing to have followed by all people in all
circumstances), by professional role (a physician’s duty to care for the sick), or by social role (a parent’s duty to care for their child). Software is seen as an item which has been created, and in terms of one’s duty to the producer, and to the greater economy, it would behove the consumer to purchase it if he or she wants or needs it. If all computer users were to pirate software, there would be no incentive to create software. This argument would be no different if one were to consider Open Source software (since it is ‘free’), because in that case the copyright licence (for example, the GPL) is still upheld. At the end of the day, one licence has a price attached that is greater than zero, and the other a price of zero. It is one’s duty as a consumer to honour the conditions attached to the use of the product, which include purchasing the necessary licences.
The Ethics of Respect

This theory grounds itself in the doing of what is respectful, for example the golden rule “do unto others what you would have them do unto you.” The difficulty lies in knowing what is respectful – as cultural factors can affect the judgement. In terms of this theory, if an individual A had produced a good that another individual B wished to use, and A expected payment, it would be in B’s interests to pay A so that when B had a product A wanted, B would be paid.
The Ethics of Rights

This theory is one of the most influential and prevailing ethical theories of our time. It has established a minimal condition of human decency and is often legislated, for example: “…all Men are created… with certain inalienable Rights.” Piracy is proscribed by legislation, and one of the rights in force in the world today is that individuals (or corporations) have the right not to be the victim of intellectual property theft. Of course this issue of rights and intellectual property is highly
controversial, e.g. the issue of private companies claiming rights of ownership to things like crops or water through mechanisms such as the WTO’s TRIPS agreement. The question to be asked is: What constitutes a right over something that is seen as essential or common? Does a common right to water overshadow other people’s or corporations’ private claims of ‘ownership’? Can software, or certain products by particular companies, be seen as essential and common, in the same way?
Utilitarianism

This theory seeks to reduce suffering and increase pleasure or happiness. What is done, or ought to be done, should promote the greatest happiness for the greatest number. It demands a high degree of self-sacrifice, in contrast to Ethical Egoism, and demands that every consequence for each individual is considered. This theory distinguishes between two types of utilitarianism: act and rule. Act utilitarianism applies this principle of maximising happiness by focusing only upon the results or consequences of the act under consideration. Rule utilitarianism considers the effect of following a certain rule of conduct in terms of happiness for the greatest number. Utilitarians claim the purpose of morality is to make the world a better place. In light of monopolies making abnormal superprofits on software that has become a necessity for individuals to transact in the modern world, it could be argued that it is in the interests of the ‘greater good’ for software to be priced differently, if at all. Conversely, it could also be argued that the marginal loss of utility felt by the company or software author from the individual pirating is so negligible as to be discounted in favour of the marginal gain in utility for the individual. This argument is subject to the same criticism to which utilitarianism generally is subject: namely, if one were to universalise this behaviour (that is, software piracy), the situation (in this case the commercial software industry) would collapse due to a lack of incentive for software developers to create
software and an inflated cost to the consumer to cover the losses made within the industry.
The Ethics of Justice

At the heart of the Ethics of Justice is John Rawls’ proposition that Justice is Fairness. What is fair for one should be fair for all – this theory begins early in the family with fairness to all family members. Fairness in the context of software piracy is best explained as follows: if one producer is rewarded for his or her work, and another is not, that is not fair. Thus, if it is fair that creators of goods, for example computer hardware, are rewarded at market value, then the creators of computer software cannot be treated any differently. Since it is plainly obvious that theft of computer hardware is theft, it must also be the same for the theft of computer software. This distinction goes to the heart of computer ethics – namely the incorporeal nature of computer software and the ability to perfectly duplicate material without any degradation to the original.
Virtue Ethics

This theory seeks to develop individual character and assumes that good persons will make good decisions. Aristotle describes in his Nicomachean Ethics (Book II:6) a virtue as “the mean by reference to two vices: the one of excess and the other of deficiency” (Baird, 2002, p. 390). Thus “Courage” is the mean between the extremes of cowardice and foolhardiness. Some virtues can conflict – for example when dealing with friends, Justice can conflict with Loyalty (this conflict is recognised by the law; for example, in some countries a wife cannot be compelled to testify in court against her husband, and vice versa). In terms of virtue ethics, a good person will make a good decision. The essence of this question goes to what one means when one talks of ‘good’. It should not be forgotten that a good person might do something wrong and that a bad person
may do something right. In terms of the slippery slope, just as an individual who hurts animals as a child will probably continue to be violent later in life3, so an individual who pirates software and breaks the law (even in such a ‘small’ way) does not thereby build a good character, since he or she may be inclined to break other laws later on. In considering the above ethical approaches we are left with the following questions with regard to piracy. Can piracy be deemed not to be theft? Can we justify piracy in a particular instance to serve a higher good, and what would happen if that exception were to become the rule? Is it reasonable for a society to permit such self-interest to pirate to the detriment of the whole? Or should it be reasonable to allow piracy in situations where there is negligible or no detriment, e.g. in the case of super-rich monopolies whose profit-motive is generally recognised more as greed? Can piracy ever be described as the right course of action, and if not, would it ever be what a good person would do? After examining the above it becomes apparent that when making decisions about computers one cannot rely entirely on any of the above categories, as each theory addresses only part of the issues involved in computer ethics. Whether computer ethics can adequately rely on previously conceived ethical theories, or whether it points to the need for a theory of its very own, will now be discussed.
APPLIED ETHICS

Before one examines computer ethics, though, a brief comment must be made about applied ethics. Singer (1995, p. 42) notes that applied ethics has been a subject of discussion since the earliest of times, with the Greeks and Romans discussing in quite concrete terms how one is to live and die; how medieval ethicists were concerned with such practical issues as abortion, war and whether or
not it is always wrong to kill; even Hume wrote an essay defending suicide and Kant tried to pursue a means to perpetual peace; and the Utilitarians in the 19th century were very much focused on applied ethics. In fact, as Singer points out (1995, p. 42), the first half of the 20th century is unique in its avoidance of such applied ethics – though he believes this is due to the legacy of Logical Positivism, which sought merely to perform meta-ethical study into the meanings of moral terms. Singer explains that this approach evoked little support during the 1960s, when philosophy students demanded courses that were “more relevant to the day” (that is to say, courses which helped students deal with the Civil Rights Movement, Vietnam and other such ‘hot topics’ such as racial/sexual equality, the justifiability of war and environmental ethics). Until recently, bioethics has been one of the most prominent forms of applied ethics, with investigations into a more holistic approach that takes the entire environment (including man and the supporting ecological systems) into account. Another featured area of specialisation has been business ethics. In all cases the study of applied ethics has led to lively debate and to questions which challenge the traditional bounds of ethical discourse (Singer, 1995, p. 42). One such new challenge is computer ethics, which, some argue, has led to a new underlying macroethic, the philosophy of information or ‘information ethics’. However, before we can examine information ethics, we must see how it developed from computer ethics.
COMPUTER ETHICS

Bynum explains that computer ethics had its nascent origins in the USA through the work of the MIT professor Norbert Wiener who, in the 1940s, developed cybernetics4, which was to become the science of information systems (Bynum, 2003). Bynum cites Wiener noting how, upon considering together both the concepts of
cybernetics and the digital computers of the day, he commented that:

It has long been clear to me that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control; and that its input and output need not be in the form of numbers or diagrams. It might very well be, respectively, the readings of artificial sense organs, such as photoelectric cells or thermometers, and the performance of motors or solenoids ... we are already in a position to construct artificial machines of almost any degree of elaborateness of performance. Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality of unheard-of importance for good and for evil. (Wiener, 1948, p. 27)

Wiener continued to think about these social potentialities of integrating technology into society and a few years later, in 1950, he perceptively laid the foundation for computer ethics that is still applicable today. He wrote a book, which many today consider to be monumental for its time, entitled The Human Use of Human Beings, in which he provided an account of the purpose of human life, explained four “great principles of justice” and conceived of a powerful method for doing applied ethics. His book also included discussions of the fundamental questions of computer ethics and some examples of key computer ethics topics (Wiener, 1954, p. 57). Bynum summarises Wiener’s methodology as follows (Bynum, 2003):

1. Identify an ethical question or case regarding the integration of ICT into society.
2. Clarify any ambiguous concepts or rules that may apply to the case in question.
3. If possible, apply existing policies (principles, laws, rules, practices) that govern human behaviour in the given society. Use precedent and traditional interpretation in such a way as to assimilate the new case or policy into the existing set of social policies and practices.
4. If precedent and existing traditions are insufficient to settle the question or deal with the case, revise the old policies or create new ones, using “the great principles of justice5” and the purpose of a human life6 to guide the effort.
5. Answer the question or deal with the case using the revised or enriched policies.
Bynum believes Wiener’s position (which was ahead of its time) would “require a multi-faceted process taking decades of effort.” He noted that this integration would involve the workplace undergoing severe and radical change; government would need to adopt new laws and regulations whilst industry and business would find it necessary to draft new policies and practices. Codes of conduct would have to be (re-)developed within professional organisations, and sociologists and psychologists would have to examine and interpret the new arising social and psychological phenomena. Philosophers would also be required to rethink and redefine old social and ethical concepts. Bynum believes that “the defining goal of computer ethics, then, is to advance and facilitate the good consequences of ICT while preventing or minimizing the harmful ones” (Bynum, 2003). Wiener’s important and complex work on applied ethics was not developed further until the 1960s, when Donn Parker of SRI International in Menlo Park, California took stock of, as Wiener had foretold, the important social and ethical consequences that computer technology had wrought. Prompted by an increasing number of computer-aided bank robberies and other crimes, Parker published his concerns about computer crime and proposed to the Association for Computing Machinery (see Parker, 1968, p. 198) that it adopt a code of ethics for its members. The ACM appointed Parker to head a committee
to create such a code, which was subsequently adopted in 1973 and later revised, first in the early 1980s and most recently in the early 1990s (Bynum, 2001). Concern over computer crime soon changed to concern over privacy, as individuals in the mid-1960s began to discuss and propose new privacy legislation to legislatures in America as a result of privacy invasions by ‘big-brother’ government agencies. By the mid-1970s, Bynum notes that “new privacy laws and computer crime laws had been enacted in America and in Europe, and organizations of computer professionals were adopting codes of conduct for their members” (Bynum, 2001). Bynum notes further that during this period:

MIT computer scientist Joseph Weizenbaum created a computer program called ELIZA, intended to crudely simulate ‘a Rogerian psychotherapist engaged in an initial interview with a patient.’ Weizenbaum was appalled by the reaction that people had to his simple computer program. Some psychiatrists, for example, viewed his results as evidence that computers will soon provide automated psychotherapy; and certain students and staff at MIT even became emotionally involved with the computer and shared their intimate thoughts with it! Concerned by the ethical implications of such a response, Weizenbaum wrote the book Computer Power and Human Reason (1976), which is now considered a classic in computer ethics. (Bynum, 2001)

It was Walter Maner, then of Old Dominion University in Virginia, who, whilst teaching a medical ethics course, noticed that whenever computers were involved new ethically important considerations arose, and in the mid-1970s began to use the phrase ‘computer ethics’ to refer to the “field of inquiry dealing with ethical problems aggravated, transformed or created by computer technology.” He attempted to, in a manner similar to medical ethics, focus attention upon the
‘traditional’ utilitarian ethics of Bentham and Mill, or the rationalist ethics of Kant (Bynum, 2001). Bynum explains that in 1978 Maner “self-published and disseminated his Starter Kit in Computer Ethics, which contained curriculum materials and pedagogical advice for university teachers to develop computer ethics courses. The Starter Kit included suggested course descriptions for university catalogues, a rationale for offering such a course in the university curriculum, a list of course objectives, some teaching tips and discussions of topics like privacy and confidentiality, computer crime, computer decisions, technological dependence and professional codes of ethics. Maner’s trailblazing course, plus his Starter Kit and the many conference workshops he conducted,” Bynum notes, “had a significant impact upon the teaching of computer ethics across America.” (Bynum, 2001). Parker, Weizenbaum, Maner and others (though sadly still not Wiener) had established the foundations of computer ethics, and it was during the 1980s that these were extended and the discipline allowed to develop. The 1980s saw an increase in attention being paid to issues such as computer-enabled crime, disasters caused by computer failures, invasions of privacy via computer databases, and major law suits regarding software ownership. In 1985 Deborah Johnson wrote the first textbook on computer ethics, and James Moor of Dartmouth College (in a special edition of Metaphilosophy entitled Computers and Ethics, edited by Bynum) published his influential article (Moor, 1985, p. 266) and defined computer ethics in terms of policy vacuums (recall the discussion about Wiener’s policy and precedents):

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of
A central task of computer ethics is to determine what we should do in such cases, that is, to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology.

Terrell Bynum notes that during the 1990s universities around the world mounted new courses, specialised research centres were established, and an increasing number of conferences, journals, articles and textbooks dedicated to the subject appeared. In explaining the rising popularity he says that:

A wide diversity of additional scholars and topics became involved. For example, figures such as Donald Gotterbarn, Keith Miller, Simon Rogerson, and Dianne Martin – as well as organizations like Computer Professionals for Social Responsibility, the Electronic Frontier Foundation, ACM-SIGCAS – spearheaded projects relevant to computing and professional responsibility. Developments in Europe and Australia were especially noteworthy, including new research centres in England, Poland, Holland, and Italy; the ETHICOMP series of conferences led by Simon Rogerson and the present author [Terrell Bynum]; the CEPE conferences founded by Jeroen van den Hoven; and the Australian Institute of Computer Ethics headed by Chris Simpson and John Weckert. (Bynum, 2001)

Early in the new millennium, a critical analysis of the debate on the foundations of computer ethics took place. Researchers at the University of Oxford contended that the focus of computer ethics has "moved from problem analysis – primarily aimed at sensitising public opinion, professionals and politicians – to tactical solutions resulting, for example, in the evolution of professional codes of conduct, technical standards, usage regulations, and new legislation" (Floridi and Sanders,
2003, p. 4). The same researchers noted that the "constant risk" of computer ethics' development thus far has been the "spreading of ad hoc or casuistic approaches to ethical problems" and that this "bottom-up procedure" should be balanced by a "foundationalist debate", which in contrast is a "top-down development…characterised by a metatheoretical reflection on the nature and justification of computer ethics and the discussion of computer ethics' relations with the broader context of metaethical theories" (Floridi et al, 2003, p. 4). Floridi et al ask the questions: "Can computer ethics amount to a coherent and cohesive discipline, rather than a more or less heterogeneous and random collection of ICT-related ethical problems, applied analyses and practical solutions? If so, what is its conceptual rationale? And how does it compare with other ethical theories?" (Floridi et al, 2003, p. 4). They have identified five approaches to the foundation of computer ethics (Floridi et al, 2003, p. 5), each of which gives a different answer to that question, and conclude with an affirmative position on the state of computer ethics as a coherent and cohesive discipline, grounded firmly in its very own conceptual rationale: information ethics.
The "No Resolution" Approach: Computer-Ethics is Not a Real Discipline

Floridi describes this approach as the "ideal lowest bound for the foundationalist debate, comparable to the role played by relativism in metaethics." Drawing on Parker (1977), he defines the approach as holding that computer ethics problems represent unsolvable dilemmas, and that computer ethics is therefore a pointless exercise because it has no conceptual foundation. Floridi notes that Gotterbarn (1991, p. 26; 1992, p. 1) criticises the work of Parker (1981; 1982; 1990). Floridi comments that "empirically, the evolution of computer ethics has proved the no resolution approach to
be unnecessarily pessimistic” since “problems are successfully solved, computer ethics related legislation is approved and enacted” and “professional standards and codes have been promoted” (Floridi, et al 2003, p. 5). Floridi continues the discussion of this approach by recalling a phenomenon common amongst early proponents of computer ethics known as ‘pop ethics’ (Bynum, 1992) which involved the discussion of a variety of case studies that highlight problems and has been characterised by “usually unsystematic and heterogeneous collections of dramatic stories” (Floridi et al, 2003, p. 6) collected together to “raise questions of unethicality rather than ethicality” (Parker, 1981). Floridi notes (2003, p. 1) the usefulness of pop ethics in the early years as it was able to “sensitise people to the fact that computer technology had social and ethical consequences” (Bynum, 1992). A criticism of pop ethics is that it is merely a collection of examples and leads one to believe that there is no solution, though as Floridi et al say “there is little point in providing a solution to someone unaware of the problem, particularly when the solution is not simple” (Floridi et al, 2003, p. 6). An advantage of pop ethics though is its ability to explain the variety of concerns (for example the professional, legal, moral, social and political concerns) through the use of case studies (see Epstein, 1997) that refer to a particular situation. There is a similarity between pop ethics and the older practice of casuistry.7 The criticism of casuistry and other situation-based ethics is that it relies on a particular criterion, and the definition/appropriateness of that criterion can always be argued.
The Professional Approach: Computer-Ethics is a Pedagogical Methodology

Gotterbarn's view on computer ethics differed from that of Parker, who held that there is no resolution (see above). For Gotterbarn the answer lay in
developing a ‘professional ethics’ approach. He says that faculty should: introduce the students to the responsibilities of their profession, articulate the standards and methods used to resolve non-technical ethics questions about their profession, develop some proactive skills to reduce the likelihood of future ethical problems,… indoctrinate the students to a particular set of values… and teach the laws related to a particular profession to avoid malpractice suits. (Gotterbarn, 1992, p. 1) Gotterbarn argues the ‘professional-ethics approach’ from a position where there is “no deep theoretical difference between computer ethics and other professional ethics like business ethics, medical ethics or engineering ethics” (Gotterbarn, 1991, p. 26; 1992, p. 1). For Gotterbarn, the goal of computer ethics courses would be to create “ethically minded professionals not ethicists,” and therefore, Floridi notes, it “may actually be better not to have philosophers teaching them” (Floridi et al, 2003, p. 7) since as Gotterbarn says “in applied professional ethics courses, our aim is not the acquaintance with complex ethical theories, rather it is recognising the role responsibility and awareness of the nature of the profession” (1992, p. 1). A criticism of applied ethics understood in this way is that it is trying to be a soft form of law, rigidly legislating behaviour for certain circumstances, whereas ethics is more general, and should inform the law and help us realise where laws need to be ‘bent’, changed or created. Floridi notes that the advantages of the ‘professional-ethics approach’ have been the emphasis on computer-ethics education, looking at “technical standards and requirements, professional guidelines, specific legislation or regulations, [and] levels of excellence”. He concludes that the ‘professional-ethics approach’ “exposes the risky and untenable nature of the ‘no-resolution approach’” whilst at the same time defending the
"value and importance of a constructive 'pop-ethics', by developing a 'proactive' professional ethics (standards, obligations, responsibilities, expectations etc.)" (Floridi et al, 2003, p. 8). This approach has been largely responsible for the "elaboration and adoption of usage regulations and codes of conduct in ICT contexts (libraries, universities, offices etc.), within industry and in professional associations and organisations, as well as the promotion of certification of computer professionals" (Floridi et al, 2003, p. 8). Floridi notes that this approach focuses mainly on "ICT practitioners, especially those in software development, where technical standards and specific legislation provide a reliable, if minimal, frame of reference" (Floridi et al, 2003, p. 8). This is in keeping with the goals of the approach, which are stated by Gotterbarn to be pedagogical and not metaethical:

The only way to make sense of "computer ethics" is to narrow its focus to those actions that are within the horizon of control of the individual moral computer professional. (Gotterbarn, 1991, p. 26; 1992, p. 1; 2001 presents a less radical view)

Floridi disagrees with this strong view of professional ethics, noting that it falls short in three areas:

1. Firstly, the problems associated with computer ethics (for example privacy, accuracy, security, reliability, intellectual property and access) permeate contemporary life, unlike other purely professional issues (Floridi et al, 2003, p. 9).
2. Secondly, Floridi notes that to interpret professional ethics as offering a foundation for computer ethics is to "commit a mistake of levels, similar to attempting to define arithmetic on the basis only of what is taught in an introductory course." Without a theoretical approach, Floridi believes, the professional-ethics approach is but a "middle level" between pop-computer ethics and theoretical computer ethics (Floridi et al, 2003, p. 9).
3. Thirdly, Floridi et al believe that to accept that computer ethics is merely professional ethics, without any further need for conceptual foundation, runs the risk of "being at best critical but naïve, and at worst dogmatic and conservative". Floridi continues, saying that by focusing on "case-based analyses and analogical reasoning, a critical professional-ethics approach will painfully and slowly attempt to re-discover inductively ethical distinctions, clarifications, theories and so forth already available and discussed in specialised literature; … whilst an uncritical professional-ethics approach will tend to treat ethical problems and solutions as misleadingly simple, non-conflicting, self-evident and uncontroversial, a matter of mere indoctrination, as exemplified in 'The 10 Commandments of computer ethics'8 approach." Though he admits that a methodologically coherent system of ethics can be expressed in a list of negative prescriptions ("thou shalt not…"), he does not believe computer ethics has matured enough to be able to do so, and sees the 'professional-ethics approach' as the pragmatic "historical first step towards a more mature computer ethics" (Floridi et al, 2003, p. 10).
One of the literature's further criticisms of the 'professional-ethics approach' (following on from the 'no-resolution approach' and 'pop-ethics') has been its failure to answer the following questions (Floridi et al, 2003, p. 10):

1. Why does ICT raise moral issues?
2. Are computer ethics issues unique (in the sense of requiring their own theoretical investigations, not entirely derived from standard ethics)? Or are they simply moral issues that happen to involve ICT?
3. What kind of ethics is computer ethics?
4. What justifies a certain methodology in computer ethics (for example, analogy and case-based analysis)?
5. What is computer ethics' rationale?
6. What is the contribution of computer ethics to the ethical discourse?
It is at this point in the literature that a 'Theoretical Computer-Ethics' emerged, albeit along two lines divided over the 'uniqueness' of Computer-Ethics.
The Radical Approach: Computer Ethics as a Unique Discipline

The Radical Approach holds that "the presence of a policy and conceptual vacuum (Moor, 1985, p. 266) indicates that computer ethics deals with absolutely unique ideas, in need of a completely new approach." Maner argues that:

[computer ethics] must exist as a field worthy of study in its own right and not because it can provide a useful means to certain socially noble ends. To exist and to endure as a separate field, there must be a unique domain for computer ethics distinct from the domain for moral education, distinct even from the domains of other kinds of professional and applied ethics. Like James Moor, I believe computers are special technology and raise special ethical issues, hence that computer ethics deserves special status. (Maner, 1999)

Floridi believes that the Radical Approach offers several advantages over the previously considered approaches: it does not under-estimate the "gravity and novelty" of computer ethics, and it stresses "the methodological necessity of providing the field with a robust and autonomous theoretical rationale" (Floridi et al, 2003, p. 11).
Yet Floridi et al find four problems with the Radical Approach:

1. Given Maner's argument above, the Radical Approach would need, according to Floridi, the "explicit and uncontroversial identification of some unique area of study" (2003, p. 11), and Floridi declares that none of the cases mentioned by Maner are uncontroversially unique. Yet this does not surprise Floridi, since he notes that neither in business ethics, medical ethics nor environmental ethics (for example) are there any significant moral issues that do not interact with the rest of the ethical context.
2. Floridi et al argue that to hold onto the Radical Approach because maybe, sometime in the future, computer ethics problems "could be made, or become, or discovered to be increasingly specific, until they justify the position defended by the Radical Approach … keeps the burden of proof on the Radical Approach side" (Floridi et al, 2003, p. 12), a situation they dismiss as "safe but uninteresting" (Floridi et al, 2003, p. 12).
3. Even if it were possible in principle to have a domain of unique ethical issues (and they believe in practice it is not), they state that "the uniqueness of a certain topic is not simply inherited as a property by the discipline that studies it" (Floridi et al, 2003, p. 12). Ethical issues are inter-related and cannot be reduced to the equation "unique topic = unique discipline" (Floridi et al, 2003, p. 12).
4. Finally, Floridi notes that to focus too much on the uniqueness of computer ethics "runs the risk of isolating [computer ethics] from the more general context of metaethical theories." Floridi concludes that "this would mean missing the opportunity to enrich the ethical discourse" (Floridi et al, 2003, p. 12).

The Conservative Approach: Computer-Ethics as Applied Ethics

The Conservative Approach holds that the classic macroethical theories – for example Consequentialism, Deontologism, Virtue Ethics, and Contractualism – are capable of handling Moor's policy vacuum. Floridi et al note that these theories "might need to be adapted, enriched and extended, but they have all the conceptual resources required to deal with computer ethics questions successfully and satisfactorily" (2003, p. 13). The Conservative Approach also holds that "certain ethical issues are transformed by the use of ICT, but they represent only new species of traditional moral issues, to which already available metaethical theories need to, and can successfully, be applied. They are not and cannot be a source of a new, macroethical theory" (Floridi et al, 2003, p. 13). One of the major proponents of this approach is Deborah Johnson, who introduced the genus-species argument9 and believes that "the ethical issues surrounding computer technology are first and foremost ethical" (Johnson, 2000, p. 1). Floridi et al believe that because this approach positions itself as "an interface between ICT-related moral problems and standard macroethics" it enjoys the advantages "associated with a strong theoretical position" (2003, p. 14). Aside from rejecting the No Resolution Approach, it extends the Professional Approach by saying that computer ethics is "an ethics for the citizen of the information society, not just for the ICT professional" (Floridi et al, 2003, p. 14), and its grounding in standard macroethics allows a constructive attitude (similar to that of the Professional Approach) while at the same time refraining from a "naïve or uncritical reliance on some contingent normal ethics" (2003, p. 14). Finally, Floridi et al believe that the evolutionary development of this approach enables the Conservative Approach to avoid the 'unique topic = unique discipline' pitfalls of the revolutionary Radical Approach and to "integrate them well within the broader context of the ethical discourse" (2003, p. 14).

Floridi et al find four problems with the Conservative Approach, namely:

1. Firstly, the position that classic macroethics has all the conceptual resources required to deal successfully and satisfactorily with computer ethics is questionable, given the perception that computer-ethics problems are radically new and unpredictable.
2. Secondly, whilst the evolutionary approach finds an acceptable position between the extremist radical and traditional approaches, it does not adequately describe the degree of evolution that could occur in the genus-species argument (that is, at some point the change could be radical or minor), and the Conservative Approach errs, by definition, on the conservative side (that is, that the change is minor) without being able to suggest which standard macroethic to apply. Floridi et al note that this forms the "logical regress" inherent to the Conservative Approach: if one accepts the Conservative Approach, saying that computer ethics is a 'microethics', one still "needs a metatheoretical analysis to evaluate which macroethics is most suitable to deal with computer-ethics problems" (2003, p. 15). In Floridi et al's view, users of this approach are left trying to apply some 'normal' ethics acceptable to society, or to fall back upon an arbitrary choice of macroethics, which would invite philosophy into an area of professionalism unnecessarily (as Floridi et al say, "Software Engineers should not be required to read the Nicomachean Ethics" (2003, p. 15)).
3. Thirdly, and as a consequence of point 1 above, Floridi et al note that this approach is "methodologically poor" because it lacks a "clear macroethical commitment", resulting in a reliance upon "common-sense, case-based analysis and analogical reasoning, … insufficient means to understand what the Conservative Approach itself acknowledges to be new and complex issues in Computer-Ethics" (2003, p. 16).
4. Fourthly, Floridi et al concede that this approach answers the question "what can ethics do for computer-ethics?", but they lament the avoidance of what they consider "the more philosophically interesting question", namely "is there anything that computer-ethics can do for ethics?" (2003, p. 16).
Floridi et al introduce Krystyna Górniak-Kocikowska, a colleague of Terrell Bynum, who believes that "computer ethics is the most important theoretical development in ethics since the Enlightenment" (2003, p. 16), clearly supporting Floridi's view that "computer ethics problems might enrich the ethical discourse by promoting a new macroethical perspective" (2003, p. 16).
The Innovative Approach: Information Ethics as the Foundation of Computer-Ethics

Thus far, two theoretical approaches (the Conservative and Radical) have been examined. Bynum argues that an innovative approach to computer-ethics is required (2001). Floridi et al explain that the computer-ethics problems, the corresponding policy and conceptual vacuum, the uniqueness debate, and the difficulties encountered by the radical and conservative approaches in developing a cohesive metaethical approach all "strongly suggest that the monopoly exercised by standard macroethics in theoretical Computer-ethics is unjustified" (2003, p. 16). They contend that ICT, "by transforming in a profound way the context in which moral issues arise, not only adds interesting new dimensions to old problems, but leads us to rethink, methodologically, the very grounds on which our ethical positions are based. Although
the novelty of computer-ethics is not so dramatic as to require the development of an utterly new, separate, and unrelated discipline, it certainly shows the limits of traditional approaches to the ethical discourse, and encourages a fruitful modification in the metatheoretical perspective." (2003, p. 17). The product of this 'fruitful modification' is information ethics, defined by Floridi (1998; 1999, p. 37) and Floridi and Sanders (1999; 2001, p. 55) as the theoretical foundation of applied computer-ethics: "a non-standard, environmental macroethics, patient-oriented and ontocentric, based on the concepts of data-entity/infosphere/entropy rather than life/ecosystem/pain." This definition requires some explanation. Floridi et al explain that macroethical positions can focus on the moral nature and development of the agent (for example, Virtue Ethics) or on the agent's actions (for example, Consequentialism, Contractualism and Deontologism). The former macroethic is 'agent-oriented, subjective and often individualistic', whilst the latter is 'action-oriented, relational and intrinsically social in nature'. Both are known as standard or classic macroethics, and both tend to be anthropocentric. Non-standard ethics, on the other hand (such as medical ethics, bioethics and environmental ethics), attempts to develop a patient-oriented ethics in which the 'patient' may be not only a human being, but also any form of life (see Rowlands, 2000, cited in Floridi et al, 2003, p. 18). Floridi et al explain that non-standard ethics "places the 'receiver' of the action at the centre of the ethical discourse", and that the previously described problems with computer-ethics within the various approaches can be explained because, in Floridi's view, computer ethics "is primarily an ethics of being rather than conduct or becoming" (emphasis mine) (2003, p. 19). The difference between information ethics and other non-standard forms of ethics (such as medical ethics, bioethics and environmental ethics) is that "information as such, rather than just life in general, is raised to the role of the universal patient of any action" (2003, p. 19).
Floridi et al note that their position, unlike biocentric ethics that "ground their analyses of the moral standing of bio-entities and ecological systems on the intrinsic worthiness of life and the intrinsically negative value of suffering", is unique in that it suggests "that there is something even more elemental than life, namely being, understood as information; and something more fundamental than pain, namely entropy" (2003, p. 19). According to the theory, one should "evaluate the duty of any rational being in terms of contribution to the growth of the infosphere, and any process, action or event that negatively affects the whole infosphere – not just an information entity – as an increase in its level of entropy and hence an instance of evil" (2003, p. 19). Floridi et al identify the crucial contribution of information ethics as the move of information from "being a necessary prerequisite for any morally responsible action to being its primary object" (2003, p. 19), enabling an expansion within theoretical ethics of what can be considered the centre of minimal moral concern. In the past only living entities were capable of being the centre of some form of moral concern; now, with information ethics, the bias of the bio-centric theories towards 'living' entities is overcome, and an entity's state of being (its information state) is capable of becoming the centre of moral concern. Information ethics can thus rightly be described as a non-standard, patient-oriented (neither agent-oriented nor action-oriented), ontocentric (concerned with the metaphysical study of being) macroethic (2003, p. 20). Floridi et al conclude that the "foundationalist debate in computer ethics has led to the shaping of a new ethical view", information ethics (2003, p. 20). Floridi et al admit that information ethics places computer ethics "at a level of abstraction too philosophical" to make it useful, yet they respond that "this is the inevitable price to be paid for any attempt to provide computer ethics with an autonomous rationale. One must polarise theory and practice
to strengthen both", so that whilst "information ethics is not immediately useful to solve specific computer ethics problems", they note, "it provides the conceptual grounds that can guide problem-solving procedures in computer ethics" (2003, p. 20). It is worth noting that this theory of information ethics is criticized by Mathiesen (2004), who draws on the earlier work of Van Den Hoven (1995), but is rebutted by Mather (2005). Floridi comments that Mathiesen's criticism is "undermined" by the problems of applying information ethics as a microethic instead of as a macroethic (Floridi 2006, p. 36). To return to our analysis of piracy then: in terms of information ethics, piracy could be seen to be a threat that would impede the development of future computer software and therefore, in the long term, contribute negatively to the growth of the infosphere.
FUTURE TRENDS

Computers and technology are advancing all the time, and with these advancements there will be a need to assess the ethical implications for the developers and the users, and for the companies and industries in which they are used and which they influence. In light of the recently discussed philosophical macroethic (information ethics), a reappraisal of computer ethics is needed. Beyond this, some possible future work could include examining the ethics of:

• Privacy in the light of society post-9/11
• Monopolies in software development and their effect on customers, competitors, and the market
• Hacking and viruses
• The production of computers and the management of that which is replaced
• The use (or abuse) of computers - viz. piracy, pornography, hate speech
• The political/economic aspects - that is, who has access to computers/Internet and who is in control?
Relative to other applied ethics areas (such as medical ethics or business ethics), the computer industry has only just started looking at 'how to do' ethics. The real question is finding a consensus for this new form of applied computer ethics and being able to situate it within the broader field of ethics, using this new theory of information ethics.
CONCLUSION

This chapter examined the development of computer ethics and has suggested that Floridi's Innovative Approach is more complete than the previous approaches: it proposes that existence is more fundamental than being alive (that is, things exist without necessarily being alive) and that the only proof we have of an object's existence is that we have information about it. It has been shown that computer ethics has prompted a deeper philosophical debate and that information ethics, as described by Floridi, offers the conceptual basis for further rigorous academic study.
REFERENCES

Baird, F.E. (2002). Ancient philosophy. New Jersey: Prentice Hall.

Beauchamp, T.L. (2003). The nature of applied ethics. In Frey, R.G. & Wellman, C.H. (Eds.), A companion to applied ethics (p. 1). Blackwell Publishing Ltd.

Bunch, W.H. (2005). Ethics for evangelical Christians. Chapter 13: Virtue ethics, p. 2. Retrieved May 30, 2007, from http://faculty.samford.edu/~whbunch/Chapter13.pdf
Bynum, T. W. (1992). Human values and the computer science curriculum. Retrieved May 30, 2007, from http://www.southernct.edu/organizations/rccs/resources/teaching/teaching_mono/Bynum/Bynum_human_values.html

Bynum, T. W. (2001). Computer ethics: Basic concepts and historical overview. In The Stanford encyclopedia of philosophy.

Bynum, T. W. (2003). Norbert Wiener's foundation of computer ethics. The Research Center on Computing & Society. Retrieved May 30, 2007, from http://www.southernct.edu/organizations/rccs/resources/research/introduction/Bynum_Wiener.html

Computer Ethics Institute (CEI). (1992). The ten commandments of computer ethics. Retrieved May 30, 2007, from http://www.brook.edu/its/cei/overview/Ten_Commanments_of_Computer_Ethics.htm

Epstein, R. (1997). The case of the killer robot. New York: John Wiley and Sons.

Fieser, J. (2006). Ethics. In The Internet encyclopedia of philosophy. Retrieved May 30, 2007, from http://www.iep.utm.edu/e/ethics.htm

Floridi, L. & Sanders, J. W. (2003). Computer ethics: Mapping the foundationalist debate. Ethics and Information Technology, 4(1), 1-24.

Floridi, L. (2006). Information ethics, its nature and scope. SIGCAS Computers and Society, 36(3), p. 36. Retrieved May 30, 2007, from http://doi.acm.org/10.1145/1195716.1195719

Gotterbarn, D. (1991). Computer ethics: Responsibility regained. National Forum, 71(3), 26-32. Retrieved May 30, 2007, from http://csciwww.etsu.edu/Gotterbarn/artpp1.htm

Gotterbarn, D. (1992). The use and abuse of computer ethics. The Journal of Systems and Software, 17(1), 1. Retrieved May 30, 2007, from http://www.southernct.edu/organizations/rccs/resources/teaching/teaching_mono/Gotterbarn02/Gotterbarn02_intro.html

Gotterbarn, D. (2001). Software engineering ethics. In J. Marciniak (Ed.), Encyclopedia of software engineering (2nd ed.). New York: Wiley-Interscience.

Hinman, L. M. (2002). Basic moral orientations. Retrieved May 30, 2007, from http://ethics.sandiego.edu/presentations/Theory/BasicOrientations/index.asp

Johnson, D. G. (2000). Introduction: Why computer ethics? Computer Ethics, 3, 1-256. Pearson Education. Retrieved May 30, 2007, from http://www.units.it/~etica/1999_2/Johnson.html

Ladd, J. (1997). Ethics and the computer world: A new challenge for philosophers. Computers and Society, 27(3), 8-9.

Maner, W. (1999). Is computer ethics unique? Etica & Politica, Special Issue on Computer Ethics, 2. Retrieved May 30, 2007, from http://www.units.it/~etica/1999_2/Maner.html

Mather, K. (2005). Object oriented goodness: A response to Mathiesen's 'What is information ethics?'. Computers and Society, 34(4). Retrieved May 30, 2007, from http://www.computersandsociety.org/sigcas_ofthefuture2/sigcas/subpage/sub_page.cfm?article=919&page_number_nb=911

Mathews, S. & Smith, G. B. (1921). A dictionary of religion and ethics. London: Waverley Book Company, Ltd.

Mathiesen, K. (2004). What is information ethics? Computers and Society, 32(8). Retrieved May 30, 2007, from http://www.computersandsociety.org/sigcas_ofthefuture2/sigcas/subpage/sub_page.cfm?article=909&page_number_nb=901

Moor, J. H. (1985). What is computer ethics? In T. W. Bynum (Ed.), Computers and ethics (pp. 266-275). Basil Blackwell.
Pangaro, P. (1991). Cybernetics: A definition. In Macmillan encyclopedia of computers. Macmillan Publishing.

Parker, D. B. (1968). Rules of ethics in information processing. Communications of the ACM, 11(3), 198-201.

Parker, D. B. (1977). Ethical conflicts in computer science and technology. Arlington, VA: AFIPS Press.

Parker, D. B. (1981). Ethical conflicts in computer science and technology. Arlington, VA: AFIPS Press.

Parker, D. B. (1982). Ethical dilemmas in computer technology. In W. M. Hoffman & J. M. Moore (Eds.), Ethics and the management of computer technology. Cambridge, MA: Oelgeschlager, Gunn & Hain.

Parker, D. B. (1990). Ethical conflicts in information and computer science, technology, and business. Wellesley, MA: QED Information Sciences.

Rowlands, M. (2000). The environmental crisis - Understanding the value of nature. New York: St Martin's Press.

Scott, R.J. (2007). A new definition of software piracy. Retrieved May 30, 2007, from http://blawg.bsadefense.com/2007/04/a_new_definition_of_software_p.html

Singer, P. (1995). Applied ethics. In T. Honderich (Ed.), The Oxford companion to philosophy (1st ed.). Oxford University Press.

Spong, J.S. (1977). The living commandments (Chapter 11). New York: Seabury Press. Retrieved May 30, 2007, from http://www.religion-online.org/showchapter.asp?title=540&C=620

Van Den Hoven, J. (1995). Equal access and social justice: Information as a primary good. In ETHICOMP95: An International Conference on the Ethical Issues of Using Information Technology. Leicester, UK: De Montfort University.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. Freeman.

Wiener, N. (1948). Cybernetics: Or control and communication in the animal and the machine. Cambridge, MA: The Technology Press.

Wiener, N. (1954). The human use of human beings (2nd ed.). Doubleday Anchor.

WTO (2007). Understanding the WTO – Intellectual property: Protection and enforcement. Retrieved May 30, 2007, from http://www.wto.org/english/thewto_e/whatis_e/tif_e/agrm7_e.htm
KEY TERMS

Applied Ethics: "The term 'applied ethics' and its synonym 'practical ethics' came into use in the 1970s when philosophers and other academics began to address pressing moral problems in society and in professional ethics (especially medical ethics and business ethics). Prominent examples, then and now, are abortion, euthanasia, the protection of human and animal subjects in research, racism, sexism, affirmative action, acceptable risk in the workplace, the legal enforcement of morality, civil disobedience, unjust war, and the privacy of information." (Beauchamp 2003, p. 1)

Computer Ethics: The area of ethics examining the use of computers in actions and operations that were possible before, or only possible because of, computers.

Ethics: The science, or philosophy, or more modestly, the study of moral conduct. By moral conduct in turn is meant conduct regarded as right or wrong, or as what "ought" or "ought not" to be done; or as involving deliberation and choice
between ends viewed as "good". (Mathews et al 1921, p. 152)

Good (opp. Bad): A term referring to the person who constantly strives to do right actions. Obviously, good persons may do wrong acts; we call these mistakes. But upon learning of the mistake, the good person will immediately admit it and attempt to rectify these mistakes because he or she is constantly striving to do right. In this line of thinking, a bad person is one who simply does not strive to do right. Calling a person good means that this person is striving to do the right; it does not mean that he or she has achieved it in every situation. The good doctor constantly strives to make the correct diagnoses and to develop the proper treatment plan. Unfortunately, the good doctor makes more mistakes than anyone would wish. But if they are good doctors, they will discover the mistakes and correct them (Bunch 2005, p. 2).

ICT: An acronym for information and communications technology.

Information Ethics: The theoretical foundation of applied computer-ethics - a non-standard, environmental macroethics, patient-oriented and ontocentric, based on the concepts of data-entity/infosphere/entropy rather than life/ecosystem/pain. (Floridi 1998; 1999, p. 37; and Floridi and Sanders 1999; 2001, p. 55)

Macroethics (opp. Microethics): Ethics seen in macrocosm (macroethics) or microcosm (microethics), with a hierarchical relationship existing between them. A macroethics (e.g. concerning society) is more corporate, encompassing and general than a microethics (e.g. concerning an individual), which is more specific (Spong 1977, p. 1). Computer ethics is frequently simply taken to be what is called microethics, that is, the kind of ethics that relates to individual conduct, the rights and wrongs of it, and the good and bad (Ladd 1997, p. 8). In this chapter information ethics is argued to be a macroethic.
Metaethics: "Metaethics investigates where our ethical principles come from, and what they mean. Are they merely social inventions? Do they involve more than expressions of our individual emotions? Metaethical answers to these questions focus on the issues of universal truths, the will of God, the role of reason in ethical judgments, and the meaning of ethical terms themselves." (Fieser 2006, p. 1)

Right (opp. Wrong): "'Rightness' refers to the way of living and the specific acts that conform to the moral standard of the community. Moral theology is built on goodness and badness, not primarily on the rightness and wrongness of actions. This is because goodness and badness is concerned with the vertical relationship with God" (Bunch 2005, p. 2).

Software Piracy: "Software piracy is the distribution of counterfeit software and/or use or distribution of authentic software constituting the intentional violation of intellectual property laws." (Scott 2007, p. 1)

TRIPS: An acronym for the WTO's Agreement on Trade-Related Aspects of Intellectual Property Rights, negotiated in the 1986-94 Uruguay Round, which introduced intellectual property rules into the multilateral trading system for the first time (WTO 2007, p. 1).

WTO: An acronym for the World Trade Organisation.
ENDNOTES

1. The form "computer ethics" and "Computer-Ethics" is used interchangeably. Where an author is quoted who uses it in a particular form, that form is kept. No difference in meaning is intended.
2. The case of deciding to pirate or not to pirate computer software as a computer ethics issue is illustrated for each theory.
3. Whilst this is probable, Aristotle places a lot of emphasis on habit, and it should be noted that whereas habits can be formed in youth and continued on into adulthood, it is possible, we must remember, that habits can, and bad habits should, be broken.
4. The term itself originated in 1947 when Norbert Wiener used it to name a discipline apart from, but touching upon, such established disciplines as electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology. Wiener, Arturo Rosenblueth and Julian Bigelow needed a new word to refer to their new concept, and they adapted a Greek word meaning "steersman" to invoke the rich interaction of goals, predictions, actions, feedback and response in systems of all kinds. Early applications in the control of physical systems (aiming artillery, designing electrical circuits and manoeuvring simple robots) clarified the fundamental roles of these concepts in engineering; but the relevance to social systems and the softer sciences was also clear from the start. Many researchers from the 1940s through 1960 worked solidly within the tradition of cybernetics without necessarily using the term. (Pangaro 1991)
5. "The Principle of Freedom – Justice requires 'the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.' The Principle of Equality – Justice requires 'the equality by which what is just for A and B remains just when the positions of A and B are interchanged.' The Principle of Benevolence – Justice requires 'a good will between man and man that knows no limits short of those of humanity itself.' The Principle of Minimum Infringement of Freedom – 'What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom'" (Wiener 1954, p. 106).
6. A good human life, according to Wiener, is one in which 'great human values' are realized – one in which the creative and flexible information-processing potential of 'the human sensorium' enables humans to reach their full promise in variety and possibility of action. Different people, of course, have various levels of talent and possibility, so one person's achievements will differ from another's. It is possible to lead a good human life in an indefinitely large number of ways: as a public servant or statesman, a teacher or scholar, a scientist or engineer, a musician, an artist, a tradesman, an artisan, and so on. (Bynum 2003)
7. Casuistry acknowledges that new situations create new problems; however, it does not say that one should just ignore the old solutions. Rather, one should look at parallel cases, perhaps in related areas, and see how they have been resolved, and examine the basic principles or paradigms as opposed to the rules (since rules are: if x then y; whereas a principle is a suggestion, and a paradigm is a form of guideline), within which one can develop an interpretation of the problem. Casuistry involves finding the appropriate guideline from a parallel example, which then helps one to formulate an ethical response.
8. 1. Thou shalt not use a computer to harm other people; 2. Thou shalt not interfere with other people's computer work; 3. Thou shalt not snoop around in other people's computer files; 4. Thou shalt not use a computer to steal; 5. Thou shalt not use a computer to bear false witness; 6. Thou shalt not copy or use proprietary software for which you have not paid; 7. Thou shalt not use other people's computer resources without authorization or proper compensation; 8. Thou shalt not appropriate other people's intellectual output; 9. Thou shalt think about the social consequences of the program you are writing or the system you are designing; 10. Thou shalt always use a computer in ways that insure consideration and respect for your fellow humans. (Computer Ethics Institute (CEI) 1992)
9. Extending the idea that computer technology creates new possibilities, in a seminal article, Moor (1985, p. 266) suggested that we think of the ethical questions surrounding computer and information technology as policy vacuums. Computer and information technology creates innumerable opportunities. This means that we are confronted with choices about whether and how to pursue these opportunities, and we find a vacuum of policies on how to make these choices. […] I propose that we think of the ethical issues surrounding computer and information technology as new species of traditional moral issues. On this account, the idea is that computer ethical issues can be classified into traditional ethical categories. They always involve familiar moral ideas such as personal privacy, harm, taking responsibility for the consequences of one's action, putting people at risk, and so on. On the other hand, the presence of computer technology often means that the issues arise with a new twist, a new feature, a new possibility. The new feature makes it difficult to draw on traditional moral concepts and norms. […] The genus-species account emphasizes the idea that the ethical issues surrounding computer technology are first and foremost ethical. This is the best way to understand computer-ethical issues because ethical issues are always about human beings. (Johnson 2000, p. 1)
Chapter XIV
Artificial Moral Agency in Technoethics

John P. Sullins
Sonoma State University, USA
ABSTRACT

This chapter will argue that artificial agents created or synthesized by technologies such as artificial life (ALife), artificial intelligence (AI), and robotics present unique challenges to the traditional notion of moral agency, and that any successful technoethics must seriously consider that these artificial agents may indeed be artificial moral agents (AMA), worthy of moral concern. This purpose will be realized by briefly describing a taxonomy of the artificial agents that these technologies are capable of producing. I will then describe how these artificial entities conflict with our standard notions of moral agency. I argue that traditional notions of moral agency are too strict even in the case of recognizably human agents, and then expand the notion of moral agency such that it can sensibly include artificial agents.
INTRODUCTION

The various technosciences of artificial agency, such as artificial life (ALife), artificial intelligence (AI), and robotics, present a rather challenging problem to traditional ethical theories, whose norms rely on an explicit or tacit notion of human personhood, since these entities will share only some, but not all, of the qualities of the humans they will interact with. The field of technoethics must disentangle this problem or be faced with the charge of incoherence. This is due to the fact that technology extends the biological limits of the human agent in such a way that it is often difficult to draw a clear line between the human agent and the technologies she uses. Artificial creations such as software bots, physical robots, and synthetic biological constructs are unlike anything we have encountered yet, and in them something like individual agency is beginning to evolve. This quasi-individual agency is already placing these entities in conflict with the goals and desires of human agents, creating apparently moral
interrelations. What is the nature of these moral interrelations? We have three possible answers to this question (see Sullins, 2005). The first possibility is that the morality of the situation is illusory: we simply ascribe moral rights and responsibilities to the machine due to an error in judgment. The second option is that the situation is pseudo-moral: a partially moral relation, but one missing something that would make the actors fully moral agents. A final possibility is that even though these situations may be novel, they are still real moral interrelations. I argue that technoethics must address this latter possibility.
BACKGROUND

It is not an obvious move to grant moral concern to the nonhuman objects around us. It is common to hold the view that the things we come into contact with have at best instrumental value and that only humans have moral rights and responsibilities. If some nonhuman thing elicits moral concern, it does so only because it is the property of some human through whom these rights extend. This all seems very straightforward and beyond question. But here is my worry: we have been mistaken in the past about our definition of what it takes to be a human moral agent. Historically, women, low caste men and children have been denied this status. We have come to regret these past indiscretions, and it is possible that our beliefs about moral agency are still misguided. Some people may be willing to grant moral rights to animals, ecosystems, perhaps even plants. If machines were shown to be similar to these things, might they not also be reasonable candidates for moral rights? If so, and if these entities acquire agency similar to that of a human, must they also bear moral responsibilities similar to those of a human agent? The answer to the latter question is simple; of course anything that displays human-level agency enough
to satisfy even harsh critics would be a candidate for moral rights and responsibilities, because it would have undeniable personhood, and all persons have moral worth. The possibilities of this happening any time soon, though, are fairly low. But these technologies have made some progress in attaining interesting levels of agency, so what we need to inquire into is whether or not these meager qualities are enough to grant moral agency and worth to artificial agents.
What is an Artificial Agent?

The term "agent" has a number of related, but potentially confusing, meanings. An agent is, most simply, a thing that exerts power; it is the opposite of a patient, which only reacts to or receives the consequences of the actions of an agent. Thus it may be used to talk about a person or a thing that has some causal effect on its environment or other agents. Commonly, it is used to refer to persons who act on others' behalf. These slightly different meanings converge in the definition of the term "artificial agent", which I will use to refer to any technology created to act as an agent, either as a locus of its own power, or as a proxy acting on behalf of another agent. So an artificial agent might have its own goals that it attempts to advance or, more likely, it is created to advance the goals of some other agent. Certain criteria are used by technologists to distinguish autonomous (artificial) agents from other objects:

An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future. (Franklin and Graesser, 1996)

This definition could be used to describe a wide range of artificial entities, some of which could be very minimal and philosophically uninteresting. Franklin and Graesser (1996) go on to list a number of additional qualities that these agents may or may not possess, which leads to a much more complex and interesting taxonomy of artificial agents.
Ideally, the agent should be a continuous process that monitors its environment, be able to communicate with its user or at least with other simple agents, have some sort of machine learning, and be mobile in its environment or able to move to other environments, as well as be flexible in its reactions to stimulus in these environments. It is also a bonus if the artificial agent is affective in its character, interacting with a realistic personality that will ease communications with human agents. These additional qualities are not necessary for artificial agency, but the more of them that can be added to the technology, the better it will be at interacting with its environment and other agents. Franklin and Graesser (1996) use these criteria to outline a taxonomy of autonomous agents that will be useful for our purposes here. I would like to add to their taxonomy the additional caveat that the autonomous agents under consideration are artificial, that is, a product of technology, in order to add focus to our discussion.1 Artificial autonomous agents can be separated into three categories: synthetic biological constructs, robots, and software agents.2 Synthetic biological constructs are not very common today, with the only real examples being the attempts at creating wet ALife entities through synthetic biology (see Rasmussen et al., 2004). Wet ALife is still a long way from achieving the complexity required for full autonomy; these entities are more like single artificial cells than robust agents, but as this technology grows we may see more philosophically challenging agents evolve (see Sullins, 2006). Some robots are already interesting autonomous agents. All robots can be separated into two broad categories, autonomous robots and telerobots, the distinction being that in a telerobot some level of human control of the robot is maintained, while in an autonomous robot the machine's programming is in control of the robot's actions.
Each of these classifications can be further subdivided into automatons, mobile robots, and affective robots. Automatons are things like industrial robots or simple entertainment robots, which lack almost all of the criteria for agency listed above. As such, automatons are unlikely candidates for autonomous agency. Mobile robots can display autonomy, continuous monitoring of their environment, motility, modest levels of communication and flexible machine learning. Affective robots are like mobile robots except that they attempt to simulate emotions and expressions in order to enhance communication with the human agents they interact with. An example would be the robot Kismet, which interacts well with humans (Breazeal et al., 2004; Breazeal 2002, 1999; Sullins, 2008). This makes mobile and affective robots artificial agents, and also makes them potential candidates for artificial moral agency. Finally we turn to software agents, which are AI agents: autonomous programs that inhabit fully virtual environments, be that on a physical computer, a virtual machine,3 or in some form of web technology. Software agents can be broken into these subcategories: task-accomplishing agents, entertainment agents, viruses and ALife agents.4 For example, the infamous Clippit, the animated office assistant that came bundled with Microsoft Office from Office 97 through Office 2003, was an artificial task-accomplishing agent that attempted to autonomously interact with human users of Microsoft Office, but its failings in understanding the needs of the user, along with its overtly artificial and saccharine personality, led to its eventual demise. There are legions of such programs in use today that are designed to help human users, but some are consciously designed to cause harm, such as computer viruses and other malware. These agents are already significantly impacting the lives of those humans that interact with them. A final distinction worth noting in ALife agents is the difference between so-called hard and soft
ALife. Soft ALife uses software agents to model interesting characteristics of living systems, while hard ALife is the quest to create software entities that are functionally alive in the way that a biological creature is said to be alive. Hard ALife, to the extent that it succeeds in its stated goal, is the most interesting for our purposes here, but robust soft ALife entities can be considered autonomous agents given our definition above.
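For readers from a computing background, it may help to see the Franklin and Graesser definition rendered operationally. The following minimal sketch (in Python) is purely illustrative: the class and method names are my own hypothetical inventions, not drawn from Franklin and Graesser or from any actual agent system, but it shows in skeletal form the continuous sense-act loop, in pursuit of an agenda, that distinguishes an autonomous agent from an ordinary program.

import random

class Environment:
    """A trivially simple world: a single scalar 'temperature' the agent can nudge."""
    def __init__(self):
        self.temperature = 20.0

    def sense(self):
        # What the agent perceives of the world at this moment.
        return self.temperature

    def apply(self, action):
        # The agent's action changes the world; the world also drifts on its own.
        self.temperature += action + random.uniform(-0.5, 0.5)

class AutonomousAgent:
    """Senses its environment and acts on it, over time, in pursuit of its own agenda."""
    def __init__(self, goal=22.0):
        self.goal = goal  # the agent's 'own agenda': a target temperature

    def decide(self, percept):
        # Flexible response to stimulus: act to close the gap between what is
        # sensed now and the goal, thereby affecting what is sensed in the future.
        return 0.5 * (self.goal - percept)

if __name__ == "__main__":
    world = Environment()
    agent = AutonomousAgent()
    for step in range(10):  # "over time": a continuous monitoring loop
        percept = world.sense()
        world.apply(agent.decide(percept))
        print(f"step {step}: temperature = {world.sense():.2f}")

What this sketch conspicuously lacks (machine learning, communication with users or other agents, mobility, affect) corresponds to the additional qualities listed above, which is precisely why so minimal an agent sits at the philosophically uninteresting end of the taxonomy.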
What is an Artificial Moral Agent (AMA)?

We can now turn towards a working definition of artificial moral agent (AMA). Simply put, an artificial moral agent is an artificial autonomous agent that has moral value, rights and/or responsibilities. That is easy to say, but I must acknowledge that no traditional ethical theory takes this notion seriously. Still, there are three possible openings for the heresy I am espousing here. The first is the growing field of animal and environmental ethics, where philosophers, inspired by the work of Peter Singer, famously argue that utilitarian theories must take into account all the repercussions of one's actions on all those impacted, even animal agents, and that failure to do so is speciesism (see Singer, 1974; 2006). The second theory that seems to open the door for nonhuman moral agents is found in John Rawls' original position thought experiment, where he asks us to build our theory of justice as if we were rational disembodied minds designing a society that we would have to then inhabit as human persons, but we are to do so under a veil of ignorance where we will not know beforehand which exact embodied humans we will become (see chapter III in Rawls, 1971). The original position is most likely just a literary device for Rawls, but the minds it imagines are certainly nonhuman, purely rational, and they do count as at least theoretical moral agents in his theory of justice.5 The third potential ally for the position taken by this chapter can be found in evolutionary ethics.
In this field we are beginning to find compelling evidence that suggests that our moral reasoning is at least in part the product of evolutionary processes that can be found in nonhuman species and most likely was present in our hominid ancestors. The primatologist Frans de Waal provides much empirical evidence to support the notion that primates have a kind of rudimentary moral reasoning process (DeWaal, 1996, 2006). And the philosopher Elliott Sober and the theoretical biologist David Sloan Wilson have shown that certain proto-ethical behaviors such as altruism can be seen to be rational from an evolutionary standpoint (Sober and Wilson, 1998). This suggests that, at least in principle, moral reasoning may have an evolutionary function, and therefore this adaptation could be found in other species, which would be nonhuman moral agents. If you take these three ideas and mix them together and squint just right while looking at the product, we can start to see a theory of artificial morality taking shape. Peter Singer and his allies give us a forceful argument that animals should be seen as moral patients at the very least. So we have a case for natural nonhuman moral patients. De Waal, along with Sober and Wilson, helps us build a case that moral reasoning, or at least moral instinct, reaches down the phylogenetic tree to species related to humans, blurring the distinction of moral agency being a feature of humans only. This means that at least some of our nonhuman relatives could be more than just moral patients, but agents as well. Of course these are still natural nonhuman moral agents, so we need to take a further step. Finally, if we take moral reason to be functional in some way that does not entirely depend on specific human embodiment, in the manner of Rawls, who suggests we can think about moral values like justice as if we were not a particular human agent, then it is possible that this moral reasoning process could be found in some other natural or artificial entity and function in some way congruent with that of humans. So it is logically possible that artificial moral agents
could exist. These would be artificial autonomous agents that exhibit some functional artificial morality. The philosopher Peter Danielson (1992) describes artificial morality thus: ...the class of entities that can be moral agents is determined by their functional abilities. This basic functionalist doctrine is widely accepted for other capacities, for example, there are functional prerequisites for calculating. The fossil-filled stone I use as a paperweight would make a poor computer. It contains lots of silicon but not in a form that can change and retain states easily. Similarly, we may ask: what sorts of things make good—that is rationally successful—[moral] agents?6 I will take up this question and the implications of its answer later in this chapter. But first we need to ask whether it is possible for us to transform some of our technology in such a way as to create artificial moral agents as I have described them.
Our Technology, Ourselves

At the turn of the last century, John Dewey built his theory of inquiry on the instrumental uses of technology writ large, including not only common tools and machines but logic and language as well, all of which we stand in a transactional relationship with in the discovery and construction of our world (see Hickman, 1990). We should take this cue from Dewey and explore the relations and transactions between humans and their artifacts, and their ontological similarities. The philosopher John McDermott explains this move thus: Artifacts, then, are human versions of the world acting as transactional mediations, representing human endeavor in relational accordance with the resistance and possibility endemic to both nature and culture. (McDermott, 1976, p. 220) Artifacts, then, have meaning neither over and above human meaning, nor in opposition to human
meaning, but instead their value stands in relation to that of humans as informed by the natural and cultural world. In this way, technological artifacts create a space separate from nature that allows human values to be seen in distinction from the dictates of natural necessity. With technology we can pursue goals that are solely our own. But modern information technologies are changing this: technology not only provides a tool for inquiry and thought, it is becoming a locus of inquiry and thought itself. Dewey could see the trajectory of industrial technology but could only dimly imagine the coming postindustrial world of information technology. Technology is no longer simply a tool for thought but increasingly thought itself. McDermott (1976) argues that "...the wider range of consciousness, becomes concretized by the presence of electronic media, with Telstar performing as a global cortex" (p. 221). If this argument was evident in the seventies, how much more true does it ring in light of modern web technologies and the various artificial agents being built via AI, ALife, and robotics? Technoethics asks us to consider the special impact technology has had on issues of ethical concern. In this it takes a stance similar to Dewey's, who argued that solving a problem in ethics is like solving a problem in algebra: it is a practice as much as a theory (Hickman, 1990, p. 111). Mario Bunge wrote his essay Towards a Technoethics (1977) in order to bring ethics under the notice, or even the purview, of technologists. Contrary to my position, he states unambiguously that he believes technological instruments are "...morally inert and socially not responsible" (Bunge, 1977, p. 98). What worries him is that technologists working in a distributed industrial setting have a tendency to see themselves as mere instruments in a technological process, as if this made them immune from having to share in the blame for any social impact the technology they devise might have. Such claims have always
rung hollow, and Bunge set out to show the error of this belief. He argues that technology is about gaining power over various processes, and it is not the case that having power over something is a neutral good, since some interests are served at the expense of others. This means that there are no morally neutral technologies, no technology just for the sake of technology; each technology is built to extend the power and interests of some group or other. Thus technologists are caught up in ethical situations whether they want to think about it or not. "Because of the close relationships among the physical, biological and social aspects of any large-scale technological project, advanced large-scale technology cannot be one-sided, in the service of narrow interests, short-sighted, and beyond control: it is many-sided, socially oriented, farsighted, and morally bridled" (Bunge, 1977, p. 101). So we see that the technologist is certainly a moral agent with rights and responsibilities, but Bunge goes further to claim that technologists provide an interesting model for moving from scientific theory to practical applications, which is something ethicists may learn from. The technologist does not make: ...categorical imperatives but conditionals of the forms If A produces B, and you value B, choose to do A, and If A produces B and C produces D, and you prefer B to D, choose A rather than C. In short, the rules he comes up with are based on fact and value. I submit that this is the way moral rules ought to be fashioned, namely as rules of conduct deriving from scientific statements and value judgments. In short ethics could be conceived as a branch of technology. (Bunge, 1977, p. 103) It is clear that what Bunge means by the term technoethics is the study of ethics as a kind of technology. I would like to extend this argument beyond what Bunge intended. Since Bunge (1977) argues
that we should see human goals (G), the means to them (M), and any side effect(s) of the means (S) as factors in a moral formula, IF M then G and S, which would be non-arbitrary and based on natural or social law, with the results either certain or based on some "fixed" probability. These sorts of laws could be used to form rules such as: to get G and S, do M; to avoid G and S, refrain from doing M. This shows that ethical norms can be rationally produced through the construction of the right set of conditional statements. If one follows Bunge's argument this far, then, contrary to what Bunge himself states, it opens the door to the possibility that at least some of the products of technology can themselves have moral agency, because some forms of technology, notably information technologies, are particularly adept at executing instructions such as the conditionals just mentioned. Given some goal, a properly programmed machine could deduce the means to achieve that goal, mitigated, of course, by a cost-benefit analysis of the potential side effects of the means employed. A machine such as this would be an artificial moral agent, an agent that might operate beyond the control of the technologists that created it. So, briefly put, if technoethics makes the claim that ethics is, or can be, a branch of technology, then it is possible to argue that technologies could be created that are autonomous technoethical agents, artificial agents that have moral worth and responsibilities—artificial moral agents.
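To make the idea concrete, here is a minimal sketch of how such conditionals might be executed by a machine. This is not Bunge's own formalism; the utility numbers, the probability weighting, and the decision rule are illustrative assumptions layered on his schema "IF M then G and S":

```python
# A minimal sketch of Bunge-style practical conditionals, "IF M then G and S."
# Each candidate means M is scored by the value of its goal G minus the cost
# of its side effect(s) S, weighted by the "fixed" probability that the
# law-like conditional holds. All names and numbers are illustrative only.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Means:
    name: str            # the action M
    goal_value: float    # how much the agent values the goal G
    side_cost: float     # how much it disvalues the side effect(s) S
    probability: float   # probability that doing M actually yields G and S

def expected_worth(m: Means) -> float:
    """Cost-benefit score for the rule 'to get G and S, do M'."""
    return m.probability * (m.goal_value - m.side_cost)

def choose_means(candidates: List[Means]) -> Optional[Means]:
    """Pick the best means to the goal, or refrain if none is worth doing."""
    best = max(candidates, key=expected_worth)
    return best if expected_worth(best) > 0 else None  # 'refrain from doing M'

options = [
    Means("build dam", goal_value=10.0, side_cost=7.0, probability=0.9),
    Means("build levee", goal_value=8.0, side_cost=2.0, probability=0.8),
]
chosen = choose_means(options)
print(chosen.name if chosen else "refrain")  # -> build levee
```

Trivial as the sketch is, it illustrates why the extension argued for here is natural: once the conditionals are made explicit, nothing in them requires a human executor.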
IMPLICATIONS

In this section we will look at how artificial moral agency conflicts with traditional conceptions of moral agency and attempt to deal with the various possible counterarguments to ascribing moral agency to things of human construction. We will then conclude this section by looking at arguments from other writers in favor of the more liberal interpretation of moral agency that
would include some AI, ALife, and robotic agents as real moral agents.
A Dilemma and Its Resolution

Personhood, and all that the word implies, is a common prerequisite for ascribing moral agency; any dictionary of philosophy or introductory text will tell you that a moral agent is a person with moral reasoning capabilities. Is it right to presuppose personhood in the way this definition does? Will technologies have to be 'persons' to be considered moral agents? Let's look at this argument closely. The foundational premise is this: an entity is a moral agent only if it is a person with free will and a sound mind. This sounds perfectly reasonable, and it would be hard to find arguments to the contrary. Many millennia of philosophical and theological tradition rest on the assumption that humans, and perhaps God(s), are the only moral agents, so human moral agents must also be persons with free will and a sound mind. Furthermore, autonomous artificial agents, as we have described them in the previous section, may be many things, but it is a stretch to grant them personhood. Autonomous artificial agents do not seem to be the legal or moral equals of human agents, perhaps even being of lesser status than animals and even plants. Their actions are the direct results of programs interacting with inputs and their environment, so it would be impossible to claim any sort of free will for these agents. They also lack the capacities of mind that we so value as humans, such as self-consciousness, experience, and nuanced learning. This has an unfortunate consequence for the status of artificial autonomous agents as moral agents. Since artificial autonomous agents do not possess personhood, free will, or a human-equivalent mind, then by simple deduction they are not moral agents. Case closed; or so it would seem.
There is a problem in that concepts such as personhood, free will, and mind are all notoriously difficult to define, and each has its own constellation of writers furiously composing new hypotheses, resulting in a voluminous literature that would be impossible to survey here or anywhere else. Regardless of where you stand on each of these particular debates, you must concede that reasonable people can disagree on these topics, making it difficult to come up with a definition that will be generally acceptable. We have a tough time determining the status as persons of human fetuses and of humans suffering from persistent comas. Additionally, the more we find out about the cognitive apparatus that produces our thoughts and actions, the less likely it is that humans possess radical free will (see Libet et al., 1999). One might argue, as Daniel Dennett (2003) does in his book Freedom Evolves, for a kind of compatibilist free will which maintains moral responsibility while acknowledging the findings of modern cognitive science and neuroscience. But there is nothing about his theory that could not apply to a certain class of artificial agents as well. Lastly, the criterion of possessing a 'human-equivalent mind' must refer to the agent's ability to reason and to possess good reasons for the behavior it exhibits. It is a difficult task to determine whether a human possesses these abilities, as evidenced by the numerous court cases and trials attempting to determine the guilt and culpability of defendants. It is a task that stretches the ability of modern psychology, since we cannot, as yet, decipher the inner workings of the brain and have to rely on external behavior to infer the root causes of these actions. So it would seem that it is not fully warranted to ascribe personhood, free will, and a sound mind even to human agents, leaving us in the uncomfortable position of having to deny that there are any human moral agents. Given this absurdity we
have to weaken our original claim and remove the requirements of personhood, free will, and a sound mind from the list of attributes of a moral agent. This move allows us to maintain our belief that humans can be moral agents, but it also opens the door for certain other things to be considered moral agents as well.
Some Counterarguments

The most obvious counterargument to ascribing moral agency to artificial agents is based on tradition. This argument finds it implausible that so many theorists in ethics, from the beginning of the discipline to now, would incorrectly determine the necessary and sufficient criteria for moral agency. All the heavy hitters in moral theory (Plato, Aristotle, Aquinas, Hume, Kant, etc.) place special emphasis on human agency in their theories of ethics. Reviewing the traditional literature will lead one to conclude that free will, consciousness, and the ability to understand concepts like right and wrong, good and bad, are prerequisites for moral agency, and that these assets are found only in God and humans (Bringsjord, 2007; Himma, 2007; Irrgang, 2006; Drozdek, 1995). Bringsjord and Himma both recognize the power and possibilities of artificial agents, but they argue that these entities are, and will always be, non-moral agents given their lack of free will. Any moral agency they appear to have is there only through the actions of human programmers, so at best an artificial agent is just an extension of human moral agency. The argument is simply that if the entity in question has free will then it has the ability to be a moral agent; an entity without free will is not a moral agent. This move presents a worse problem than the one it is trying to solve since, as I have argued in another paper: If Bringsjord is correct, then we are not moral agents either, since our beliefs, goals and desires are not strictly autonomous, since they are the products of culture, environment, education, brain chemistry, etc. It must be the case that the philosophical requirement for robust free will, whatever that turns out to be, demanded by Bringsjord, is a red herring when it comes to moral agency. Robots may not have it, but we may not have it either, so I am reluctant to place it as a necessary condition for moral agency. (Sullins, 2006)

An interesting twist on this type of argument comes from Daniel Dennett, who claims that artificial agents could be built that had mens rea, or a guilty state of mind, which includes: motivational states of purpose, cognitive states of belief, or a non-mental state of negligence (Dennett, 1998).7 So an artificial agent might be programmed that could be well aware of its guilt in some negative moral act that it committed, but in order to be a full moral agent Dennett requires that the agent also possess "higher order intentionality," meaning that it can have beliefs about beliefs and desires about desires, beliefs about its fears about its thoughts about its hopes, and so on (1998). Under this view, artificial moral agency is a possibility, but a remote one, as we are nowhere near capable of building such machines. While it is certain that an agent with higher order intentionality as described by Dennett would indeed be a moral agent, as with free will it is not so easy to determine just what is required of an agent to justly claim that it has higher order intentionality. Would we be able to achieve it by simply having the machine recursively reconsider the "beliefs and desires" its lower programming developed and alter them in light of new ethical considerations? This does not seem impossible. It is also the case that many humans do not properly consider their own actions, in fact it seems like only a small minority ever do, so again I am reluctant to place this as a necessary condition for moral agency, since these restrictions may eliminate most humans as well as all artificial agents. Bernhard Irrgang (2006) argues that, "[i]n order to be morally responsible, however, an act needs a participant, who is characterized by
personality or subjectivity." Irrgang distinguishes between purely artificial agents and humans who have become cyborgs through extreme technological body modification. He believes it is not possible for non-cyborg robots, that is, purely artificial agents, to attain subjectivity, making it impossible for artificial robots or other computational agents to be called into account for their behavior (Irrgang, 2006). Irrgang does open a crack in the door, though, through which he rightly allows cyborgs to be moral agents. One does not lose one's status as a moral agent by having a pacemaker implanted, or by putting on glasses, or through any of the myriad other technological enhancements we place in and around our bodies. So this leaves Irrgang having to state at what point technological enhancements move one from the status of cyborg to fully artificial entity, and hence the point where one loses one's moral agency. I agree that personality and subjectivity are important, especially in human-equivalent moral agents, but again these are philosophically loaded terms that one can equivocate on easily, so I would rather look at the details of how an agent expresses perceived personality and subjectivity and make my decision there, rather than rule the agent out of court simply based on its artificial origins. As we will see in the next section, there is a credible alternative to making these judgments based exclusively on the ontology of the agent under consideration.
Floridi and Sanders on Artificial Moral Agency

The most extensive literature in the philosophy of information technology supporting artificial moral agency is that of Luciano Floridi (1999, 2002, 2003), and of Floridi with Jeff W. Sanders (1999, 2001, 2004), of the Information Ethics Group at the University of Oxford. Here I would like to briefly note the highlights of their research, as they are able to move us beyond many of the
counterarguments to artificial moral agency raised in the last section. Floridi (1999) argues that issues in computer ethics strain our traditional philosophical conceptions of the issues faced in ethics and morality. Due to this friction, computer ethics has caused us to see that what is needed is a broader philosophy of information that will allow us to confront these traditional conundrums in a new light (Floridi, 2002). Floridi (2003) sees this as recognition that information is a legitimate environment of its own, with its own intrinsic value something like that of the natural environment, and this acknowledgement makes information itself worthy of ethical concern. Technoethics can be seen as an attempt to use technology to advance the study of ethics, similar to the way Mario Bunge used simple logic to sketch a possible model of ethical analysis. Floridi (2003) helps extend that notion by providing us with a more satisfying model of moral action using the logic of object-oriented programming. His model has seven components: (1) the moral agent a; (2) the moral patient p (or, more appropriately, reagent); (3) the interactions of these agents; (4) the agent's frame of information; (5) the factual information available to the agent concerning the situation that agent is attempting to navigate; (6) the environment the interaction is occurring in; and (7) the situation in which the interaction occurs (Floridi, 2003, p. 3). Note that no assumption is made about the ontology of the agents in the moral relationship modeled. To understand this move we have to look at Floridi's use of levels of abstraction (LoA) when considering a moral action. Anytime we determine a moral agent and patient, the nature we ascribe to them will be dependent on the LoA we adopt: Suppose, for example, that we interpret p as Mary (p=Mary). Depending on the LoA and the corresponding set of observables, p=Mary can be analysed as the unique individual person called Mary, as a woman, as a human being, as an animal,
as a form of life, as physical body and so forth. The higher the LoA, the more impoverished is the set of observables, and the more extended the scope of analysis. As the Turing Test shows, 'erasing' observables raises the LoA, until it becomes impossible to discriminate between two input sources.... At the LoA provided by information analysis (LoAi), both a and p [agent and patient] are information objects. In our example, this means that p=Mary is analysed as an information object that interacts and shares a number of properties with other information objects, like a digital customer profile. It does not mean that a and p are necessarily only information objects. (Floridi, 2003, p. 4) Floridi is aware of the danger here: in opening the door for information objects to be treated as moral agents by abstracting the LoA, one risks abstracting away all morally relevant details, leading to a highly relativistic ethical theory that sees moral worth in just about everything, "...this would clearly fail to make sense of a whole sphere of moral facts and the commonly acknowledged presence of worthless and unworthy patients" (Floridi, 2003, p. 39). To account for this, Floridi and Sanders (2001) give an account of negative moral evaluations of information objects in their paper Artificial Evil and the Foundation of Computer Ethics. They argue that we need to make room for a notion of artificial evil, to describe the maleficent actions of artificial agents, and to some extent also to acknowledge the evil that can be done to artificial agents themselves. This notion extends the traditional notions of moral evil and natural evil commonly found in ethical theory, which distinguish harms caused by other human agents (moral evil) from those caused by natural disasters or accidents (natural evil). Actions of artificial agents, they argue, are not comfortably placed as acts of moral evil, since they lack
the qualities of human agency normally required, nor are they acts of nature, such as a hurricane, given their ontology. This, of course, resonates with early thinking in technoethics: if a bridge collapses, for example, many years after its initial construction, who is to blame? Is this an act of moral evil with prosecutable human moral agents, or natural evil in which no one is to blame? If you recall our earlier discussion of Mario Bunge, he argued that technologists were the locus of moral evil in these sorts of situations, even though they liked to isolate themselves from the long-term effects of a failed design. Floridi and Sanders (2001) find a similar defect in Bunge's position to the one we found earlier in this chapter when it comes to extending this idea to artificial computational agents. To clarify their position, they hold that moral evil is any evil caused by a responsible natural autonomous agent; natural evil is any evil caused by a natural heteronomous agent, such as a flood, an earthquake, or nuclear fission, or by any natural autonomous agent that is not directly responsible for the evil act; and artificial evil is evil produced by an artificial autonomous or heteronomous agent. The question is whether this is a distinction without any real content: are acts of artificial evil just the expression of moral evil, or perhaps natural evil, in another medium? Floridi and Sanders (2001) note that this is the move made by Bunge when he claims that technologists are morally accountable for the technologies they create and that these technologies are at best a tool or intermediary for the expression of the moral intentions of their makers (p. 19). They note that this is similar to the problem of evil found in theological philosophy, where we are unable to make a distinction between moral evil and natural evil if one posits a divine designer of nature who must be finally responsible for the world this ultimate agent has created. If we see the force of that argument, then by analogy Bunge's argument must be valid as well, since it has the same form as the problem of evil argument
(Floridi and Sanders, 2001, p. 19). It is not easy to counter this argument, but Floridi and Sanders agree that when one is talking about artificial heteronomous agents working in conjunction with human agents, it is appropriate to pass the buck of moral blame to the human agents in the system. But we cannot do this when the agent is an artificial autonomous agent as described earlier in this chapter, since these agents: …whose behaviour is nomologically independent of human intervention, may cause AE [artificial evil]. In this case, the interpretive model is not God vs. created universe, but parents vs. children. Although it is conceivable that the evil caused by a child may be partly blamed on their parents, it is also true that, normally, the sins of the sons will not be passed onto the fathers. Indirect responsibility can only be forward, not backward, as it were. (Floridi and Sanders, 2001, p. 20) Here the charge of false analogy is likely to be leveled at Floridi and Sanders: how can they go from talking about artificial autonomous agents, which include very meager entities like software bots and malicious software agents, to equating them analogously with children, who are clearly robust human agents? This does seem like a stretch in reasoning that needs to be explained. I believe Floridi and Sanders are not trying to make the case that there is an exact one-to-one correspondence in this analogy; instead they are pointing out that, under a reasonable level of abstraction, there are two key similarities, autonomy and unpredictability, that children and artificial autonomous agents have in common. They are not suggesting that artificial autonomous agents have the personhood, free will, and robust intentionality that one might ascribe to children. Floridi and Sanders (2004) want to get away from the notions of free will, personhood, and robust intentionality in regard to artificial autonomous agents, as these tend to be a conversation
stopper. Instead they argue for what they call a 'mind-less morality,' which enables us to bypass these tricky traditional notions of moral agency that, as we have discussed at length in previous sections, have proven problematic to ascribe even to humans. They argue instead that, at the proper level of abstraction, if we can see that an artificial autonomous agent's actions are interactive with its surroundings and influenced by internal state changes that make the agent adaptable to new surroundings, then that is all that is needed to ascribe a functional moral agency to the agent in question (Floridi and Sanders, 2004). What is important is that when the actions of an autonomous artificial agent pass a certain threshold of tolerance and cause harm or evil, we can logically ascribe a negative moral value to the agent itself and not simply its creator, just as we would for a child who commits an immoral or evil act. Certainly, we would ask the parent or caretaker of the child or autonomous agent to control their wards, and we can admonish these caretakers for neglect of this duty if harm is caused by their wards, but the moral blame for harm or evil done outside of the caretaker's control can only be placed on the agent that directly propagated it. This needs more explanation, but before we get to that it must be added that autonomous artificial agents also hold a certain appropriate level of moral consideration themselves, in much the same way that one may argue for the moral status of animals, environments, or even legal entities such as corporations (Floridi and Sanders, 2004). This seems a far less contentious claim, though it could be challenged in the same way some challenge the moral worth of anything but human agents. Floridi and Sanders (2004) provide an important caveat to all of this speculation: the artificial autonomous agent must be unpredictable, or set in an environment that keeps its actions unpredictable. An artificial agent whose actions can be predicted with a great deal of accuracy lacks the autonomy needed for moral agency. So, if we
imagine a program that is finely crafted so that it does exactly what the programmer intended, then of course it is the programmer who deserves the moral praise or blame for the actions of the program (Floridi and Sanders, 2004). So for Floridi and Sanders, the artificial autonomous agent is certainly deserving of moral concern; but if, in addition, the agent has true autonomy, meaning its actions within a given environment are determined by state changes internal to itself, and the agent is unpredictable at the normal level of abstraction of the other human and artificial agents interacting with it, its internal state changes being somewhat inscrutable to casual observers, then the artificial agent is quite capable of deserving moral responsibility as well and is a true artificial moral agent.
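The criteria just described lend themselves to a schematic rendering in the object-oriented style that Floridi's model invokes. The sketch below is only a caricature of the levels-of-abstraction machinery: the boolean attributes and the blame-assignment rule are illustrative assumptions, not Floridi and Sanders' formal apparatus:

```python
# A toy rendering of Floridi and Sanders' functional criteria for moral
# agency. The attribute names and the simple boolean test are assumptions
# made for illustration only.

from dataclasses import dataclass

@dataclass
class ObservedAgent:
    interactive: bool   # responds to stimuli from its environment
    autonomous: bool    # can change state without external stimulus
    adaptable: bool     # internal state changes alter how it interacts
    predictable: bool   # observers at the working LoA can foresee its acts

def functional_moral_agent(a: ObservedAgent) -> bool:
    """The 'mind-less morality' test: no personhood or free will required."""
    return a.interactive and a.autonomous and a.adaptable

def locus_of_blame(a: ObservedAgent) -> str:
    """Who answers for a harmful action at this level of abstraction?"""
    if not functional_moral_agent(a):
        return "designer or operator"  # a mere tool: pass the buck upward
    if a.predictable:
        return "programmer"            # finely crafted, foreseeable behavior
    return "the agent itself"          # autonomous, adaptive, inscrutable

thermostat = ObservedAgent(interactive=True, autonomous=False,
                           adaptable=False, predictable=True)
trading_bot = ObservedAgent(interactive=True, autonomous=True,
                            adaptable=True, predictable=False)
print(locus_of_blame(thermostat))   # -> designer or operator
print(locus_of_blame(trading_bot))  # -> the agent itself
```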
Artificial Moral Agents Are the Only Moral Agents Possible

The final, and most radical, position that can be taken on this topic is the argument that humans are deficient moral agents at best, and that a fully rational artificial moral agent is the only thing sufficient to achieve full status as a moral agent. A number of people espouse this position, but I will pick only a few paragon examples for our discussion here. Peter Danielson (1992), in his book Artificial Morality: Virtuous Robots for Virtual Games, explores the age-old question of whether it is rational to be moral, but uses the relatively novel tools of computer modeling to shed new light on this problem. Danielson's book is largely about using computer models to illuminate game-theoretic puzzles regarding the rationality of moral decision making, trying to get beyond the apparent paradox that asks: why be moral when the payoff for immorality seems higher? He does this by building models and model worlds that show that, "…there are moral agents which are rational in the following sense: they successfully solve social problems that amoral
agents cannot solve" (Danielson, 1992, p. 4). He achieves this by conducting multiple experiments using well-known problems like the iterated prisoner's dilemma, and shows the increased fitness of artificial moral agents in these scenarios over artificial amoral agents. The results of these experiments strengthen Danielson's position that rational moral agents are more fit, meaning better at solving social problems, than irrational or amoral agents. There is, however, a sinister modus tollens lurking here: if rational moral agents are able to solve certain social problems, and we are unable to solve those problems, then it follows that we are not rational moral agents. Danielson admits that it is possible that his theories apply best to artificial moral agents, might be extendable only to other artificial agents such as organizations, corporations, etc., and are therefore only indirectly applicable to human agents. For Danielson, humans lack certain qualities necessary for rational moral agency: First, we humans evidently are not cognitively transparent. Second, we may lack the discriminating means of commitment that rational morality requires. Third, we humans cannot readily adapt our commitments, as our emotional mechanisms for fixing dispositions tend to have high inertia and momentum. (1992, p. 200) We may have certain psychological patterns that limit our commitment to, or even our ability to engage in, rational contemplation of moral situations. Similar concerns are raised by the philosopher Eric Dietrich, who argues that evolutionary psychology has shown that we have evolved some very nasty behaviors that are clearly immoral, like sexual abuse, racism, sexism, warfare, etc. We know these are acts of moral evil, but even so they have not disappeared despite millennia of top-notch moral philosophy and religious interventions. If anything, our technology has increased our capacity for creating harm, but Dietrich argues that technology also holds the potential
solution to these problems. "… [L]et's build a race of robots that implement only what is beautiful about humanity, that do not feel any evolutionary tug to commit certain evils, and then let us – the humans – exit stage left, leaving behind a planet populated with robots that while not perfect angels, will nevertheless be a vast improvement over us" (Dietrich, 2001, p. 324). Hans Moravec (1998), the prominent robotics researcher, also argues in his book, Robot: Mere Machine to Transcendent Mind, for a similar potential future in which humans upload only the best of their science and moral theories into rational machines, which will then supersede us and carry only the best we have to offer into future millennia, leaving our evolved evil tendencies as a historical relic. The obvious counterargument here is the worry that machines, even machines of loving grace, are not free and therefore not moral, even if their actions are beneficent; couldn't they just as easily be programmed to be the avatars of ultimate moral evil? The physicist Joseph Emile Nadeau counters this argument by claiming that, on the contrary, an action is a free action if and only if it is based on reasons fully thought out by the agent, and that only an agent that operates on a strictly logical theorem prover can thus be truly free (Nadeau, 2006). If free will is necessary for moral agency and we humans have no such apparatus operating in our brains, then, using Nadeau's logic, we are not free agents. Androids, robots, and other artificial agents, on the other hand, can be programmed explicitly in this manner, so if we built them, Nadeau believes they would be the first truly moral agents on earth (Nadeau, 2006).8 This moral rationalist position is simply the belief that the more rational and cognitively capable the agent, the more moral that agent's actions will be, since moral actions are rational actions: …[T]he reason we have gotten better is mostly because we have gotten smarter. In a surprisingly
strong sense, ethics and science are the same thing. They are collections of wisdom gathered by many people over many generations that allow us to see further and do more than if we were individual, noncommunicating, start-from-scratch animals. The core of a science of ethics looks like an amalgam of evolutionary theory, game theory, economics, and cognitive science. (Hall, 2007) It is not necessary to make the leap that, given the truth of this proposition, humans are irretrievably evil and need to be replaced by artificial moral agents; one might instead take Peter Danielson's next step and conclude that "[a]rtificial morality may lead us to discover techniques of communication and commitment that are morally effective but unavailable to unaided humans" (1992, p. 201). Thus humans are not morally doomed to self-destruction, but should work to use technoethics to technologically enhance their primitive, but not insignificant, moral inclinations.
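Danielson's game-theoretic experiments can at least be gestured at in code. The following toy round-robin tournament is not his model; it merely shows, under the standard prisoner's dilemma payoffs and an illustrative population mix, how a conditionally cooperating 'moral' strategy outscores an unconditionally defecting 'amoral' one:

```python
# A toy iterated prisoner's dilemma tournament in the spirit of Danielson's
# experiments (not his actual models). Payoffs are the standard T=5, R=3,
# P=1, S=0; the population mix and round count are illustrative assumptions.

from itertools import combinations

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):    # 'moral': cooperate, then reciprocate
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):  # 'amoral': defect no matter what
    return "D"

def play(a, b, rounds=10):
    """Score one pairing; each strategy sees only its opponent's history."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(history_b), b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

players = {"TFT-1": tit_for_tat, "TFT-2": tit_for_tat, "AllD": always_defect}
scores = dict.fromkeys(players, 0)
for (n1, f1), (n2, f2) in combinations(players.items(), 2):
    s1, s2 = play(f1, f2)
    scores[n1] += s1
    scores[n2] += s2
print(scores)  # -> {'TFT-1': 39, 'TFT-2': 39, 'AllD': 28}
```

The point of such toys, which Danielson's far richer models develop, is that constrained, morally disposed agents can capture gains of cooperation that straightforward maximizers cannot.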
FUTURE TRENDS

There has already been some small success in building artificial moral agents, such as those programmed by Peter Danielson and others for the exploration of simple moral problems. As this field progresses we can expect more and more robust artificial moral agents that will take the form of household and other service robots coexisting with us in our living space, synthetic biological agents, and software agents acting together with us online. If these technologies are to operate in an intelligent manner, then they must take note of the behaviors, wants, and desires of their owners. This personal data might include sensitive or embarrassing information about the user that could become known to anyone with the skill to access it. So it is imperative that these agents be programmed with the ability to act
as competent moral agents in order to navigate the tricky social situations they will find themselves in. Mark Weiser, one of the early researchers in ubiquitous computing, suggested that when designing artificial agents that will work closely with human operators, "…we must work with a sense of humility and reverence to make sure these devices enhance the humanness of our world, advancing our cherished values and even our spirituality, rather than just focusing on efficiency, and obtrusive sterility" (Weiser, 1999).9 Successful artificial moral agents will also begin to be more than just tools to those who interact with them. Because of this, these agents must be designed in such a way that they can act in a sociable and friendly manner.10 Neil Gershenfeld of the MIT Media Lab has sketched out the proper relationship we should have with the information technology around us. He claims that (1) we should have the information we need when, where, and in the form that we want it; (2) we should be protected from sending and receiving information we do not want; and (3) we should be able to use technology without attending to its needs (Gershenfeld, 1999, p. 102). Additionally, things have the right to (1) an identity, (2) access to other objects, and (3) the ability to detect the nature of their environment: "[t]aken together, these rights define a new notion of behavior, shared between people and machines, that is appropriate for a new era" (Gershenfeld, 1999, p. 104). It is clear that more work needs to be done in order to determine just how to achieve these goals. Colin Allen, Gary Varner and Jason Zinser (2000) explain the problem clearly in their paper, Prolegomena to Any Future Artificial Moral Agent: Attempts to build an artificial moral agent (henceforth AMA) are stymied by two areas of deep disagreement in ethical theory. One is at the level of moral principle: ethicists disagree deeply about what standards moral agents ought to follow. Some hold that the fundamental moral norm is the principle of utility, which defines right actions and policies in terms of maximizing
aggregate good consequences, while others hold that certain kinds of actions are unjustifiable even if a particular token of the type would maximize aggregate good. The other level of disagreement is more conceptual or even ontological: apart from the question of what standards a moral agent ought to follow, what does it mean to be a moral agent? This chapter and other similar works have clarified what a moral agent is and what it takes for an artificial autonomous agent to become one, but the question of exactly which moral standards these moral agents should be programmed to follow is still open. This question now has its own field of study, called machine morality, and we can look for many new developments here in the near future.
CONCLUSION

In this chapter I have carefully outlined the issues surrounding the acceptance of artificial moral agents. As we have seen, technoethics provides us with all the conceptual tools we need to open the door for the possibility of accepting the reality of artificial moral agents. I argued that technoethics must address this possibility earnestly if it is to be anything more than just an unsurprising extension of traditional ethical systems. If we do take it seriously, then technoethics stands to be an important plank in the new information ethics.
REFERENCES

Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental and Theoretical Artificial Intelligence, 12, 251-261.

Breazeal, C., Brooks, A., Gray, J., Hoffman, G., Kidd, C., Lee, H., Lieberman, J., Lockerd, A., & Mulanda, D. (n.d.). Humanoid robots as
cooperative partners for people. Retrieved August 2006 from http://robotic.media.mit.edu/Papers/Breazeal-etal-ijhr04.pdf

Breazeal, C. (2002). Designing sociable robots. Cambridge, MA: MIT Press.

Breazeal, C. (1999). Robot in society: Friend or appliance? Proceedings of the 1999 Autonomous Agents Workshop on Emotion-Based Agent Architectures, Seattle, WA, pp. 18-26.

Bringsjord, S. (2007). Ethical robots: The future can heed us. AI and Society (online). Retrieved March 13, 2007, from http://www.springerlink.com

Bunge, M. (1977). Towards a technoethics. The Monist, 60, 96-107.

Danielson, P. (1992). Artificial morality: Virtuous robots for virtual games. London: Routledge.

Dennett, D. (2003). Freedom evolves. New York: Penguin Books.

Dennett, D. (1998). When HAL kills, who's to blame? Computer ethics. In D. Stork (Ed.), HAL's legacy: 2001's computer as dream and reality (pp. 351-365). Cambridge, MA: MIT Press.

DeWaal, F. (1996). Good natured: The origins of right and wrong in humans and other animals. Cambridge, MA: Harvard University Press.

DeWaal, F. (2006). Primates and philosophers: How morality evolved. Princeton, NJ: Princeton University Press.

Dietrich, E. (2001). Homo sapiens 2.0: Why we should build the better robots of our nature. Journal of Experimental and Theoretical Artificial Intelligence, 13(4), 323-328.

Drozdek, A. (1995). Moral dimension of man in the age of computers. Lanham, MD: University Press of America.

Floridi, L. (1999). Information ethics: On the philosophical foundation of computer ethics.
ETHICOMP98, the Fourth International Conference on Ethical Issues of Information Technology. Retrieved August 2007 from http://www.wolfson.ox.ac.uk/~floridi/ie.htm

Floridi, L. (2002). What is the philosophy of information? Metaphilosophy, 33(1/2).

Floridi, L. (2003). On the intrinsic value of information objects and the infosphere. Ethics and Information Technology, 4(4), 287-304.

Floridi, L., & Sanders, J. W. (1999). Entropy as evil in information ethics. Etica & Politica, Special Issue on Computer Ethics, I(2).

Floridi, L., & Sanders, J. W. (2001). Artificial evil and the foundation of computer ethics. Ethics and Information Technology, 3(1), 55-66.

Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379.

Franklin, S., & Graesser, A. (1996). Is it an agent, or just a program? A taxonomy for autonomous agents. Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages. Springer-Verlag.

Gershenfeld, N. (1999). When things start to think. New York: Henry Holt and Company.

Hall, J. S. (2007). Beyond AI. New York: Prometheus Books.

Hickman, L. A. (1990). John Dewey's pragmatic technology. Bloomington, IN: Indiana University Press.

Himma, K. E. (2007). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? In L. Hinman, P. Brey, L. Floridi, F. Grodzinsky, & L. Introna (Eds.), Proceedings of CEPE 2007: The 7th International Conference of Computer Ethics: Philosophical Enquiry (pp. 163-180). Enschede, The Netherlands: Center for Telematics and Information Technology (CTIT).
Irrgang, B. (2006). Ethical acts in robotics. Ubiquity, 7(34). Retrieved from www.acm.org/ubiquity

Libet, B., Freeman, A., & Sutherland, K. (Eds.). (1999). The volitional brain: Towards a neuroscience of free will. Thorverton, UK: Imprint Academic.

McDermott, J. J. (1976). The culture of experience: Philosophical essays in the American grain. New York: New York University Press.

Moravec, H. (1998). Robot: Mere machine to transcendent mind. New York: Oxford University Press.

Nadeau, J. E. (2006). Only androids can be ethical. In K. Ford & C. Glymour (Eds.), Thinking about android epistemology (pp. 241-248). Menlo Park, CA: AAAI Press; Cambridge, MA: MIT Press.

Rasmussen, S., Chen, L., Deamer, D., Krakauer, D., Packard, N., Stadler, P., & Bedau, M. (2004). Transitions from nonliving to living matter. Science, 303, 963-965.

Rawls, J. (1999). A theory of justice (original work published 1971). Cambridge, MA: Harvard University Press.

Singer, P. (1974). Animal liberation: A new ethics for our treatment of animals. New York: Avon.

Singer, P. (Ed.). (2006). In defense of animals: The second wave. Malden, MA: Blackwell.

Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: Harvard University Press.

Sullins, J. P. (2005). Artificial intelligence. In C. Mitcham (Ed.), Encyclopedia of science, technology and ethics (Rev. ed.). Macmillan Reference Books.

Sullins, J. P. (2006a). Ethics and artificial life: From modeling to moral agents. Ethics and Information Technology, 7, 139-148.

Sullins, J. P. (2006b). When is a robot a moral agent? International Review of Information Ethics, 6(December), 23-30. Retrieved from http://www.i-r-i-e.net/

Sullins, J. P. (2008). Friends by design: A design philosophy for personal robotics technology. In P. E. Vermaas, P. Kroes, A. Light, & S. A. Moore (Eds.), Philosophy and design: From engineering to architecture. Dordrecht: Springer.

Weiser, M. (1999). The spirit of the engineering quest. Technology in Society, 21, 355-361.
KEY TERMS

Artificial Autonomous Agent: An autonomous agent whose ontology is removed significantly from the natural world but who nevertheless resembles natural autonomous agents in its ability to initiate events and processes.

Artificial Moral Agent (AMA): A moral agent whose ontology is removed significantly from the natural world.

Level of Abstraction (LoA): The level of complexity from which the observer views the system under consideration. Higher levels of abstraction provide the observer with fewer details, while lower levels of abstraction provide much more detail of the operations of the system.

Malware: Software or software agents that are consciously programmed and set loose to create evil effects in the operation of information technology. A computer virus is an example of malware.

Moral Rationalism: The philosophical position that holds that moral agents must be fully and completely rational in order to maintain their status as moral agents. Typically, those who hold
this position deny that human agents can achieve this requirement, meaning that only AI or ALife agents could be true moral agents.

Synthetic Biological Constructs: Artificial life entities created by basic chemical processes; an example would be synthetic living cells created entirely from such processes and not via genetic engineering.
ENDNOTES

1. With biotechnology the distinction between natural and artificial may appear blurred: is a human who has had some sort of new gene therapy to remedy a biological malady now an artificial agent? I would like to set that question aside for now, as I believe it is unproblematic to see such a person as still a moral agent regardless of their status as a natural born human.
2. I am breaking with Franklin and Graesser here; they call the third category "computational agents," under which they list software agents as a subcategory. It can be argued under computationalism that all of the artificial agents discussed so far are computational in the broadest sense, so in order to be less confusing I have changed the taxonomy to reflect this.
3. A virtual machine is a computer simulated on a computer; essentially all software are virtual machines, but these can be multiply embedded: software that runs embedded in other software, etc.
4. This is largely taken from Franklin and Graesser (1996), but I have altered their classification a bit, dropping ALife agents a level below software agents and changing task specific agents to task accomplishing agents in order to account for everything from search bots that comb the web to complete some specific task to mixed initiative agents that provide more multivariant assistance for their users.
5. For more discussion on this see Danielson, 1992.
6. I am certain that Danielson means 'moral agents' here and is not switching the conversation to agency in general as the original text 'agents' might suggest.
7. The following arguments are modified from Sullins (2006b).
8. This section paraphrased from Sullins (2006b).
9. This section paraphrased from Sullins (2005).
10. For a more complete discussion of this point see Sullins (forthcoming).
Chapter XV
Ethical Controversy over Information and Communication Technology Pilar Alejandra Cortés Pascual University of Zaragoza, Spain
Abstract

'What positive and negative aspects are perceived of Information and Communications Technologies (ICT)?' and 'What dilemmas arise regarding these technologies?' are the two questions addressed in this chapter. First, we conduct a theoretical analysis comparing various ICT characteristics from two different perspectives: the positive and the negative. Second, we present the results of two work modules conducted in the Observational Laboratory on Technoethics for Adults (OLTA) project, already explained in the previous chapter, with an inference towards intervention.
to my nephews
Introduction

It is important to determine (Edwards, 1997; Rumbo, 2007) what type of learning society we want, that is: a manner of socialization into community rules and practices (distinguishing between advantaged and disadvantaged students); a way of providing competitive individuals to the neoliberal labor market; or a network of continuous learning influenced by information technology. Faced with
these three stances, a possibly eclectic fourth, in our opinion, might be a learning community based on educating all people to become future citizens and to adapt to the labor situation; in this regard, ICT can help achieve this aim. It is not a matter of painting an entirely positive or negative picture of ICT, but considering both sides allows the ethical reflection on these technologies that is necessary in different fields (Hogue, 2007), and that we are invited to undertake by, for example, UNESCO, which has considered it a priority since 2002
(Ten Have, 2006). Although the medical field is the one that has worked most on healthcare ethics and bioethics, a lack of studies has been criticized even there (Marziali et al., 2005). Nanotechnology, the current scientific revolution of technologically manipulating molecular structures and atoms at the nanoscale, is likewise provoking axiological controversy (Ebbesen et al., 2006) in areas such as genetic engineering, stem cell research, and transgenic products. This reality points to the need for a deeper understanding of the being and acting of ICT.
The yin and yang of ICT

Martins and García (2003) are of the opinion that ICT can facilitate the passage from the knowledge society to a wisdom society; a dilemma thus surrounds ICT, poised between technological optimism and pessimism. Obviously the third environment or space created by ICT presents a series of risks (Echeverría, 2004) in various areas (technological, economic, military, political, social, legal, ecological, moral, religious, aesthetic, epistemic and basic), although, extending the author's thesis, it also provides relevant macrosocial and individual possibilities. Perhaps we should aim for an intermediate Aristotelian position, which would be wise indeed. Certainly, the exclusive and extremist principle of absolute precaution before ICT (Luján and López, 2002) is not the right course; rather, a more eclectic one should be adopted. In this regard, for example, García (1998a, b) proposes seven "virtues" and seven "sins" of ICT (the basis for the first activity proposed in this chapter). It is also very interesting to review the ERIC database (2007), with references to both the benefits and the detrimental aspects of technologies, as well as the paper by Luján and Todt (2007), which draws on the perceptions of citizens themselves to analyze the relationship between the role of science and technology and values.
ICT are a service spread worldwide, but one not yet reaching everyone, or not reaching everyone in the same manner (Benadou, 2005); that is, here too there is social inequality, which Castells (2002, p. 22) has summarized in two ideas: first, "web connection and flexibility allow connecting with that which is valuable and reject that which is valueless, either people, companies or territories"; and second, "the extreme underdevelopment of technology infrastructures in most parts of the world is an essential barrier towards development". If in 2000 only 3% of the world population used the Internet, by 2007 the figure was 16.6%.1 This increase is positive, but certain clarifications must be made, such as that in Canada or the United States the percentage reaches 70%, yet in most countries in Africa it barely amounts to 4%; or that Asia (35.6%) and Europe (28.6%) present the highest share of users compared to the rest of the world. Discrimination can also be observed by gender:2 in childhood girls (76%) use computers more than boys (71%), but in adulthood this trend is reversed (60% of women and 70% of men), due to a great extent to women's greater family obligations, which condition the time available to devote to ICT compared to men. In Spain, for example, the profile of an Internet user is male, between 35 and 36, residing in a province capital, and only 10% access the Internet through broadband, according to 2006 data.3 All this evidences a digital gap which, in our opinion, must be overcome in order for countries to progress equally, both internally and in relation to one another. We refer to Castells (2002) and his Marshall Plan proposal for the Information Era, with various strategic recommendations such as a social economy based on high technology for expert on-line systems in healthcare and distance education, avoiding the bottleneck in information and technology education, or preventing the brain drain from developing countries by extending quality networks worldwide.
Let us now analyze the use of ICT. This can be represented by a line which, at one end, indicates abuse and, at the other, underuse (see Figure 1). Obviously the best is a balanced use of ICT, in the middle of this line; nonetheless, humans can fall at any point along it. Thus, cases of addiction have been detected (Fisher and Wesolkonski, 1999), especially in people with a certain psychological profile: insecure, with low self-esteem, and obsessive. Primarily among adults, a profile has been detected of an individual who appears to others to be in control and who adopts all the latest technologies. The authors point out that 10% of technology users are addicted to them. Partly as a result of this, when technological problems arise (not being familiar with certain software, loss of an electronic document, etc.), such users feel frustrated (Lazar et al., 2006). At the same time, despite being connected, ICT users can fall into personal isolation. Indeed it is a paradox that, across the various social networks and sites for social gathering, over 300 million people are surfing worldwide; especially on MySpace (Murdoch) and YouTube (Google), youths and adults communicate through a real or imaginary identity and yet may fail to establish the slightest personal relationship. However, the media and the Internet also act as channels to establish friendship and love relations, and to create another personality or alter ego in order to play on the web, as, for example, in Second
Life,4 where every day over a million people lead digital lives. Other issues that may arise with ICT are that they take time away from other activities, such as reading or physical exercise, or that they decrease users' creativity. In any case, conclusions in this regard are diverse; on the influence of television on viewers, for example, there are studies pointing both to the positive and to the negative aspects of the medium (Medrano and Cortés, 2007). With regard to underuse, on the other hand, people who decide not to employ ICT miss the option of finding information quickly and abundantly under flexible time-space coordinates, relevant features of these resources. At the same time, some individuals may have difficulty finding a job or being promoted if they lack the technical and methodological skills regarding technologies that are so relevant nowadays (Organisation for Economic Co-operation and Development, 2005). Technologies have also made working life easier through e-learning and telecommuting. ICT have likewise inspired philanthropic initiatives as they expand. One example is that of Michail Bletsas, who has developed the One Laptop per Child (OLPC)5 project, endorsed by the MIT education center and the companies Intel and Advanced Micro Devices. It entails enabling disadvantaged children to obtain a low-cost laptop (the XO solidarity laptop) through governments and non-government organizations. It is starting to be implemented
Figure 1. Some of the consequences of ICT use. Abuse: addiction, isolation, passivity, time taken away from other activities, diminished originality. Underuse: missing information, labor problems.
in Uruguay, Brazil, Argentina, Libya, Rwanda and Thailand. Another example is the World Ahead program (2006-2011), carried out by Intel, with the objective of raising 1 billion dollars for 1 billion people in five years to reduce the digital gap. So far we have presented a brief overview of the positive and negative aspects of technology and media resources; what is important is that we are aware of this double perspective. Some practical proposals reinforce this line. Sadler and Zeider (2005) work with university students of genetic engineering on how to apply moral reasoning (through dilemmas) to experiments conducted in class, in order to ensure that, alongside scientific reasoning, emotional and ethical elements are also considered. Sadler et al. (2006) explore the use that science teachers make of ethical aspects: they interviewed teachers at 22 high schools in the United States and classified them into several profiles, from those least committed to the relation between science and ethics to those closest to this link. Another practical contribution is the Integrating Computer Ethics Across the Curriculum Workshop project⁶ (Ben-Jacob, 2005), in which various teachers address ethics and technology through educational modules in several areas: business and cultural studies; interdisciplinary studies; legal studies; health science; computer science and engineering; mathematics and the natural and physical sciences; and the social sciences and humanities. We also consider interesting a proposal from Sweden (Alexandersson and Runesson, 2006), where secondary school students analyzed international wars through the Internet, and an educational practice developed in business administration colleges in the United States (Leonard and Haines, 2007) that virtually presents five dilemmas on technological situations for students to decide on.
Main focus
Finally, a series of conclusions critical of ICT are proposed, alongside others holding them blameless and making individuals responsible for an ethical and proper use of technologies and communications. After all, as Luppicini (2005) states, technology reflects the values of those who make and use it. And certainly, we maintain that educational centers at all levels should offer a course, either in the classroom or online, on technology and ethics, along the lines advocated by Olcott (2002). The understanding and balanced use of ICT are issues that formal and informal education must address, because these resources are a part of people's everyday life.
Future trends: Two intervention modules from the "Observational Laboratory on Technoethics for Adults" (OLTA)⁷
This project was explained in the previous chapter. OLTA developed a series of modules, and here we describe two of them: "Debate on the positive and negative aspects of technologies" and "ICT-related dilemmas."
Debate on the Positive and Negative Aspects of Technologies
In the first module, in which 79 adults participated (46 males and 33 females), an item-by-item analysis (very significant, since it listens to the voice of those involved) was conducted of 15 ideas on ICT, on which participants had to individually express their level of agreement or disagreement and the reason for the option chosen. Thus:
1. They generate or enhance social inequality. 77% agree and 23% disagree. The reasons given were the following: those who agree, because technology discriminates between those who have money and those who do not, and between the educated and the uneducated; those who disagree, because ICT help developing countries quickly reach the level of developed ones, they make life more affordable, and there has always been discrimination.
2. They are at the service of globalization. 65% agree and 35% disagree. The former, because they encourage monopoly and profits and business benefits only for the rich. The latter, because they also help the poor develop, and can end the capitalist view.
3. They encourage passivity. 55% agree and 45% disagree. The former, because people become "zombies": they encourage uncertainty, real disinformation, exaggerated comfort, mental alienation and an inability to select information. The latter, because they help to find a job, resolve everyday problems easily and favor mental activity; immobility depends on the person.
4. They are bestowed with a special magic. 63% agree and 37% disagree. Those who agree feel they lead to deception, favor financial profits, increase disinformation, abuse power to mobilize the masses, produce a feeling of being fabulous for using ICT, and create more problems than they solve. Those who disagree feel people have the capacity to decide, that ICT help educate people, and that people are capable of controlling the machine.
5. They enhance personal frustration. 65% agree and 35% disagree. The former, because there is a lack of technical training, they diminish self-esteem, and they encourage isolation from private life. The latter, because once you learn how to use them, frustration vanishes.
6. They are too attractive and, therefore, take up too much time and limit socializing. 74% agree, while 26% disagree. The former, because they perceive they have fewer personal contacts when they use and abuse ICT. Those who disagree consider the situation is under control and that they can determine when and for how long to use ICT.
7. They promote isolation or individuality. 62% agree and 38% disagree. In the first case, adults agree because ICT generate a technological vice, waste time and take away family time, promoting a direct lack of communication with others and enhancing loneliness. Those who disagree say they help people communicate and learn independently.
8. They are an easy resource for people with socialization problems. 67% agree, since you shut yourself in your own world, while 33% disagree, because they can also help people with socialization problems.
9. They are not used to the fullest advantage: their full potential is not realized and they end up being a routine. 48% agree and 52% do not. The first group reason that they promote boredom, do not teach properly or to their full potential, and make users waste time. Those who disagree say they make work easier and help people meet.
10. They are a means to communicate political power. 62% agree and 38% disagree. The former, because there is subliminal control by politicians' will, exercised mainly through the mass media. The latter say they also help citizens learn about the political landscape.
11. They make subjects feel they are always outdated. 75% agree with this idea, although 25% do not. The former explain that ICT demand constant technological renewal, progress and an extreme eagerness to be up to date. The latter reason that in reality there are not that many breakthroughs brought about by ICT; rather, the same contents appear under different formats.
12. They take away time from important human development activities. 68% agree and 32% disagree. The first position is justified because they create addiction and, since they are so easy, they make people move less and carry out fewer intellectually demanding tasks. The second group base their disagreement on the fact that working with technology is also a very useful activity which can be compatible with other, less sedentary activities; after all, it depends on the person.
13. If you are not a part of the information technology society, you feel you are not of this world. 55% agree, while 45% do not. The former, because we have become so dependent that those who are not a part of ICT seem outcasts. Those with a different view point out that technology can be dispensable and that it depends on personal decisions.
14. They encourage alienation. 69% concur, since there are already some technologies we could not live without; 31% feel the opposite, because they make life easier, especially in aspects that seem tedious.
15. They limit creativity and originality. 55% share this idea, because users stick to what they know and do not want to learn any further; 45% disagree, because ICT offer tools for working cognitive and creative competences that are more difficult to exercise with other means.
In general, we can state that the first idea is the one with the most agreement: technologies do not reduce the differences between groups, but rather increase them, since economic development marks the boundary between developing and developed countries, and this boundary is stressed by technologies. In second and third place, participants agree with the feeling of being outdated and with the idea that ICT take time away from socializing. They do not agree with the idea that ICT are not used to the best advantage. And, above all, worth highlighting is the opinion expressed at times that it depends on the person.
ICT-Related Dilemmas
In the second module, on dilemmas related to ICT, participants were presented, through interviews, with hypothetical dilemmas and later asked for a real one; 82 subjects replied (52 women and 30 men). The hypothetical dilemmas served as a pretext for subjects to approach a sociocognitive and socioaffective conflict forcing them to choose between opposing values, that is, a dilemma. The situations were the following (Cortés, 2004).
John, His Father and the Internet
When a boy named John gets home, the first thing he does is turn on the television; he then goes to his room to play on the computer and surf the Internet. One day his father suggested going for a walk to visit a relative, but John had arranged to chat with friends on the web. John decides to stay home to "talk to his friends".
1. Should John go with his father? Why?
2. Should John stay home to chat on the computer? Why?
3. Is it a good or a bad thing that he stays home to chat with his friends? Why?
Sonia and Computers
There is a teacher called Sonia who, when she does not know what to do with her students and wants to do something innovative and motivating, takes them to the computer room or to watch a video without any guidelines. Sonia knows it is not the right thing to do, but she does not dare to admit that she does not know about information technology applied to education, nor does she know how to use videos properly.
1. Is it right or wrong for Sonia to do this with her students? Why?
2. Should Sonia bring up her situation with the school's heads or staff? Why?
3. What reasons could she have for not wanting to talk about it?
Patricia's Advertisement
Patricia is an advertising executive who needs to do a television commercial, of high commercial impact, on a new videogame whose character is a fabulous hero. The creators of the videogame demand that some of its features be magnified, almost lied about. Patricia feels she should be more objective with the product but, on the other hand, she is a part of the fantastic and iconic world of advertising and television.
1. What should Patricia do? Why?
2. Is it right or wrong for Patricia to do the ad under the conditions established by the advertisers? Why?
3. Do you think that, because it is a commercial, one can be subjective and deceive consumers?
The results of the above dilemmas are not presented here, but we do provide those described by the adult students themselves. The real dilemmas were classified, through an inter-judge analysis among three people, into five categories: (1) credibility of the mass media (dilemmas contrasting values of objectivity vs. subjectivity or sensationalist manipulation of the mass media), with 37%; (2) quality of life at work (contrasting values on whether time is wasted or gained for work tasks with ICT), with 27%; (3) health (whether to use ICT or not because of the effect of technology waves on humans), with 18%; (4) social discrimination (questioning whether ICT are a synonym of power or whether they can enhance equality because they help everybody, that is, whether they are for the whole of humanity or only for the enrichment of a privileged few), with 10%; and (5) usefulness of the Internet (whether or not to have Internet at home or at work, due to questions about its efficiency, speed, ease of use, etc.), with 8%.
Conclusion: A proposal to work on technoethics in education
Throughout this chapter we have strived to convey essentially three connected ideas: first, the importance of a realistic perception of ICT; second, the user's responsibility; and third, the inclusion of technoethics education (for example, through the two modules of the OLTA project explained above). With the first idea we acknowledge that, although ICT objectively do not reach the world population in an identical manner, there are supportive initiatives seeking equality in this regard, and it is also interesting to analyze how subjects use these technological and virtual resources. This use may or may not be problematic in two areas, underuse and overuse, and this depends in most cases on user autonomy regarding ICT. In order for this critical and selective autonomy to exist regarding the how and the why of information and communication technology media, technoethics education is endorsed, which gradually builds up knowledge and praxis (Sadler and Zeider, 2005; Ben-Jacob, 2005; Sadler et al., 2006; Alexandersson and Runesson, 2006; Leonard and Haines, 2007). More initiatives are observed in secondary and higher education, but we advocate for them to reach all other levels, such as adult education (Cortés, 2005). In this regard, the two practical contributions of the modules can be used by educators with their students, either as is or adapted to the recipients. This emphasis on technoethics, focused essentially on ICT, must also be linked to an axiological analysis of technoscience, understood as the impact of technological advances on science, following Echeverría (2001): as this author expresses, an action or artifact is good (a metavalue) if, among other aspects, it respects ecological, human, political, social and legal values (privacy, security, autonomy, spread of knowledge, etc.) and, what interests us most here, ethical values (freedom of conscience, dignity, respect for beliefs, honesty, etc.). In any case, in this chapter, as in the other one (Educational technoethics applied to career guidance), we have tried to present contents that help situate the discussion and provide resources for researchers working in this area.
References
Alexandersson, M., & Runesson, U. (2006). The tyranny of the temporal dimension: Learning about fundamental values through the Internet. Scandinavian Journal of Educational Research, 50(4), 411-427.
Benadou, R. (2005). Inequality, technology and the social contract. Handbooks in Economics, 22, 1595-1638.
Ben-Jacob, M.G. (2005). Integrating computer ethics across the curriculum: A case study. Educational Technology & Society, 8(4), 198-204.
Castells, M. (2002). Tecnologías de la Información y de la Comunicación y desarrollo social. Revista de Economía Mundial, 7, 91-107.
Cortés, P.A. (2004). Una mirada psicoeducativa de los valores. Seminario aplicado a las nuevas tecnologías de la educación. Zaragoza: Prensas Universitarias de la Universidad de Zaragoza.
Cortés, P.A. (2005). Las preconcepciones sobre la tecnoética en los adultos. Revista Mexicana de Psicología, 22(2), 541-552.
Ebbesen, M., Andersen, S., & Besenbacher, F. (2006). Ethics in nanotechnology: Starting from scratch? Bulletin of Science, Technology & Society, 26(6), 451-462.
Echeverría, J. (2001). Tecnociencia y sistema de valores. In J.A. López & J.M. Sánchez (Eds.), Ciencia, tecnología, sociedad y cultura (pp. 221-230). Madrid: Biblioteca Nueva.
Echeverría, J. (2004). Los riesgos de la globalización. In J.L. Luján & J. Echeverría (Eds.), Gobernar los riesgos. Ciencia y valores en la sociedad del riesgo (pp. 187-205). Madrid: Biblioteca Nueva.
Edwards, R. (1997). Changing places? Flexibility, lifelong learning and a learning society. New York: Routledge.
ERIC (2007). Effects of technology. Retrieved April 15, 2007 from http://www.eduref.org/cgi-bin/printresponses.cgi/Virtual/Qa/archives/Educational_Technology/Effects_of_Technology/negeffects.html and http://www.eduref.org/cgi-bin/printresponses.cgi/Virtual/Qa/archives/Educational_Technology/Effects_of_Technology/edtech.html
Fisher, W., & Wesolkonski, S. (1999). Tempering technostress. Technology and Society Magazine, 18(1), 28-42.
García Pascual, E. (1998a). Los siete pecados capitales de las nuevas tecnologías. Acción Educativa, 97, 5-7.
García Pascual, E. (1998b). Las siete virtudes de las nuevas tecnologías. Acción Educativa, 98, 5-8.
Hogue, M.S. (2007). Theological ethics and technological culture: A biocultural approach. Zygon, 42(1), 77-95.
Lazar, J., Jones, A., Hackley, M., & Shneiderman, B. (2006). Severity and impact of computer user frustration: A comparison of student and workplace users. Interacting with Computers, 18(2), 187-207.
Leonard, L., & Haines, R. (2007). Computer-mediated group influence on ethical behaviour. Computers in Human Behavior, 23(5), 2302-2320.
Luján, J.L., & López, J.A. (2003). The social dimension of technology and the precautionary principle. Política y Sociedad, 40, 53-60.
Luján, J.L., & Todt, O. (2007). Precaution in public: The social perception of the role of science and values in policy making. Public Understanding of Science, 16(1), 97-109.
Luppicini, R. (2005). A systems definition of educational technology in society. Educational Technology & Society, 8(3), 103-109.
Martins, H., & García, J.L. (2003). Dilemas da Civilização Tecnológica. Lisboa: Imprensa de Ciências Sociais.
Marziali, E., Serafini, J.M.D., & McCleary, L. (2005). A systematic review of practice standards and research ethics in technology-based home health care intervention programs for older adults. Journal of Aging and Health, 17(6), 679-696.
Medrano, C., & Cortés, P.A. (2007). Teaching and learning of values through television. International Review of Education, 53(1), 5-21.
OECD. (2005). Informe de la Organización para la Cooperación y el Desarrollo Económicos (OCDE). Retrieved April 15, 2007 from http://www.oecd.org/home/0,2987,en_2649_201185_1_1_1_1_1,00.html
Rumbo, B. (2006). La educación de las personas adultas: un ámbito de estudio y de investigación. Revista de Educación, 339, 625-635.
Sadler, T.D., & Zeider, D.L. (2005). Patterns of informal reasoning in the context of socioscientific decision making. Journal of Research in Science Teaching, 42(1), 112-138.
Sadler, T.D., Amirshokoohi, A., Kazempour, M., & Allspaw, K.M. (2006). Socioscience and ethics in science classrooms: Teacher perspectives and strategies. Journal of Research in Science Teaching, 43(4), 353-356.
Ten Have, H. (2006). The activities of UNESCO in the area of ethics. Kennedy Institute of Ethics Journal, 16(4), 333-351.
Key terms
Abuse of ICT: Addiction, isolation, passivity, taking time away from other activities, and diminished originality.
Alter Ego: Another personality adopted in order to play on the web.
Dilemma: A sociocognitive and socioaffective conflict forcing the subject to choose between opposing values.
Marshall Plan (Castells, 2002): A proposal for the Information Era containing various strategic ethical recommendations.
Nanotechnology: The current scientific revolution, applied at the nanoscale, to technologically manipulate molecular structures and atoms.
Underuse of ICT: Missing information and labor problems.
ENDNOTES
1. Internet use world statistics in 2007. Source: http://www.internetworldstats.com/
2. Eurostat data 2007 (European Commission). Source: http://epp.eurostat.ec.europa.eu
3. 2006 data, State Department of Communications. Source: http://www.mityc.es/Telecomunicaciones/
4. Source: http://www.secondlife.com
5. Source: http://olpc.org; www.intel.com/intel/worldahead; http://www.50x15.com; www.powerfulideassummit.com
6. Source: http://www.mercy.edu/IT/ethics/
7. A project approved by the Aragón regional government (Spain) in the 2003 and 2004 calls, directed by Carlos Sanz, director of the Concepción Arenal centre (Zaragoza), and coordinated by the present writer. Isabel Segura (Teruel) also collaborated.
Chapter XVI
The Cyborg and the Noble Savage:
Ethics in the War on Information Poverty
Martin Ryder, University of Colorado at Denver, USA
ABSTRACT
This chapter provides a brief summary of the technical and social hurdles that define the so-called 'digital divide' and considers the celebrated 'One Laptop Per Child' project as one response to the problem of digital poverty. The chapter reviews the design of the XO laptop with particular attention to the ethical territory traversed in the implementation of a low-cost computer intended for millions of children in underdeveloped nations. It reviews how XO designers negotiated between ethics and feasibility as they confronted numerous problems including infrastructure, power consumption, hazardous materials, free vs. proprietary software, security, and the cost of labor. Apart from technical considerations, this review of the XO evaluates the notion of cultural hegemony and how the imposition of such technology as an educational tool might be balanced by considerations of local control and user agency.
INTRODUCTION
The digital divide is the white man's burden of the present era. As technically advanced people become enriched by the knowledge they create, there is a consciousness that millions of disconnected people lack the 'freedoms' associated with modern civilization. In this digital age, the billions who survive without computer technology are
seen as languishing on a globe that can no longer sustain hunter-gatherers or subsistence farmers. The technical world of automation, manufacturing and mass consumption is increasingly hostile to the simple folk who live directly from the land. Modern humanity’s ability to dominate nature has imposed serious consequences on pre-modern societies that depend completely upon nature for their sustenance.
Kipling's White Man's Burden captured the prevailing ethic of a colonialist society that justified conquest of non-Western cultures in the name of 'civilization'. It was a noble enterprise to lift savage populations from their 'simplicity' and hopeless poverty. This transformation began with the skills of reading and writing: literacy came first in the form of religion, then flourished under the tutelage of commercialism. Today, the medium of literacy has migrated from parchment to silicon, and the electronic well of knowledge is deep and boundless. Those who draw from the well continue to enrich it as they are enriched by it. But most of the world's people remain disconnected from this knowledge source. They do not speak its language, they are unaware of its powers, and they are completely consumed by the more urgent necessities of daily living. The focal point of this chapter is the celebrated OLPC (One Laptop Per Child) project founded in 2005 by Nicholas Negroponte and a core team from the M.I.T. Media Lab. OLPC is an aggressive project that addresses the core issues of information poverty head on. The stated goal of OLPC is "to develop a $100 laptop – a technology that could revolutionize the way we educate the world's children."¹ In working toward this goal, the designers have grappled with problems of technical feasibility, organizational pragmatics, social and political considerations, and the overarching problem of cultural hegemony. Negroponte's non-profit team has wrestled with government ministries (as customers) and corporate interests (as suppliers) over questions of content, connectivity, power sources, the user interface, privacy, licensing, component sources, manufacturing, distribution, and scores of related issues. What has emerged is a very novel technology at a very low cost, with the potential for wide distribution in equally novel markets. The ethical issues that we confront in this chapter are as numerous, complex, and varied as the science of ethics itself. They traverse several major traditions of ethical theory including natural
law, utilitarian theory, and deontology and the applied fields of environmental ethics, engineering ethics and computer ethics. The very fact that we are addressing this issue - the digital divide - places us immediately into a state of anguish associated with Sartre’s existential ethics. While embracing the new powers that we inherit from information technology, we accept responsibility for ourselves in the use of these powers. And yet, as a free people, we also accept responsibility for the impact of our choices upon those who do not possess such power. Can a moral person ignore the growing knowledge gulf between our present-day civilizations? Who of us is justified in raising the question of digital poverty? Can the Western mind presume to understand a life of survival without technology and then dare to suggest a technical solution? In advancing our technologies to the farthest reaches of humanity, what are the unintended consequences of our actions? Do we, as Albert Borgmann (1999) suggests, risk the possibility of forever losing touch with nature? This chapter will address some of the salient ethical issues associated with the digital divide and the moral implications of one specific intervention: the OLPC project. We will briefly consider some of the engineering ethics associated with the design and world-wide distribution of a child’s laptop computer. We will also consider the issue of cultural hegemony that is unavoidably associated with this project and observe the manner in which the designers have consciously addressed this concern.
BACKGROUND
The notion of a 'digital divide' was coined during the second half of the Clinton administration. The expression is a spin-off of the 'great divide' theory of Jack Goody (Goody and Watt, 1968; Goody, 1977; Goody, 1986). The Goody thesis portrays literacy as a necessary precondition for abstract
thought and a fundamental component of modern human cognition. According to this theory, societies have a tremendous intellectual and cognitive advantage wherever they can leverage the rich interactions of literate individuals. Numerous theorists in the '60s, '70s, and '80s studied claims of how specific technologies such as writing or print could affect the nature of thinking and the formation of a modern worldview (see, for example, McLuhan, 1962; Havelock, 1963; Finnegan, 1973; Eisenstein, 1979; Scribner and Cole, 1981; Ong, 1982). A number of researchers have also focused on the close connection between literacy practices and power relationships, introducing an ethical dimension to literacy. Brian Street (1993) proposed an 'ideological model' for the study of literacy (p. 7) in which he identifies a tension between power and authority on one hand, and resistance and creativity on the other. Street suggests the conceptualization of literacy as ideological practice "opens up a potentially rich field of inquiry into the nature of culture and power, and the relationship of institutions and ideologies of communication" (p. 12). For years, literacy had been a common indicator of modern social capital (Bourdieu, 1983; Coleman, 1988). By the 1990s, the propagation of knowledge by means of information computing technologies (ICTs) had significantly raised the bar that defines what it is to be 'modern'. The cognitive divide between modern and pre-modern peoples is more pronounced than ever, and the solutions required to close this gulf are more formidable. Where Goody's cognitive divide called for interventions of literacy and basic education, the interventions facing the global digital divide are more complex and daunting. They mandate infrastructure for electricity and communications (Kling, 1998), an army of highly skilled technicians (Korman, 2003; Barton, 2007), an enormous translation effort (Auh, 2001; Martindale, 2002), the dismantling of social barriers (Warschauer, 2003) and political barriers (Norris, 2001). The chances of closing this digital gap hinge on economics and the ability to deliver ICTs to populations in the most backward and depressed areas of the world.
THE OLPC PROJECT
Scores of projects have been spawned since the late 1990s to better understand the global digital divide and to respond to the challenges it poses. Among the most aggressive was unveiled at the World Economic Forum in Davos in 2005 by Nicholas Negroponte, chairman emeritus of the MIT Media Lab. Stated simply, the goal of One Laptop Per Child is "to give all children in the developing world laptop computers of their own."² The hurdles for achieving this goal are daunting, but it is exactly the kind of challenge for which the Media Lab was created. According to Negroponte, the Media Lab's charter is "to invent and creatively exploit new media for human well-being and individual satisfaction without regard for present-day constraints."³ OLPC designers accepted the challenge to create a powerful laptop computer that can operate reliably in regions that have no electricity, that can network with peers in regions without telephone cables or cellular hubs, that can endure rough handling in dusty or humid conditions, that can display information clearly in sunshine or in shade, that can capture and record photographs or moving pictures with sound, that offers tight security adjustable to the user's needs, that is rich with multimedia resources for communication and for learning, that is immediately adaptable to eight different languages with four different alphabets, and that can do all of this for a price point of $100! The name of the children's machine is "XO", christened for the childlike icon that is presented to the user interface. The computers themselves require no external power source. They are intended to communicate with each other in a wireless mesh network⁴ within a locality that ideally
includes a central server⁵ at a local school or community house. What follows is a summary of the specifications for the XO. As you review these features, notice how each one addresses a specific technical hurdle that guards the digital frontier:
• CPU: 433MHz AMD Geode LX0700⁶. This processor consumes less than one watt, a minute percentage (2%) of the electrical energy consumed by a standard microprocessor in a typical desktop PC. A power management facility in this chip shuts down the CPU when active processing is not actually needed, effectively keeping the processor in a passive, power-off mode 98 percent of the time.
• Memory: 256 MB DRAM. Nothing unusual here; this is on par with most current laptop systems, but it suggests that the XO is no mere toy.
• Storage: There is no hard drive on the XO-1. The idea is to preserve power and keep the cost low. Essential disk functions are handled by a 1GB flash memory chip⁷ and data compression, offering the equivalent of 2GB of local storage. Unused space on peer computers can be shared across the local mesh network. The child's own work can be uploaded and stored on a central server in the vicinity, and applications can be downloaded as needed from that local server.
• Display: The graphics display poses the greatest challenge to the design of a low-cost, energy-efficient computer. A typical laptop display costs around $120 and consumes more than a hundred watts of power. In her design of this innovative display, Mary Lou Jepsen⁸ manipulated the pixel layout using high-resolution TFT LCD⁹ technology and eliminated the use of costly color filters. Jepsen introduced a bright black-and-white mode for use in sunshine and a color mode for visibility in shade¹⁰. The chip that controls the display has its own memory, which allows information to remain live on the screen even when the processor is not active. The XO display uses one-seventh the power typically consumed by a traditional screen. With these innovations, Jepsen was still able to trim the cost of the XO display to one third of that for a standard display.
• Network: The reliance on a wireless mesh network¹¹ in lieu of a commercial copper or fibre infrastructure is another innovation of the OLPC project. The XO is equipped with an embedded wireless (802.11¹²) network chip and two small antennae that enable network communication for distances beyond one kilometer. Each computer serves as a router¹³ for network traffic, passing data from one child's laptop to the next. In conventional PCs, the tcp/ip¹⁴ software that controls network communications is part of the operating system. What is unique about the XO is the use of an embedded chip that runs wireless network communications independently of the operating system: network communications is handled completely by an outboard chip¹⁵. This allows a child's computer to continually forward network traffic for neighboring computers, even at times when the computer itself is in a suspended mode! (A toy simulation of this forwarding behavior is sketched after this list.)
• School Server¹⁶: The laptops themselves are generally self-sufficient, but they lack ample storage capabilities and they have no Internet connection. These services can be provided by a more conventional computer situated in a school or community building with electricity and access to the Internet. Where electricity is lacking, the school could be equipped with a photovoltaic solar panel and storage batteries to power the server day and night. The school server is a hub for the local mesh network and serves as a network router bringing Internet services to the local network. One important basic service of the school server is an online library of learning materials written in the language of the local community. In many countries, the cost of the computers will be justified on the basis of this material, eliminating the need for textbook distribution. Instructional materials developed by local teachers and students would be stored on the school server. The local server will retain registration data for each child's laptop in the local mesh network, providing basic security services for each node.
• Security: The OLPC project presents some unusual security challenges. The XO computers are prime targets for malicious viruses and theft. The designers chose a security strategy²⁶ called bitfrost²⁷, developed by Ivan Krstic, which places the laptop owner in complete control of the system. The system 'bonds' to its owner from the moment of first boot: the initial security program asks the child for his or her name, the new laptop takes a picture of its owner, and it generates a UUID²⁸ that is unique to that child. Once ownership is established, bitfrost registers the child's picture, the UUID, and the serial number of the machine with the local school server. Thereafter, each time the XO is powered on, it first checks in with the school server for permission to run. If a machine has been reported stolen, the school server will instruct the child's unit to deactivate. Bitfrost was designed for young children: in day-to-day operation, the system will ask the user her name, but it will not require a password. Security operation is based on a system of 'rights'. Each time the user attempts an operation, whether to access the camera, microphone, internal memory, or the network, bitfrost will grant the right of access, but will deny any combination of rights that suggests malicious use. The laptop's owner, however, has authority to override certain levels of security. (A toy sketch of this rights model also follows this list.) The educational goal is to give each child the ability to manipulate his or her own computer as the child continues to discover the computer's internal workings. For those so inclined, the designers expect some children to master complex technical skills. In the process, the designers anticipate inevitable mistakes that can incapacitate the machine; full system restore²⁹ functionality is provided from the school server, allowing fail-safe protection for the young experimenter.
• Power: The XO computer runs at about 12 watts of power when fully operational, less than one fifth the power required by a typical laptop PC. The designers chose a nickel metal hydride (NiMH)¹⁷ battery in lieu of the conventional nickel cadmium (NiCd)¹⁸ for multiple reasons. NiMH can store three times the charge of an equivalent-size NiCd battery. NiMH batteries are friendlier to the environment, avoiding the toxic waste problems associated with cadmium¹⁹. And unlike NiCd, NiMH batteries do not suffer from the memory effect²⁰ that requires full discharge and recharge cycles. The battery life is twelve hours between charges when running at full use and six days between charges when in suspended mode. In regions where electricity is not available, the battery can be recharged by a hand-crank generator supplied with the machine. Human cranking time requires about six minutes for every hour of use at maximum CPU consumption.
• Operating System: The Linux²¹ 2.6 kernel²² and the GNU²³ operating system comprise the software infrastructure for the XO. The designers were careful to select public-domain software because of its zero cost and because of its freedom from license restrictions that would limit ongoing changes and deviations. The fact that the code is non-proprietary means that it can be changed by anyone, anywhere, anytime. Free and open source software is a prudent choice for long-term maintenance of the XO. A custom graphical user interface (GUI) called sugar²⁴ was developed as a deliberate variation from the standard "office/desktop" paradigm that holds little meaning for children in remote villages. In contrast to files and folders, the icons on the XO signify people and objects, and an icon of the child herself is at the center of activity. It is the first complete re-design of a user interface since Apple launched the Macintosh in 1984. Credit for the GUI design is attributed to Chris Blizzard.²⁵
• Enclosure: The XO-1 enclosure is made of ABS plastic,³⁰ chosen for its durability and shock absorbency. The design is light and colorful, intended to appeal to the aesthetic mood of a child. The case can be manipulated into the shape of a standard laptop, or it can fold into a tablet or e-book. The rounded edges of the XO are child-friendly, and the enclosure provides a rounded handle for the 1.5kg computer.
• Environmental impact: Even though this computer is targeted for regions outside the European Union, the XO designers have chosen to comply with the strict Restriction of Hazardous Substances (RoHS)³¹ directive of the EU. This directive requires circuit boards and electronic components in the unit to be free of heavy metals or other toxic materials that would otherwise pose long-term environmental threats when the units are discarded by their owners.
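Since the mesh behavior is described here only in prose, the bucket-brigade idea may be easier to see in a toy simulation. The following sketch is ours, not OLPC code: a minimal controlled flood written in Python, over an invented four-node topology (server, ana, beto, carla), in which each laptop forwards a message once and the message reaches nodes well beyond the sender's own radio range.

# Toy simulation of mesh forwarding (illustration only; the XO's real
# mesh networking runs in an embedded radio chip, below the operating
# system). Each laptop re-broadcasts a message it has not seen before,
# so data hops from neighbor to neighbor across the community.
from collections import deque

# Hypothetical topology: laptop -> laptops within radio range.
NEIGHBORS = {
    "server": ["ana"],
    "ana": ["server", "beto"],
    "beto": ["ana", "carla"],
    "carla": ["beto"],          # three hops away from the server
}

def flood(origin):
    """Return the order in which nodes receive a message from origin."""
    seen, queue, order = {origin}, deque([origin]), [origin]
    while queue:
        node = queue.popleft()
        for peer in NEIGHBORS[node]:   # each node forwards exactly once
            if peer not in seen:
                seen.add(peer)
                order.append(peer)
                queue.append(peer)
    return order

print(flood("server"))   # ['server', 'ana', 'beto', 'carla']

Even "carla", outside the server's direct radio range, receives the message because intermediate laptops relay it; this is precisely why a suspended XO keeps its outboard network chip forwarding traffic.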
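The rights-based security model can likewise be suggested in miniature. What follows is a hypothetical sketch, not Bitfrost's actual policy or API: individual rights are granted freely, an invented list of suspicious combinations (such as camera plus network, which could silently stream pictures) is refused, and the owner may override the refusal.

# Hypothetical sketch of a Bitfrost-style rights check (not OLPC's code).
# Single rights are granted; combinations suggesting malicious use are
# denied unless the laptop's owner explicitly overrides the restriction.

ALL_RIGHTS = {"camera", "microphone", "storage", "network"}

# Invented policy: pairs of rights that may not be held at the same time.
FORBIDDEN_COMBINATIONS = [
    {"camera", "network"},      # could silently stream pictures
    {"microphone", "network"},  # could silently stream audio
]

def grant_rights(requested, owner_override=False):
    """Return True if the requested set of rights may be granted."""
    if not requested <= ALL_RIGHTS:
        return False              # unknown right requested
    if owner_override:
        return True               # the child owns the machine
    return not any(bad <= requested for bad in FORBIDDEN_COMBINATIONS)

# A drawing program may use the camera together with local storage...
assert grant_rights({"camera", "storage"})
# ...but capturing and transmitting at once is refused by default,
assert not grant_rights({"camera", "network"})
# unless the owner overrides the restriction.
assert grant_rights({"camera", "network"}, owner_override=True)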
The MIT Media Lab's charter to invent the future seems well represented in the XO-1. It will be no surprise to find these innovations creeping into mainstream designs for personal computing. The XO computers and the mesh network require no external communication or power infrastructure. Picture, sound, and text messages from the central server or from any laptop can reach the entire community by bouncing from one computer to the next. The security model requires no password for the owner: the child simply enters her name and she is in complete control of her own machine, yet an alien user will have serious difficulty. As of this writing, the XO has not achieved its $100 target; when early units were shipped in the spring of 2007, the unit price was $176. But Negroponte is counting on sheer production volume to bring the price down.
THE COST OF LABOR
A dominant factor in the manufacture of any durable goods is the cost of labor. The XO developers have exploited automated manufacturing processes wherever possible, and they have designed mechanical components for quick and easy manual assembly. Such technical solutions bypass³² a multitude of labor concerns and thereby avoid the ethical implications associated with cheap labor. In selecting the company that would manufacture these machines, Negroponte and his team negotiated the fine ethical line between low-cost units for impoverished children and minimal living wages for XO assembly workers. The manufacturing contract for the XO-1 was awarded to Quanta, a Taiwan-based corporation and the largest outsource manufacturer of laptop computers in the world. Quanta set up an assembly plant for XO manufacturing in Jiangsu province on China's mainland. This move was strategic: China's urban unemployment rests at 4.6%,³³ and the pool of surplus labor³⁴ on the mainland attracted Quanta to the city of Changshu, enabling the company to lower manufacturing costs by 20%.³⁵ The minimum wage in Jiangsu ranges from $60 to $88 per month.³⁶ Recent labor abuses by other manufacturers³⁷ prompted the Chinese government to institute overtime laws requiring time-and-a-half pay for work beyond forty hours.³⁸ While the workers who assemble the XO make only pennies per unit, their employer, Quanta, seems to have no difficulty operating within the local parameters of fair labor practices. In an unlikely partnership, the Taiwanese company became China's second largest exporter of goods by value.³⁹ Quanta promoted China's status to the world's largest base for laptop production.⁴⁰ This rising star of computer production can rightly boast of its contribution to boosting employment in Changshu. While remaining insensitive to the real lives of the workers and their families, Quanta prefers to emphasize its contribution toward elevating the skills and knowledge of the province's factory workers. And the OLPC project can do little but rationalize the necessity of cheap labor in its grand philanthropic scheme.
FREE AND OPEN SOURCE SOFTWARE
The use of free and open source software in the XO was a firm decision of the OLPC development team. Free software was chosen for philosophical reasons as much as to lower costs: "Our commitment to software freedom gives children the opportunity to use their laptops on their own terms."⁴¹ While it is unlikely that the vast majority of children will find themselves changing the computer code, it is a safe bet that many children will exercise some degree of technical curiosity with their laptop, and the OLPC designers insist on the educational value of such pursuits. For this reason, the goal of the developers is to use free and open software exclusively on the XO. The free software movement began in the 1970s as an informal community of programmers who were drawn together to share ideas, strategies, and computer code in the context of solving real problems. These software enthusiasts became formalized by the mid-1980s, primarily
through the efforts of Richard Stallman (1985), who founded the Free Software Foundation.⁴² The FSF provided a formal face for a broad community of programmers who shared an ethic of collaboration and participatory development. The spontaneous collaborative organization of programmers and independent technical experts arose in response to a predictable corporate trend toward copyright protections and restricted access to computer source code. Proponents of copyright insist that protections are needed as a means to preserve profits, the driving force behind innovation. By contrast, Stallman's copyleft movement has advocated the preservation of free and open source code as the long-term guarantee of continual improvement and evolution of software. We can hear the classical voices of Rousseau and Locke in these contemporary debates. From a Lockean view, the value derived from labor is the laborer's exclusive property (Locke, 1690, book 2, ch. 5, sec. 27); "government has no other end but the preservation of property" (ch. 7). Contemporary interpretations of Locke's theory extend the definition of property to include productions of the mind: intellectual labor. In recent years, multinational arrangements⁴³ have emerged to ensure that intellectual property rights will survive beyond national boundaries (Correa, 2007). John Locke's social order is alive and well in the age of global trade and economics. By contrast, the philosophical descendants of Rousseau recognize the social nature of knowledge and society's increasing dependence on intellectual artifacts. Human knowledge, they argue, is an immense ocean of collective interaction: no single idea can be conceived in isolation from a larger scheme of social cognition. To lay exclusive claim to any portion of the commons and to expel others from that territory is tantamount to robbery (Rousseau, 1762, book 1, ch. 9). Software is very much a social artifact, and few, if any, so-called 'original' algorithms are entirely original. As technical mediations seep into every crevice of modern life, mastery of the machine becomes a principal source of power (Feenberg, 2002, p. 65). The role of software is looming toward a controlling factor behind daily human existence. Keenly aware of this trend, the 'copyleft' movement promotes a 'software commons' that will ensure the advance of technology in the interest of the public good. The quest of the Free Software Foundation is to create a vast, open library where programmers can draw from public-domain source code and add to this library with each new development effort. To lose control of software and to become dependent on the private property of others bears the risk of relationships that are antagonistic to the common good. In 1762 Rousseau warned that "nothing is more dangerous than the influence of private interests in public affairs" (book 3, ch. 4). Stallman's (2001) refrain declares that "the biggest political issue in the world today is resisting the tendency to give business the power of the public and of governments." From this perspective, the free software movement is seen as an expression of political resistance (Maxwell, p. 281). What does free and open source software mean for the OLPC project? It ensures the possibility that the computer's life span can extend far beyond the designers' vision. Maintenance of the XO will fall into the hands of the open source community, whose ranks will be augmented, no doubt, by technically motivated children around the world who discover novel uses for their own XO and who learn how to develop authentic applications adapted to their own situation. If, in this analysis, we employ Heidegger's phenomenology of action (Wakefield and Dreyfus, 1991), we can view the XO from the standpoint of its implementer, the young child who possesses few artifacts in addition to this magical machine. As long as the XO continues to function, the child will hold on to the machine and value it as one of very few possessions. The instrument in the hands of a motivated child will enjoy a life span that is much longer than that of typical PCs in the West. Even though this computer was shaped by an alien
rationality, it will not be long before the young experimenter will learn to reshape the instrument for his or her own purposes. Brent Jesiek (2003) argues that open source software extends a degree of control to the actors (in this case, the children and their local technical mentors). While it is unlikely that a typical child will become adept at changing or rewriting the software, it is very likely that many children will learn the intricacies of the machine and will develop their own rationality for the instrument. It is no stretch to assume that the laptop users will create implementations that the designers have never envisioned. Free and open source software enables this possibility.
A LEARNING MACHINE
The XO computer is intended as a learning instrument. It is designed for use by children in rural regions of developing countries where access to school is limited. But what model of education should be implemented in the XO? What philosophy of learning should be applied? Modern education, or "schooling", follows two traditions with two distinct learning paradigms, each with its own history and its own conception of knowledge. The dominant tradition has its roots in medieval Europe, when books were rare and expensive and texts were highly revered. In 14th-century university settings, holding a lecture meant reading a book, sentence by sentence, interjecting occasional commentary as pupils transcribed the information. The reading was central; the pupil's role was to capture the textual content and never interrupt the delivery. In grammar schools, students were taught by rote from Latin texts, mostly scripture. The young pupils were not allowed to interrupt, and those who did were beaten with a birch rod. Whether in grammar school or the university, knowledge was an entity that existed apart from the learner and detached from the present context. The medieval approach that places content at the center
of learning has survived in Western education to the present day. Jean-Jacques Rousseau was the first modern philosopher to criticize this model of education. In a fictional account, Emile (1762), Rousseau seized the opportunity to describe what education might be like if emphasis were to shift away from instructional content and steer instead toward the learner's own experience. By allowing young Emile to learn what he wanted to learn, Rousseau envisioned a system of education in which intrinsic motivation rather than extrinsic coercion could direct a much more rewarding learning experience. It was not until the twentieth century that student-centered alternatives to medieval education were actually introduced in public education. John Dewey formalized an educational model based on direct experience (Dewey, 1938). Experiential education places the learner in an active role: to investigate the issue under study, to draw upon available tools, to seek out relevant information and collaborative assistance, to resolve any problems that get in the way, and to reflect on the overall experience. In considering the computer as a learning technology, it is possible to implement both of these educational traditions. The XO is an e-book that can deliver scores of texts written in the local language. Where local schools insist on a prescribed curriculum and didactic content, the XO can faithfully transfer teacher-prepared instructional content from the school server to a child's laptop as the curriculum demands. Where educators are available to develop content in the local language, ample support is offered by the OLPC project, with a rich set of development tools. But the XO can also respond to child-initiated learning activities. It provides tools and an infrastructure that allow a young learner to initiate interactions directly with the machine, with peers, with a teacher, or with the Internet, as the child is motivated by genuine curiosity. Each laptop is equipped with wiki⁴⁴ software, downloadable from the school server. This tool
enables children to create their own content with ease and make it accessible to all others from a web browser. OLPC content president Walter Bender explains: "The wiki really is a way of taking the knowledge that exists in the world and putting it in a form that makes it transformed and realizable by the children in a very different way. In a wiki, every page also has a commentary, a discussion. The idea is, whatever the children are reading, they'll be able to write margin notes into that, and share those margin notes with other people; engage in discussion about the content" (Bender, 2006). The programming language logo⁴⁵ is part of the XO educational package. Logo is an easy-to-learn language intended for young children to discover the elements of computer logic and basic algebraic and geometric principles in an engaging, entertaining, and non-intimidating fashion (Papert, 1980). Logo was an outgrowth of Seymour Papert's constructionist philosophy.⁴⁶ For Papert, constructionism is the idea that children develop their own sophisticated knowledge structures as they "engage in the activity of constructing a public entity, whether it's a sand castle on the beach or a theory of the universe" (Papert and Harel, 1991). An object-oriented⁴⁷ variant of logo called etoys⁴⁸ is also available on the XO-1. This is a media-rich language that extends logo's constructionist possibilities by allowing children to manipulate complex objects, generate multi-colored patterns, or create music (Kay, 2007).
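No Logo code appears in this chapter itself, but the flavor of that turtle-graphics experience can be hinted at with Python's turtle module, which borrows Logo's central metaphor. The snippet below is our illustration, not part of the XO software: a child-style experiment in which repeating "forward, turn" yields closed shapes, and the turn angle alone decides which shape appears.

# A Logo-style turtle experiment, sketched here in Python's turtle
# module (which inherits Logo's turtle-graphics idea). Repeating
# "forward, turn" closes a shape once the turns add up to 360 degrees.
import turtle

def polygon(sides, length):
    """Draw a regular polygon; the exterior angles always sum to 360."""
    for _ in range(sides):
        turtle.forward(length)
        turtle.left(360 / sides)

polygon(3, 100)   # a triangle
polygon(4, 100)   # a square
polygon(36, 20)   # thirty-six short steps: nearly a circle
turtle.done()     # keep the drawing window open

The constructionist point is that the child discovers the 360-degree invariant by experimenting, not by being told.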
THE QUESTION OF HEGEMONY
The thought of distributing millions of Western-designed computers to children in remote villages, in barrios and ghettos across the world suggests the possibility of hegemonic concerns. Hegemony is a form of cultural domination imposed by one segment of society over another. Forms of hegemony can so permeate social life that they seem entirely natural to those who are dominated. One oft-cited example is in feudal society, where peasant revolts were conducted in the name of the king: it was well understood that the chain of power descended from God through the king, so when peasants revolted against their noble oppressors, they did so in the name of the king. Andrew Feenberg explains that today's chain of power descends through technocratic rationalization, and the key to cultural power rests in the design of technology (Feenberg, 1999, p. 86). He adds that modern forms of hegemony are based on the technical mediation of a variety of social activities (including education) and that democratization of society must include democratic control of the technical sector. Hegemony is never completely solid, but porous: within any society under cultural domination there is always room for agency and resistance (Kincheloe and McLaren, 2000). Feenberg describes how new forms of resistance inevitably emerge through new technologies in what he calls a "subversive rationalization". This concept describes the manner in which technological innovations are followed by novel implementations in the hands of agentive users, and these uses spawn new opportunities for the transformation of technologies toward democratic ends (Feenberg, 1998). The Brazilian Paulo Freire is particularly noted for his views on power, hegemony and domination in the educational space. But it is doubtful that Freire would find displeasure in the OLPC project, since the open communication and collaborative technologies on the XO appear to be aligned with Freire's own constructivist philosophy of education. According to Freire, the ethical response to hegemony in education is a learner-centered pedagogy that introduces possibilities for active and honest inquiry. Inquiry starts from the learner's own experience in confronting an authentic problem. "Problem-posing education" (Freire, 1970, p. 60) brings teacher and student into a collaborative relationship as peers. The teacher
is no longer an authoritarian oppressor, but partners with the student in genuine conversation (see Pask, 1975 and Bakhtin, 1935). M. Scott Ruse raises the issue of unavoidable dependencies on allied technologies leading to hegemonic control. A certain technology itself may not pose a threat, but it may draw upon other technologies that do. Ruse argues that technologies form a web of dependencies, bringing with them a complex set of political, social, and economic relations. For instance, "telephones depend upon parts made by both the metal industry and the plastics industry. The building of communication networks depends upon the transportation system, which itself depends upon the petrol-chemical industry, etc" (Ruse, 2005). The knowledge that the XO provides its own electrical and communications infrastructure might ease Ruse's mind. By the same token, the XO mesh network can provide a needed infrastructure upon which other applications could likely attach dependencies. Where no other form of telecommunications exists in a region, the XO mesh network will undoubtedly serve multiple uses beyond children's education. This type of dependency describes the actor-network⁴⁹ phenomenon of translation and enrollment (defined by Michel Callon, 1986), but it does not appear to be the hegemonic threat suggested by Ruse. The OLPC project is a Western solution to a problem defined from a Western perspective. Martin Heidegger, in his 1977 essay The Question Concerning Technology, points out that problems typically show up as already requiring technical solutions. To a carpenter with a hammer, everything looks like a nail; to the finest minds at MIT, there is a digital solution to the problems of illiteracy. And in the design of a particular solution, certain cultural interests are included and others are excluded. If we are to evaluate the ethics of this technological manner of being, we must look for arguments that justify a particular balance of values or rights over and against other possibilities (Introna, 2005).
FUTURE TRENDS

Whether or not OLPC succeeds on the grand scale that is planned, and whether or not it significantly narrows the digital divide, this aggressive project has set new directions that will benefit the developed world in two fundamental domains: computer architecture and educational technology. From the standpoint of computer architecture, energy conservation is the primary innovation of the XO. Typically the three greatest power consumers in a personal computer are the disk drive, the display, and the CPU. The XO computer confronted all of these hurdles head-on. It replaced the disk drive with flash memory and shifted the role of mass storage to a central server in the mesh network. This provides the XO with all of the computing resources associated with any well-equipped PC, but access to these resources comes by means of the mesh network, not from a local disk drive. In the display, the traditional cold cathode fluorescent lamp is replaced by low-power LEDs to provide efficient backlighting for indoor viewing of the color display. Traditional color filters50 that subtract from the light source are replaced by a low-cost plastic diffraction grating51 to provide ample color saturation with 85% less power. And no attempt is made to project a bright color display in full sunlight. Instead, a reflective black and white imaging strategy provides high-resolution images with virtually no energy consumption in sunlight. Finally, CPU power is the remaining target for energy savings. Realizing that computer CPUs remain idle most of the time, the XO designers and their AMD supplier explored possibilities of eliminating idle-time power consumption. This innovation reduced overall CPU power consumption by more than 90%. All of these energy-saving innovations have already taken root in next-generation designs for mainstream computer products. The current trend in business and industry is to deliver essential computing resources from a single, well-equipped central server. In contemporary office environments, we see power-hungry desktop PCs being replaced by thin client52 terminals that include a simple keyboard, monitor, and mouse remotely connected to a central server delivering all of the resources required for data storage and heavy processing activities. With thin-client technology in a high-speed network, a single fully-equipped server can extend abundant computing resources to multiple users at a fraction of the cost for equipment, software, and energy. And personal computers themselves are undergoing significant design changes to save energy. The idle-and-sleep power-saving mode of the AMD Geode and the "deep power down technology" of Intel's new generation Penryn CPU53 promise low power consumption in the next generation of computing products. In a similar vein, designers are implementing methods to adjust CPU clock speed to match an application's needs as a further effort to save energy54, as the sketch below illustrates. At a time when three million new users join the global network each week55, energy conservation becomes an ethical imperative in computer design.
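To make the idea of matching clock speed to demand concrete, the following is a minimal sketch, assuming a Linux system that exposes the kernel's standard cpufreq interface under /sys. The file paths and the availability of the "ondemand" governor vary by system; this is an illustration of the general technique, not the OLPC's implementation:

```python
# Minimal sketch of dynamic CPU frequency scaling on Linux.
# Assumes the kernel's cpufreq sysfs interface is present (paths are
# the standard ones, but availability varies by system); changing the
# governor requires root privileges.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def current_mhz() -> float:
    # scaling_cur_freq reports the current clock in kHz
    return int((CPUFREQ / "scaling_cur_freq").read_text()) / 1000

def set_governor(name: str) -> None:
    # "ondemand" raises the clock under load and lowers it when idle;
    # "powersave" pins the clock at its minimum frequency
    (CPUFREQ / "scaling_governor").write_text(name)

if __name__ == "__main__":
    print(f"CPU0 is currently running at {current_mhz():.0f} MHz")
    set_governor("ondemand")  # let the OS match clock speed to demand
```

The point of the example is the design principle: rather than running flat out, the processor's clock, and hence its power draw, is matched to the work actually demanded of it.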
In the domain of educational technology, the OLPC's orientation toward constructivist learning suggests a shift away from traditional schooling and didactic instruction toward self-directed and peer-collaborative learning. According to Walter Bender, the OLPC educational package is designed with the assumption that children are social and expressive beings who can act in roles as teachers as well as learners. "In practice this means wikis rather than just document viewers, music composition tools rather than just MP3 players" (quoted in Rowell, 2007). This 21st-century educational package seems to conform to the model that social critic Ivan Illich advocated in 1972:

A good educational system should have three purposes: it should provide all who want to learn with access to available resources at any time in their lives; empower all who want to share what
they know to find those who want to learn it from them; and, finally, furnish all who want to present an issue to the public with the opportunity to make their challenge known. (Illich, 1972, p. 75)
The basis for such a system is found today within a local network of learners armed with collaborative information technologies, enhanced by a free and open Internet. In educational theory, we see a trend away from the individual view of "learning" toward a holistic view of development and change. Science educator Jay Lemke asserts that learning is not a cognitive process, but an "ecosocial" phenomenon:

Learning is not an internal process. People participate in larger systems and those larger systems undergo developmental processes; in interaction with their own relevant environments, they create the conditions for their own further change along evolved, type-specific and individuating trajectories. Some things change inside the people as they participate in these processes, and other, internal developmental processes of the same kind are going on within us among our own subsystems, coupled to our participation in these larger processes. What fundamentally changes, what we call learning, is how people interact with and participate in the larger ecosocial systems that sustain them. (Lemke, 1993)

For Lemke, the learner, the learning community, the artifacts that interconnect the community, and the environment are all interdependent. Lemke sees a network of human-machine organisms (cyborgs) in which humans are shaped by their interaction with machines and machines are shaped by the manner in which they are adopted into this sociotechnical network. A change in any human or machine component will impact all other components. Just as the free software movement encourages each programmer to continually adapt and improve the software that we use in the public domain, the constructivist approach to education encourages each learner to actively shape and refine our common base of knowledge.

SUMMARY AND CONCLUSION
In the opening lines of his Nicomachean Ethics, Aristotle observed that "Every techne and every inquiry, and similarly every praxis and pursuit, is believed to aim at some good." Ethical considerations of modern technology look at physical objects in contexts of health, safety, and ecological concerns. They evaluate technology's ability to free rather than to constrain human creativity. And they raise questions about the effects of technology on human identity and on the environment. Ethical inquiry is complicated by the diversity of ways technology can be applied and understood (Mitcham, Briggle, and Ryder, 2005). The success or failure of Negroponte's MIT project has yet to be determined. As of this writing, the XO is not yet for sale in developed countries and is available only at cost to requesting governments for educational purposes. Government ministries currently hold the decision-making power over the future of this project. Some of these decision makers are elected, others appointed, and some are self-appointed. They all understand that new technologies bring with them unpredictable effects. They are keenly aware of the power of information technology to effect social change. Some will consider their decision purely on the basis of partisan bias and whose interests will be served. Others must weigh the ethical choice of purchasing laptop computers in areas where children lack basic food, clean water, and health services. And some will view this opportunity as a means to advance the youngest among their populations on a trajectory of knowledge and learning to the benefit of the larger society. What good or ill might emerge out of this effort cannot be framed in reference to a child, or a school, or a
nation, or the developing world, or the developed world, or even humanity, but only in reference to the good of the whole (Lemke, 1993). Humanity must evolve toward a general consciousness that it lives and dies along with other species in a fragile ecosystem. The key question is whether technology is an indispensable component in the establishment of this collective consciousness.
REFERENCES

Auh, T. S. (2001). Language divide and knowledge gap in cyberspace: Beyond digital divide. Accessed online May 27, 2007 from http://www.unesco.or.kr/cyberlang/auhtaeksup.htm

Bakhtin, M. (1935/1981). The dialogic imagination (K. Brostrom, Trans.). Austin, TX: University of Texas Press.

Barton, J. (2007). New trends in technology transfer: Implications for national and international policy. Issue paper No. 18, International Center for Trade and Sustainable Development. Accessed online May 27, 2007 from http://www.iprsonline.org/resources/docs/Barton%20%20New%20Trends%20Technology%20Transfer%200207.pdf

Bender, W. (2006). OLPC talks. From a seminar entitled Ars Electronica, September 2006. Posted online and accessed May 27, 2007 at http://www.olpctalks.com/walter_bender/walter_bender_ars_electronica.html

Bijker, W. (1995). Of bicycles, bakelites, and bulbs: Toward a theory of sociotechnical change. Cambridge, MA: MIT Press.

Bloor, D. (1976). Knowledge and social imagery. London: Routledge.

Borgmann, A. (1984). Technology and the character of contemporary life: A philosophical inquiry. Chicago: University of Chicago Press.

Borgmann, A. (1999). Holding on to reality: The nature of information at the turn of the millennium. Chicago: University of Chicago Press.

Bourdieu, P. (1983). Economic capital, cultural capital, social capital. Soziale Welt, Supplement 2, pp. 183-198.

Callon, M. (1986). Some elements of a sociology of translation: Domestication of the scallops and the fishermen of St. Brieuc Bay. In J. Law (Ed.), Power, action, and belief: A new sociology of knowledge? Sociological Review Monograph, 32, 196-233. London, UK: Routledge & Kegan Paul.

Clynes, M., & Kline, N. (1960). Cyborgs and space. Astronautics, September, 26-27, 74-75.

Coleman, J. S. (1988). Social capital in the creation of human capital. American Journal of Sociology, 94, supplement, S95-S120.

Correa, C. (2007). Trade related aspects of intellectual property rights: A commentary on the TRIPS agreement. Oxford: Oxford University Press.

Dewey, J. (1938/1997). Experience and education. New York: Macmillan.

Eisenstein, E. L. (1979). The printing press as an agent of change. Cambridge: Cambridge University Press.

Feenberg, A. (1992). Subversive rationalization: Technology, power, and democracy. Inquiry, 35(3/4).

Feenberg, A. (1998). Escaping the iron cage, or subversive rationalization and democratic theory. In R. Schomberg (Ed.), Democratising technology: Ethics, risk, and public debate. Tilburg: International Centre for Human and Public Affairs.

Feenberg, A. (1999). Questioning technology. London: Routledge.

Feenberg, A. (2002a). Transforming technology: A critical theory revisited. New York: Oxford University Press.

Feenberg, A. (2002b). Democratic rationalization: Technology, power and freedom. Published in the online journal Dogma. Accessed online May 27, 2007 from http://dogma.free.fr/txt/AF_democratic-rationalization.htm

Finnegan, R. (1973). Literacy versus non-literacy: The great divide. In R. Horton & R. Finnegan (Eds.), Modes of thought. London: Faber and Faber.

Freire, P. (1970/1993). Pedagogy of the oppressed. New York: Continuum.

Freire, P. (1985). The politics of education: Culture, power and liberation. Hadley, MA: Bergin & Garvey.

Goody, J., & Watt, I. (1968). The consequences of literacy. In J. Goody (Ed.), Literacy in traditional societies (pp. 27-68). New York: Cambridge University Press.

Goody, J. (1977). The domestication of the savage mind. Cambridge, UK: Cambridge University Press.

Goody, J. (1986). The logic of writing and the organization of society. New York: Cambridge University Press.

Haraway, D. (1991). A cyborg manifesto. In Simians, cyborgs, and women: The reinvention of nature. New York: Routledge.

Havelock, E. A. (1963). Preface to Plato. Cambridge, UK: Cambridge University Press.

Heidegger, M. (1977). The question concerning technology (W. Lovitt, Trans.). In M. Heidegger, The question concerning technology and other essays (pp. 1-49). New York: Harper & Row.

Illich, I. (1971). Deschooling society. New York: Harper & Row.

Introna, L. (2005). Phenomenological approaches to ethics and information technology. In The Stanford encyclopedia of philosophy. Accessed online at http://plato.stanford.edu/entries/ethics-it-phenomenology/

Jesiek, B. (2003). Democratizing software: Open source, the hacker ethic, and beyond. First Monday, 8(10).

Kant, I. (1781/1787). Critique of pure reason (trans. N. Kemp Smith in 1927 as Immanuel Kant's Critique of Pure Reason). London: Macmillan.

Kay, A. (2007). Children learning by doing: Etoys on the XO. Draft document published on the OLPC web and accessed May 27, 2007 at http://www.laptop.org/OLPCEtoys.pdf

Kincheloe, J. L., & McLaren, P. (2000). Rethinking critical theory and qualitative research. In N. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.

Kling, R. (1998). Technological and social access to computing, information, and communication technologies. White paper for the Presidential Advisory Committee on High-Performance Computing and Communications, Information Technology, and the Next Generation Internet. Accessed online May 27, 2007 from http://rkcsi.indiana.edu/archive/kling/pubs/NGI.htm

Korman, R. (2003, May 12). Geeking in the third world. O'Reilly Media. Accessed online May 27, 2007 from http://www.oreilly.com/pub/a/oreilly/news/ethan_0503.html

Lemke, J. L. (1993). Education, cyberspace, and change. Originally published in the Electronic Journal on Virtual Culture, 1(1). Archived as ERIC document #ED356767. Accessed online Sep. 23, 2007 from ERIC: http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/13/b5/9d.pdf

Locke, J. (1690). Of civil government: Second treatise. Accessed online May 27, 2007 from http://www.constitution.org/jl/2ndtreat.txt

Martindale, L. (2002, November 1). Bridging the digital divide in South Africa. Linux Journal. Accessed online May 27, 2007 from http://www.linuxjournal.com/article/5966

Maxwell, J. W. (2006). Tracing the dynabook: A study of technocultural transformations. Accessed online May 27, 2007 from http://thinkubator.ccsp.sfu.ca/Dynabook/Maxwell-DynabookFinal.pdf

Mitcham, C., Briggle, A., & Ryder, M. (2005). Technology overview. In The encyclopedia of science, technology, and ethics. Stamford, CT: Thomson Gale.

Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the Internet worldwide (Communication, society and politics). Cambridge: Cambridge University Press.

Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books.

Papert, S., & Harel, I. (1991). Constructionism. Norwood, NJ: Ablex.

Pask, G. (1975). Conversation, cognition and learning. New York: Elsevier.

Piaget, J. (1968). Genetic epistemology. New York: Columbia University Press.

Rand, A. (1957). Atlas shrugged. New York: Random House.

Rousseau, J. J. (1762). The social contract or principles of political right (G. D. H. Cole, Trans.), public domain. Accessed online May 27, 2007 from http://www.constitution.org/jjr/socon.htm

Rousseau, J. J. (1762/1979). Emile, or on education (A. Bloom, Trans.). New York: Basic Books. Accessed online May 27, 2007 from Columbia's ILT Web: http://projects.ilt.columbia.edu/pedagogies/rousseau/

Rowell, L. (2007). Reinventing the PC. ACM netWorker, 11(2).

Ruse, M. S. (2005). Technology and the evolution of the human: From Bergson to the philosophy of technology. Essays in Philosophy, 6(1).

Scribner, S., & Cole, M. (1981). The psychology of literacy. Cambridge, MA: Harvard University Press.

Stallman, R. (2001). Copyright and globalization in the age of computer networks. MIT Communications Forum. Accessed online May 27, 2007 from http://web.mit.edu/m-i-t/forums/copyright/transcript.html

Street, B. (1993). Cross-cultural approaches to literacy (Cambridge studies in oral and literate culture). New York: Cambridge University Press.

Wakefield, J., & Dreyfus, H. (1990). Intentionality and the phenomenology of action. In E. Lepore & R. van Gulick (Eds.), John Searle and his critics. Oxford: Blackwell.

Warschauer, M. (2003). Technology and social inclusion: Rethinking the digital divide. Cambridge, MA: MIT Press.
KEY TERMS

Constructivism: Constructivism is a philosophical position that views knowledge as the outcome of experience mediated by one's own prior knowledge and the experience of others. In contrast to objectivism (e.g., Ayn Rand, 1957), which embraces a static reality that is independent of human cognition, constructivism (e.g., Immanuel Kant, 1781/1787) holds that the only reality we can know is that which is represented by human thought. Each new conception of the world is mediated by prior-constructed realities that we take for granted. Human cognitive development is a continually adaptive process of assimilation, accommodation, and correction (Piaget, 1968). Social constructivists (e.g., Berger and Luckmann,
1966) suggest that it is through the social process that reality takes on meaning and that our lives are formed and reformed through the dialectical process of socialization. A similar dialectical relationship informs our understanding of science (e.g., Bloor, 1976), and it shapes the technical artifacts that we invent and continually adapt to our changing realities (e.g., Bijker, 1995). Humans are shaped by their interactions with machines just as machines evolve and change in response to their use by humans (Lemke, 1993).

Cyborg: A compound word formed from 'cybernetic organism'. The term was coined by two medical researchers (Clynes and Kline, 1960) to describe the cybernetic augmentation of the human body with machines toward the goal of achieving super-human capabilities of survival. The term has been adopted in popular literature to describe a synthesis of organic and synthetic parts, and is widely used to convey the melding of the human mind with computer technology to achieve super-human cognitive powers. Donna Haraway frames the expression in the context of techno-political supremacy as "the awful apocalyptic telos of the West's dominations" (1991, p. 150).

Digital Divide: This expression arose in the digital age to describe the information gulf that exists between peoples and societies. The perceived gulf is the result of the dramatic rise of information technologies that evolved exponentially in the developed countries during the latter half of the twentieth century. The expression connotes the idea that information is a potent source of power, and those who enjoy access to information technologies have the potential to wield significant power over those who have no such access.

Free and Open Source Software (FOSS): This is software that is available to the general public not only to be used, but to be changed and adapted as local usage patterns may dictate. Sometimes referred to simply as 'free software', its design documentation and human-readable source code are openly published and not constrained by in-
tellectual property restrictions that would limit how and where the software may be used or how it might be improved or adapted to a particular need. Recognizing the social nature of knowledge and the constructivist nature of technology, participants in the free and open source software movement routinely collaborate and share information with peers, and they assert no exclusive claims to the software designs and code implementations that result from this wide collaborative praxis.

Hegemony: Hegemony describes the political, economic, and cultural domination of one class of people over other classes. Hegemony comes about, not by means of forceful repression of those who might resist domination, but through the passive consent of subordinate classes who eventually accept the social order as a natural state of affairs as it is manifested in virtually every social institution. Hegemony is most pronounced in societies where the dominant class controls the information sector, including mass media, education, and the market supply chain.

Subversive Rationalization: Coined by Andrew Feenberg (1992), subversive rationalization describes the constructivist nature of technology. In particular, it denotes the manner in which technologies undergo a metamorphosis through the process of adoption and use over time. While such changes may undermine a designer's intentions, the transformations result in a democratizing trend that may convert a given technology from an instrument of social control to one that is guided by democratic social forces and human values. The final shape of an instrument is determined, not by the designer, but by the cultural logic of the human actors who adopt and use the technology.
ENDNOTES

1. MIT Media Lab: OLPC Web. http://laptop.media.mit.edu/
2. MIT Technology Review, Nov. 13, 2006. http://www.technologyreview.com/printer_friendly_article.aspx?id=17722
3. MIT The Tech, 110:40, October 5, 1990. http://www-tech.mit.edu/V110/N40/media.40n.html
4. A mesh network is a self-sufficient network that requires no interconnecting infrastructure. http://en.wikipedia.org/wiki/Mesh_network
5. The school server is a conventional computer with ample disk storage and Internet access. http://wiki.laptop.org/go/School_server
6. The Geode series from AMD is a very low power, high performance microprocessor. http://www.amd.com/us-en/assets/content_type/DownloadableAssets/33358e_lx_900_productb.pdf
7. Flash memory is read/write memory that retains its data even while powered off. http://en.wikipedia.org/wiki/Flash_memory
8. Mary Lou Jepsen is Chief Technology Officer of OLPC. http://www.spectrum.ieee.org/feb07/4900
9. Thin Film Transistor-Liquid Crystal Display. http://en.wikipedia.org/wiki/TFT_LCD
10. See http://www.olpcnews.com/hardware/screen/dual-mode_display_details.html
11. See http://wiki.laptop.org/go/Mesh_Network_Details
12. IEEE standard for wireless network communications. http://en.wikipedia.org/wiki/802.11
13. A network switching device that determines the proper path for data packets. http://en.wikipedia.org/wiki/Router
14. The standard protocol for network communications. http://en.wikipedia.org/wiki/Internet_protocol_suite
15. See http://www.physorg.com/news2627.html
16. See http://wiki.laptop.org/go/XS_Server_Services
17. A non-toxic rechargeable battery using a hydrogen metal alloy anode. http://en.wikipedia.org/wiki/Nickel_metal-hydride
18. A traditional rechargeable battery whose anode is made of cadmium. http://en.wikipedia.org/wiki/Nickel-cadmium_battery
19. See http://www.osha.gov/SLTC/cadmium/
20. This type of battery must be fully discharged before recharging. http://www.batterybank.com/page18.html
21. An open source operating system kernel developed by Linus Torvalds. http://en.wikipedia.org/wiki/Linux
22. The kernel is that part of an operating system that manages memory, I/O, and other system hardware. http://en.wikipedia.org/wiki/Kernel_%28computer_science%29
23. A broad set of open-source operating system utilities. http://en.wikipedia.org/wiki/GNU
24. See http://www.olpcnews.com/software/operating_system/olpc_sugar_ui_linux.html
25. See http://en.wikipedia.org/wiki/Christopher_Blizzard
26. See OLPC Development Site: Bitfrost. http://dev.laptop.org/git.do?p=security;a=blob;hb=HEAD;f=bitfrost.txt
27. See http://wiki.laptop.org/go/Bitfrost
28. Universally unique identifier. http://en.wikipedia.org/wiki/UUID
29. See http://wiki.laptop.org/go/OLPC_Human_Interface_Guidelines#Restore
30. Acrylonitrile butadiene styrene plastic. http://en.wikipedia.org/wiki/Acrylonitrile_butadiene_styrene
31. See http://en.wikipedia.org/wiki/RoHS
32. The problem of labor is not really solved by this approach, merely bypassed. See Norbert Wiener's (1950) The Human Use of Human Beings: Cybernetics and Society.
33. People's Daily, Oct 25, 2005. http://english.people.com.cn/200510/25/eng20051025_216705.html
34. See http://en.wikipedia.org/wiki/Surplus_labour
35. DigiTimes, Aug 23, 2006. http://www.digitimes.com/systems/a20060823PR205.html
36. China Labor Watch, Jul 24, 2006. http://www.chinalaborwatch.org/2006%20Editorials/07-24-2006%20Minimum%20Wage%20Chart.htm
37. The Inquirer, Jun 26, 2006. http://www.theinquirer.net/default.aspx?article=32644
38. China CSR, Nov 13, 2006. http://www.chinacsr.com/2006/11/13/846-jiangsu-limits-overtime-work/
39. Wall Street Journal, June 9, 2005. http://online.wsj.com/public/article/SB111825761813954442-d4x_lQnm5A2GOO1NR6Wi_DBAyys_20050709.html?mod=blogs
40. People's Daily, Jan 8, 2004. http://english.people.com.cn/200401/08/eng20040108_132140.shtml
41. OLPC Web. http://www.laptop.org/laptop/software/
42. See http://www.fsf.org/
43. See http://en.wikipedia.org/wiki/Agreement_on_Trade-Related_Aspects_of_Intellectual_Property_Rights
44. See http://wiki.org/wiki.cgi?WhatIsWiki
45. See http://en.wikipedia.org/wiki/Logo_(programming_language)
46. See http://www.papert.org/articles/SituatingConstructionism.html
47. See http://en.wikipedia.org/wiki/Object-oriented_programming
48. Laptop Wiki: Etoys. http://wiki.laptop.org/go/Etoys
49. An approach to social research that considers both human and non-human agency in shaping human activity. http://en.wikipedia.org/wiki/Actor-network_theory
50. Filters that are used to block certain colors while passing other colors. http://en.wikipedia.org/wiki/Bayer_filter
51. See http://en.wikipedia.org/wiki/Diffraction_grating
52. See http://en.wikipedia.org/wiki/Thin_client
53. Technology@Intel Magazine, May 2007. http://www.intel.com/technology/magazine/45nm/coremicroarchitecture-0507.htm
54. http://www.cs.pitt.edu/PARTS/papers/koolchips00.pdf
55. http://www.sustainableindustries.com/commentary/3469281.html
Chapter XVII
Becoming a Digital Citizen in a Technological World

Mike Ribble
Kansas State University, USA
ABSTRACT

In today's changing global society, digital technology users need to be prepared to interact and work with users from around the world. Digital technology is helping to define this new global society. Being part of a society provides opportunities to its citizens but also asks that its members behave in certain ways. This new technological society is drawing users together to learn, share, and interact with one another in the virtual world. But for all users to be productive there needs to be a defined level of acceptable activity by everyone; in other words, a digital citizenship. The concept of digital citizenship provides a structure for this digital society by conceptualizing and organizing appropriate technology use into a new digital culture. Anyone using these digital technologies needs to understand the parameters of appropriate use so that they can become more constructive digital citizens.
INTRODUCTION

In the last five years, there has been evidence of an increasing pattern of misuse and abuse with respect to technology. This pattern of technology misuse and abuse has been documented in hundreds of articles and texts and in countless news broadcasts. Some examples include websites created to intimidate or threaten users, music downloaded illegally from the Internet, information plagiarized from the web, and cellular phones used at inappropriate times
(e.g., during movies, at church, or in a meeting). This situation has created users "who want to enjoy the benefits of digital technology without making the effort to use it responsibly" (Harmon, 2004). Organizations have created standards or Acceptable Use Policies (AUPs) concerning how people are to use technology appropriately, often without providing knowledge of what all the issues may be. In the article "Online Ethics Should Begin in Classroom, Educators Say," in the February 16, 2000 issue of the New York Times, the author
states that "although most schools have 'acceptable use policies' outlining correct behavior online, educators said documents are often either flawed or insufficient to teach young people responsible use of computers" (Mendels, 2000). How individuals behave as members of a digital society has become a critical issue for technology users, and it is the focus of digital citizenship. What is digital citizenship? Digital citizenship has been defined as the norms of behavior with regard to technology use. Ribble and Bailey (2004) defined digital citizenship to address the complex issues of technology use, abuse, and misuse. In this context, norms are those technology uses that are considered acceptable through the consent of the users. The focal point is more on the acknowledged responsibility of technology users than on setting standards. The International Society for Technology in Education (ISTE) developed technology standards dealing with students', teachers', and administrators' knowledge of using technology (http://cnets.iste.org). Through a process of gathering information from various interest groups and then building consensus, ISTE created the National Educational Technology Standards (NETS) for Students, Teachers, and Administrators (ISTE, 2003). In each of these NETS, ISTE has a section related to Social, Ethical and Human Issues (Teachers and Administrators: Standard VI; Students: Standard 2). With these standards, ISTE provides structure for how students, teachers, and administrators are to use technology in a responsible way. By implementing these standards ISTE has shown the importance of appropriate use of technology in education. By 2004 the NETS had been used or aligned to by 49 states, showing the importance of technology standards as a part of the educational curricula. In 2007 ISTE began a process of updating and evaluating the NETS for students to stay current with the changes in technology. At the organization's annual meeting in the summer of 2007, this updated draft of the
NETS for students was accepted. Replacing the social, ethical, and human issues section was a new standard of digital citizenship encompassing these ideas of ethics in the educational setting. As new digital technologies emerge and evolve, it becomes more difficult to create a framework of codified principles for acting responsibly in using these technologies. Some laws have been enacted, and some groups and organizations have created rules or policies. Unfortunately, there is no universal agreement on how users should act when using digital technologies. The purpose of focusing on digital citizenship is to create a dialogue among technology users on the issues of misuse and abuse of technology. Digital technology provides great improvements in society and continues to change how users work, learn, and play. However, users should ensure that digital technology continues to enhance our society. The benefits it provides should outweigh the problems it creates.
THE GROWTH OF DIGITAL ISSUES AND THE NEED FOR DIGITAL CITIZENSHIP

Too often, new digital technologies have been made available without providing support for the users. As with all fields of study, education has been affected by the expansion of technology. Bork (1993) noted that schools bought the hardware and told the teachers to teach the programs, but did not provide the proper support to make their teaching effective. This same cycle has been seen in many other disciplines. Cuban (1986) suggested a parallel between unprepared schools and the use of untested drugs to release mentally ill patients out into the public. Just like the untested drugs, technology was sold as a miracle drug [italics added] for education. Although the consequences were not the same, the parallel of how technology was oversold in education was similar.
With the increase in the number of computers, there was also growth in technical issues that users were unprepared to handle. Along with additional computers, software packages were purchased with the intent of helping users utilize the technology more efficiently. Bork (1993) argued that the technology tools were seen only as a means to an end. Tools such as the Internet have provided new avenues of communication and interaction between users. But these new communication methods have also provided opportunities for misuse and abuse. The Internet, having no governing body, provides no established rules for communication. As problems arise (e.g., harassing other users online, defacing others' information), users or organizations have created their own rules. These rules, often referred to as "netiquette," became the code of conduct for online technology users. Yevics (1999) explained how netiquette provided boundaries for those using the Internet. Netiquette helped some users realize how they should act while on the Internet, but not all users knew or understood netiquette well enough to make a systemic change. As technology began to grow in the 1990s, users recognized that there needed to be a structure for what was considered appropriate and inappropriate use of technology. Interest in computer ethics grew rapidly during the 1990s. Moor (1995) defined computer ethics as "the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology" (p. 7). Users who supported the need for computer ethics believed there was a continuous cycle of inappropriate behavior happening with technology. Something new was needed to help break this cycle of misuse and abuse. One way to help provide structure to this question is digital citizenship. The idea of digital citizenship is not a new concept in the field of digital technologies. The concept has been used to define different pro-
grams. Drake University created a program on digital citizenship dealing with service learning and the digital divide (Shulman, Beisser, Larson & Shelley, 2002). The Motion Picture Association of America (MPAA), with Junior Achievement (JA), created a program of digital citizenship that focused on copyright and digital piracy (Motion Picture Association of America & Junior Achievement, 2003). However, these and other programs failed to provide users with a comprehensive definition of how to act appropriately in a digital society. Researchers both in education and in other fields have evaluated the ethical problems associated with the use of technology. Ellis and Griffith (2001) found implications for both education and business. Digital citizenship has been introduced to help users better understand the topics of misuse and abuse of technologies by dividing the concepts into nine elements. Mason (1986) had identified four main themes in his work: privacy, accuracy, property, and accessibility. But after reviewing the literature of misuse and abuse over the last four decades, other theme areas became evident. The nine key theme areas for digital citizenship that emerged from the current research were: (a) Digital Access, (b) Digital Commerce, (c) Digital Communication, (d) Digital Literacy, (e) Digital Etiquette, (f) Digital Rights & Responsibility, (g) Digital Law, (h) Digital Health and Welfare, and (i) Digital Security. These elements provide a way for users to better understand where issues may arise. They also provide a way not only to recognize these issues but also to offer resources to modify the attitude of the user.
DIGITAL OPPORTUNITIES WITH DIGITAL CITIZENSHIP

When talking to people who use digital technologies, it often seems as if they have a language all their own. They speak about podcasting, blogs,
and wikis as well as other topics loosely defined as "Web 2.0." In 2001, an influential article by Marc Prensky (2001) identified two distinctive groups of technology users: "digital natives" and "digital immigrants." Digital natives are defined as those young people who have grown up around digital technologies and seem to understand them instinctively (p. 10). Digital immigrants (the majority of users) may be fascinated by and may have adopted many aspects of the new technologies, but, not having grown up with these digital tools, they do not use them as instinctively as the natives. While some view this concept as controversial because there are adults who are more adept at technology than most children, the focus of the issue is availability of the technology. Students today have grown up in a society surrounded by technology; because of this, adults assume that they already know everything there is to know about technology. Most adults perceive young users as digital natives, and many do not even feel as competent as an immigrant, often considering themselves digital tourists (happy with the technology only when everything works like a well-planned vacation). But the truth is that not all children are as technologically savvy as adults might assume. And even when these young adults are comfortable using technology, they may not be using it appropriately. For everyone to use technology effectively, all users need to become "naturalized" members of a digital citizenry. How do users become digital citizens? First, there is the need to explore and come to understand the technology currently being used, and how it could be used in the future. Next, users should realize what the technology can do, and its possible effects on them as well as on others. Finally, after using the technology, there is a need to evaluate how it was used. Clearly children can be expected to make mistakes when using technology, but through modeling and direction these young adults should not make the same errors as often. The focus of digital citizenship should not just be on the technology itself, but on how to use the
technology appropriately. Digital citizenship is a process through which all users of technology can learn what is considered acceptable use and in what situations. It provides a framework in which all users can begin to ask what they should be doing with respect to technology. Digital citizenship is an attempt to provide a consistent message to everyone about what he or she needs to do to be a productive user of digital technologies. So, how can digital citizenship be explained? Digital citizenship is the norms of appropriate, responsible behavior with regard to technology use. As a way of understanding the complexity of digital citizenship and the issues of technology use, abuse, and misuse, nine elements have been identified that together make up the foundation of digital citizenship. These elements emerged after five years of searching for the issues pertaining to appropriate technology use:

1. Digital access: Full electronic participation in society.
2. Digital commerce: Electronic buying and selling of goods.
3. Digital communication: Electronic exchange of information.
4. Digital literacy: The process of teaching and learning about technology and the use of technology.
5. Digital etiquette: Electronic standards of conduct or procedure.
6. Digital law: Electronic responsibility for actions and deeds.
7. Digital rights and responsibilities: Those freedoms extended to everyone in a digital world.
8. Digital health and wellness: Physical and psychological well-being in a digital technology world.
9. Digital security (self-protection): Electronic precautions to guarantee safety.
The nine elements provide a lens through which technology leaders can help users understand the issues related to becoming a digital citizen. Many people are already using the technology, but users now need a structure to help them understand how to use it appropriately.
THE NINE ELEMENTS OF DIGITAL CITIZENSHIP

To help users, the nine elements of digital citizenship need to be further defined and their meanings explained. The nine elements were identified after evaluating varied media to determine general concepts related to technology misuse and abuse. After looking at articles, books, and other research, the nine elements began to emerge as important issues. These elements focus on issues relevant today but provide the flexibility to accommodate changes in future technologies. The nine elements provide a framework for understanding the issues related to technologies that are important to everyone. By providing a structure for technology use within these elements, a foundation is laid for users to focus on those areas that are important to them.
Digital Access

Full electronic participation in society.

Technology provides opportunities for users to communicate and interact rapidly and over great distances. However, not everyone has access to all of the tools of this new digital society. Because of socioeconomic status, disabilities, location, and other factors, these opportunities are not equally available to everyone. Groups that are disenfranchised by lack of technology access include families who do not have the financial ability to have technology in the home, communities that have too few public computers, and rural areas that lack access to high-speed Internet connections. Communities
need to evaluate whether access to technology is available to everyone. Do all members of the community have adequate access to technology throughout the day? In communities where users do not have access to technology in the home, opportunities such as open computer labs in the evenings and access in libraries and community organizations need to be offered and publicized so that users are aware of them.
Digital Commerce

Electronic buying and selling of goods.

Online purchasing has increased exponentially over the last five years. Often users make purchases on the Internet without understanding the potential problems that can occur with online transactions. Teaching users to become intelligent consumers is important for everyone in a society. As more people accumulate debt, the entire economy is affected. As more users buy, sell, bank, and pay bills online, there needs to be a familiarity with issues that could affect them, such as identity theft, phishing scams, and viruses. Technology users need to be provided with additional information to better inform them of the possible hazards of online commerce.
Digital Communication

Electronic exchange of information.

Cell phones, Instant Messaging (IM), videoconferencing, and e-mail have changed the way technology users communicate. These forms of communication have created a new social structure governing whom, how, and when people interact. Digital communication provides users with instant access to others on an unprecedented level. Many users prefer e-mail to phone calls because of its ease of use and because it provides a record of the conversation. Users forget, however, that even though they may delete a message, it is usually
stored on a server or backup for future review. Because of this, users need to think about what they say and how they say it. As with any technology, there are times when these technologies can be used inappropriately. Too often e-mails are sent without considering who might see them or how they might be interpreted. It is too easy to write the first thing that comes to mind and send it before thinking of the long-term consequences. There are also times when speaking to someone face-to-face can resolve a situation faster than multiple e-mails. Users need a way to decide when using a given technology is appropriate to the situation. Many users are provided the technology without being informed about how and when it should be used.
Digital Literacy

The process of teaching and learning about technology and the use of technology.

Technology-infused learning is becoming more commonplace every year, becoming as transparent as the chalkboard and pencil. However, the training of instructors in how to use this technology appropriately has not grown accordingly. Learning with technology does not always include knowledge about the potential issues related to the technology. Too often, the focus is on learning the technology itself, and very little is said about how to integrate the technology into the learning. Schools have more technology than ever before. According to the National Educational Technology Plan (Paige, 2004), there is one computer in schools for every five students in the United States. But where are these computers; are they in students' hands? Too often, they are placed in a "computer lab" that is understaffed and underfunded. Technology is often seen as another class that students go to, as opposed to being an integral part of the larger curriculum. As people transition more and more into new careers, there needs to be a vehicle for
providing knowledge "just in time." Digital literacy can provide that conduit if there is an understanding of how to use it appropriately.
Digital Etiquette

Electronic standards of conduct or procedure.

Responsible digital behavior makes every user a role model for others. New technology users observe how others are using the technology and assume that if others can use it in that manner, so can they. The problem with instruction in digital technology is that few rules have been established for the proper use of these devices. The proliferation of new technologies has created a steep learning curve for all users. Some are more proficient than others, and those who lag behind often do not understand the subtle rules that have emerged among early adopters. In the past, it was up to parents and families to teach basic etiquette to their children before they reached school age. With the new technologies, parents often do not know what is appropriate and what is not. Adults will often look to others for cues on how to use the technology. As a digital society, it is time that users step forward to provide direction on what is considered appropriate and what is not.
Digital Law

Electronic responsibility for actions and deeds.

The Internet has made it easy to locate and access information. Often users do not consider what is appropriate, inappropriate, or even illegal when downloading information from the Internet. Users often remark, "We did not think it was wrong; all we were doing was using the technology." The ability to easily find material is one of the strengths of the Internet. However, it also raises issues of intellectual property rights and copyright protection. The issue was exposed when the Recording Industry Association of America (RIAA) sued users for downloading
music illegally (Wired News, 2003). This action caused technology users to begin thinking about what is appropriate and what is illegal with respect to online file sharing. Ironically, a 2003 survey done by Ipsos (a market research company) for the Business Software Alliance indicated that two-thirds of college faculty and administrators said it is wrong to download or swap files, while less than one-quarter of their students felt the same way (CyberAtlas, 2003). There will always be people who do not follow the rules of society and who engage in activities that are counter to the ideals of society as a whole. Consequences must be established for those who do not act as true digital citizens, and instead steal others' information, hack into servers, and create and release viruses. If members of the digital society do not determine policies, those with little understanding of that culture will impose laws without the users' contribution.
Digital Rights and Responsibilities

Those freedoms extended to everyone in a digital world.

When discussing the rights of individual users, people often identify the rights or privileges that come with membership in a group. When someone is given rights, there is an assumption that they will act in accordance with the rules that govern membership in the group. This is true for digital rights and responsibilities, whose purpose is to provide access to content while protecting the group's members. In the digital world, users expect that if they post information to a site (whether a poem, a picture, or a song), others can enjoy it without appropriating it for their own use (unless so designated). Members of a digital society should be afforded certain rights. Citizens also have certain responsibilities to others as well. When becoming members of a digital society, people must agree to live according to the parameters set down by the others in that society. These boundaries
may come in the form of rules or policies, and they often constrain the freedoms of individual members. There need to be some guiding principles on appropriate ways to act in a digital society. If following these rules provides an advantage for those living in that society, then most will follow those ideals.
Digital Health and Wellness

Physical and psychological well-being in a digital technology world.

Everyone needs to be aware of the physical and psychological dangers that are possible when using technology. According to Alan Hedge, the director of the Human Factors and Ergonomics Research Group at Cornell University, "carpal tunnel syndrome isn't the only injury to worry about when working at a computer" (Manjoo, 2003). Eyestrain and poor posture are not uncommon in technology-related activities. Another aspect of digital health and wellness that is just now coming to the forefront is technology addiction. Some users become so dependent on technology that it disables them. When any technology use is taken to the extreme, it can cause problems not only psychologically but physically as well. Too often the concern is for the safety and security of equipment and not for the physical safety and security of the users. Sometimes computers are set on furniture that is too high or too low for the user. Too often users do not think of the long-term effects of using mobile technologies. If this attitude continues, there could be cumulative damage to the user that lasts a lifetime.
Digital Security (Self-Protection)

Electronic precautions to guarantee safety.

As more and more sensitive information is stored electronically, there needs to be a corresponding strategy to protect that information. Users need to learn how to protect their elec-
tronic data (e.g., virus protection, firewalls, and backups). The idea of protecting what we have should not be foreign to anyone. People put locks on their doors and fire detectors in their homes, and some have security systems to protect their families and possessions. A user's computer should have as much (or more) security on it. Why is there a need for this additional protection on computers when there are locks on the door? The reason is that technology intruders today are not coming through the front door, but through the wires of users' homes. Without security measures, information is available to anyone who would hack into the computer, and leaving it unprotected invites thieves into the home. How is this happening? Having no virus protection (or out-of-date virus definitions), being connected 24 hours a day, 7 days a week without any firewall protection, and running wireless setups without encryption all invite problems. A simple, regularly scheduled backup is one of the most concrete precautions, as the sketch below illustrates.
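As a concrete illustration of the "backups" precaution mentioned above, here is a minimal sketch that archives a folder into a dated compressed file. It is only an example; the folder paths are hypothetical and the script is not part of the digital citizenship framework itself:

```python
# Minimal backup sketch: copy a folder into a dated .tar.gz archive.
# The source and destination paths are hypothetical examples.
import shutil
from datetime import date
from pathlib import Path

def backup(source: Path, dest_dir: Path) -> Path:
    """Archive `source` into dest_dir/<name>-YYYY-MM-DD.tar.gz."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    base = dest_dir / f"{source.name}-{date.today().isoformat()}"
    # shutil.make_archive appends the ".tar.gz" suffix itself
    return Path(shutil.make_archive(str(base), "gztar", root_dir=source))

if __name__ == "__main__":
    archive = backup(Path.home() / "Documents", Path.home() / "Backups")
    print(f"Backup written to {archive}")
```

Run regularly (for example, from a scheduled task), even a small script like this turns the abstract duty of "protecting electronic data" into a repeatable habit.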
The Nine Elements Reviewed

The nine elements were created to help users understand the various aspects of digital citizenship. It is worth noting that digital citizenship does not set down a set of rules, but is instead a way of focusing on the issues that all technology users face. These elements also provide a starting point for users preparing to be digital citizens. Digital citizenship acknowledges that users have different degrees of knowledge when dealing with technology. Depending on their awareness and understanding of technology, users can focus more deeply on a specific element while keeping the others in mind. As new digital technologies emerge, a framework of codified principles will be harder to create. Yet society needs guidance on how it should act with respect to technology. Laws and policies have tended to fail to focus on a central aspect of inappropriate technology use in a digital society: lack of knowledge. Users need
to have a grounded understanding of the technology and how it is to be appropriately used. To this point, there has been limited universal agreement on how users should act in relation to digital technologies. Digital citizenship begins that discussion. Coming to a consensus on how everyone will deal with digital technology will be difficult at best. However, we must begin the discussion somewhere, and digital citizenship sets this framework. Technology has brought society into a digital world. This technological world has changed how we behave and function as citizens. Today, users live, work, and interact not only in the "real" physical world but in a digital, virtual world as well. Citizenship today takes on a whole new meaning, beyond our understanding of states and countries to one of bits and bytes. This new citizenship will encompass more than working with our neighbors; it will encompass how we work with others around the globe. Our children will have to learn how to work with other users in India, China, and Russia to be effective in the coming century. In his book The World Is Flat, Thomas Friedman underlines the urgent need to begin preparing our children for this new world. That world will have new rules, and it needs a guiding beacon such as digital citizenship. A common framework such as digital citizenship provides a starting point from which everyone can understand each other. This new citizenship goes beyond knowing the rules and policies to understanding how individuals perceive the technology. Users should not look at technology as a collection of toys or gadgets, but as tools to communicate and interact in this new world. Users need to see themselves as members of a community and to act in a way that reflects this knowledge when using the technology. It is often difficult to separate the technology from the users today. But this is the importance and the challenge of digital citizenship: to balance both technology and the users.
In schools, students are taught how to be good citizens of a country, what their rights are, and what their responsibilities are as members of that society. Children need to begin developing this same responsible behavior in the new digital society today, and engaging them in discussion of these nine elements is a way to start. Everyone needs to learn how to become a good digital citizen in a new digital society. Users may not be aware of the challenges that they may face every day. Having knowledge of the nine elements and the issues that accompany them will allow users to engage with these issues. As discussions about technology arise, digital citizenship can act as a cornerstone of those discussions. The technology of today will quickly give way to new technologies, so children need to be taught how to think about appropriate use of all technology, both today and in the future. The question is: how should users proceed into this new society?
A REFLECTION MODEL OF DIGITAL CITIZENSHIP

How can users begin thinking about these nine elements in their daily technology use? By employing a reflection model, which users could, and arguably should, consult each time they use digital technology. As users become more aware of their actions when using technology, they should come to realize the consequences and implications of those actions. This model will help users begin to formulate how they should use digital technology in the future. There are four stages in the reflection model to enhance the understanding and development of digital citizenship: (1) awareness, (2) guided practice, (3) modeling and demonstration, and (4) feedback and analysis. These stages provide a framework for helping users understand why being a good digital citizen is important.
The reflection model supports a discussion of the issues arising with respect to technology and helps users focus on using technology appropriately.
Stage One: Awareness

Awareness means engaging users to become technologically literate. The awareness stage goes beyond basic knowledge or information about hardware and software. Users need to focus on examples of misuse and abuse of technology. Everyone needs to learn what is appropriate and what is not when using different digital technologies.
Stage Two: Guided Practice

Following awareness activities, there need to be additional opportunities (i.e., guided practice) to focus on appropriate use of technology. Users should be able to utilize technology in an atmosphere where exploration and risk taking are promoted. During this time, users will need the support of knowledgeable technology users who can provide guided direction when mistakes are made. During this stage the new user needs to recognize the issues related to the appropriate use of the technologies.
Stage Three: Modeling and Demonstration

Once users have had the opportunity to learn and practice the skills of technology use, they need to model appropriate use for others. Users need to think about how others see their technology use and how it affects others. All users need to be positive role models of good digital citizenship so others can follow their example. Young technology users need numerous technology role models to gain a thorough understanding of digital citizenship.
Stage Four: Feedback and Analysis

This final stage involves providing users feedback on their technology use. Users should discuss their use of technologies with others to see how they can use them more appropriately. Technology users need to provide constructive criticism on how technologies should be used in all parts of society. Users should analyze and explore why they should use technologies in a certain way, and they need a forum to illustrate specific examples of behaviors and why those behaviors are inappropriate. It can be difficult to reflect on one's actions after they have occurred, but it is a necessary part of the process. Without opportunities for self-reflection or self-contemplation, inappropriate behavior will be repeated over and over in the future.
FUTURE TRENDS

Digital citizenship expects users to understand and follow the laws, rules, and policies that have been created, but it stretches beyond this. Digital citizenship means having digital technology users understand that there are reasons for having such conventions. Digital citizenship attempts to provide a framework for users to think critically about what is acceptable, which in turn will lead them to do the right thing. The goal of digital citizenship is to create a citizenry that learns these justifications early in life, so that later they will not have to deliberate about whether something is appropriate; they will have the tools to evaluate digital technology situations and come to reasoned conclusions. Just like shoplifting, downloading illegal items from the Internet may seem easy and fun, but individuals often discover that both can have consequences that are not enjoyable. Users of technology need a strong foundation in digital citizenship, because their use of technology affects themselves as well as those around them. Digital technology users must begin evaluating their own technology use to be productive members of a digital society. This society may have laws and policies on technology use, but users' understanding of the technology and how it should be used in a digital society will define how effective they will be in the future.
CONCLUSION

It is becoming apparent that misuse and abuse of technology is more prevalent in society than ever before. The development of digital citizenship will provide an invaluable resource for technology leaders. Digital citizenship provides a framework for how experienced technology users should explain the need for appropriate use of technology, and it provides a foundation for technology leaders to organize teaching and learning while using technology. Technology provides many opportunities for users to access growing amounts of information, but without a systematic way of using the technology appropriately, these opportunities may be curtailed (through more laws and policies). Technology leaders need to have a firm knowledge of the issues that are at stake, and digital citizenship provides that foundation. This topic suggests that there is a need for strong technology leadership within all aspects of society. Digital citizenship can provide a foundation for how technology leaders can begin defining how they will share this information with others. But beyond this, there must be a process of planning, implementation, and evaluation of the digital citizenship process. Technology is creating many changes in society. The last 40 years have seen a transition from a few isolated computers to a society dependent on technology. The infusion of these digital technologies has changed how users work, play, and learn. When users misuse technology, it causes issues for everyone who enjoys the freedoms technology provides. As technology becomes more and more
prevalent, so do the issues of misuse and abuse. It is only through knowledge of how to use technology appropriately that everyone will be able to maintain the rights we currently enjoy.
REFERENCES

4teachers.org. (2004). Dictionary main page. Retrieved February 22, 2004, from http://www.4teachers.org

Bork, A. (1993). Technology in education: An historical perspective. In R. Muffoletto & N. N. Knupfer (Eds.), Computers in education: Social, political & historical perspectives (pp. 71-90). Cresskill, NJ: Hampton Press.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press.

CyberAtlas Staff (2003). Colleges a gateway to software piracy. CyberAtlas. Retrieved September 30, 2003, from http://cyberatlas.internet.com/big_picture/applications/article/0,,1301_3078901,00.html

Ellis, T. S. & Griffith, D. (2001). The evaluation of IT ethical scenarios using a multidimensional scale. The Database for Advances in Information Systems, 32(1), 75-85.

Friedman, T. L. (2006). The world is flat: A brief history of the twenty-first century. New York: Farrar, Straus and Giroux.

International Society for Technology in Education. (2003). ISTE NETS main page. Retrieved December 10, 2003, from http://cnets.iste.org/index.shtml

Johnson, D. G. & Nissenbaum, H. (1995). Computers, ethics & social values. Upper Saddle River, NJ: Prentice Hall.
Harmon, A. (2004, February 5). Geeks put the unsavvy on alert: Learn or log off. New York Times. Retrieved February 6, 2004, from http://www.nytimes.com/2004/02/05/technology/05VIRU.html?th=&pagewanted=print&position=

Manjoo, F. (2003). Carpal study stress syndrome. Retrieved October 15, 2003, from http://www.wired.com/news/politics/0,1283,44400,00.html

Mason, R. O. (1986). Four ethical issues of the information age. MIS Quarterly, 10(1), 5-12.

Mendels, P. (2000, February 16). Online ethics should begin in classroom, educators say. The New York Times, Technology section.

Merriam-Webster. (2004). Dictionary main page. Retrieved February 18, 2004, from http://www.m-w.com

Moor, J. H. (1995). What is computer ethics? In D. G. Johnson & H. Nissenbaum (Eds.), Computers, ethics & social values (pp. 7-15). Upper Saddle River, NJ: Prentice Hall.

Motion Picture Association of America & Junior Achievement (2003). What's the diff? JA.org. Retrieved July 21, 2004, from http://www.ja.org/programs/programs_supplements_citizenship.shtml

Occupational and Environmental Health Center. (2004). Ergonomic technology center. Retrieved September 18, 2006, from http://www.oehc.uchc.edu/ergo/index.htm

Paige, R. (2004). Toward a new golden age in American education. National educational technology plan website. Retrieved May 25, 2005, from http://www.nationaledtechplan.org/

PC Webopedia. (2004). Dictionary main page. Retrieved February 26, 2004, from http://www.pcwebopedia.com

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 10-15.
Ribble, M., & Bailey, G. (2004). Digital citizenship: Focus questions for implementation. Learning & Leading with Technology, 32(2), 12-15.

Shulman, S., Beisser, S., Larson, T. & Shelley, M. (2002). Digital citizenship: Service-learning meets the digital divide. Drake.edu. Retrieved April 14, 2004, from http://www.drake.edu/artsci/faculty/sshulman/ITR/digitalcitizenship.htm

Southern Association of Colleges and Schools. (1996). Criteria for accreditation. Commission on Colleges, Decatur, GA.

TechWeb. (2004). Dictionary main page. Retrieved November 15, 2004, from http://www.techweb.com

WhatIs. (2004). Dictionary main page. Retrieved March 5, 2004, from http://www.whatis.com

Wired News Staff (2003, May 1). Students fork it over to RIAA. Retrieved May 1, 2003, from http://www.wired.com/news/digiwood/0,1412,58707,00.html

Yevics, P. (1999). Netiquette – what it is and why should you care? The Maryland State Bar Association, Inc. Retrieved September 28, 2005, from http://www.msba.org/Departments/Ioma/articles/officemngmt/netiquette.htm
KEY TERMS

Acceptable Use Policy (AUP): Policy set up by the network administrator or other school leaders in conjunction with their technology needs and safety concerns. This policy restricts the manner in which a network may be used, and helps provide guidelines for teachers using technology in the classroom (4teachers.org, 2004).

Computer Ethics: Analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology (Johnson & Nissenbaum, 1995).

E-Commerce (Electronic Commerce): Buying and selling of goods and services on the Internet, especially the World Wide Web (WhatIs, 2004).

Ergonomics: The science of fitting the workplace to the worker; it involves reducing exposures to physical trauma, redesigning tools and workstations, and preventing and treating Cumulative Trauma Disorders (CTDs), such as Carpal Tunnel Syndrome and Tendonitis (Occupational and Environmental Health Center, 2004).

Information Literacy: Ability to locate, evaluate, and use information to become independent life-long learners (Southern Association of Colleges and Schools, 1996).

IT (Information Technology): Pronounced as separate letters, the broad subject concerned with all aspects of managing and processing information, especially within a large organization or company. Because computers are central to information management, computer departments within companies and universities are often called IT departments. Some companies refer to this department as IS (Information Services) or MIS (Management Information Services) (PC Webopedia, 2004).

Netiquette (Internet Etiquette): Etiquette guidelines for posting messages to online services, and particularly Internet newsgroups. Netiquette covers not only rules to maintain civility in discussions (i.e., avoiding flames), but also special guidelines unique to the electronic nature of forum messages. For example, netiquette advises users to use simple formats because complex formatting may not appear correctly for all readers. In most cases, netiquette is enforced by fellow users who will vociferously object if you break a rule of netiquette (PC Webopedia, 2004).
Plagiarize: To steal and pass off (the ideas or words of another) as one's own; use (another's production) without crediting the source; to commit literary theft; present as new and original an idea or product derived from an existing source (Merriam-Webster, 2004).
Chapter XVIII
Technoethics in Education for the Twenty-First Century Deb Gearhart Troy University, USA
ABSTRACT

Are we developing a (global) society where our youth think it is ok to copy and paste whatever they see on the Internet and turn it in for homework; where writing an English paper would include BTW, IMHO, and LOL, among other chat abbreviations; where downloading a song or movie that they can pirate from the Web is perfectly ok? We would certainly hope not. However, these concerns are just the tip of what is happening in our society. When looking at the social impact of technology on our society, the importance of instilling ethical behaviors and practices in the members of our society becomes clear. Where is the best place to instill these ethical behaviors? This author contends it is within our education system. But is our education system prepared to deal with the ethical issues being raised by our use of technology, known as technoethics? Currently it is not. This chapter defines technoethics for education and provides suggestions for technoethics in our education system.
INTRODUCTION

Freedman (2006) commented that determining what is ethical is difficult to do under any circumstance; it is even harder in this Internet age. He noted that we are dealing with all types of issues, including privacy, free speech, and racial and cultural issues. Swierstra (1997) looked at technology changes in relation to the survival of society over the years and commented that the technology of
this era has had more impact on society than any other technology change. It is no longer a matter of survival but of the quality of life and having a good life. Galván (2001) noted that technology is not an addition to man but is, in fact, one of the ways in which mankind distinguishes itself from animals and has provided added value to mankind. These comments are from some recent research on technoethics. As students enter the public school systems, the youth of our society
are fully entwined with technology and may be learning bad practices in using and learning with technology. From elementary education on, we need to instill ethical, legal, and moral practices of technology use in our youth. Growing up with technology, our youth should learn all aspects of using technology. As with many societal norms, we cannot expect that such values will be learned in the home. Defining technoethics and reviewing the literature on how it has developed is important to demonstrate how education must rise to the occasion. Chapter objectives:

• To define the concept of technoethics
• To review pertinent literature and research on technoethics in education
• To provide recommendations on how technoethics can be handled in our education system
BACKGROUND

Defining Technoethics

Floridi and Sanders (2001) proposed that the ethical issues surrounding computer and information technology are a new species of traditional moral issues, based on the idea that computer-ethical issues can be classified into traditional ethical categories such as personal privacy, harm, taking responsibility for the consequences of one's actions, putting people at risk, and so on. On the other hand, the presence of computer technology often means that the issues arise with a new twist, a new feature, a new possibility. The new feature makes it difficult to draw on traditional moral concepts and norms. However viewed, technoethics must exist as a field worthy of study in its own right and not merely because it can provide a useful means to other ends. To endure as a separate field, there must be a unique domain for technoethics distinct from the domain of moral education,
distinct even from the domains of other kinds of professional and applied ethics. Technology raises special ethical issues; hence technoethics deserves special status. According to Bao and Xiang (2006), technoethics should be the ethical basis for the world or global community: a behavioral norm for all those who are active in the global community, such as international organizations and nations, as well as other groups and individuals. Technoethics can be a new principle or a variation on past research findings; it provides principles geared to the needs of the digital era. For the purposes of this chapter, technoethics is defined as the study of moral, legal, and social issues involving technology. Technoethics examines the impact of technology on our social, legal, and moral systems, and it evaluates the social policies and laws that have been framed in response to issues generated by its development and use (Tavani, 2004).
Literature Review

Sloan (1980) noted that a review of the teaching of ethics over a hundred-year period shows that ethics has been uniquely and inseparably connected with the most important issues of modern higher education. No wonder there is an emphasis on technoethics now, in the information age. Technology has become one of the most important issues in higher education, and in education in general. According to Sloan, academic freedom lies at the heart of ethics and responsibilities for members of the academic community; ethics and responsibilities are at the core of reconstructing a sensible and defensible rationale for the preservation of academic freedom. With these thoughts in mind, the literature review is designed to introduce a basic understanding of ethical theory, the development of the use of technology, and the major ethical issues related to technology use.
Brandon (1980) commented that ethical and technological developments have always proceeded hand-in-hand, because ethical rules of conduct only make sense if man has some control over his environment. As environmental control has become increasingly sophisticated through developing technology, so have ethical codes become more elaborate. If we continue our technological development in the face of the present pressures of increasing population and limited resources, we need to balance technology with the environment so that future societies can lead better lives. Brandon (1980) contends that if control over the environment is not exercised, then mankind is due to meet the fate of the dodo and the dinosaur: an evolutionary dead end unable to adapt to change.

Given this discussion of the effect of technology on society, it is important to understand what technology is. Brey, Floridi and Grodzinsky (2005) noted that there are currently a mobile and wireless revolution and a ubiquitous computing revolution, as well as revolutionary new uses of information technology in biomedicine, education, the fight against crime and terrorism, entertainment, and other areas. It is their contention that ethical reflection on information technology occurs at various stages. Sometimes ethics focuses on entrenched information technology: systems and software that have been accepted by a community of users and have engendered effects on users and society at large. Sometimes ethics is focused on information technology that is still in its introductory stage: technology that exists but is still somewhat unproven, used by early adopters but not by a large community, and that can still develop in different directions. At yet other times, ethical reflection focuses on information technology that is still in development, or even on anticipated future developments, at a stage when consequences are still unclear and the technology may still take different forms. Evidently, ethical reflection will not be the same at these different stages. When technologies have not yet been fully
formed, ethics will have to be more speculative and more concerned with plausible scenarios and consequences than when the technology is fully entrenched. Also, while technology is still being developed, there is more room to reflect morally on technology design as opposed to technology usage.

Moor (2005) commented that technology is ambiguous. He explains technology through the idea of a technological revolution. However, to understand a technological revolution, we must first understand the terminology Moor uses. According to Moor, a technological paradigm is a set of concepts, theories, and methods that characterize a kind of technology, while a technological device is a specific piece of technology (pg. 111). Technological devices are instances or implementations of the technological paradigm. Technological development occurs when either the technological paradigm is elaborated in terms of improved concepts, theories, and methods, or the instances of the paradigm are improved in terms of efficiency, effectiveness, safety, and so on. In some cases technological development has an enormous social impact; when that happens, a technological revolution occurs. Technological revolutions do not arrive fully mature. Moor (2005) goes on to explain that we can understand a technological revolution as proceeding through three stages: the introduction stage, the permeation stage, and the power stage. In the first stage, the introduction stage, the earliest implementations of the technology are esoteric, often regarded as intellectual curiosities or even as playthings more than as useful tools. In the second stage, the permeation stage, the technological devices are standardized: the devices become more conventional in design and operation, the number of users grows, costs come down, training is available, and technology integration begins in the society. In the third stage, the power stage, the technology is firmly established and readily available. A technological revolution has a large-scale
transforming effect on the manner in which a society functions. According to Moor (2005), to identify a technological revolution one must consider the technological paradigm, the technological devices that instantiate the paradigm, and the social impact of these devices. The paradigm will evolve and be articulated in new ways over time but will be identifiable as an alteration of the original version of the paradigm. A technological revolution will have many technological developments within it. A sub-revolution is a technological revolution that is embedded in another. The sub-revolution will have a more specific paradigm that is a restricted version of the general paradigm, and will have devices that instantiate this more specific paradigm and are special cases of the more general revolution. The sub-revolution may move through the stages of technological revolution at the same time as the broader revolution or at different times. Examples of sub-revolutions in the computer revolution include cell phone technology and the World Wide Web. Moor (2005) then states that technology, particularly revolutionary technology, generates many ethical problems. We are confronted with policy vacuums, and we need to formulate and justify new policies (laws, rules, and customs) for acting in these new kinds of situations. What makes a technological change truly revolutionary is its impact on society. Ethical problems can be generated by a technology at any of the three stages, but the number of ethical problems will be greater as the revolution progresses. According to this model, more people will be involved, more technology will be used, and hence more policy vacuums and conceptual muddles will arise as the revolution advances. Thus, our ethical challenge will be greatest during the last stage of the revolution. This argument is forceful for technology in part because we can see the dramatic effects technology has had and the ethical problems it has raised. Convergence of technology may occur with one technology serving as a component of another.
Thus, convergence may involve one technology enabling another technology as a tool, as a component, or as a model. The convergence of developing technologies makes the revolutionary outcomes discussed above likely, and revolutionary outcomes make ethical considerations ever more important. The number of ethical issues rises with the development of a technological revolution. In the introduction stage there are few users and limited uses of technology, but ethical issues will still arise. During the permeation stage, users and uses grow and more ethical issues are expected. The power stage, as expected, has the greatest number of ethical issues. From this explanation of a technological revolution, Moor developed what he calls Moor's Law: as technological revolutions increase their social impact, ethical problems increase (Moor, 2005, pg. 117). This phenomenon happens not simply because an increasing number of people are affected by the technology, but because revolutionary technology will inevitably provide numerous novel opportunities for action for which well thought out ethical policies will not have been developed. The ethical issues that we confront will not only come in increasing numbers, but will come packaged in terms of complex technology. Such ethical issues will require considerable effort to be understood, as well as considerable effort to formulate and justify good ethical policies. Moor (2005) suggests three ways to improve our ethical approach to technology:

1. We need to realistically take into account that ethics is an ongoing and dynamic enterprise.
2. We need to establish better collaborations among ethicists, scientists, and technologists; we need a multi-disciplinary approach.
3. We need to develop more sophisticated ethical analyses.
Barbour (1993), in an early look at technology and technoethics, noted that some see technology as a source of higher living standards, improved health, and better communication. Others are critical of technology, holding that it leads to alienation from nature, environmental destruction, the mechanization of human life, and the loss of human freedom. A third group asserts that technology is ambiguous, its impacts varying according to the social context in which it is designed and used, because it is both a product and a source of economic and political power. Barbour proposes that technology be seen through three lenses: as a liberator, as a threat, and as an instrument of power:

• Technology as a liberator: The benefits of technology
1. Higher living standards
2. Opportunity of choice
3. More leisure
4. Improved communications

• Technology as a threat: The human costs of technology
1. Uniformity in a mass society
2. Narrow criteria of efficiency
3. Impersonality and manipulation
4. Uncontrollability
5. Alienation of the worker

• Technology as an instrument of power
1. Technology and political power
2. Redirection of technology
3. Social construction of technology (pgs. 3-4)
With this explanation of technology, we move on to the ethical theory related to technology. Table 1 (from Tavani, 2004) provides the advantages and disadvantages of the basic ethical theories.

Table 1. Four types of ethical theory

Type of theory | Advantages | Disadvantages
Consequence-based (utilitarian) | Stresses promotion of happiness and utility | Ignores concerns of justice for the minority population
Duty-based (deontology) | Stresses the role of duty and respect for persons | Underestimates the importance of happiness and social unity
Contract-based (rights) | Provides a motivation for morality | Offers only a minimal morality
Character-based (virtue) | Stresses moral development and moral education | Depends on homogeneous community standards for morality

(Used with permission of the author)

Tavani (2004) goes on to explain that technoethics, as a field of study, can be understood as a branch of applied ethics. Applied ethics, as opposed to theoretical ethics, examines practical ethical issues, analyzing those issues from the vantage point of one or more ethical theories. Whereas ethical theory is concerned with establishing logically coherent and consistent criteria in the form of standards and rules for evaluating moral problems, the principal aim of applied ethics is to analyze specific moral problems themselves through the application of ethical theory. Understanding technoethics as a field of applied ethics (one that examines moral issues pertaining to technology) is an important first step, but applied ethics for technology also needs to be looked at from the perspective of professional ethics. The field of technoethics as professional ethics can be understood as identifying and analyzing issues of ethical responsibility for professionals working with technology. Among the technoethics issues considered from this perspective are those having to do with computer professionals' role in designing, developing, and maintaining computer hardware and software systems. Conducting research in applied ethics has three stages of methodology:

1. Identify a particular controversial practice as a moral problem.
2. Describe and analyze the problem by clarifying concepts and examining the factual data associated with that problem.
3. Apply moral theories and principles in the deliberative process in order to reach a position about the particular moral issue (pg. 14).
Now that we have looked at the social implications of technology and at ethical theory, particularly in the area of applied ethics, we will look at some of the ethical issues related to technoethics in education.
Ethical Issues

One of the distinguishing features of information, and one that makes it of revolutionary significance for global development and betterment, is the fact that it is shareable: it can be given away without the original possessor being deprived of it. To be useful, of course, information must be accurate or true, for false information or disinformation will hinder, rather than produce, positive effects on the world and people's lives (DeGeorge, 2006). DeGeorge discusses the ethical issues of how technology affects the ownership of information, known as intellectual property. Information has become more and more important in technological development, so the desire of profit-making organizations for the protection of intellectual property has become stronger. Two areas are particularly instructive: one is computer
software; the other is peer-to-peer technology and its use. The first raises the issue of copyright; the second relates copyright to technology. This is a sensitive issue for education, where fair use of intellectual property is critical and keeping abreast of copyright laws related to technology is difficult. According to DeGeorge (2006), if someone expends a certain amount of time, money, or energy in developing some expression or application of an idea, be it in the form of a written work or of an invention, it is unfair for someone else to copy and profit from its sale before the original author or inventor can do so, especially if doing so prevents the original author or inventor from reaping any beneficial rewards, because it is akin to stealing the results of another's labor for one's own benefit at the expense of the originator. Computer programs and software are protected by copyright rather than by patent, so the copying or sharing of computer programs and software is considered a copyright violation. DeGeorge goes on to note that although the practice of lending programs is fairly common (especially in education), and although producers complain that such sharing costs them millions of dollars, they rarely take legal action. The act of copying and lending violates copyright law, and so performing those actions is unethical; these are civil, however, and not criminal offenses. Fair use could easily be written to include individual trades, selling a disk, or lending a disk to a friend. Such use is to be distinguished from copying and selling the program, and from making the program available, for example, on the Internet to anyone who wishes to download it, which are unethical irrespective of the law. The present content of the U.S. laws is not ethically derived and so not ethically mandatory for all to adopt. Copyright laws are very confusing, and for educators, determining what is fair use and what can be copied and used in the classroom and in computer labs presents many ethical issues for the use of technology in education.
DeGeorge (2006) goes on to describe the second concern, peer-to-peer technology, and notes that two issues have been confused in discussions about peer-to-peer technology. One is the issue of the technology itself. The second, which dominates most discussion, is the use to which such technology is put. The most popular and most widely used peer-to-peer programs are freeware, freely downloaded and freely shared. The second interesting aspect is that the illegal uses to which the technology is being put are causing producers of copyrighted material to change their ways of doing business, sometimes to the advantage of the consuming public. The major issue is the sharing of copyrighted music, with a growing capability of sharing copyrighted movies, TV tapes, and other products. Downloading the programs themselves is not illegal, nor is using the programs for file sharing of non-copyrighted material; it is the sharing of copyrighted material that is an issue, and such sharing is unethical besides being illegal when both sender and receiver are in jurisdictions covered by applicable copyright laws. Even in jurisdictions where the copying is illegal, the ethical argument comes under two kinds of attack, the first from the dictum that an unenforced law is not binding. The main point is that, although it might be abused and used for unethical purposes, the technology itself is not unethical, and so developing it is not unethical. The fact that it is easier to outlaw the technology than to enforce the copyright laws by going after specific offenders is not sufficient justification for legally restricting the development of technology. Tavani (2005) discussed how education deals with the intellectual property and copyright issues described by DeGeorge in an article which looks at the Digital Millennium Copyright Act (DMCA). The DMCA, which restricts the way that information in digitized format can be accessed and used, also has significant implications for the information commons. Despite the fact that digital technology has made information exchange easy
and inexpensive, the DMCA has made it more difficult to access information that either resides in or is converted to digitized form, a big issue for educators.

Rotenberg (1998) discussed ethical issues related to communications privacy, including:

• Confidentiality, which refers to the expectation that information will be moved between two parties without disclosure to a third party.
• Anonymity, the right of an individual to decide whether to disclose his or her identity to another.
• Data protection, which refers to those principles regarding the collection, use, and disclosure of personal information.
Rotenberg reviewed a study by the New York Public Service Commission which identified four factors that contribute to growing concern about communications privacy: (1) the growth of electronic transactions, (2) the accelerated collection of personal information, (3) the dramatic increase in the number of communications carriers and service providers, and (4) the growing use of technologically unsecured conduits for communications traffic, such as mobile communication. The study then made several recommendations for future service offerings: (1) there should be no enforced reduction of network intelligence to protect privacy; (2) there should be adequate privacy protection for all users; (3) privacy-promoting technologies should be encouraged; (4) users with special needs should be provided with "premium privacy"; (5) privacy risks, particularly involving the disclosure of personal data, should be made known to users; and (6) organizations that collect personal data should operate as "trustees" and protect the privacy rights of customers. Finally, in this literature review we will look at value analysis of ethics in order to establish an understanding of how ethics can and should be
incorporated into personal value systems. We also have a list of ethical issues developed by researchers, to sum up what educators deal with in technoethics. Allen and Voss (1997) discussed ethics in terms of values and provided steps for value analysis. By their definition, an ethical value is a belief or principle rooted in moral behavior, based on a sense of what is right; an unethical value is a belief or principle rooted in immoral or amoral behavior, based on a sense of what is wrong or, at least, on consciously disregarding what is right; and a non-ethical value is a belief or preference that is not related to right and wrong. Allen and Voss provided six steps in value analysis:

1. Define the issue and identify the stakeholders.
2. Determine the stakeholders' interests.
3. Identify the relevant values that bear on the issue.
4. Determine the values and interests that are in conflict.
5. Apply a model to rank values according to importance, to weigh the values and interests that are in conflict.
6. Resolve the conflict in favor of the higher (more important) value (pgs. 20-21).

Allen and Voss contend a good rule of thumb for addressing ethical issues is to picture yourself as an impartial mediator who is paid to weigh all sides of an issue objectively and choose the best path of action. The analogy is not perfect: mediators by definition are usually seeking to forge a compromise between contending parties, whereas in value analysis, if a clearly ethical value is in conflict with a clearly unethical one, there is no room for compromise. Value analysis clarifies ethical conflicts into four areas, with the emphasis that honesty is still top priority:

• Personal preference values: Sports, music, diet, smoking, and so on
• Conscientiously shared values: Religion, political parties, and so on
• Important shared values: Family, hard work, loyalty, compassion, financial security
• Critical shared values: Honesty, respect for property, respect for law, keeping promises, physical integrity (pg. 27)

Ess (2002) noted that the domain of Internet research ethics contains a considerable diversity of views and approaches and includes a host of particular ethical dilemmas. He provided a list of central ethical issues developed by the Committee for Scientific Freedom and Responsibility (CSFR):

1. Respect for persons (as foundational value for all the rest)
2. Privacy
3. Confidentiality
4. Informed consent
5. Anonymity/pseudonymity, as these, along with 1-4, are complicated in Internet venues
6. Risk/benefit to participants
7. Risk/benefit to social good
8. Public vs. private space
9. Subject compensation
10. Justice (i.e., the fair distribution of the benefits of research)
11. Cross-cultural issues
12. Special/vulnerable populations
13. Deception
14. Non-disclosure
15. Conflict of interest
16. Research misconduct
This list includes the range of values and acts dependent upon respect for persons, i.e., treating human beings as autonomous beings, whether as human subjects and/or as researchers. From this literature review, which provided the background to ethical theory, applied ethics, and
ethical issues related to technoethics, we move to the main area of this chapter, discussing how education can deal with technoethics for the twenty-first century.
TECHNOETHICS IN EDUCATION

Now we will begin to look at the technology-related issues for education and establish how educators can use technology to prepare students with the skills to use technology ethically in the twenty-first century. According to Strike and Ternasky (1993), ethics can be applied to education in three principal ways: assisting in educational policy making, assessing the school's role as a moral educator, and informing standards to govern the conduct of educators. This section deals with administrative and policy issues. Doctor (1998) presented views on the legislative inequities of access to information from the Internet for education, particularly for K-12 schools, arguing that such legislation has been detrimental to schools. Doctor recommended providing information to empower communities in our society, including:

• Providing assistance in learning about and obtaining needed social services, including employment information.
• Providing access to information repositories and resources that otherwise either would not be available or would be difficult to locate or obtain.
• Enhancing education opportunities and effectiveness by linking parents, teachers, and students, and expanding the repertoire of teaching tools for K-12.
• Creating new and effective communication links with local public officials in an effort to provide a more responsible and helpful government.
• Providing open and free communication as equals with others in the community; serving as an organizing tool for people with common concerns.
• Creating a sense of community spirit and individual belonging and providing information about community activities (pg. 237).
It is critical in developing twenty-first century skills for our students to develop the communication links described by Doctor, particularly for enhancing educational opportunities and providing the tools needed in the K-12 system. Newman (1990) described research which supports Doctor's comments. First, Newman described a concept known as appropriation, demonstrating how a tool can, in fact, have quite a different interpretation for the child and the adult. It is important for a student to understand how to use technology appropriately. Seeing that the student's action is appropriate provides the student with an analysis of the task as the teacher understands it. Thus, knowledge is actively constructed in the social interactions where the meaning of an action can be changed retrospectively by the actions of others that follow it. The concept of appropriation then applies to the educational system's adoption of technology. The changes that take place when technology is appropriated by an environment may appear to be minimal if the tool fits into the existing structure. Although the school must appropriate the technology in order to use it at all, under some conditions the school can appropriate a technology that, in turn, helps to change the educational environment. This process may result in a new interpretation of the tool as well as a constructive change in the classroom activity. Secondly, Newman (1990) discussed research on the concept of the educational environment in relation to technology use in our educational system. The term educational environment refers to the social world in which any such technology functions.
Newman looked at computers in schools as the technological environment. The increase in the number of school computers makes their impact on classroom organization an increasing concern for educators. An educational environment, combined with a theory in which learning is social and mediated by technological tools, suggests a methodological approach in which the technology is put to use in actual instruction.

Barbour (1993) developed a method for assessing technology that assists in developing policy. Educators should, but often do not, look at the ethical side of developing policy, often not developing technology policy at all. The lists below are the areas Barbour commented should be addressed in assessing technology, and then the methods used to do so:

• Assessing technology: Cost-benefit analysis.
1. Distributive justice: Who bears the costs?
2. Discounting the future: Are benefits current or future ones?
3. Environmental values: Does technology reduce costs to the detriment of the environment?
4. Human values: Are benefits gained at the cost of human resources?
5. Institutional biases: Institutions generally overstate benefits and understate costs.
6. Role of the expert: Biases of the expert in assigning monetary values to technology.

• Technology assessment methods.
1. Early anticipation: Technology assessment is an attempt to anticipate consequences beforehand rather than waiting for them to become evident.
2. Diverse impacts: A wide range of impacts are considered, beneficial as well as adverse, social and political as well as environmental and economic.
3. Diverse stakeholders: In the past, representatives of institutions have presented the benefits of new technology, whereas people who may have to face the indirect costs often have had no effective voice.
4. Alternative policies: Assessments not only trace current trends but also analyze the effects of alternative policies.
Tavani (2004), through his research, provided a strategy for approaching technoethics issues which can be used by educators:

• Step 1: Identify a practice involving technology, or a feature in that technology, that is controversial from a moral perspective.
1a. Disclose any hidden features or issues that have moral implications.
1b. If the issue is descriptive, assess the sociological implications for relevant social institutions and socio-demographic groups.
1c. If there are no ethical/normative issues, stop.
1d. If the ethical issue is professional in nature, assess it in terms of existing codes of conduct/ethics for relevant professional associations.
1e. If one or more ethical issues remain, go to Step 2.

• Step 2: Analyze the ethical issue by clarifying concepts and situating it in a context.
2a. If a policy vacuum exists, go to Step 2b; otherwise go to Step 3.
2b. Clear up any conceptual muddles involving the policy vacuum and go to Step 3.

• Step 3: Deliberate on the ethical issue. The deliberation process requires two stages.
3a. Apply one or more ethical theories to the analysis of the moral issue, and then go to Step 3b.
3b. Justify the position you reached by evaluating it against the rules of logic/critical thinking (pgs. 23-24).
Allen and Voss (1997) provided four guidelines for dealing with the value conflicts faced by educators when dealing with ethical issues in the workplace:

1. Communicate the issues to other colleagues or superiors.
2. Analyze the issue to do the greatest good for all parties concerned.
3. Personalize: trust your judgment on what's right and wrong.
4. Escalate work up the chain of command until the ethical issue is resolved.

Teaching Ethically

Allen and Voss mention personalization and trusting one's own judgment; Nisbet (1977) also provided information related to personalization and the teaching of ethics. The ethics of teaching is quite clearly the effort to understand and implement those actions that stimulate individual growth through cooperative effort. Teaching is the value-laden moral enterprise par excellence. Given the combination of intellectual and social capacities, the teacher's behavior is unethical if it destroys the students' and teacher's opportunities to become more than they are.

Nash (2002) discussed the best way to teach ethics ethically. Nash notes that there are no surefire answers to complex moral issues but that objective ethical analysis is possible. He uses a philosophical approach with ethical theories, which he believes is a practical tool any professional can use to develop ethical practices. Nash's rules in his class include finding truth in what you oppose and finding error in what you espouse; basically looking at, at least, both sides of every situation. According to Nash, every resolution to an ethical dilemma must consider the act, the intention, the circumstance, the principles, the beliefs, the outcomes, the virtues, the narrative, the community, and the political structure. In his teaching Nash uses codes of ethics as broad normative guidelines for behavior, rather than as definitive specifications for ethical decision-making. Nash provides the following rules/guidelines for good moral conversation and ethics for his class discussions:

• Make an effort to read and understand the course materials.
• Have an acute awareness that you have moral biases and blind spots.
• Keep an open mind; you will learn something from the materials, the instructor, and your peers in conversation.
• Have a willingness to improve your current moral language.
• Make a conscious effort to refrain from advancing your own current moral language as if it were the best one.
• Make a conscious effort to listen intently to the meaning of others' moral truths.
• Agree that clarifying, questioning, challenging, exemplifying, and applying ideas are activities to be done in a self- and other-respecting way.
• Do not force premature closure on a moral conversation.
• Find truth in what you oppose.
• Read as you would be read; listen as you would be listened to.
• Speak with, not at or separate from, each other.
• If you don't stand for something, you'll fall for anything.
• Find and express your own voice, but also find the right time to lower your own voice so that others might find theirs (pgs. 25-26).
Callahan (1980) noted that the goals of the teaching of ethics can be divided into three general classes:
1. Those that are important for all courses in ethics, whatever the educational level or context.
2. Those that are doubtful for all courses.
3. Those that will, as special topics, be either optional or important, depending upon the context.
He goes on to describe the important goals in the teaching of ethics:

• Stimulating the moral imagination
• Recognizing ethical issues
• Eliciting a sense of moral obligation
• Developing analytical skills
• Tolerating, and reducing, disagreement and ambiguity
Callahan concludes that once students complete an ethics course, they should meet the following objectives:

1. An understanding of, and ability to analyze, ethical concepts (e.g., rights, justice, liberty, autonomy).
2. Familiarity with the history of the development of ethical theories and with the examinations to which those theories have been subjected (e.g., utilitarianism and the objections leveled against it).
3. Familiarity with metaethical issues (e.g., the justification of moral judgments).
4. An understanding of the general cultural, social, and political context that can lead to, and sustain, particular ethical theories and modes of ethical reasoning, and which helps to explain the particular moral rules, mores, and practices of a given historical period or culture.
5. Familiarity with the general theories and findings of moral psychology.
6. Familiarity with those aspects of sociology and anthropology that are concerned with the development of ethical and value systems and practices (pgs. 72-73).
FUTURE TRENDS

How will we know that we are preparing our students for technoethics? This chapter has explained how teachers can provide ethical knowledge to students. There have been a limited number of studies demonstrating students' knowledge of ethical practices with technology, and these studies have primarily used surveys to determine students' understanding of ethical behavior. Gearhart (2005) reported on survey data collected at Dakota State University (DSU). Students were asked, "How often do students at your institution copy and paste information from the WWW/Internet into reports/papers without citing the source?" Forty-seven percent of first-year DSU students reported doing so, compared to twenty-seven percent of their peers nationally; fifty-five percent of DSU seniors reported doing so, compared to thirty-two percent of their peers. However, DSU's graduate and employer surveys found overwhelmingly that graduates were able to use information ethically in the workplace. If the questions on the NSSE (National Survey of Student Engagement) are an indicator of students' ethical judgments in their use of technology, then the affirmative responses to the graduate survey present a discrepancy between the ethical behaviors of the students and their perceptions of their own ethical behaviors. This presents an area to be further researched. Do students understand what ethical behavior is, especially when using technology? Are students also prepared with ethical research practices for the Internet? In the future, other studies should be completed to determine if our schools are adequately preparing students to use technology ethically.
CONCLUSION

The chapter has discussed the role that new technology should play in schools and in society. Generally, computers and associated technologies have a positive effect on society. However, technology can also have negative effects on the population, so it is important to use it ethically, not just correctly. Schools will need to provide a serious form of technology education, more than just instruction on how to use the technology. Also, our society needs to be wary of claims that technology will equalize learning opportunities. There is still a digital divide, and we do not want our students to be winners and losers based on access to technology (Postman, 1998). Sichel (1993) contends that professional teacher ethics primarily concentrates on improving an individual teacher's ethical reasoning and judgments. Accordingly, a teacher should be an autonomous moral agent who individually makes and carries out ethical judgments. The existence of a just, humane, and caring school is dependent on each teacher becoming just, humane, and caring. These researchers help sum up the goal of this chapter. Technoethics is a growing form of applied ethics. Defined as the study of moral, legal, and social issues involving technology, technoethics is becoming an important issue for the twenty-first century. Our students need to understand the moral, legal, and social ramifications of using technology. The literature review provided a basis for understanding ethical theory and explained how technoethics is applied ethics; it also explained the concept of technology. Finally, the chapter explained the issues related to technoethics for educational administrative concerns and provided information that teachers can use to improve instruction on ethical issues. Our students must be prepared for the ethical use of technology throughout the twenty-first century. Today's students are the leaders of tomorrow, and if our society is to prosper ethically
with technology, today’s teachers must practice and teach technology ethically. Working in education for over 20 years, the author has seen the increase of cheating, plagiarism, and other ethical issues brought on by technology innovations. Many students comment “I didn’t realize I couldn’t cut and paste…or I thought everything on the Internet was ok to use”. It is amazing who is cheating, pastors, attorneys, people on would expect ethical behavior from. If educators do not clamp down on students behaviors with technology early on in their education and teach students what ethical behavior is, our society will continue to experience major ethical issues related to ever growing technology use. By the time students get to higher education, where the author is an educator, it is often too late to correct poor ethical behavior.
REFERENCES

Allen, L. & Voss, D. (1997). Ethics in technical communication: Shades of gray. New York, NY: Wiley Computer Publishing, John Wiley & Sons, Inc.

Bao, Z. & Xiang, K. (2006). Digitalization and global ethics. Ethics and Information Technology, 8, 41-47.

Barbour, I. G. (1993). Ethics in an age of technology. San Francisco, CA: Harper San Francisco.

Brandon, D. G. (1980). The partnership of ethics and technology. In Kranzberg, M. (Ed.), Ethics in an age of pervasive technology. Boulder, CO: Westview Press.

Brey, P., Floridi, L. & Grodzinsky, F. (2005). Ethics of new information technology. Ethics and Information Technology, 7, 109.

Callahan, D. (1980). Goals in the teaching of ethics. In Callahan, D. & Bok, S. (Eds.), Ethics teaching in higher education. New York: Plenum Press.

DeGeorge, R. (2006). Information technology, globalization and ethics. Ethics and Information Technology, 8, 29-40.
Doctor, R. (1998). Justice and social equity in cyberspace. In Stichler, R. N. & Hauptman, R. (Eds.), Ethics, information and technology readings. Jefferson, NC: McFarland & Company, Inc., Publishers. Ess, C. (2002). Introduction. Ethics and Information Technology, 4(3), 177-188. Freedman, D. H. (2006). The technoethics trap. Inc. Magazine. Retrieved January 12, 2007 from http://www.inv.com/magazine/20060301/column-freedman_Printer_Friendly.html Galván, J.M. (2001). Technoethics: Acceptability and social integration of artificial creatures. Retrieved January 12, 2007 from http://www.eticaepolitica.net/tecnoetica/jmg_ acceptability%5Bit%5D.htm Gearhart, D. (2005). Topic: The ethical use of technology and the Internet in research and learning. Presented at DSU Center of Excellence Symposium, April 2005. Moor, J. H. (2005). Why we need better ethics for emerging technologies. Ethics and Information Technology, 7, 111-119. Nash, R. J. (2002). Real world ethics: Frameworks for educators and human service professionals, second edition. New York: Teachers College Press, Columbia University. Newman, D. (1990). Opportunities for research on the organizational impact of school computers. Educational researcher: A publication of the American Educational Research Association. 19(3), 8-13. Nisbet, L. (1977). The ethics of the art of teaching. In Hook, S., Kurtz, P. & Todorovich, M. (Eds.). The ethics of teaching and scientific research. Buffalo, NY: Prometheus Books. Postman, N. (1998). Education and technology: virtual students, digital classroom. In Stichler, R.
N. & Hauptman, R. (Eds.), Ethics, information and technology readings. Jefferson, NC: McFarland & Company, Inc., Publishers. Rotenberg, M. (1998). Communications privacy: Implications for network design. In Stichler, R. N. & Hauptman, R. (Eds.), Ethics, information and technology readings. Jefferson, NC: McFarland & Company, Inc., Publishers. Sichel, B. A. (1993). Ethics committees and teacher ethics. In Strike, K. A. & Ternasky, P. L. (Eds.), Ethics for professionals in education: Perspectives for preparation and practice. New York, NY: Teachers College Press, Columbia University. Sloan, D. (1980). The teaching of ethics in the American undergraduate curriculum, 1876-1976. In Callahan, D. & Bok, S. (Eds.), Ethics teaching in higher education. New York: Plenum Press. Strike, K. A. & Ternasky, P. L. (1993). Ethics for professionals in education: Perspectives for preparation and practice. New York: Teachers College Press, Columbia University. Swierstra, T. (1997). From critique to responsibility: The ethical turn in the technology debate. Society for Philosophy and Technology, 3(1). Retrieved January 12, 2007 from http://scholar.lib.vt.edu/ejournal/SPT/v3n1/swierstra.html Tavani, H. T. (2004). Ethics and technology: Ethical issues in an age of information and communication technology. Hoboken, NJ: John Wiley & Sons. Tavani, H. T. (2005). Locke, intellectual property rights, and the information commons. Ethics and Information Technology, 7, 87-97. Zonghao, B. & Kun, X. (2006). Digitalization and global ethics. Ethics and Information Technology, 8, 41-47.
KEY TERMS

Education: The process of training, cultivating, and developing the mind through teaching. Ethics: Human character and conduct, the understanding of right and wrong, and one's obligation to society.
Moral Judgment: Using one's ethical values in decision-making. Technoethics: The study of moral, legal and social issues involving technology. Technology: Within a society, the application of knowledge and the use of tools, processes, and/or skills in a manner to resolve problems.
Chapter XIX
The Ethics of Global Communication Online May Thorseth Norwegian University of Science and Technology, Norway
ABSTRACT

The purpose of this chapter is to discuss important ethical aspects of online communication of global scope. We focus particularly on procedural fundamentalism as the most significant threat to free and open communication today. By contrast, it is argued that deliberation models a desirable form of communication, based both in Habermasian discourse ethics and in rhetoric, along with a plurality of communicative styles, as long as they satisfy procedural constraints of deliberation. The importance of judgments that transcend purely private conditions is discussed by reference to reflective judgments aiming at enlarged thinking – to think from the standpoint of everyone else. It is concluded that it is preferable to develop Internet technologies that stimulate imaginative powers in order to make people better informed about counterfactual circumstances. Such knowledge may work as an impediment against fundamentalist knowledge.
INTRODUCTION

The purpose of this chapter is to discuss important ethical aspects of online communication on the Internet. The significance is particularly related to the discussion of fundamentalism as the most important threat to free and open communication today. The topic relates to technoethics by examining the impact of information technology on communication. The ethical analysis of communication technology focuses on the question of
how online communication could promote open discourse in a manner that would impede fundamentalism. A sketch of an answer is offered in the final sections of the chapter. Fundamentalism as treated in the media is most often envisaged as internally linked to religion. Contrary to this view, it is argued in this chapter that we need to disconnect the conceptual linkage between religion and fundamentalism. Fundamentalism is here characterised in terms of procedural traits of communication rather than
by its particular contents, such that any values might appear to be fundamentalist, whether they relate to religion, gender, sex, ethnicity or other matters. This approach to fundamentalism diverges from mainstream accounts in much of the global media communication on fundamentalism, where fundamentalism is often identified in terms of its substantive contents. Fundamentalism might be conceived as the suppression of challenges to particularity, whether a particular cultural practice or a story that is put forth with a claim to approval or condemnation without questioning. To make this concrete, we shall pay attention to a particular story from Nordic media. In January 2002 the Swedish-Kurd Fadime was killed by her father in Sweden. The reason why she was killed was that she had a Swedish boyfriend. The murder was referred to in the media as a murder of honour, and it was seen as an expression of the contested cultural norm of forced marriage, a norm that is widespread in many Muslim societies. The murder and the debate following from it have necessitated public reflection on claims to participate in the public debate, and also on the practice of referring to immigrants in terms of representative groups. Immigrants are often conceived as groups in the light of ethnicity, culture and identity. Additionally, these "groups" are considered to be represented by their leaders in accordance with Western, democratic principles. The contested practices of forced marriage and murder of honour do, however, divide the members within the minority societies just as much as they separate minority groups from the society at large. The murder of Fadime and the debate in Sweden and Norway following from it clearly demonstrate that Muslims in these two countries cannot and should not be defined on the basis of a uniform group concept. Fadime was killed by her father because she loved a Swedish man, and because she spoke her opinions on love and marriage openly in the media. She argued against arranged marriages, in favour of the right to choose a partner of one's own. We might formulate this
case as a problem concerning the relation between the particular and the general, in cases where particular arguments are considered to be justified with respect to "the others", whereas they are not looked upon as acceptable "for us". We may ask: what is it that appeals to general circumstances, and how do these appeals relate to something beyond the particular, something of universal scope? We may further ask how to describe the case above: what description is the correct one? No matter how we describe it – as a murder of honour or something else – our description will on any occasion be a normative act. This is because we look for solutions to the problems that we raise. This act could be considered a murder of honour – Fadime's father wanted to rescue the family's honour – or it could be judged to be a sick man's misdeed, as Fadime's sister Fidan claimed (Eriksson & Wadendal, 2002). In describing this act as a murder of honour, an appeal is thereby made to particular circumstances of a particular culture. As a result, the description might easily be exploited both for criticism and for justification of murder of honour, as was seen in the public debate. Whether this particular description is used for criticism or justification, it will nonetheless encourage segregation. To describe the same act as the misdeed of a sick man does not, however, appeal to culturally specific circumstances to the same extent as the concept 'murder of honour' does. The fact that some people commit sick actions because they do not function well in society might strike down anyone, and it is not necessarily related to ethnic or cultural status in particular. If the act in question is considered as a sick person's misdeed, it is turned into something for which we may raise a more general appeal. By contrast, an act that is described in terms of a particular cultural or religious norm of a particular society cannot be defended by many. The justification by appeal to a particular religious norm in this case is dangerous because it mobilises disgust within the society at large.
Appeals to particular religious/cultural norms devoid of a universal appeal are easily turned into fundamentalism. What we need, in order to avoid particular appeals being turned into fundamentalism, is to establish a link to a universal appeal that transcends what is embedded in the particular or culturally specific norm in question. What we should aim at establishing is a mutual respect for each other's circumstances. Misdeeds as such may transcend culturally embedded norms, whereas culturally defined misdeeds do not. The Fadime case above is an illustration of how argumentation might be characterised as fundamentalist. We shall elaborate on fundamentalism further below. One aim of this chapter is to demonstrate how worldwide deliberation online might work as an impediment against fundamentalism.1 A fundamental opposition between global democracy on the one hand and fundamentalism on the other is a basic assumption of the main argument. In this chapter we will consider fundamentalism from different perspectives, and put forth some arguments for why fundamentalism is most adequately characterised by the lack of procedural constraints on communication, which is how we define it here. This view is contrasted to the claim that certain ethical systems are fundamentalist per se. Before we move on to the topic of fundamentalism we shall have a brief look at global communication and deliberative democracy.
GLOBAL COMMUNICATION

The approach to global communication is here closely linked to the idea of deliberative democracy. The global aspect is contained in the vision of communication that transcends borders of all kinds, not least geographical, whereas the deliberative aspect might best be conceived as direct communication aiming at qualifying arguments and opinions. The outcome of deliberation might be a change of preferences and opinions, due to access to opposing views on the matters
discussed. We shall have a brief look at some projects aiming at deliberation in the sense just described. Deliberation has on some occasions been carried out as a controlled trial ahead of elections, as in the PICOLA project in the USA.2 The theory underlying this project is labelled deliberative polling, and it has been developed by James Fishkin.3 The issue of online polling is also discussed by Coleman & Gøtze (2001), and by Dag Elgesem (2005).
Democracy Unbound

Democracy Unbound is a large research project carried out by an international, interdisciplinary research group of which I am a part. The project is mainly based in Sweden, consisting of scholars from philosophy, psychology, law and history.4 Some of the questions raised by this research group are normative. For example, what form of democracy, if any, should we try to bring about at a supranational level? After all, there are different notions of democracy, some of which are more demanding than others, e.g., regarding the participation and influence of the citizens. The significance of this question is illustrated by the ongoing debate about the appropriate role of the European Parliament within the EU. Other questions are empirical, and rather concern the possibility of a successful implementation of democratic institutions at a supranational level. Thus, the main objection to the idea of a global democracy has been, not that it is undesirable, but that it is utopian and unrealistic. In order to assess objections of this kind the project aims at identifying obstacles as well as factors that could prove favourable relative to the aim of bringing about a genuine supranational democracy. My piece of work in this project deals more specifically with online communication, and the question of the feasibility of global communication – or worldwide deliberation – across boundaries and across stereotypes, of which fundamentalism is an example. My philosophical interest relates to
public use of reason online. As yet, there are no conclusive reports supporting the assumption that communication online moves in a more democratic and unbound direction. As we shall see, there are reports underpinning the democratic and borderless potential of this new technology (Wheeler, 2005; Coleman & Gøtze, 2001), but there are also reports of the opposite (Sunstein, 2001). Rather than discussing these opposing reports in any detail, I shall focus on the kind of communication that is contrasted to fundamentalism. Thus, the main objective of this chapter is to demonstrate why online communication might work as an impediment against procedural fundamentalism.
Deliberative Democracy

There are different notions of democracy, some of which are more demanding than others, e.g., regarding the participation and influence of the citizens. Democracy might be representative or direct, and the scope may vary (local, national or federal, supranational, global). The most basic characteristic is, however, the plurality of voices upon which opinions and decisions are based. How should we make sure that policies pay attention to the plurality of parties concerned, and how should we safeguard the procedures at work? Within the Democracy Unbound project Robert Goodin (2007) has argued in favour of enfranchising all affected interests, while the psychologists in the project group, Henry Montgomery and Girts Dimdins, have carried out an experimental study of egalitarian vs. proportional voting, concluding that proportional voting is preferable (2007).5 The notion of deliberative democracy pays particular attention to the kind of communication involved. Deliberation relates to discursive democracy as discussed by John Dryzek (2001), and it is partly based upon Jürgen Habermas' idea of practical discourse (1990). The ethical norm of a free and non-coerced mode of communication, free from both external and internal obstructions,
is basic. A necessary prerequisite is a plurality of parties and opinions, and their accessibility. Further, the final arbiter is public reason itself, which is the only legitimate authority in policy making. From this it follows that not only the decisions of political decision makers, but any public opinion or decision, should be exposed to open and critical debate. This is the normative basis for the argument of this chapter.
DELIBERATIVE POLLING OFF- AND ONLINE

Deliberative Polling

The core idea of deliberative polling is to contribute to a better-informed democracy. The method, as developed by Fishkin (1997), is to use television and public opinion in a new and constructive way. A random, representative sample is polled on the targeted issues. One among many examples is the national deliberative poll of the USA on health care and education. Results are also available from deliberative polling in various other countries around the world, among others, China, Greece, Italy and Northern Ireland.6 After the first baseline poll, members of the sample are invited to gather some place in order to discuss the issues, together with competing experts and politicians. After the deliberations, the sample is again asked the original questions. The resulting changes in opinion represent the conclusions the public would reach, if people had the opportunity to become more informed and more engaged by the issues.
Deliberative Polling Online

The project Public Informed Citizen Online Assembly (PICOLA) has been developed by Robert Cavalier.7 It takes its point of departure in the theory of deliberative polling as developed by Fishkin. PICOLA is primarily a tool for
carrying out deliberative polling in online contexts. One objective of this software development is to create the next generation of Computer-Mediated Communication tools for online structured dialogue and deliberation. An audio/video synchronous environment is complemented with a multimedia asynchronous environment. A general gateway (the PICOLA) contains access to these communication environments as well as registration areas, background information, and surveys (polls). The PICOLA user interface allows for a dynamic multimedia participant environment. On the server side, secure user authentication and data storage/retrieval are provided. Real-time audio conferencing and certain peer-to-peer features are implemented using Flash Communication Server. Mobile PICOLA extends the synchronous conversation module and the survey module. The former provides a true 'anytime/anywhere' capability to the deliberative poll; the latter allows for real-time data input and analysis. And with real-time data gathering, it becomes possible to display the results of a deliberative poll at the end of the deliberation day, thereby allowing stakeholders and others present to see and discuss the results as those results are being displayed. The interface relations made possible by this technology are of vital importance to the deliberative process, as the technology allows for synchronous conversation in real time. Thus, it appears to come very close to offline face-to-face communication. Several other reports on online deliberation are discussed by Coleman and Gøtze (2001). Some of their examples are drawn from trials of deliberation between local politicians and their electors. By and large a main conclusion seems to be that a dialogical and responsive structure of communication is obtained between the parties involved. However, there seems to be a clear tendency for deliberation to decline as soon as the period of the trial has ended. Besides, the experiments discussed are of limited, local scope, and thus they are not comparable to a global level of communication.
The empirical reports on online polling are not conclusive, then, but the technology as such seems to offer a potential for worldwide communication.
FUNDAMENTALIST KNOWLEDGE

The most dangerous threat to deliberative communication is fundamentalism. Fundamentalist knowledge is here defined in procedural terms. At the outset it is not rooted in any particular value system, whether religious or other. This does not, however, entail that every value system can evade fundamentalism, as some value systems tend to be less open to public criticism. First, we shall mention argumentative closure towards counterarguments and absence of critical reflection on preferences as characteristic of most instances of fundamentalist knowledge. The reactions among many Muslims in 2006 towards the dissemination of the ironic Mohammed cartoons may serve as an example. This example is of course complex, but the core problem of this case was the refusal to take up the criticism underlying these cartoons. The simple fact that the topic has entered the public and political arena is itself proof that the issue no longer belongs to a private or religious sphere that should be protected from argumentative intervention. This issue is contested among Muslims themselves, which demonstrates that there is no "Muslim" interpretation that is immune to argumentation. Second, absence of mediation and dialogue is another procedural trait of fundamentalist knowledge. Western feminists have accused Muslim feminists of being suppressed by males because the latter prefer to wear the hijab, or because they defend arranged marriages. The lack of dialogue between the defenders of the two branches of feminism has made both parties fail to see that they obviously share the most important values they want to defend, while avoiding
a real dialogue about the institutions or dress in question (hijab, miniskirts, etc.). Third, relativism might be another source of fundamentalist knowledge. We will not get deeply into the topic here, since it is a huge field within philosophy and ethics. The main point I would like to draw attention to might be stated very simply by an illustration: if we accept utterances like "my culture made me do it" when someone has committed a murder, arguing that the act can only be interpreted relative to culture/religion/a particular value system, then we have, by the same token, accepted that there is no standard by which we could criticise the act in question. The legitimising of the act is done by appeal to culturally embedded values considered to be beyond criticism, as defenders of murder of honour did in the Fadime case. This may also be seen as suppression of challenges to particularity, thereby supporting mistaken stereotypes. Both critics and defenders of murder of honour in the Fadime case committed themselves to relativism in the sense that both accepted a culturally internal definition of the act. The killing by a Muslim father of a daughter who objected to arranged marriage and other Muslim institutions was at the same time a perfect example of a stereotype that many Muslims object to, among them Fadime's family, who rather referred to the misdeed as a sick man's act. The latter characterisation neutralises the culturally embedded stereotype contained in 'murder of honour'. Instead, it appeals to particular circumstances that might strike down anyone, independent of one particular religion as opposed to others. This brings us to the next issue in discussing how to prevent fundamentalist knowledge: the need for knowledge of particular circumstances. In order to be able to see that the murder by Fadime's father does not necessarily adhere to cultural or religious stereotypes, or fit a definition applied at an aggregated group level, we need to know more about the particular circumstances of this individual. Those
who applied the label 'sick man's act' rather paid attention to the fact that this was a person who was not well integrated in the society at large: he felt alienated and socially unsuccessful, was without a proper job, and so on. How should we understand these circumstances as a source of the act committed? By focussing on marginalisation due to lack of access to welfare goods rather than on a traditional Muslim institution, it becomes easier to judge the act in different terms, and to prescribe a punishment or a cure when needed, dependent on what circumstances are brought to the fore. So far, we have arrived at a possible dilemma that has to be resolved. On the one hand, we have defended a procedural approach to fundamentalism, which rules out a focus on the particular contents of particular cultural or religious norms. On the other hand, we have recognised a need for knowledge of particular circumstances in order to transcend culturally embedded judgments of acts like the crime discussed above. How could these seemingly opposing suppositions fit together? In the following we shall discuss how a universal appeal might be possible in judgments of particular circumstances. We need a position that allows us to judge values not only relative to particular norms, but rather to state that not all values are on a par.
PARTICULAR AND UNIVERSAL APPEAL IN JUDGMENTS

According to Onora O'Neill, "[o]nly communication that conforms to the maxim of enlarged thought can reach the 'world at large'" (O'Neill 1990).8 This quotation refers to the idea that public use of reason has the world at large as its scope – as opposed to private use of reason, which is considered to be more limited in scope, e.g., communication within the roles of clergy, officers, and civil servants. This definition of the distinction between public and private stems from Immanuel Kant (1970). Addressing the world at large is a
core idea in deliberation as well, and one that clearly diverges from the aim of fundamentalism. Rather, the aim of fundamentalism is often to make supporters adhere unquestioningly, and to frighten non-adherents. Within these frames the aim of deliberation is to improve the public (as opposed to private) use of reason. The Kantian concept of public reason is internally related to communication itself. The rationale underlying this assumption is that "the world at large" accepts no common external authority. Attaching this authority to reason itself requires submitting one's own arguments to free and critical debate (O'Neill 1990:38). A core idea is that there is an internal link between public reason and toleration of a plurality of parties. This idea is based upon the maxim of the preservation of reason, which means that one should not merely be led by others' judgments in forming one's own. Hence, there is a need for a plurality of parties in public reason and deliberation. The plurality of parties in public reason is a challenge to deliberation and democracy formation in our time. One central condition for plurality is autonomy, and I want to draw attention to the need for autonomous parties in deliberation. The kind of autonomy that is embraced in the Kantian tradition is put at risk to the extent that deliberation tends to reduce the plurality of autonomous voices. One such threat has been thought to follow from the problem of filtering on the Internet (Sunstein 2001). Whether online deliberation increases the problem of filtering or not is still an open question that we shall approach later in this chapter. Public use of reason has the world at large as its scope, as noted above. An important notion contained in public – as opposed to private – use of reason is that what is communicated is publicizable (O'Neill, 1990:33). This presupposes intellectual freedom in the sense that there is no particular authority restricting the speech. An example of a shortcoming with respect to intellectual freedom was the fundamentalist attempt to ban
the Mohammed cartoons mentioned above. According to O'Neill's account of public use of reason there is only one authority that is legitimate, namely reason, which is contained in communication itself (1990:35). This conception of public reason rests on a Kantian understanding of it: [I]f reason will not subject itself to the law it gives itself, it will have to bow under the yoke of laws which others impose. (Kant 1970, 145-6; 303-4, quoted in O'Neill 1990:37). The verdict of reason is always the agreement of free citizens, each of whom must be permitted to express objections without hindrance (O'Neill, 1990:37). Two important conditions for the authority of (public) reason, then, are the plurality of parties thinking for themselves, and the ability to think from the standpoint of everyone else. The latter is called "the maxim of enlarged thought" in Kant (Critique of Judgment, cited in O'Neill 1990:46). This maxim applies in particular to our judgment of particular situations. The idea of "enlarged thought" is elaborated in several other philosophical works as well. In Kant's work this mode of thinking is related to judgment, and as such it is different from subsuming something particular under some universal law. In Seyla Benhabib's words: "Judgment is not the faculty of subsuming a particular under a universal, but the faculty of contextualizing the universal such that it comes to bear upon the particular" (Benhabib 1992:132). The main point to be drawn from this view on enlarged thought is to take account of the possible judgments of others. Hence, this enlarged way of thinking – or reflective judgment – cannot be carried out in solitude or isolation; it needs the presence of, or potential communication with, others. In Kant's work this mode of thinking is restricted to establishing intersubjective validity in the aesthetic domain. Hannah Arendt and Benhabib, on the other hand, want to extend this mode of
validation to the domain of ethical and political faculties of thought as well. Part of the reason for this extension concerns the linkage between the particular and the universal. A moral judgment of a particular action depends upon the liberation of the particular from "subjective private conditions" (Arendt 1961:220-1, quoted in Benhabib 1992:133). This kind of validity is dialogically established through its appeal to something of common concern that transcends the particular case in question. As an example, a regime that obstructs free speech is condemnable due to common concern for the importance of free speech in general, and not only in the particular case. By the same token, Seyla Benhabib argues in favour of bridging the gap between what is good and just, that is, the mediation between the ethical and the political. In order to understand the perspectives of those who differ from us, it is necessary to extend our moral and political imagination through actual confrontation in public life (Benhabib 1992:140). This implies that we need to take into consideration the presence of voices and perspectives of those who are strangers to us. Unlike Kant, who believes that the standpoint of everyone else can be obtained in a vacuum, Benhabib believes that this standpoint "entails sharing a public culture such that everyone else can articulate indeed what they think and what their perspectives are" (Benhabib 1992:141). In this sense she reformulates Kantian moral theory in terms of a dialogic procedure of enlarged thought in a public culture of democratic ethos. Thus, the rationale for the enlarged way of thinking fits with our conception of deliberation, and it also rules out procedural fundamentalism. Public reason in deliberation requires communication between people who would otherwise remain strangers to each other. The main purpose has much to do with the ideals underlying deliberative democracy itself, i.e., to qualify arguments and opinions through a public test. There is no unequivocal understanding of deliberative democracy, except for the belief in unconstrained and
free speech, very much in line with the discourse ethical ideals of Habermas (1990), Benhabib (1992), Dryzek (2001), Regh (1997) and many others. Very briefly, the main idea is that everyone should in principle be able to put forth their opinions and arguments without being obstructed by power relations or other instances of asymmetry between the parties. Those who share this basic condition for deliberation as a means to promote public reason often differ in their views on modes of communication in deliberation. We shall pay attention to one of the contested topics, namely the role of rhetoric in deliberation. Judgment of particular situations requires reflective judgment or an enlarged way of thinking, i.e. the ability to think from the standpoint of everyone else. Through imagination we are requested to put ourselves in the position of everyone else. How could that be obtained? In the following we shall give an account of some aspects of communication discussed in the philosophical literature on deliberation.
Rhetoric and Universal Appeal in Deliberation

An important aspect of deliberation and public use of reason is related to the question of whether rhetoric should be considered a constituent part of communication itself. The role of rhetoric might be formulated as a controversy between Aristotle and Plato – and later Kant. Whereas the Platonic-Kantian tradition tends to look upon rhetoric as incompatible with reasoned discourse and autonomy, the Aristotelian tradition does not. The key question, then, is whether rhetoric as part of persuasion should be looked upon as part of reason or not. Some authors have argued that public deliberation not only relies upon rhetoric for reasons of persuasion; they also stress that this is for the good. John O'Neill is a defender of this approach to rhetoric when arguing that trust and credibility play an important role in deliberation (O'Neill 2002:257). The core idea
of reason's dependence upon trust is nicely expressed by O'Neill in the following utterance: "[S]cience itself is a co-operative enterprise that involves the acceptance of testimony of others" (2002:258). In other words, even in science, which is normally considered to be highly motivated by public reason, we still have to rely upon others' testimonies. Because we cannot confirm all basic knowledge-claims ourselves, we have to rely on judgments about the credibility and reliability of sources. Credibility does to a great extent rest on the trustworthiness of the speaker (ethos) and the emotions of the audience (pathos). In this chapter I draw upon several authors who believe that the appeal to emotions – along with the appeal to other basic rhetorical features of communication – is intrinsic to reason and rational persuasion in deliberation. Thus, emotions become a constituent part of public reason. Deliberative democracy theorists believe that rationality – or reason – is immanent to argumentation. The Habermasian account of rational argumentation distinguishes between three levels of argumentation: logical products, dialectical procedure and the rhetorical process of communication (Habermas 1990:87-92, Regh 1997:360). The logical level concerns the cogency of arguments based on the relation between conclusion and supporting evidence and claims, while the dialectical level is about the competition for the better arguments. Both the logical and the dialectical levels make an appeal to truth claims. The rhetorical level is considered by Habermas to concern arguers' search to gain the assent of a universal audience. There are two separate questions raised here: the first is the fundamental question of whether rhetoric should be allowed to enter deliberation at all, whereas the other concerns the way in which rhetoric might play a significant role in deliberation. We will leave aside the more specific discussion of Habermas, and have a closer look at Regh's analysis of rhetoric, still from a discourse ethical perspective. He emphasises the intrinsic role of ethos and pathos in cogent argumentation,
in addition to the rational standards of logic and dialectic. While the logical and dialectical standards relate to the possibility of reasonable assent, the rhetorical level relates to actual rational motivation (Regh 1997:367). Between these general standards and participants' actual positions lie the various contextual contingencies that tip the scales in favour of one set of arguments: [T]hat logical consistency and dialectical responsiveness in open debate have made it possible for them to adopt a given position but that various contingencies traditionally associated with rhetorical effectiveness have led them actually to adopt the position. This opens the door to a normative account of argumentation in which rhetoric plays an intrinsic role. (Regh 1997:367). Regh identifies an important role for rhetoric (ethos and pathos) in the process towards making judgments, in defining argumentation as "a process of cooperative judgment formation" (Regh 1997:368). If argument is to issue in judgment, the rhetorical aspects are basic: proofs of the speaker's character (ethos) and appeals to the audience's emotions (pathos). This view on rhetoric goes back to Aristotle (Lawson-Tancred 1991). The main point to be made in our discussion of rational deliberation is that the plausibilities of a certain outcome are judged differently among the participating parties. This point can be demonstrated very briefly: we have to rely upon others' verdicts for many of our suppositions, ethically and politically as well as in science. A medical example mentioned by Regh illustrates the point nicely. The case in question is a dying, comatose patient and differing judgments among doctors and nurses on the question of whether they should scale back treatment. The parties disagree over the estimation of a number of uncertain facts, like the chances of recovery and what the patient would have wished. Regh's conclusion is that "[p]recisely because the problem calls for such a judgment, it is rational for the participants to make two kinds
of persuasive rhetorical appeals in addition to the arguments themselves" (Regh 1997:369). Evidence of someone's capacity to judge possibilities responsibly is internal to the arguments' rationality. In addition, we should also ask whether the emotions that are evoked by rhetorical means are appropriate or not. To the extent that the rhetorical means of pathos is appropriate, it is also rational. As an example, it might be appropriate for a politician to alert the audience by stirring up fear of a danger that the politician believes would otherwise be ignored by the audience. And sometimes it is necessary to introduce passionate communication for the end of democratic and rational deliberation. One case is presented by Gutman and Thompson, where they refer to a discussion in the U.S. Senate in July 1993. The debated topic was the routine renewal of a patent for the Confederate flag insignia. The only black member of the chamber tried to argue that the insignia was a symbol of slavery, and she provoked the audience by shouting and using tears to move the Senate members to take the issue seriously. With this provocation she turned the routine into the notable. After a period of three to four hours, twenty-seven senators reversed their earlier vote (Gutman and Thompson 1996:135). This case demonstrates that communication is a process of cooperative judgment formation in which rhetoric plays an intrinsic role. The grounds for trusting fellow participants are internally linked to the quality of the arguments themselves. Whether actual consensus is reached is another question that I will not discuss here. In the particular case just discussed, we saw that the rhetorical means of shouting and crying contributed rationally by motivating officials to discuss an important question. This incident can be viewed in two different ways: either to look upon rhetorical means as non-deliberative, but acceptable as means to democratic deliberation (as Gutman and Thompson do), or to consider the rhetorical aspects as intrinsic to rational deliberation, as
Regh and Young (2000) do. We shall leave this part of the debate for now. There is a dividing line between a Platonic-Kantian and an Aristotelian account of the appeal to emotions which is prevalent in rhetoric. The former tends to oppose rhetoric to deliberation, whereas the latter takes rhetoric to be intrinsic to all kinds of discourse, as we have seen. In Iris Marion Young's words: "[n]o discourse lacks emotional tone; 'dispassionate' discourses carry an emotional tone of calm and distance" (2000:65, my emphasis). Young further argues that the appeal to emotions, i.e. the affective, embodied, and stylistic aspects of communication, involves attention to the particular audience of one's communication. The appeal to emotions is not only compatible with deliberation on this view; rather, it is a necessary condition of it. Aristotelian rhetoric considers emotions to be crucial to moral maturity, showing itself in a person's ability to have the right kind of emotion on the right occasion. Thus, emotions are important for gaining practical knowledge. Contrary to this, Plato (Gorgias) and the Kantian tradition view rhetoric as a persuasive means that provides no knowledge of any kind. Still, there is an interesting link between Aristotelian moral maturity and Platonic emotions in Kant's thinking as well, as pointed out by O'Neill (2002:251). To have maturity (Aristotle) and to be courageous (Kant) is to use one's own understanding and judgment. Thus, emotions do have a role to play here as well. The Aristotelian understanding of moral maturity, which is to have the right emotion on the right occasion, equals the Kantian concept of moral courage, which is to judge in an autonomous way. Both traditions appeal to the rationality of emotions for making mature moral judgments, and both attach emotions to judgment about particular cases. As we have seen, Iris Marion Young discusses the role of rhetoric in deliberation as part of a criticism of a mainstream mode of communication based in the ideal of discourse ethics, which is
envisaged as calm and dispassionate. The requirement is that free and non-coerced communication must be based in a mode of communication that is apt for empowering those who are disempowered, like minorities of different kinds. Therefore, rhetoric, storytelling and testimony should be allowed to enter the deliberative arena, according to Young (2000). Against this, John Dryzek has convincingly argued that any mode of communication might be hierarchical and, thus, advantage some while disfavouring others (Dryzek 2001). The main point to be drawn from this dispute is, however, that individual and particular circumstances and emotions do play a vital role in enabling the plurality of different and opposing opinions to be publicly accessible, as deliberative democracy surely requires. Another important aspect of this is captured in Young's distinction between external and internal exclusion. While external exclusion demarcates how people are kept outside the process of discussion and decision-making, internal exclusion is about being ignored or dismissed, or having one's statements and expressions patronised. Internal exclusion concerns ways in which people lack an effective opportunity to influence the thinking of others even when they have access to places and procedures of decision-making (Young 2000:55). This brings us to an important aspect of judgment: it cannot be a priori, that is, independent of communication with those we make judgments about. Judgment of particular acts and values is a dialogical enterprise. The universal component of the judgment is, however, basic – it takes its point of departure in the moral claim to judge from the perspective of everyone else. That is, the judgment must take into consideration that the subjective conditions at stake do not apply only in the particular case in question. As such, it transcends the particular subjective conditions or circumstances. Unless the appeal is accessible to others, it cannot make a claim to validity beyond the particular context.
REFLECTIVE JUDGMENTS ONLINE?

As yet, it is an open question whether the Internet and online communication technologies facilitate deliberation and reflective judgments. In the following we shall summarise some aspects concerning both feasibility and limitations/constraints.
Feasibility of Online Global Communication?

As we have seen above from the reports on online polling, the Internet obviously does offer a venue for free and non-coerced communication across barriers of different kinds. Online communication makes it possible for people worldwide to deliberate both systematically (cf. the PICOLA project) and in a variety of other ways.9 Even if most of the experiments that have been carried out bear witness to a rather limited scope of organised deliberation on public and political issues, there is no doubt that the Internet itself offers people worldwide access to a wide range of opinions and knowledge of others' particular circumstances. Hence, it is probable that such encounters, even if not organised as deliberative polls, still make many people more willing to change their judgments of culturally embedded norms. One optimistic report in this direction comes from Deborah Wheeler (2005). She has done very interesting empirical research in the Arab world on female Internet users. The Middle East and North Africa are contexts of particular interest because they are among the places in the world most resistant to institutionalised democracy. Wheeler has observed democratic growth due to technological coordination on the Internet, mentioning, among others, mass public demonstrations against Syrian troops in Lebanon. Her report demonstrates some important features of the Internet's contribution to democracy: there is a movement from the bottom up, and there is a link between communication
online and political actions offline. Like many deliberative democrats, Wheeler also emphasises the importance of the particular context for empowerment and engagement. Another example of democratic action on a broad scale, thanks to information technology, is the organisation moveon.org, an initiative started by two Silicon Valley entrepreneurs in 1998.10 The founders, Joan Blades and Wes Boyd, were frustrated because the Clinton-Lewinsky affair drew all attention away from political affairs. They launched an online petition to "Censure President Clinton and Move On to Pressing Issues Facing the Nation". Within a week more than 100,000 people signed the petition online, and the number grew to 500,000 within a few weeks. The petition was delivered, and thousands of email messages were forwarded to individual Senators and members of Congress. Additionally, there are also other indications that online communication contributes to more pluralism and more modes of communication, e.g. open source movements, Facebook, and other similar venues for open and free discourse. The flow of information in such contexts is, of course, beyond control. Still, there is evidence that some of this unstructured communication clearly does have an impact on political issues offline. As an example, we may think of the Mohammed cartoons that were released by a small Danish newspaper in 2006. In a very short time the issue was discussed worldwide thanks to the Internet, and it was not only a matter for those who first entered the stage. This case demonstrates what has been argued above: (a) different modes of communication do play an important role in public deliberation on political issues of global scope, and (b) the emotional aspect is essential to reflective judgment or enlarged thought in the ethical and political faculties.
LIMITATIONS AND THREATS TO GLOBAL COMMUNICATION ONLINE?

As stated earlier in this chapter, there is no conclusive proof that the Internet works as an impediment against fundamentalism, in favour of global democracy. However, there are several reports strengthening the belief that online discussion brings particular cases out into the open, where they become subject to public scrutiny by an extended plurality of people worldwide. The organised trials of online polling no doubt work as intended – they do better qualify opinions and arguments by facilitating the participants' access to improved knowledge of particular matters. Still, they are limited in scope, and they tend to decline as soon as the trial has ended. There is perhaps even stronger evidence that the less organised movements contribute even more in supplying people with a plurality of judgments of particular issues. Hence, we might be optimistic about the feasibility of broadening people's capacity to think from the standpoint of everyone else. Still, there are some obvious limitations to a conclusive opinion on the matter. Some of them are much discussed in the literature, and have to do with more general considerations on the digital divide. Other kinds of limitations concern online deliberation more particularly, like the threat from filtering and group polarisation discussed by Cass Sunstein (2001). The idea is that people do not have access to an unlimited range of opinions because they only visit places and people they want to see online. Hence, they get only what they want, and not what is required for deliberation and possible change of preferences. The problem of group polarisation is closely connected to the problem of filtering: the outcome of deliberation might be more rather than less extreme opinions, since the argument pool is limited. This threat is qualified by Fishkin's deliberative polling, which seems to show that it is possible to diminish the
risk of polarisation (Fishkin 1997). The idea was to measure how people's preferences are shaped through deliberation. As mentioned earlier, the refinement of Fishkin's model of deliberative polling is not only to measure what people think ahead of an election or important political decision, but also to do some counterfactual polling, measuring what people would have thought if they had had the time to consider the issue more closely (Fishkin 2003, referred to in Elgesem 2005:65). I would like to draw attention to a serious threat to deliberative democracy and reflective judgment in the ethical and political faculties that stems from lack of interest and participation. In non-democratic societies we have seen examples of Internet communication working as a means of empowerment and political mobilisation for offline political action (Wheeler 2005). What about democratic societies? Here we observe rather different usages of online communication. Facebook, as an example, has become a means for many people to keep track of friends: how many friends they have, whom they communicate with, and so on. Some use it for genealogy, and many use it for fun. Members of Facebook obviously do get well informed on many of the questions they raise about their friends, but hardly on questions of a kind that is important to deliberative democracy. Or how should we judge it?
FUTURE TRENDS

There has long been a focus on the potential of online polling as a means of broadening people's minds, and thus turning public opinion towards less fundamentalist positions. There are, however, diverging reports of the results of trials of the kind that Fishkin and others have undertaken. The impact appears to be limited both in scope and in time extension. Additionally, there is a huge literature on methodological problems in online research that has not been discussed here. Despite many methodological limitations, the underlying idea of
this chapter is that the flow of information will in the long run have a positive impact on broadening people's minds by contributing to more enlarged thought on a global scale. The basic assumption is that information technology offers room for displaying imaginative powers. The virtual realities online have a key role to play. I believe that research on online deliberation will continue to be important. Additionally, there is a need and a huge potential for research focussing on virtuality, which has not yet been sufficiently explored. This aspect of technology should be examined for anti-fundamentalist purposes. The idea is that potentiality, virtuality and counterfactuality are means of exceeding the subjective and limited scope of actual conditions, as discussed by Kant, Benhabib and Arendt. Thus, we would welcome more empirical research on the impact of people visiting virtual realities online. In our opinion this is the place to dig deeper when researching technoethics related to global communication online. I believe that Internet communication of different kinds – polling, blogs, virtual realities like Second Life – contributes to improved conditions for reflective judgment due to the following experiences: (1) the public cannot avoid awareness of different tastes and judgments, from which it follows that (2) it becomes harder to ignore the differences in tastes and opinions of others. From this it follows that (3) Internet activities as mentioned above do have an impact on public reason. Still, there is a concern that lack of regulation or structure weakens democracy in the public domain. The main problem, related to blogs in particular, is that even if they are democratic, they often lack structures for discussion of community affairs, and hence there is a risk of a cacophony of voices. Against this conclusion it may be argued that there is no proof that this very cacophony is a problem. Rather, it is likely to be for the good. We may even envisage a two-step procedure towards more organised deliberation: the first step starts with increased awareness and openness of the public,
for instance by participating in blogs and virtual realities like Second Life. The next step may be to participate in more structured communication, for instance by online deliberative polling.
HYPOTHESIS AND CONCLUDING REMARKS

Based on the preceding, we cannot conclude whether online deliberation serves global democracy. At the same time, there are many indications that people's awareness of circumstances different from their own plays a vital role in their judgments of themselves and others. Maybe their basic opinions and preferences are not easily changed, but I do believe that their imaginative faculties are affected by different kinds of online communication – sometimes for the better, sometimes for the worse. This indicates a need for ethical and political control of the technology. We do want people to be more tolerant, to understand conditions and circumstances different from their own, but we also want to eliminate imaginative powers that turn people into terrorists. In short, it is preferable to develop the technology such that the imaginative powers make people curious about knowledge of counterfactual circumstances. The reason is that such knowledge may work as an impediment against fundamentalist knowledge. The idea is that we could take advantage of the potential of the virtual realities that are available online, in utilising the Internet for deliberative purposes. This idea is an extension of the concept of deliberation such that it includes both many different modes of communication and an extension to the virtual world. This brings us to the distinction between real and virtual, which does not correspond to the distinction between on- and offline, as many people tend to presuppose when discussing the impact of the new technology on real life affairs. An argument that has often been put forth is that the Internet is damaging because many people are moved to criminal acts due to the
impact of the virtual world online. What matters is how people actually act, i.e. that they remain decent citizens rather than commit damaging acts towards others. In the attempt to stimulate the imaginative faculties in desirable directions, it is less important whether the means belong to a real or a virtual world. Maybe access to virtual realities populated with avatars is just as effective as meeting real people for experiencing a plurality of scenarios when deliberating about the best among worlds.
REFERENCES

Arendt, Hannah (1961). Crisis in culture. In Between past and future: Six exercises in political thought. New York: Meridian. Aristotle (2004). The art of rhetoric. London: Penguin Classics. Benhabib, Seyla (1992). Situating the self. Cambridge: Polity Press. Coleman, Stephen & Gøtze, John (2001). Bowling together: Online public engagement in policy deliberation. Hansard Society. Online at bowlingtogether.net Dryzek, J. (2001). Deliberative democracy and beyond: Liberals, critics, contestations. Oxford: Oxford UP. Elgesem, D. (2005). Deliberative technology? In Thorseth, M. & Ess, C. (eds.), Technology in a multicultural and global democracy (pp. 61-77). Trondheim: Programme for Applied Ethics, NTNU. Eriksson, C. & Wadendal, I. (2002). Pappan: Det var inget hedersmord. Svenska Dagbladet, 29 January 2002. Fishkin, J. (1997). The voice of the people. New Haven, CT: Yale UP. Dimdins, Girts & Montgomery, Henry (2007). Egalitarian vs. proportional voting in various
contexts: An experimental study. Paper presented at the workshop Democracy in a Globalised World, Oñati, Basque Country, April 20. Forthcoming in a Hart publication. Goodin, R. (2007). Enfranchising all affected interests, and its alternatives. Philosophy and Public Affairs, 35, 40-68. Gutman, A. & Thompson, D. (1996). Democracy and disagreement. Cambridge, MA: Belknap Press. Habermas, Jürgen (1990). Moral consciousness and communicative action. Cambridge, MA: MIT Press. Kant, I. (1964). The critique of judgment. Transl. J. M. Meredith. Oxford: Clarendon. Thorseth, M. (2005). IT, multiculturalism and global democracy – ethical challenges. In Thorseth, M. & Ess, C. (eds.), Technology in a multicultural and global society. Trondheim: Programme for Applied Ethics, NTNU. O'Neill, J. (2002). The rhetoric of deliberation: Some problems in Kantian theories of deliberative democracy. Res Publica, 8, 249-268. O'Neill, Onora (1990). Constructions of reason: Explorations of Kant's practical philosophy. Cambridge: Cambridge UP. Regh, W. (1997). Reason and rhetoric in Habermas' theory of argumentation. In Walter Jost & Michael J. Hyde (eds.), Rhetoric and hermeneutics in our time. New Haven/London: Yale UP. Sunstein, C. (2001). Republic.com. Princeton, NJ: Princeton UP. Thorseth, M. (2006). Worldwide deliberation and public reason online. Ethics and Information Technology, 8, 243-252. Wheeler, Deborah (2005, November 19-24). Digital politics meets authoritarianism in the Arab world: Results still emerging from Internet cafes
and beyond. Paper presented at the Middle Eastern Studies Association Annual Meeting, Mariott Wardman Park Hotel, Washington D.C. Young, Iris Marion (2000). Inclusion and democracy. Oxford: Oxford UP.
Links to Websites

Democracy Unbound Project: http://people.su.se/~folke/index.html#om_projektet

Fishkin's Deliberative Polling: http://cdd.stanford.edu/polls/

PICOLA project: http://caae.phil.cmu.edu/picola/index.html
KEY TERMS

Deliberation: Dialogical communication that induces reflection upon preferences in a noncoercive fashion. Deliberators are amenable to changing their judgments, preferences and views during the course of interactions.

Deliberative Polling: J. Fishkin's model of deliberative polling measures what people think ahead of an election or important political decision; it also allows counterfactual polling, measuring what people would have thought had they had the time to consider the issue more closely.

Democracy Unbound: Global democracy that might be representative or deliberative, egalitarian or proportional, enfranchising all affected interests, or other models.

Fundamentalist Knowledge: Procedurally defined, characterised by argumentative closure towards counterarguments, absence of critical reflection on preferences, absence of mediation and dialogue, and suppression of challenges to particularity.
Global Communication: Borderless, worldwide deliberation across boundaries of all kinds (nation, religion, gender, etc.).

Reflective Judgment: Judgments that transcend purely private conditions and receive their validity from enlarged thinking, i.e. the capability to think from the standpoint of everyone else. Reflective judgment is thus not the faculty of subsuming a particular under a universal, but the faculty of contextualizing the universal such that it comes to bear upon the particular.

Rhetoric: Persuasive means of communication that bears particularly upon the ethos of the speaker and the pathos of the audience. Rhetoric, together with the right emotions (Aristotle) or courage (Kant), is essential to judgments of particular circumstances and contexts.
ENDNOTES

1. The issue of deliberation is discussed in further detail in Thorseth 2005.
2. See http://caae.phil.cmu.edu/picola/index.html
3. See http://cdd.stanford.edu/polls/
4. http://people.su.se/~folke/index.html#om_projektet
5. This is one among the papers presented at a workshop in Oñati in April 2007. Along with the others, it will appear in an Oxford Hart publication in 2008. The paper is accessible at http://people.su.se/~folke/dimdins.pdf
6. More information on the method and different trials is accessible at http://cdd.stanford.edu/
7. Visit Cavalier's homepage at http://www.hss.cmu.edu/philosophy/faculty-cavalier.php. More information on the PICOLA project is accessible at http://caae.phil.cmu.edu/picola/index.html
8. The relation between particular and universal appeal is also discussed in Thorseth 2006.
9. Different chatrooms, Facebook and many other modes and venues contribute to worldwide encounters of different kinds.
10. The example is discussed in Dag Elgesem (2005), pp. 71-72.
Section III
Case Studies and Applications in Technoethics
Chapter XX
Engaging Youth in Health Promotion Using Multimedia Technologies: Reflecting on 10 Years of TeenNet Research Ethics and Practice Cameron Norman University of Toronto, Canada Adrian Guta University of Toronto, Canada Sarah Flicker York University, Canada
ABSTRACT

New information technologies are creating virtual spaces that allow youth to network and express themselves with unprecedented freedom and influence. However, these virtual spaces call into question traditional understandings of private and public space and open up new tensions for institutions (e.g. schools and law enforcement) trying to maintain safe spaces. For adolescent health researchers, these virtual spaces provide exciting opportunities to study youth culture, but also challenge the utility of ethical guidelines designed for a non-networked world. At issue are tensions between the realities of 'natural' interactions that occur online, often in full public view, and creating ethical research environments. These tensions and issues will be explored within this chapter, through an overview of the TeenNet project, a discussion of anonymity and confidentiality within social networking technologies and software (including Friendster, Facebook, and MySpace), and a discussion of ethical considerations for researchers engaged in adolescent health research and promotion.
INTRODUCTION

Time Magazine broke with tradition and named "You" its 2006 Person of the Year. The magazine's cover featured a glossy computer monitor that reflected the reader's own image, as if displayed on the Internet for all to see. This was an acknowledgment of the unprecedented way in which contributions from individuals around the world are expanding our knowledge, providing entertainment and enhancing our problem-solving capabilities through networked media and linked information technology. The ubiquity of hardware applications (such as laptop computers and wireless handsets) blended with social media tools (like wikis and blogs) has resulted in a mix of text, image, sound, and video allowing people with limited technical skill to become part of a wider landscape of information producers and consumers – or 'prosumers' (Wurman, 2001). This rapidly changing environment provides an unprecedented medium for dialogue and communication, yet also introduces profound ethical questions regarding the risks and benefits that these media present. This is particularly so in the area of health information. This chapter explores these ethical challenges through the lens of engaging youth, a population at the forefront of using these new technologies, in health promotion. We draw on more than a decade of research and action with TeenNet, a youth-focused research group based at the University of Toronto.

Adolescence is the developmental period during which complex decisions about health issues are first independently made, requiring skills to manage risks and negotiate options (Gardner & Steinberg, 2005). It is also a social time of life when peer influence is high and life experience is relatively limited. Many of the behavioral patterns initiated in adolescence become lifelong habits. While most move through this stage without serious problems, some gravitate towards riskier behaviours, have fewer supports and options available, and experience the consequences of poor decision making well into adulthood (Hutchinson & Stuart, 2004). This potential for lifelong harm has contributed to the perspective of viewing youth as a vulnerable population that, from a research and care perspective, is to be treated as a group in need of special protections (Tri-Council of Canada, 1998; United States Department of Health and Human Services, 2005). For example, Canadian regulations stipulate that "subject to applicable legal requirements, individuals who are not legally competent shall only be asked to become research subjects when: the research question can only be addressed using individuals within the identified group(s); and free and informed consent will be sought from their authorized representative(s); and the research does not expose them to more than minimal risk without the potential for direct benefits for them" (Tri-Council of Canada, 1998). This model views young people as lacking the competence to make informed decisions pertaining to their wellbeing with regard to research and, implicitly, to health treatment, including the procurement of health information. And yet, with more than 80% of young people regularly using information technology for health and communication (Lenhart et al., 2003), the supposed safeguards provided by these guidelines become problematic. The proliferation of new technologies is creating contested spaces that allow for unprecedented creative youth expression and networking. However, these new spaces also challenge the utility of ethical guidelines designed for a non-networked world. At issue are tensions between the realities of 'natural' interactions that occur online, often in full public view, and creating ethical research environments. These tensions and issues will be explored below and discussed with reference to the work done in connection with TeenNet research projects.
THE TEENNET EXPERIENCE

In 1995, Dr. Harvey Skinner, then professor of Public Health Sciences at the University of Toronto, established TeenNet with the aim of providing a living laboratory to develop, evaluate and disseminate youth-friendly tools for health promotion using emerging information and communication technologies (ICT) (Norman & Skinner, 2007; Skinner, Maley, Smith, & Morrison, 2001; Skinner et al., 1997). TeenNet's approach to research is predicated on the idea that engaging young people in all aspects of health promotion planning – from identification of needs, to the design and delivery of programming – produces more authentic, relevant, useful and attractive tools to support positive youth development (Blum, 1998). This process is guided by five principles:

• Participation of youth in all stages of development and delivery.
• Relevance: focusing on personal, social and health issues that are identified by youth.
• Autonomy support and respect for individual freedom.
• Active learning and fun.
• Access, which includes addressing both absolute access and quality of access (Skinner, Biscope, & Poland, 2003), while fostering the eHealth literacy skills required to fully utilize ICTs for health (Norman & Skinner, 2006b).
This health promotion approach emphasizes self-determination and is informed by established models for community engagement and health behaviour change. The overall style of engaging youth through dialogue reflects the critical pedagogy of Paulo Freire (Freire, 1970, 1973), which posits that listening must precede dialogue, which then precedes action, and suggests that only through authentic dialogue in which all parties have equitable voice can true informed action take place. The engagement process by which this dialogue takes place occurs in six steps and is captured in a model called EIPARS, developed by TeenNet as a model for sustainable action with youth (Figure 1):

1. Engage is about connecting interested youth and motivating them to work in a peer relationship on an action project in their community.
2. Identify involves the youth exploring and identifying issues that concern them in their community.
3. Plan focuses on choosing one issue and developing a strategy for addressing it.
4. Act is about implementing the project according to the plan the group developed.
5. Research, Reflect and Reward is where youth evaluate the effectiveness of the action project and reflect on the outcomes of the group processes.
6. Sustain considers ways to sustain the group and its action; this is also a motif throughout the model.
The EIPARS model has been used successfully to engage diverse youth groups in Canada and internationally (Bader, Wanono, Hamden, & Skinner, 2007; Ridgley, Maley, & Skinner, 2004) to create and sustain social action projects on a variety of health issues such as HIV/AIDS, youth violence, and smoking. Essential to the success of these projects is the cultivation of authentic partnerships between youth and adults (Camino, 2000; Ginwright & James, 2002; Zeldin, 2004; Zeldin, Camino, & Mook, 2005).

Figure 1. EIPARS model

Through implementing the EIPARS approach and the STAR model (described below), TeenNet created a series of engagement platforms for health promotion in a number of areas, particularly tobacco control and HIV/AIDS:

1. The Smoking Zine (www.smokingzine.org): A five-stage program on tobacco use prevention and cessation that operates as a stand-alone intervention or as part of an integrated school-based program (Norman, Maley, Li, & Skinner, submitted), and has been translated and culturally adapted into six languages.
2. The Virtual Classroom on Tobacco Control (http://www.takingitglobal.org/tiged/projects/tobacco/): An online classroom that includes the Smoking Zine and other digital and paper-based tools for teachers and students to learn about tobacco use on a global scale.
3. Global Youth Voices (www.globalyouthvoices.org): A set of tools and information resources to promote engagement and action both locally and globally around a variety of broad social determinants of health (Bader et al., 2007).
4. YouthBet (www.YouthBet.net): An interactive resource to educate youth and promote socially responsible approaches to gambling (Korn, Murray, Morrison, Reynolds, & Skinner, 2006).
5. TiGXPress (http://en.takingitglobal.org/tiged/projects/tigxpress/): A toolkit and resource package for teachers and students that draws on the potential of media technologies to undertake HIV/AIDS education based on social justice, transnational communication and global solidarity among youth.
6. Smoke Free World (www.smokefreeworld.org): A freestanding resource on framing tobacco within the context of globalization and social justice.
7. LivePositive (www.livepositive.ca): A virtual space designed by youth living with HIV, for youth living with HIV, to help support treatment decision-making and reduce social isolation.
To address the unique challenges for health promotion that engagement using internet and communication technology poses (Street & Rimal, 1997), TeenNet developed an iterative design process called the Spiral Technology Action Research (STAR) model. It is a multi-stage methodology involving cycles of dialogue with
rapid evaluation to allow development of relevant, timely and evidence-informed tools for youth, keeping products on the leading edge of innovation (Skinner, Maley, & Norman, 2006). This process has been central to the development of each of these platforms, blending EIPARS with evidence-informed usability and development best practices (National Cancer Institute, 2000). Yet, through this process and the products that have been created, unique ethical problems have surfaced.
HOT TALK: TENSIONS WITH MESSAGE BOARDS, CHATS AND CONNECTION

Youth, more than any other group in society, gravitate towards technology as a medium for self-expression, communication, and information (Flicker, Haans, & Skinner, 2003; Lenhart, Madden, & Hitlin, 2005). Harnessing the power of interactive technologies for research and health promotion purposes creates dilemmas about how to negotiate the boundaries that these technologies traverse between public and private spaces. There is a paradox in that young people are among the most likely to use the Internet and be familiar with its social dynamics, yet among the least likely to self-censor online. They also appear to be less likely to appreciate the long-term consequences of living in a surveillance culture created through online activity (Montgomery, 2007). Thus, while trying to exploit the Internet's ability to foster social identity through creating content, they unwittingly expose themselves to the risk that such information will be applied to other contexts of their life due to the absence of any effective privacy on the Internet. A young person who seeks to divulge information about risk behaviours may be speaking authentically; however, the implications for how that information may be used in the future – particularly with politically charged topics such as drug use or
sexual behaviour – may not be fully appreciated when it is posted online. Indeed, this lack of awareness of boundaries is reflected in three recent cases that emerged in Toronto, Canada, where disciplinary actions were taken against high school students who had made disparaging remarks about teachers or fellow pupils using the social network site Facebook, in what they had considered to be private conversations (El Akkad & McArthur, 2007). In these circumstances, discussion content normally reserved for face-to-face banter among students is suddenly transformed into potentially libelous rhetoric on a global scale when placed in an online environment such as Facebook, where anyone can view it. For many, these groups are considered private spaces even though they are universally accessible, and the manner in which this 'privately public' space is negotiated has considerable ethical implications for its inhabitants and visitors alike. TeenNet faced similar challenges with the use of online bulletin boards, a precursor to the virtual 'walls' created within Facebook, through the Hot Talk feature of its CyberIsle website. Although CyberIsle featured games, quizzes and a variety of interactive informational experiences, in the late 1990s the Hot Talk board was by far the most popular feature on the website. The attraction was due in part to CyberIsle's 'by youth for youth' approach, which meant that the discussions were not moderated and youth could say what they wanted so long as it was done within a culture of respect for the rights and dignity of other participants. The community was governed by a set of community values and hosted discussion threads on topics such as smoking, sex, body image, music, and racism. From a research perspective, this environment provided opportunities to understand youth communication, but in doing so introduced challenges for researchers and participants alike in negotiating the boundaries between private and public space. From the process of studying Hot Talk, a set of ethical guidelines was developed to inform further project work and published to
aid other researchers (Flicker et al., 2003; see Table 1):

1. Supply a readily available link to the individuals and institutions responsible for the research project.
2. Describe study aims, potential benefits, and harms.
3. Provide information about what data will be collected and how it will be used.
4. State clearly what kinds of services you are (and are not) able to provide.
5. Identify any commercial or competing interests.
6. Offer direct contact information for the principal investigator and/or study coordinator so that participants can get their questions answered.
7. Seek informed consent.
8. Grant users who do not consent to be part of the research a comparable service.
9. Be explicit about steps taken to preserve confidentiality and anonymity.
10. Create policies and procedures to ensure the well-being of the community (e.g., protocols for maintaining community values, moderating the site and managing crisis); make policies public and transparent.
11. Limit the ability of search engines to access message boards directly to safeguard privacy (see the sketch following this list).
12. Gain approval from a credible human subjects ethics review committee.
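To make guideline 11 concrete: the standard mechanism for limiting search engine access is the robots exclusion convention, a robots.txt file placed at the root of a website. The short sketch below is illustrative only; the /hottalk/ path is a hypothetical mount point for a message board, and the convention is advisory, so compliant crawlers will skip the board but it is no substitute for genuine access controls such as registration.

# Minimal sketch of guideline 11 using the robots exclusion convention.
# The /hottalk/ path is hypothetical; robots.txt is advisory only, so it
# complements, rather than replaces, registration and access controls.

ROBOTS_TXT = "User-agent: *\nDisallow: /hottalk/\n"

def write_robots(webroot: str = ".") -> None:
    """Write a robots.txt asking compliant crawlers to skip the board."""
    with open(webroot + "/robots.txt", "w") as fh:
        fh.write(ROBOTS_TXT)

if __name__ == "__main__":
    write_robots()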
Table 1. Ethical guidelines for researching Internet communities

Situation: Study group enrollment. Enrollment of study participants needs to be facilitated online.
Issues: Guaranteeing informed consent from study participants; ensuring that eligibility criteria are met; differentiating between service delivery and research agendas in an online context; ensuring accuracy of information.
Resolution/What we did: Information about study aims and procedures is provided online. All users coming to the site must register; consent is obtained during registration. Youth users are given autonomy to consent; parental consent is not required. Users are asked for their date of birth at registration; only those between the ages of 10 and 24 are considered eligible. Users not consenting still have access to all parts of the site, but their data are not used in analyses.

Situation: Protecting participants from risk and harm. All research participants need to be protected from any unintended harm resulting from the research process; special attention needs to be paid to ways in which online research might create unintentional harm.
Issues: Protecting users of the site from other users and themselves; limiting use of the data to research purposes and service delivery; making sure reported results do not breach confidentiality or cause embarrassment or more serious harm to participants.
Resolution/What we did: A privacy page tells users what type of data is collected, what we do with it, and how it is used and not used. The TeenNet Project monitors the site daily for messages requiring immediate attention and removes those that are deemed inappropriate. Web server information is password protected and stored in a locked research office; linked data are made available only to the research team.

Situation: Linking "public" and "private" data. The discussion board (Hot Talk) includes postings by youth who might or might not have consented to our research; including some messages in analyses and excluding others becomes problematic.
Issues: Requiring consent from posters on "public" message boards; determining which individuals consented to be part of the study group; analyzing conversations where some message posters are study group members and others are not.
Resolution/What we did: Assumed that all data were private, and only those posters who consented to be part of the study group could be explicitly quoted in analyses. Used web server logs to determine study group membership; although messages are publicly available, registration information is attached for data analysis purposes only. Used postings of non-consenters only as context in analyzing conversations; they are not quoted in analyses.

From: Flicker, S., Haans, D., & Skinner, H. A. (2003). Ethical dilemmas in research on Internet communities. Qualitative Health Research.
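The enrollment row of Table 1 amounts to a simple gating rule, which the following sketch expresses in code. The field names are hypothetical and the logic is only a summary of what the table describes: everyone who registers keeps full access to the site, but only consenting users who report an age between 10 and 24 enter the analysis dataset.

# Illustrative sketch of the Table 1 enrollment rule; field names are
# hypothetical. Everyone who registers keeps full access to the site;
# only consenting, age-eligible users enter the analysis dataset.

from dataclasses import dataclass

MIN_AGE, MAX_AGE = 10, 24  # eligibility window described in Table 1

@dataclass
class Registration:
    age: int          # derived from the date of birth given at registration
    consented: bool   # consent is obtained from the youth, not a parent

def include_in_analyses(reg: Registration) -> bool:
    """True only for consenting users within the eligible age range."""
    return reg.consented and MIN_AGE <= reg.age <= MAX_AGE

registrations = [Registration(16, True), Registration(30, True),
                 Registration(14, False)]
participants = [r for r in registrations if include_in_analyses(r)]
print(len(participants))  # 1: only the consenting 16-year-old is analyzed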
Although originally popular, the discussion board format soon lost favour among youth. They gravitated to new technologies, particularly live chat tools such as ICQ, MSN chat, and AOL Instant Messenger. As these gained favour, public message boards became more problematic as both youth and unwelcome adult visitors were drawn to these boards. For TeenNet, protecting youth's anonymity was becoming problematic when adults could easily pose as young people. Although TeenNet never actively moderated the board, it did monitor it for adherence to community standards, which included removing inappropriate posts by those who clearly appeared to be adults. However, as the challenges increased and as youth were self-organizing their social dialogue around new peer-to-peer technologies, Hot Talk was eventually disabled. Yet, as researchers deeply concerned about ethical integrity, we remained challenged by questions such as how to balance anonymity and privacy with informed consent to participate. Vulnerable youth, such as those living with HIV (Flicker et al., 2004), are desperate to connect anonymously with others who might share their experience. Initially, TeenNet endeavored to provide such safe spaces, while studying the role that these spaces played in promoting health and providing support. This practice challenged us ethically to balance multiple community norms. On the one hand, youth gravitated towards online environments so that they could interact freely and anonymously without undue interference. On the other, we worried about how we could protect
young people from harm, or contact a young person who might need help, without breaching confidentiality. In other words, what were the moral, legal and ethical implications of hosting a platform for young people to talk about risk behaviours in a university-sanctioned space without the necessary supervisory supports in place? For TeenNet, these challenges were simplified by the fact that they were confined to a specific space (Hot Talk). With the rise of new technologies such as Facebook and MySpace that are open not only to youth but to the world, these challenges proliferate exponentially.
ANONYMITY AND CONFIDENTIALITY IN THE ERA OF FRIENDSTER, FACEBOOK & MYSPACE

The popularization of instant messaging represented a new type of peer-to-peer interaction for young people that displaced, but did not eliminate, online message boards. Yet, the inherent limitation of tools such as MSN and Yahoo! Messenger, Skype, and iChat is that they are essentially one-to-one means of communicating, even though group chats are possible. They also pose difficulties for people who want to meet others with common interests and for communicating across time zones. It was this temporal flexibility and public space that made bulletin boards so popular. Building on the strengths of both approaches, a new class of tools emerged in the early 2000s that enabled people to chat, share and join groups around shared interests. Tools like Yahoo! Groups, Friendster, MySpace and Facebook created new possibilities to connect, but also amplified the challenges that bulletin boards faced because, unlike most boards, these new environments enabled people, ideas and content to be networked in real time.
Social Networking Sites: What Youth Do and the Information They Exchange

Internet-based social networking sites enable users to create an individualized profile and connect it to other profiles, forming a larger personal network. This "social technology" or "social software" includes "various, loosely connected types of applications that allow individuals to communicate with one another, and to track discussions across the Web as they happen" (Valkenburg, Peter, & Schouten, 2006). Integration between different media is the core feature of this new software platform (Al-Ghorairi, 2005). A prime example is YouTube, a service that enables users to easily convert video files and upload them into a widely accessible format on the Web, comment on them, and share these works through links. While debates over the legality of some of this
content continue, many of the most discussed videos have been homemade. They can be viewed on computers and increasingly via mobile devices such as wireless phones and PDAs. Indeed, users can always post a link to their YouTube video on their Facebook profile. A recent Pew report (2007) indicates 55% of online teens aged 12-17 have created their own profile. While these rates vary by gender, race and household income, almost 70% of American girls between the ages of 15 and 17 report having created an online profile (see Box 1). Young people are embracing these technologies en masse, and are leading the drive for innovation among services competing for advertising revenue dollars. Corporations are recognizing the power of peer influence and are happy to provide youth with free access to music and images to upload to their profiles, especially if they are linked to others and visited hundreds of times daily. A few years ago,
Box 1. Teens who create profiles online
Source: Pew Internet & American Life Project, Teens and Parents Survey, Oct.-Nov. 2006. Margin of error is ±5% for profile creators. * indicates statistically significant differences. From Lenhart & Madden (2007).
music companies were suing individual teens and their parents for downloading and trading music; now they are happy to give them access through their profiles as free marketing. When considering the impact of these technologies on the lives of youth, a distinction must be made between using the Internet for simple information seeking (e.g., doing homework) and communication (e.g., networking through these sites), with the latter having important implications for the social self-esteem and well-being of adolescents (Holloway & Valentine, 2003). Youth experience social systems differently than adults, and through technology they may exert substantially more power than in their everyday lives. This has created a condition in which some youth choose to live a substantial part of their lives in a virtual medium. A young woman whose protective parents restrict her personal freedom can spend all night chatting with friends around the world from the relative safety of her bedroom; a young man prevented by parents from listening to explicit music or viewing violent films can view and upload this same content without being dependent on parental permission or financing.
Information Sharing and the Myth of Privacy

In trying to prove a subjective expectation of privacy in a user's profile, the inherent nature of the action or its everyday use works against any notion of an expectation of privacy. By signing on to Facebook or MySpace and providing personal information for others to see, a user is, in effect, not seeking to preserve the information as private, but is instead making a choice to publicize this information for others. (The Centre for Missing and Exploited Children, 2007)

Although youth participate heavily in online conversations, web-cams, profiles and networks, they often view these as private domains (Holloway & Valentine, 2003; Twenge, 2001, 2006).
As such, there is little self-censorship around sensitive topics (Montgomery, 2007), particularly when the process of disclosing personal information motivates interaction between members of networks and may result in an increased sense of bonding. For adolescents in the midst of critical developmental changes, this is particularly important. These online interactions may provide the only social support they have (or believe they have). The desire to share private thoughts and feelings in order to receive acceptance and validation from peers is a strong urge during this phase. Unfortunately, what may have started as a desire to stay connected with close friends can soon transform into a vast network that includes hundreds of strangers. The potential consequence of young people sharing personal information with strangers has drawn attention from social service agencies. Recent public service campaigns highlighted the situation in a series of vignettes (The Centre for Missing and Exploited Children, 2007). For example:

A young woman pins a suggestive photograph of herself on a school bulletin board. Classmates take the photo and pass it around the school, and each time it is taken it is replaced. Finally, the young woman tries to take the photograph down herself, but each time it reappears.

A young woman is seen leaving her school with a group of friends. As they walk through various settings, strange men and boys address her by name, but she doesn't know who any of them are. The nature of their comments becomes increasingly suggestive, as one asks her when she will post something new, and another what colour underwear she is wearing that day.

These public service announcements are designed to bring awareness to young people, particularly females, and their parents about the potential for privacy violations on the Internet.
Barnes (2006) notes the "privacy paradox" inherent in the generation gap, with many adults becoming increasingly hesitant to share their personal information while their children post their most intimate details on the Internet, and "because schools, college admissions officers, and future employers are checking these sites, personal information and pictures revealed online can directly influence a student's educational, employment and financial future." Jean Twenge has observed an increase in the sense of entitlement and narcissism amongst this generational cohort (see "Narcissistic, but in a good way," 2007; Twenge, 2001, 2006). Young people are increasingly expressing not only their musical and aesthetic preferences, but also their emerging sexuality, provoking concerns about access to pornography and sexual solicitation (Fleming, Greentree, Cocotti-Muller, Elias, & Morrison, 2006; Mitchell, Wolak, & Finkelhor, 2007; Peter & Valkenburg, 2006). Youth's perceptions of anonymity online encourage the sharing and discussion of sexuality and sexual behavior, with the greater perception of anonymity resulting in increased sexual self-disclosure (Chiou, 2006). Sexually diverse youth in particular have embraced these technologies (Simonette, 2007), as they provide opportunities to "come out", share sexual information, and form intimate relationships in spaces potentially safer than their actual home environments (Hillier & Harrison, 2007). However, these sites are unable to guarantee confidentiality among users, and for young people, especially those in school, this causes a number of problems. What is also becoming evident is the blurred distinction between on- and offline behaviours, exemplified by the rising concern over cyber-bullying, the scope of abuse suffered by victims, and the difficulty of managing these spaces (Chaker, 2007; "Cyber bullying: understanding and preventing online harassment and bullying," 2006; DeWolfe, 2007; Miller, 2006a, 2006b; Pack, 2006; Rubin, 2005; Shariff, 2005; Splete, 2005; Stover, 2006). Young people are exploiting the
information and images shared by their peers, and using them to harass and terrorize those peers online and in the hallways. School boards and educational institutions have scrambled to create policies on the use of these sites and to discipline students as a result of misuse (Kovach & Campo-Flores, 2007; Shariff, 2005; Stone, 2006; Trotter, 2006), some even regulating use by teachers and staff (Stewart, 2006). The impetus for school board intervention has often been the posting of illegal activity (pictures of underage youth drinking before the school dance) or inflammatory remarks about teachers or administrators (Boyle, 2007; Coyle, 2007). Sexually diverse youth flocking to social networking sites as safe spaces to come out may find themselves "turned in" by fellow students, which has translated into situations such as expulsion from religious education institutions (Dukowitz, 2006). In Canada, the Ontario Safe Schools Act, reflecting an overarching provincial policy covering school boards on matters of discipline, has recognized the need to include cyber-bullying as a punishable behaviour alongside more traditional forms of harassment (Ministry of Education, 2007; "The Ontario Safe Schools Act: School Discipline and Discrimination," 2003). Particularly concerning is when youth post information that could put them at risk of exploitation and violence. Consider the young person who casually posts that their parents will be out of town for the weekend. Combined with a history of other personal information (neighbourhood or exact address), this creates a situation in which potential predators have information fed to them, sometimes over lengthy periods of trust building. MySpace, in particular, received considerable backlash following high profile cases of predators meeting youth on the site, resulting in regulatory investigation and a movement towards improving safety on the site (Andrews, 2006a, 2006b; Lehman & Lowry, 2007; Newman, 2006). However, does MySpace constitute any more risk than other "spaces" frequented by youth (e.g., parks or recreation centres)? Are the issues it brings up
(sexual coercion and violence) not omnipresent in society (Rickert & Ryan, 2007)? Lost within much of the debate about information and privacy is a discussion of youth agency, and of young people's ability to understand and negotiate the various words and images with which they are inundated. A Pew Internet and American Life report examined strategies used by teens to negotiate and moderate the information they share, including restricting who can see their full profile and limiting or falsifying the information they include (Lenhart & Madden, 2007). In recognizing youth's ability to push these social technologies to their limits, we must also recognize their ability to protect themselves as active agents in the process of constructing these online communities and networks. One example of this was a grassroots petition by users (almost exclusively youth and young adults) to have the social network site Facebook remove a feature that tracked and posted users' every move on the site whenever they logged into their profile (Romano, 2006; Schmidt, 2006; Warren & Vara, 2006). TeenNet has aimed to address these issues by promoting critical appraisal skill development in schools and community programs (Norman & Skinner, 2006b), and has also sought to promote active engagement in media development to raise awareness of both the social and health consequences (and potential) of using the Internet for health (Lombardo, Zakus, & Skinner, 2002; Norman, Maley, & Skinner, 2000; Norman & Skinner, 2007). Youth are using these technologies to do more than just entertain themselves; for example, they are exploring advocacy interests. Organizations ranging from political parties (all 2008 Republican presidential hopefuls) to advocacy organizations (Planned Parenthood) have MySpace profiles. The goal is to attract youth and young membership. Adding an organization to one's profile or speaking for a cause, while requiring minimal effort in these media, may be telling of lifelong political patterns. Never before have young people had the ability to interact about issues locally and globally
in such an instant and unrestricted way. Yet, while this movement has achieved global engagement of youth, there is relatively little we know in any systematic manner about what happens online, in part because of the ethical issues presented here. TeenNet has attempted to leverage this political potential by providing young people with the technical skills and online space to develop their own platforms and advocacy strategies (Bader et al., 2007; Flicker et al., 2004). The Society for Adolescent Medicine published guidelines to help researchers negotiate the maze of problems that doing work with young people presents (Santelli et al., 2003; Society for Adolescent Medicine, 2003). The guidelines, published in 2003, resemble many other such documents in that they stress the need to ensure privacy and provide support services to those who require them as a result of research-related intervention. Framed in terms of relative risk, such guidelines assume that the risks are known or even knowable. In online environments that evolve so quickly, this may not be possible, as the technology advances far faster than the professional codes of conduct that regulate research on it. What is especially interesting to adolescent health researchers is the "peer-to-peer" element of these sites, which provides observers unprecedented insights into the processes of socialization and documentation of the lived experiences of users in real time (Moinian, 2006). Corporations are already mining this data to explore the habits and interests of young people. Should researchers not be monitoring those same discussions for insights into health and risk-taking behaviour? However, researchers wishing to meet youth on their own 'cyber territory' walk a precarious line. The legal and ethical implications of working with youth around such important health issues as sexuality may serve to silence and restrict both researchers and participants (Curtis & Hunt, 2007; Flicker & Guta, 2008), serving neither community.
Although anyone can trawl through online communities for information on individuals, doing so for research purposes changes the way such interactions are viewed. Should researchers be expected to disclose their identities when reading items posted on the Internet? Is this information not in the public domain, and are these not public spaces? In Canada, the Tri-Council statement notes, "due to the need for respect for privacy, even in public places, naturalistic observation raises concerns of the privacy and dignity of those being observed" (Tri-Council of Canada, 1998). The time-tested recruitment strategy of posting an invitation flyer at the local recreation or community centre may be becoming obsolete and irrelevant, with young people more likely to check the "bulletin" board on their own MySpace profile. However, this poses a number of ethical challenges, as these publicly accessible boards are now located on 'private' personal pages. One example of a recruitment strategy was to use a "pop up" to solicit participants when youth signed onto their accounts (Valkenburg et al., 2006). Is this any different than recruiting through traditional forums, or does recruitment through a young person's personal profile represent an invasion of privacy or possible harassment? Should youth expect such intrusions as part of online life? Or are such intrusions expected when such solicitations are commonplace in online marketing, where there is no request made for permission to communicate?
CREATING CONTENT ONLINE: FINDING THE 'RIGHT' MESSAGE IN THE AGE OF WIKIPEDIA

Social networks and peer-to-peer communication tools are part of a larger phenomenon known as Web 2.0 (Wikipedia, 2006b), which describes a class of technologies that facilitate user-created content with little to no technical knowledge required. Unlike static webpages, these new
technologies, such as blogs (online web logs or diaries) (Wikipedia, 2006a) and wikis (easily customizable webpages) (Wikipedia, 2006c), have gained popularity with youth, with 31% of teens and 41% of young adults reporting using blogs and other content creation technologies (Lenhart & Fox, 2006; Lenhart, Horrigan, & Fallows, 2004). Although the World Wide Web was considered highly democratic in that anyone with a hosted website could create content online, this option was limited to those who had the technical expertise to program and the requisite financial means. Blogs and wikis have come much closer to realizing this early democratic vision, enabling individuals to create Web content without the need for programming knowledge or the means to own and control webserver space. Rather than passively 'consuming' information, this new model establishes users as 'prosumers', creators and consumers of knowledge (Wurman, 2001), allowing them to have a greater influence in shaping public discourse than ever before. Although users can create and modify content with relative ease, such dynamic and distributed authorship models pose unique challenges for consumers of this content beyond issues of privacy. While participatory electronic environments facilitate authentic expression, they do not provide safeguards against inaccurate, incomplete or inaccessible posts that may have real consequences for those who choose to use that information to guide health decisions. Approaches to addressing quality in consumer health information fall largely within two major theoretical frameworks: a consequentialist (or moralistic) framework and a deontological (or common morality) framework. The former emphasizes 'right' and 'wrong' decisions based upon their health consequences, while the latter emphasizes consumer autonomy and the need for consumers to make informed choices based on the information provided (Entwistle, Sheldon, Sowden, & Watt, 2001). These frameworks underlie two approaches to
promoting quality assurance in consumer health information: credentialing and critical appraisal skill development. Third-party credentialing is an information-centred strategy that typically involves reports from expert reviewers or 'seals of approval' to validate information online. For third-party credentialing to be successful, the credentialing process must be legitimate and perceived to have an appropriate pedigree. Seals of approval such as the Health on the Net code (http://www.hon.ch) strive to achieve this (Health on the Net Foundation, 2007). However, for a seal of approval to be effective, it must have public trust and recognition. In the absence of a clearly identifiable, transparent process for consumers to see, such seals of approval are unlikely to be effective in addressing perceived problems of information quality. The most significant challenge facing seals of approval is largely one of volume and speed of change: there is no reasonable way to rate content on sites like wikis, where information posted on millions of pages changes without any means of centralized control. Indeed, it is the absence of control over the content that makes this strategy the most problematic in a Web 2.0 environment. The second main approach involves a consumer-centred strategy whereby individuals are taught critical appraisal skills to evaluate information quality on their own. These skills can be taught through specific courses (Sheppard, Charnock, & Gann, 1999), dissemination of professional guidelines (Gustafson et al., 1999; Winker et al., 2000), or interactive online evaluation checklists (Youth Voices Research, 2008). This approach typically employs rating tools to emphasize specific criteria that have been identified as best (or better) practices for online health information. These criteria typically include accuracy, reliability, currency, author affiliation and authority, and some design factors in their assessment of a particular website (Kim, Eng, Deering, & Maxfield, 1999).
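As a concrete illustration of the checklist approach, the sketch below scores a site against the kinds of criteria just listed. The five criteria names and the equal weighting are assumptions made for this example; they stand in for, and do not reproduce, the published rating instruments (e.g., Kim et al., 1999).

# Illustrative checklist scorer for consumer-centred critical appraisal.
# The criteria and equal weighting are assumptions for this example, not
# a published instrument.

CRITERIA = ("accuracy", "reliability", "currency", "authorship", "design")

def appraisal_score(checklist: dict) -> float:
    """Return the fraction of criteria a site satisfies (0.0 to 1.0)."""
    met = sum(1 for c in CRITERIA if checklist.get(c, False))
    return met / len(CRITERIA)

site = {"accuracy": True, "reliability": True, "currency": False,
        "authorship": True, "design": True}
print(f"Appraisal score: {appraisal_score(site):.2f}")  # prints 0.80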
TeenNet has taken up this second approach to addressing quality in its work and developed a tool and training site for young people to evaluate health information online, using criteria drawn both from the professional literature and from youth-specific criteria developed through a youth working group (Youth Voices Research, 2008). The four areas of concentration are: (1) access, both in absolute terms and in relation to having the privacy and comfort to explore social issues (Skinner et al., 2003); (2) credibility and security of the environment; (3) personal fit, such as youth-friendly language and learning style; and (4) interactivity, or the ability to engage the material beyond passive consumption. From this work the concept of eHealth literacy and its related measurement tool, the eHealth Literacy Scale (eHEALS), was first formed (Norman & Skinner, 2006a, 2006b). This meta-literacy reflects the varied skills needed to make sense of health information in a changing context, such as literacy in computers, science, media, information, and health, in addition to basic reading and writing skills. What this work suggests is that there is an ethical imperative for health information providers to consider these issues in developing content for youth audiences. Placing content within a context for evaluation becomes more challenging with each evolution of the technologies used to drive Web content. Stability is an attractive feature of most webpages and print media, enabling people to bookmark content and contemplate the quality and relevance of the material in relation to other sources. However, with tools such as wikis (Wikipedia, 2006c), content may change multiple times between viewings, and in the case of highly trafficked sites like the user-authored online encyclopaedia Wikipedia (Wikipedia, 2007) this may occur often, particularly on contentious topics. This poses a problem for issues such as health, where the implications of decisions made on inaccurate, misleading or incomplete information are critical (Eysenbach, 2002) and there is no
explicit 'owner' of the information to influence, but rather a conglomeration of independently contributing authors. Although user-created content may be problematic in terms of quality in some cases, in others this may be less so. In the case of Wikipedia, there may be some reason to trust the information that is posted on it. In a special report prepared for the journal Nature, the reliability of Wikipedia was compared against the electronic version of Encyclopaedia Britannica and found to be nearly as accurate and complete, but also more current (Giles, 2005). The reason for this success may lie in the way that networks form around a critical mass and the ability of these groups to make informed decisions based on information gleaned from those connections (Ball, 2004). Surowiecki (2004) examined the phenomenon of group decision making and found that, in decisions based on simple polls or on topics where there is a coordinated, decentralized effort, groups as a whole tend to make better decisions than even small clusters of experts working together. Thus, a site like Wikipedia, with millions of contributors, is likely to produce a higher quality product than one that has far fewer contributors. Yet in areas with less traffic, where relatively few quality electronic resources exist in a format relevant to young people's concerns, this poses a substantial problem because the critical mass is rarely achieved.
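The critical-mass argument can be illustrated with a toy simulation (our illustration, not an analysis taken from Ball or Surowiecki). If each contributor independently gets a yes/no judgment right with probability 0.6, majority voting improves rapidly with group size, but only once the group is large; small communities see little of this benefit.

import random

# Toy simulation of majority voting among independent contributors who are
# each correct with probability p. Illustrative only; the parameters are
# assumptions, not data from this chapter.

def majority_correct(n_voters: int, p: float, trials: int = 2000) -> float:
    """Estimate how often a simple majority of n_voters answers correctly."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n_voters))
        wins += correct_votes > n_voters // 2
    return wins / trials

random.seed(1)
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, p=0.6), 3))
# Accuracy climbs toward 1.0 as the group grows -- the 'critical mass'
# that low-traffic topics rarely achieve.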
Web 2.0 technologies introduce challenges to both consequentialist and deontological frameworks for evaluating health information quality. Consequentialist frameworks are unlikely to find wide purchase in an environment where respect for diversity is so widespread and there is resistance to centralized authority. However, while deontological frameworks recognize different viewpoints, to be effective there must be some common ground or morality on which individual opinion can be built. In a globalized world, where technologically amplified diversity comes both from within and beyond the boundaries of
the community, such common ground may be hard to find, or at least to negotiate, in diverse Web communities. This reinforces the need to foster an ethic of empowered consumer participation at both the individual and social levels. At the individual level, this includes fostering eHealth literacy skills through education and providing youth with opportunities to exercise choice in making health decisions. At the social level, it means promoting mechanisms to encourage deliberative dialogue among youth in safe, open spaces to develop shared social norms. For health service professionals seeking to use this approach in eHealth promotion or service delivery (e.g., Web-assisted tobacco interventions), user-generated content introduces questions about treatment fidelity and liability. Without controls on information content, adherence to the core ethical principles of nonmaleficence (do no harm) and beneficence (do good) in service delivery becomes more difficult. One potential option, used by services like Wikipedia and YouTube, is to have individuals assigned to monitor posted information for adherence to community standards. This system, however, is labour-intensive and relies heavily on members of the community reporting problems in order to sufficiently identify problematic content. For health researchers, user-centred social media pose an opportunity to engage the public in health issues, but they also surrender much of the control over message content that is essential in positivist science models. In order to understand how social media like wikis and blogs contribute to the sense-making process in health matters, a more developmental, iterative process for researching and evaluating eHealth interventions is required. This means both consideration of more qualitative methodologies for understanding context and adoption of flexible quantitative model development through such processes as developmental evaluation and forms of action research. Although these ethical issues have been identified, there is not yet a body of research that has
looked at how young people make decisions, evaluate information, and manipulate content for health in the new Web 2.0 world. Without this understanding of how young people are using these resources for decision making about health, researchers, practitioners and policy makers alike are limited in their ability to provide practical strategies to best address the ethical challenges that these tools pose and, potentially, to develop helpful options. And those options may themselves inspire users to create new norms.
NEW FRONTIERS: MOVING THE FIELD FORWARD

There has been much debate over allowing youth to consent to health research. The standard for including adolescents in research remains focused on obtaining consent from the parent(s) or guardian(s) of anyone under the age of 18. Consequently, the literature shows researchers are often hesitant to include adolescents in their studies because of fears associated with navigating ethical review, and in turn, youth are hesitant to seek health services and participate in research when parental permission is required. The underlying assumption is that parents have the ability to understand research and assess harms, over and above that of their adolescent children. Consequently, young people are often excluded from participating in research and initiatives that may serve to improve their lives. However, many of the aforementioned technologies are readily accessible to youth, and provide opportunities to engage in 'risky' online behaviour. Over 60% of teens aged 12-16 regularly frequent social networking sites such as MySpace.com, with the majority posting some content on either a daily or weekly basis (Lenhart & Madden, 2007). What are the ethical issues of not engaging in research to understand how youth use these technologies? Are there strategies that we can employ that balance the need to
protect young people from exploitation through research with the desire to understand what risks and benefits youth experience from using these communication tools? Parents are often unaware of what their children are accessing, except when the consequences become unavoidable; e.g., teens implicated in Web-based terrorist activity, cyber-bullying, and, in extreme cases, meeting with online contacts resulting in kidnap and assault. Can we create ethical frameworks that enhance the opportunity to promote health, while respecting the rights and freedoms of youth to self-expression and experimentation in a dynamic environment? We have moved from an environment where adults hosted websites (created the playground) and were in a position to dictate the rules, to a generation of young people creating their own online spaces. Here the rules are ever-changing. To understand these rules and their implications, TeenNet has viewed youth engagement within a latticework of personal and social contexts, of which information technology is but one environment of interaction. This process is about fostering youth engagement not just in creating the tools, but also in developing frameworks for understanding how to research these tools, their use, and their social impact. While ethical frameworks are important, so is the related technoethics research. For a phenomenon that is so popular, global and pervasive, we know relatively little about how people interact, how they make decisions, what information they use to inform their actions, and what influence these spaces have on action. It is perhaps ironic that the solution to developing ethical guidelines for research is research on technoethics, symbolizing the paradox that has become the modern information landscape.
ACKNOWLEDGMENT

We want to credit Harvey Skinner, our mentor and the founder of the TeenNet Research group, for his
leadership in this area. In addition, we recognize the entire team of TeenNet staff and investigators who contributed to the rich intellectual environment that prompted these discussions and debates. Finally, we are indebted to David Flicker for his editorial genius.
REFERENCES

Al-Ghorairi, M. (2005). The rise of social software: What makes software "social"? Oxford Brookes University.

Andrews, M. (2006a). Decoding MySpace. U.S. News & World Report, 141(10), 46-58.

Andrews, M. (2006b). Make it predator-proof. U.S. News & World Report, 141(10), 52.

Bader, R., Wanono, R., Hamden, S., & Skinner, H. A. (2007). Global youth voices: Engaging Bedouin youth in health promotion in the Middle East. Canadian Journal of Public Health, 98(1), 21-25.

Ball, P. (2004). Critical mass: How one thing leads to another. New York: Farrar, Straus and Giroux.

Blum, R. W. (1998). Healthy youth development as a model for youth health promotion: A review. Journal of Adolescent Health, 22, 368-375.

Boyle, T. (2007). Pupils punished over Facebook comments: Five Grade 8ers bumped from end-of-year trip after ridiculing their teachers. Toronto Star.

Camino, L. A. (2000). Youth-adult partnerships: Entering new territory in community work and research. Applied Developmental Science, 4(Supplement 1), 11-20.

Chaker, A. M. (2007). Schools act to short-circuit spread of 'cyberbullying'. Wall Street Journal (Eastern Edition), D1, D4.
Chiou, W.-B. (2006). Adolescents’ sexual self-disclosure on the Internet: Deindividuation and impression management. Adolescence, 41(163), 547-561. Coyle, J. (2007). Learning the Golden Rule the hard way. Toronto Star. Curtis, B., & Hunt, A. (2007). The fellatio “epidemic”: Age relations and access to the erotic arts. Sexualities, 10(1), 5-28. Cyber bullying: Understanding and preventing online harassment and bullying. (2006). School Libraries in Canada, 25(4). DeWolfe, C. (2007). The MySpace generation. Forbes, 179(10), 72. Dukowitz, G. (2006). Out on MySpace, then out the door. Advocate (Los Angeles, Calif.), 22. El Akkad, O., & McArthur, K. (2007). The hazards of Facebook’s social experiment. The Globe and Mail. Entwistle, V. A., Sheldon, T. A., Sowden, A. J., & Watt, I. S. (2001). Supporting consumer involvement in decision making: What constitutes quality in consumer health information. International Journal of Quality in Health Care, 8(5), 425-437. Eysenbach, G. (2002). Infodemiology: The epidemiology of (mis)information. American Journal of Medicine, 113(9), 740-745. Fleming, M. J., Greentree, S., Cocotti-Muller, D., Elias, K. A., & Morrison, S. (2006). Safety in cyberspace: Adolescents’ safety and exposure online. Youth & Society, 38(2), 135-154. Flicker, S., Goldberg, E., Read, S., Veinot, T., McClelland, A., Saulnier, P., et al. (2004). HIV-positive youth’s perspectives on the internet and e-health. Journal of Medical Internet Research, 6(3), e32.
Flicker, S., & Guta, A. (2008). Ethical approaches to adolescent participation in sexual health research. Journal of Adolescent Health, 42(1), 3-10.
Hutchinson, A., & Stuart, C. (2004). The Ryerson-Wellesley social determinants of health framework for urban youth. Toronto, ON: Ryerson University & Wellesley Institute.
Flicker, S., Haans, D., & Skinner, H. A. (2003). Ethical dilemmas in research on Internet communities. Qualitative Health Research, 14(1), 124-134.
Kim, P., Eng, T. R., Deering, M. J., & Maxfield, A. (1999). Published criteria for evaluating health related web sites: Review. British Medical Journal, 318, 647-649.
Freire, P. (1970). Pedagogy of the oppressed. New York: Continuum.
Korn, D., Murray, M., Morrison, M., Reynolds, J., & Skinner, H. A. (2006). Engaging youth about gambling using the Internet. Canadian Journal of Public Health, 97(6), 448-453.
Freire, P. (1973). Education for critical consciousness. New York: Continuum Publishing. Gardner, M., & Steinberg, L. (2005). Peer influence on risk taking, risk preference, and risky decision making in adolescence and adulthood: An experimental study. Developmental Psychology, 41(4), 625-635. Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438, 900-901. Ginwright, S., & James, T. (2002). From assets to agents of change: Social justice, organizing, and youth development. New Directions for Youth Development, 96, 27-46. Gustafson, D. H., Hawkins, R., Boberg, E. W., Pingree, S., Serlin, R. E., Graziano, F., et al. (1999). Impact of a patient-centered computer-based health information/support system. American Journal of Preventive Medicine, 16(1), 1-9. Health on the Net Foundation. (2007). Home page. Retrieved May 4, 2007, from http://www.hon.ch Hillier, L., & Harrison, L. (2007). Building realities less limited than their own: Young people practising same-sex attraction on the internet. Sexualities, 10(1), 82-100. Holloway, S. L., & Valentine, G. (2003). Cyberkids: Children in the information age. London: RoutledgeFalmer.
Kovach, G. C., & Campo-Flores, A. (2007). The trail of cheers. Newsweek, 149(3), 17. Lehman, P., & Lowry, T. (2007). The marshal of MySpace: How Hemanshu Nigam is trying to keep the site’s ‘friends’ safe from predators and bullies. Business Week, (4031), 86. Lenhart, A., & Fox, S. (2006). Bloggers: A portrait of the Internet’s new storytellers. Washington, DC: Pew Internet & American Life Project. Lenhart, A., Horrigan, J. B., & Fallows, D. (2004). Content creation online. Washington, DC: Pew Internet & American Life Project. Lenhart, A., Horrigan, J. B., Rainie, L., Allen, K., Boyce, A., Madden, M., et al. (2003). The ever-shifting Internet population: A new look at Internet access and the digital divide. Washington, DC: Pew Internet & American Life Project. Lenhart, A., & Madden, M. (2007). Social networking websites and teens: An overview. Washington, DC: Pew Internet & American Life Project. Lenhart, A., & Madden, M. (2007). Teens, privacy & online social networks: How teens manage their online identities and personal information in the age of MySpace. Washington, DC: Pew Internet & American Life Project.
Lenhart, A., Madden, M., & Hitlin, P. (2005). Teens and technology: Youth are leading the transition to a fully wired and mobile nation. Washington, DC: Pew Internet & American Life Project. Lombardo, C., Zakus, D., & Skinner, H. A. (2002). Youth social action: Building a latticework through information and communication technologies. Health Promotion International, 17(4), 363-371. Miller, C. (2006a). Cyber harassment: Its forms and perpetrators. Law Enforcement Technology, 33(4), 26, 28-30. Miller, C. (2006b). Cyber stalking & bullying: What law enforcement needs to know. Law Enforcement Technology, 33(4), 18, 20-22, 24. Ministry of Education. (2007). McGuinty government doing more to make schools safer [Electronic Version]. Retrieved June 2, 2007, from http://ogov.newswire.ca Mitchell, K. J., Wolak, J., & Finkelhor, D. (2007). Trends in youth reports of sexual solicitations, harassment and unwanted exposure to pornography on the Internet. Journal of Adolescent Health, 40(2), 116-126. Moinian, F. (2006). The construction of identity on the Internet: Oops! I’ve left my diary open to the whole world! Childhood, 13(1), 49-68. Montgomery, K. C. (2007). Generation digital: Politics, commerce, and childhood in the age of the Internet. Cambridge, MA: MIT Press. Narcissistic, but in a good way. (2007). 199864 (983), 6-6. National Cancer Institute. (2000). Research-based web design usability guidelines. Retrieved June 2, 2004, from http://www.usability.gov/guidelines/index.html Newman, M. (2006). Internet site hires official to oversee users’ safety. The New York Times, C3(L).
Norman, C. D., Maley, O., Li, X., & Skinner, H. A. (in press). Using the Internet to initiate and assist smoking prevention and cessation in schools: A randomized controlled trial. Health Psychology. Norman, C. D., Maley, O., & Skinner, H. A. (2000). CyberIsle: Using information technology to promote health in youth. CyberMed Catalyst, 1(2). Norman, C. D., & Skinner, H. A. (2006a). eHEALS: The eHealth Literacy Scale. Journal of Medical Internet Research, 8(4), e27. Norman, C. D., & Skinner, H. A. (2006b). eHealth literacy: Essential skills for consumer health in a networked world. Journal of Medical Internet Research, 8(2), e9. Norman, C. D., & Skinner, H. A. (in press). Engaging youth in eHealth promotion: Lessons learned from a decade of TeenNet Research. Adolescent Medicine: State of the Art Reviews, 18(2), 357-369. Pack, T. (2006). Keeping cyberteens safe. Information Today, 23(4), 37-39. Peter, J., & Valkenburg, P. M. (2006). Adolescents’ exposure to sexually explicit online material and recreational attitudes toward sex. Journal of Communication, 56(4), 639-660. Rickert, V. I., & Ryan, O. (2007). Is the Internet the source? Journal of Adolescent Health, 40(2), 104-105. Ridgley, A., Maley, O., & Skinner, H. A. (2004). Youth voices: Engaging youth in health promotion using media technologies. Canadian Issues, Fall 2004, 21-24. Romano, A. (2006). Facebook’s ‘news feed’. Newsweek (International ed.). Rubin, G. (2005). Cyber bullying. The Times Educational Supplement, F11.
Santelli, J. S., Smith Rogers, A., Rosenfeld, W. D., DuRant, R. H., Dubler, N., Morreale, M., et al. (2003). Guidelines for adolescent health research: A position paper of the Society for Adolescent Medicine. Journal of Adolescent Health, 33(5), 396-409. Schmidt, T. S. (2006). Inside the backlash against Facebook. Time Magazine (Online edition). Shariff, S. (2005). Cyber-dilemmas in the new millennium: School obligations to provide student safety in a virtual school environment. McGill Journal of Education, 40(3), 467-487. Sheppard, S., Charnock, D., & Gann, B. (1999). Helping patients access high quality health information. British Medical Journal, 319, 764-766. Simonette, M. (2007). Gays, lesbians and bisexuals lead in using online social networks. 138000, 8(17), 9-9. Skinner, H. A., Biscope, S., & Poland, B. (2003). Quality of Internet access: Barrier behind Internet use statistics. Social Science & Medicine, 57(5), 875-880. Skinner, H. A., Maley, O., & Norman, C. D. (2006). Developing Internet-based eHealth promotion programs: The spiral technology action research (STAR) model. Health Promotion Practice, 7(4), 406-417. Skinner, H. A., Maley, O., Smith, L., & Morrison, M. (2001). New frontiers: Using the Internet to engage teens in substance abuse prevention and treatment. In P. Monti & S. Colby (Eds.), Adolescence, alcohol, and substance abuse: Reaching teens through brief interventions. New York: Guilford. Skinner, H. A., Morrison, M., Bercovitz, K., Haans, D., Jennings, M. J., Magdenko, L., et al. (1997). Using the Internet to engage youth in health promotion. International Journal of Health Promotion & Education, IV, 23-25.
Society for Adolescent Medicine. (2003). Guidelines for adolescent health research. Journal of Adolescent Health, 33(5), 410-415. Splete, H. (2005). Technology can extend the reach of a bully: Cyber bullying by girls, who ‘share so much ... when they are friends,’ can be particularly devastating. Family Practice News, 35(12), 31-32. Stewart, W. (2006). Private lives laid bare on MySpace. The Times Educational Supplement, (4699), 10. Stone, B. (2006). Web of risks. Newsweek, 148(8/9), 76-77. Stover, D. (2006). Treating cyberbullying as a school violence issue. The Education Digest, 72(4), 40-42. Street, R. L., & Rimal, R. N. (1997). Health promotion and interactive technology: A conceptual foundation. In R. L. Street, W. R. Gold & T. Manning (Eds.), Health promotion and interactive technology: Theoretical applications and future directions. Mahwah, NJ: Lawrence Erlbaum. Surowiecki, J. (2004). The wisdom of crowds. New York: Doubleday. The Centre for Missing and Exploited Children. (2007). Retrieved from http://www.cybertipline.com/ Ontario Human Rights Commission. (2003). The Ontario Safe Schools Act: School discipline and discrimination. Tri-Council of Canada. (1998). Tri-Council policy statement: Ethical conduct for research involving humans. Ottawa: Interagency Secretariat on Research Ethics on behalf of The Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada and the Social Science and Humanities Research Council of Canada.
Trotter, A. (2006). Social-networking web sites pose growing challenge for educators. Education Week, 25(23), 8-9. Twenge, J. M. (2001). Birth cohort changes in extraversion: A cross-temporal meta-analysis, 1966-1993. Personality and Individual Differences, 30(5), 735-748. Twenge, J. M. (2006). Generation me: Why today’s young Americans are more confident, assertive, entitled-and more miserable than ever before. New York: Free Press. United States Department of Health and Human Services. (2005). Code of Federal Regulations. Title 45: Public Welfare -- Part 46: Protection of Human Subjects. www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm Valkenburg, P. M., Peter, J., & Schouten, A. P. (2006). Friend networking sites and their relationship to adolescents’ well-being and social self-esteem. CyberPsychology & Behavior, 9(5), 584-590. Warren, J., & Vara, V. (2006). New Facebook features have members in an uproar. Wall Street Journal (Eastern Edition), B1-35. Wikipedia. (2006a). Blog. Retrieved 23 Sept, 2006, from http://en.wikipedia.org/wiki/Blog Wikipedia. (2006b). Web 2.0. Retrieved 23 Sept, 2006, from http://en.wikipedia.org/wiki/Web_2 Wikipedia. (2006c). Wiki. Retrieved 23 Sept, 2006, from http://en.wikipedia.org/wiki/Wiki Wikipedia. (2007). Wikipedia home page. Retrieved 3 May, 2006, from http://en.wikipedia.org Winker, M. A., Flanagin, A., Chi-Lum, B., White, J., Andrews, K., Kennett, R. L., et al. (2000). Guidelines for medical and health information sites on the internet: Principles governing AMA web sites. American Medical Association. JAMA, 283(12), 1600-1606.
Wurman, R. S. (2001). Information Anxiety 2. Indianapolis, IN: QUE. Youth Voices Research. (2007). Cyberhealthliteracy home page. Retrieved July 2008, from http://www.ehealthliteracy.ca Zeldin, S. (2004). Youth as agents of adult and community development: Mapping the process and outcomes of youth engaged in organizational governance. Applied Developmental Science, 8(2), 75-90. Zeldin, S., Camino, L. A., & Mook, C. (2005). The adoption of innovation in youth organizations: Creating the conditions for youth-adult partnerships. Journal of Community Psychology, 33(1), 121-135.
KEY TERMS Critical Appraisal: Can be understood as the ability to filter and decipher evidence which is presented, and evaluate it for its merits and limitations. Youth accessing information technologies and social technologies may not have developed and refined these skills, and may therefore take up incorrect information and/or may not know how to effectively use correct information. Cyber-Bullying: A growing phenomenon in which youth, but also adults, use information technology to harass others. This can take seemingly endless forms that often exploit the full potential of many applications, including posting hurtful comments about other people on one’s own site or on theirs, hacking into and altering their personal profiles, sending insulting text messages to mobile devices, and even filming embarrassing and/or illegal actions and uploading them to the web. eHealth (electronic health): Can be understood as the host of emerging information technologies related to accessing and/or exchanging health
information; be it within formal health systems (transferring patient records within a hospital), or between independent users (exchanging treatment information through an on-line support group). Mining (or data mining): Can be understood as the collection and analysis of information accessed and shared on the web. This can take the form of tracking how many people clicked on a certain web site, or searching for how many people list a certain musician in their individual profile. This information is often collected without users’ knowledge and sold to marketing firms. Privacy Paradox: Is a consequence of the competing demands of using information technologies (including social technology and social software) and having an on-line persona, while simultaneously having to guard against potential threats to personal safety and privacy resulting from the misuse of available information. Prosumers: Are individuals who actively engage in accessing, viewing and creating content on the web. This is an important shift from the passive consumer who simply views content, as is the case with other media. Social Software: Can be understood as the growing number of applications which run on computers and portable devices, allowing users
to access and connect to individuals or networks of people. Examples (as discussed in the chapter) include Friendster, Facebook and MySpace, which can be connected to other services such as YouTube. Social Technology: Can be understood as the physical hardware (cell phones, iPods, high-speed data servers) on which networking software is supported. In many cases, these are small and highly portable devices which are themselves connected to powerful wireless networks. Web 2.0: Can be used to describe the numerous interactive applications which have been developed to provide individuals with greater access and control in web-based environments towards greater collaboration and information/resource sharing; these include social networking sites like Facebook, as well as information resources like Wikipedia. Youth Agency: Can be understood as youth actively engaging with systems, institutions, and technologies towards finding space for expression and resilience. Youth are often constructed as passive in debates about the use and abuse of technology, despite their demonstrated desire to participate in its development and evolution.
Chapter XXI
Ethical Challenges of Engaging Chinese in End-of-Life Talk Samantha Mei-che Pang Hong Kong Polytechnic University, Hong Kong Helen Yue-lai Chan Hong Kong Polytechnic University, Hong Kong
ABSTRACT In Hong Kong, end-of-life practice ideally adheres to values that include respect for the patient’s self-determination and an understanding shared by and consented to by the patient, the family and the healthcare team. However, consensus about end-of-life care is seldom reached within this trio before the patient becomes critically ill or mentally incompetent. This chapter examines the customary belief, protectiveness in medical care, which hinders Chinese patients and families in their discussion of life-sustaining treatment; challenges the practice in question; and discusses the possibility of engaging frail nursing home residents in dialogue by using the “Let Me Talk” advance care planning program.
MAKING TREATMENT DECISIONS AT THE END OF LIFE Medical technology has a powerful allure, yet the clinical evidence concerning its relevance for end-of-life care is distorted. As imminent death approaches, the clinical attractions of advanced technology lose their charm. Examples include the once routine practice of resuscitation of patients regardless of disease severity and prognosis, the institution of mechanical ventilation for patients
with advanced and progressive chronic respiratory disease, and the provision of artificial feeding for patients in persistent vegetative state or with advanced dementia. Arguably these treatments have questionable utility because of predicted improbable outcomes, improbable success, and unacceptable benefit-burden ratios (Beauchamp & Childress, 2001). Post-resuscitation studies have shown that the survival-to-discharge rate of adult resuscitation was 14.6%; the rate decreased when resuscitation was associated with
sepsis, malignancy, dementia, coronary artery disease and physical dependence (Ebell, Becker, Barry & Hagen, 1998). In addition, neurological and other sequelae are often found among survivors after prolonged resuscitation. Finucane and associates (1999) conducted a systematic review of tube feeding in the care of patients with advanced dementia and found little evidentiary basis for the practice. Instead of preventing suffering, instituting tube feeding can cause more suffering, such as problems with diarrhoea, clogging of tubes, and the tendency of patients with dementia to pull out tubes, with consequent physical restraint. The complexity of making life-sustaining treatment (LST) decisions can be characterized by competing views regarding prognostication and clinical outcomes in the context of the varying personal, professional, legal and ethical responsibilities of decision makers. Over the past few decades, several broad areas of legal consensus surrounding the withholding and withdrawal of LST at the end of life have emerged in European and North American countries, as well as Hong Kong. First, LST refers to all treatments which have the potential to postpone the patient’s death, and includes, for example, cardiopulmonary resuscitation, artificial ventilation, specialized treatment for particular conditions such as chemotherapy and dialysis, antibiotics when given for a potentially life-threatening infection, and artificial nutrition and hydration. Second, there is no ethically important distinction between withholding and withdrawing LST. Third, the patient’s preference should be taken into account when such decisions are made. Fourth, a surrogate can make end-of-life decisions for mentally incapacitated patients. Such decisions are based on the surrogate’s perceptions of the patient’s previous wishes or the patient’s best interests (Hospital Authority Hong Kong, 2002). Overarching the broad areas of consensus is the patient’s right to decide. Often such decisions are made when the patient becomes imminently terminal, but are implemented, regardless of
mental capacity, at a later moment of life. The verbal or written advance directive of the patient, stating a preference for treatment limitation, should be respected (Law Reform Commission of Hong Kong, 2006). In situations where the patient’s preference is not known, a surrogate can make decisions based on the perceived best interests of the patient. The surrogate can be a family member of the patient or a legal guardian (Hospital Authority Hong Kong, 2002). To facilitate better decision making, professional guidelines for informing local practice have been formulated. It is noteworthy that different places vary in their interpretations. Examples are the legal rights of next of kin when consenting to or refusing treatment. In the U.S., the consent of the next of kin is required (Pang, Volicer & Leung, 2005c), while the next of kin’s consent is regarded as important but not determinative in Hong Kong, where physicians and legal guardians bear responsibility for the decision. In accordance with Hong Kong Hospital Authority guidelines on life-sustaining treatment for the terminally ill (2002), it is ethically and legally acceptable to withhold or withdraw LST when (a) a mentally competent and properly informed patient refuses LST, and/or (b) the treatment is futile. The decision-making process, except when the treatment is physiologically futile and thus not an option, should basically be a consensus-building process among the healthcare team, the patient and the family. However, it is doubtful to what extent these good practice guidelines, which uphold the patient’s right to self-determination and shared decision making between the patient and clinician with family involvement, are articulated in actual practice. Most patients become too ill to take part in the decision making, or doctors solicit family members’ support without involving patients in conversations about end-of-life treatment options. In both situations, the patients’ right to self-determination is undermined and their wishes are not well presented in the
decision-making process, except in rare cases where patients have had prior discussion with family members about their LST preferences. One might ethically justify such a practice by making reference to the common belief that Chinese people place higher value on family-determination than on self-determination (Pang, 1999). However, opinion survey findings on Hong Kong people’s attitudes towards advance directives reveal that the majority of Hong Kong people favor stating their own LST preference in an advance directive (Pang et al., 2006). In this regard, if Hong Kong society takes the value of autonomy in medical care seriously, the question of how to respect the patient’s right to self-determination in LST decision making invites further inquiry into the practice in question. In order to explore problems surrounding the discussion of end-of-life care with Chinese patients and their families, this chapter has three sections. The first section examines the customary beliefs of protectiveness in medical care that prevent both healthcare professionals and patients’ families from involving patients in making LST decisions. The second section challenges the validity of these beliefs by using local evidence drawn from patients themselves. The third section discusses how patients’ LST preferences can be communicated to family members and/or healthcare professionals through the “Let Me Talk” advance care planning (ACP) program.
PROTECTIVENESS IN MEDICAL CARE The Chinese doctrine of protectiveness in medical care comprises two major concerns: trying one’s best to protect life, and protecting the patient from harm. Both beliefs caution against forgoing LST and involving patients in treatment-limiting decision making. What follows is an exposition of this protective doctrine in the light of the Chinese medical ethics tradition. The information is
primarily drawn from the first author’s book, Nursing Ethics in Modern China (Pang, 2003). In line with the Hippocratic medical tradition, respect for life is highly valued in traditional Chinese medical ethics, as explained in Sun Simiao’s (AD 581?–682) treatise entitled On the Absolute Sincerity of Great Physicians. A physician is obliged to “commit himself firmly to the willingness to take the effort to save every living creature”. He should “protect life and have compassion for it” even at his own expense. To this end, Sun mandated that a great physician should possess “absolute sincerity”. Gan Zuwang (1995) suggested that “absolute sincerity” connoted two core virtues and five criteria for ethical practice. The two core virtues are jing in technical competence and cheng in relation to patients. The five criteria for ethical practice are (1) to demonstrate understanding of the patients under their care, and be compassionate with regard to the patients’ suffering, (2) to master the necessary skills for treating patients, so as to help them recover from illness, (3) to have a high sense of responsibility, (4) to treat all patients equally, and (5) not to emphasize one’s reputation and praise only one’s own virtue (Gan, 1995). To maintain jing, or refinement in practice, medical practitioners are required to devote themselves to medicine and be diligent in work and study. They need to constantly update their knowledge and refine their skills. The common English translation of the word cheng is sincerity (Legge, 1971; Unschuld, 1979). The meaning of cheng as embedded in the Chinese cultural context may be commensurate with the meaning of sincerity, but it is not identical. While sincerity means freedom from deceit or falseness, honesty, trustworthiness, and truthfulness, the character cheng is made up of two Chinese words, one meaning “speech, talk, or words”, and the other meaning “finish, complete, or accomplish”. The meaning of cheng can be illustrated by a Chinese saying, which reads, “What is said must be done,
and what is done must be brought to fruition”. This notion of cheng has a strong moral connotation—not only should people keep their word and demonstrate good intention in their deeds, but it is a fundamental principle which should guide all human actions. As the Doctrine of the Mean explains: Sincerity [cheng] is that whereby self-completion is effected, and its way is that by which man must direct himself. Sincerity is the end and beginning of things, without sincerity there would be nothing. On this account, the superior man regards the attainment of sincerity as the most excellent thing. The possessor of sincerity does not merely accomplish the self-completion of himself. With this quality he completes other men and things also. The completing himself shows his perfect virtue. The completing other men and things shows his knowledge. Both these are virtues belonging to nature, and this is the way by which a union is effected of the external and internal. Therefore, whenever he—the entirely sincere man—employs them,—that is, these virtues,—their action will be right. (Legge, 1971, pp. 418-419) A person who possesses the virtue of cheng understands the nature of things, self, and other people. This enables the person to see into others, to foresee the good and the evil in things, and to acquire the right ways in relating to others. In this regard, the healthcare worker who possesses “absolute sincerity” places patients deep in mind as if they were family members. This is represented in the popular Chinese saying, “a person who professes medicine should have a father’s and a mother’s heart”. This point is missed in Unschuld’s translation of Sun Simiao’s treatise. He translated the sentence as “he should always act as if he were thinking of himself” (Unschuld, 1979, p.30). The correct translation is “he should always act as if he were thinking of his significant others” (Gan, 1995, p. 313).
This moral imperative of respect for life and treating patients with parental love is further elaborated with a sense of urgency, as if one were rescuing someone from a deadly crisis. Lu Chih’s (754–805 AD) treatise on medical ethics maintains that: When someone suffers from a disease and seeks a cure, this is no less crucial than if someone facing death by fire or by drowning calls for help. Physicians are advised to practice humaneness and compassion. Without dwelling on [externals such as] tresses and a cap that fits, they have to hasten to the relief of him [who asks for it]. This is the proper thing to do. Otherwise accidents such as burning or drowning take place. (Unschuld, 1979, p.35). It is noteworthy that “Rescue the dying and heal the wounded” and “Treat the patient as your family member” are upheld as ethical mandates for healthcare professionals in China today (Pang, 2003). Medical practices such as forgoing LST that go against the mandate of life preservation are thus denounced as misconduct. The second concern of protectiveness holds that the psychological link between the patient and his or her disease is so significant that patients should be protected from all unpleasant stimuli that might retard the healing process. The duty of healthcare professionals is to mobilize all the strengths of the patient to fight the disease—including helping the patient develop a positive outlook (Qiu, 1982). Maintaining patients’ confidence in therapeutic regimens and hope in recovery is one of the most important ethical obligations for healthcare professionals. They would find themselves at risk of committing an insincere act if a patient were to lose hope and confidence after making an informed decision on LST limitation. As far as nursing is concerned, protective care has been practiced since its early development in China. In Yao Changxu’s list of role requirements for nurses (1939), she put much emphasis on the
nurse’s duty to protect the patient from harmful effects by all means, including protecting the patient from unpleasant messages, and using kind words to cultivate hope of recovery. Under the doctrine of protectiveness, where healthcare professionals are expected to protect life at all costs and to protect the patient from loss of hope in recovery, it is not surprising that healthcare professionals prefer to solicit patients’ families’ views and vice versa rather than talking directly to the patient should LST-limiting decisions need to be made in the reality of the clinical situation. Furthermore, in the Chinese ethical tradition, healthcare professionals have a moral obligation to treat patients with cheng. Cheng is regarded as both a cardinal virtue to be cultivated and the guiding principle among healthcare professionals. This notion of cheng in the customary medical practice carries a strong sense of parental protectiveness. Healthcare professionals are expected to act as if they were patients’ family members, protecting patients from harm, and bearing the patients’ burdens of illness, to the extent of making medical decisions on behalf of the patient without the patient’s knowledge, as illustrated by Huai Yuan’s writing in his A Thorough Understanding of Medicine in Ancient and Modern Times: Medicine is applied humaneness. To see other people suffer rouses compassion and pity within myself. When the ailing themselves cannot make any decisions, I will make them in their place. I always put myself in their place. (Unschuld, 1979, p.102). We contend that this customary practice of parental protectiveness to the exclusion of the patient in making his/her own treatment decisions actually deviates from the authentic meaning of cheng as expounded in the Doctrine of the Mean. As the fundamental principle in relating to others, a person who acts with cheng has to understand self, things and others. How can a person assume
understanding of what is best for the person without directly communicating with the person concerned regarding his or her thinking?
PROTECTIVE MEDICAL CARE IN QUESTION Many would agree that the primary concerns in end-of-life care shift from life preservation to symptom control, alleviation of suffering and optimization of a caring environment for preparing patients for a dignified and peaceful closure of life. Medical technology that sustains life, or, negatively framed, prolongs dying, would not match the goals of care. However, this shift in the goal of care does not happen spontaneously. The findings of two prospective case studies on the decision-making patterns of forgoing tube feeding for patients with advanced dementia, and their emergent ethical dilemmas, typified in Boston and Hong Kong, reveal that only when a paradigm shift of the values underpinning medical practice has occurred will forgoing tube feeding in the context of palliative care be considered ethically acceptable. This shift in the goal of care can only happen by actively involving the patients and their families in advance care planning (Pang, Volicer & Leung, 2005c). However, as expounded earlier, with protectiveness as the predominant value underpinning medical practice, open conversation within the patient-family-healthcare professional trio is unlikely to occur. Avoiding LST discussion with patients does not necessarily mean that they are unaware of their condition. Studies suggest that many patients have a tacit understanding of their prognosis and are able to come to terms with their impending death (Pang, Leung, Pang & Shi, 2005b). On the other hand, the avoidance of open communication on the part of family members might lead to unwarranted assumptions about what would be in the patient’s best interests, with tragic endings. Failing openly to acknowledge patients’ concerns and needs might
lead to wrongful acts, which could bring harm to patients as well as their families, as illustrated in the following high-profile case. In 2004, two adult daughters were prosecuted in Hong Kong for murdering their father because they removed the breathing tube from their dying father without prior discussion with the attending physician in a private hospital. The 84-year-old man was unconscious as a result of recurrent stroke and required mechanical ventilation. After several court hearings, the Department of Justice dropped the charges against the accused because there was insufficient evidence to show that their father’s death was related to the withdrawal of life support. The case sparked wide public debate on the moral cause of the two adult daughters and whether their behavior strayed from the filial devotion due to their parent. Some held the view that the two daughters’ act was justifiable because it alleviated the dying father’s suffering, and some held that such an act should never be performed on one’s parent by a filial child. This case could have been better settled for the good of the family if there had been prior LST discussion between the father, his children and the attending physician, with the LST-limiting decision made in accordance with the professional guidelines (Hospital Authority Hong Kong, 2002). Three questions remain: is it possible to engage patients in LST discussions, when is it best to start this sensitive and value-laden decision-making process, and how can an environment be provided that fosters a decision-making process that best represents the patient’s values, wishes and preferences in end-of-life care?
IS IT POSSIBLE TO ENGAGE PATIENTS IN INFORMED LST DECISIONS? To answer whether a conspiracy of silence is in the best interests of the patient and whether engaging patients in LST decision making would
be beneficial, our research team conducted an ethnographic study which examined the treatment-limiting decision-making patterns of 49 hospitalized patients with advanced chronic obstructive pulmonary disease (COPD) in the unpredictable but progressive dying trajectory through a 1-year documentary review. Nineteen of them were followed up in subsequent longitudinal case studies to understand their experiences, concerns and values pertinent to LST decision making and thereafter (Pang et al., 2004). Having lived with the disease for years and experienced recurrent exacerbations, the patients could not deny its debilitating and possibly life-threatening nature. Many of them had witnessed the death and dying of other patients, noticed the worsening of their illness and had near-death experiences themselves, so they were aware that their life would come to an end in the near future. Death was less a taboo than a reality to them. Hence, some of them took a more proactive stance in making explicit their LST preferences to their family and healthcare team. We also noted that although some patients did not initiate the treatment-limiting discussion themselves, they had already formed their views on LST before their physicians discussed the matter with them. LST discussions with them were conducted in an open, spontaneous, relaxed and positive atmosphere. This echoes an earlier empirical study in which nurses shared their care experiences (Pang, 1999). It was reported that disclosing the terminal diagnosis to patients helped them to comply with their therapeutic regimen, alleviated their psychological burden, and prepared them for the impending death. Our ethnographic study also revealed that the deep-seated deviation in the understanding of protectiveness had prevented patients from receiving adequate pain medication and acquiring knowledge about the alternatives to LST, and resulted in neglect of their major concerns in the last phase of life. The insufficient LST discussion largely explained the misunderstandings
and difficulties involved in delivering the best possible end-of-life care in accordance with the patient’s wishes. In this regard, we developed an instrument, the Quality-of-life Concerns in the End of Life Questionnaire (QOLC-E), to assist healthcare professionals in identifying patients’ needs and concerns (Pang et al., 2005a). Older people are often viewed as another vulnerable group that cannot bear the emotions entailed in LST discussion. In contrast, Hui and her associates (1997) suggested that most older people wanted to be involved in LST decision making and to decide for themselves. The discrepancy between the traditional view and the survey result sparked our interest in expanding the focus to the older population. We wanted to examine the possibility of initiating LST discussion with older people, and also find out if the discussion could take place at an earlier time, before they become critically ill. Grounded in these inquiries, we conducted a survey study with nearly 300 elderly residents from ten nursing homes. Although some of them burst into tears when sharing their experiences and concerns during the interviews, they appreciated it as an opportunity to share their thoughts and vent their feelings. Overall, no undesirable emotional reaction was reported. A considerable proportion of them were able to articulate their LST preferences and wanted to be involved in the medical decision-making process (Chan & Pang, 2007). Contrary to the customary belief that life should be saved at all costs, our study showed that most of the respondents preferred that the goal of care be shifted to maintaining comfort rather than extending life through LST if they were critically ill with a grim prognosis. Death was accepted by them as inevitable and not far away, and having an unburdened death seemed preferable to them. Chinese society has long been regarded as a familial society in which family sovereignty is extended to the area of medical decision making,
but the family was the least preferred party in LST decision making among our sample (Chan & Pang, 2007). One possible reason is that the older people supposed that their family might request LST to prolong their life unnecessarily. Alternatively, the connection between the older people and their families might have weakened since they moved into the nursing homes. A considerable number of them regarded themselves as playing the most crucial role in the LST decision-making process even though their LST preferences were seldom elicited in current practice. Most respondents wanted the physician to be involved in the medical decision making, due to physicians’ clinical knowledge and expertise. Physicians’ decisions were also considered to be the most important by nearly half of the respondents. This demonstrated their trust in the healthcare team. We can conclude that open discussion of LST and death and dying did not bring about emotional distress in the patients and older people as was traditionally perceived; indeed they wanted to have a voice in their LST decision-making process. The question raised is how to enhance their participation in the medical decision-making process, taking into account the complexity of their relationship with the family and healthcare team in the context of the protective medical care culture. We therefore suggest that ACP would be instrumental in enabling patients to articulate their LST preferences and the underlying values, and in encouraging communication among the patient, family and healthcare professional at an earlier time. The following section is based on a feasibility study of an ACP program entitled “Let Me Talk” among 50 nursing home residents aged 65 or above, partially dependent in the instrumental activities of daily living and with at least one moderately to severely impairing health problem. The program was conducted between June 2006 and January 2007.
LET ME TALK: AN ACP PROGRAM FOR CHINESE FRAIL NURSING HOME RESIDENTS The “Let Me Talk” ACP program aims to elicit the LST preferences of frail nursing home residents and their underlying personal values. Its design was grounded in the literature review and the experience gained in the survey study mentioned previously (Chan & Pang, 2007). The implicit mode of communication prevailing in Chinese culture creates the need for an intermediary to clarify older people’s attitudes toward death and dying and to bridge the communication gap between them and their families. In view of the difficulties ensuing from open discussion of a potentially taboo topic, a nurse equipped with medical information and communication skills is in an ideal position to provide support to older people and their families. Nurses can act as a vehicle for breaking from traditional customary beliefs by initiating and mediating the LST discussion. The relationship between the nurse and the participant was the cornerstone of in-depth communication on these subjects. A good nurse-patient relationship is half of the success. In line with the theme of understanding the patient, the program did not move straight to LST discussion, but first let the nurse gain a brief understanding of the older person’s background and illness experiences. The whole program had four themes: Life Stories, Illness Narratives, Life Views and End-of-life Care Preferences. The themes were introduced on an individual basis over several encounters. A storytelling approach was used to thread their thoughts around the themes. In the Life Stories part, participants were invited to share their most memorable life experiences. The inclusion of a reminiscence activity in the ACP was intended to enhance the nurses’ understanding of the older persons and facilitate their exploration of the values underlying their LST preferences. One assumption was that
personal values can be reflected in particular events or persons that are held to be important. Looking back on one’s own life is one of the common themes among older people when they discuss end-of-life issues. Stories recounting the impact of medical disorders or disability on them were categorized under the Illness Narratives theme. The objective of this theme was to explore their understanding of the illness, how they had lived with the illness and what they had learnt from these experiences. Healthcare expectations, embedded in these stories, also helped to explain their personal beliefs. These would be a useful guide for the family and caregivers in seeking to understand the underlying reasons for certain behaviors, such as the reasons for non-compliance with medication or the use of complementary therapies. Drawing on their life and illness experiences, the nurse elicited their views of life. Evaluation of one’s own life, attitude towards death and dying, influence of religious beliefs and final wishes were the foci of the conversations. After several encounters, the nurse was able to explore the participants’ concerns and worries and assess their readiness to talk about death-related issues. Participants who were ready to discuss their end-of-life care were invited to talk about their funeral preparation, LST preferences and identification of proxies. During the conversation, the nurse observed the older person’s interest in and understanding of the topic, and shaped the focus of the discussion accordingly. For instance, after eliciting the general views toward LST and reasons for their preferences, if the participant did not find the topic uncomfortable, the nurse then introduced a more specific scenario for detailed discussion. The nurse would spend more time discussing mechanical ventilation if the participant was already suffering from respiratory problems. To conclude each conversation, the nurse edited a personal “Let Me Talk” booklet summarizing the participant’s life story and documenting their
healthcare concerns and their LST preferences. The participants could add any information they wished to include, or remove information if they felt uncomfortable showing it to others; the participant was considered to be the author, with the final say in content and design. The booklet was read out to the participant for verification if they were illiterate, to ensure their involvement. As individuals may be afraid to initiate LST discussion with their loved ones, the booklet served as a means of communication between the participant and their family or caregivers. In due course, invitation letters were issued to their families asking them to join a family conference. Among the 50 participants, 31 had made their LST preferences explicit in the booklet. Five wished to be given a trial of LST, while 26 did not wish to receive any kind of treatment to sustain their lives in the event of critical illness. Those who wished to receive LST mainly believed that one should try when there is a chance, but most of those who did not wish to receive LST believed they were old enough and did not want their lives to be extended through artificial means. Three participants who were able to articulate their LST preferences invited their family members or close friends to join the family conference to discuss their preferences. Most of them found the discussion fruitful in terms of facilitating mutual understanding. Another six participants said they were interested in letting their family read the booklets but failed to invite them to the family conference. The quality of communication between these participants and their families could not therefore be monitored. Seven participants were indecisive about their preferences and wanted to rely on either their family or their physician in medical decisions. Eleven participants had given no prior thought to their LST preferences should they become critically ill. Only one participant was uncomfortable with talking about death and dying, and declined the discussion.
DISCUSSION This study has demonstrated that engaging frail nursing home residents in end-of-life care discussion is feasible. The essence of the process lies in the trusting relationship between the nurse and the residents. In developing this close relationship, it is important for the nurse to act as an active listener. The nursing home residents seldom had the chance to vent their innermost feelings and concerns because of the customary notion of protectiveness. In the Let Me Talk program, by contrast, they were encouraged to voice their particular concerns and thoughts. The nurse got to know the participants through listening to their life stories and sharing. We started the ACP program by introducing the topic and giving information, followed by eliciting the LST preference, tailoring the modality of planning and recording the wishes. During the process, the nurse assessed their readiness to be involved in the LST decision making. Nursing home residents with deficient medical knowledge were often at a disadvantage in ascertaining their treatment preferences. The nurse provided them with information in simple language, and clarified any misunderstanding if necessary, to assist them in making choices. If the nurse noticed that the participant was not able to comprehend sophisticated advanced medical technology, she then shifted the emphasis to the generic goal of end-of-life care rather than specific treatment choices. The merit of this type of facilitated discussion is that the process is flexible enough to accommodate individual deliberation and needs. Participants who were more certain about their care preferences were encouraged to make their wishes explicit to their family members. In the care conference, the nurse facilitated the communication so that the participants and family members were able to share their thoughts openly.
However, not all people are able to envisage the end-of-life situation and some may not be prepared for the discussion; they may therefore prefer the care decision to be left to the healthcare team or family. Delegation of the decision right is an individual choice which should also be honored. The nurse’s role in the discussion here is to assist the participants in identifying and preparing a surrogate to face the anticipated situation. They can talk about their values within a broader context, which helps to shape their preferred goal of end-of-life care rather than specific LST preferences. It is believed that prior discussion can relieve the family’s stress and increase their confidence in exercising substituted decision making. The underpinning concept of advance care planning is to assist nursing home residents to exercise their right to self-determination in healthcare decisions, as proposed by Gadow (1980) in the theory of existential advocacy. Yet the idea may not necessarily be accepted universally. Thus, the nurse paid attention to her way of phrasing and presenting the information in the planning process to avoid influencing the participants with her own thoughts, attitudes and prejudices. The participants’ rights were retained: some participants withdrew from the discussion when they found it uncomfortable, rather than being forced to continue, and others could still opt for the use of LST if that was their preference in the planning process.
FUTURE TRENDS Given the difficulty of predicting the trajectory of chronic illness as well as defining medical futility in the era of technological advances, discussion of LST preferences with the patient is of paramount importance. Since previous studies have revealed that most local patients and older people have forethoughts about their future care and wish to be informed and to participate in their medical
decisions, a “Let Me Talk” advance care planning program was designed. It was found to be well received by most of the residents in the nursing homes. The nurse played a crucial role in initiating the discussion, providing medical information, clarifying any misunderstanding, eliciting residents’ care preferences and the underlying personal values on a voluntary basis, and setting up a platform between them and their family or care providers for further dialogue. Perhaps the most challenging issue in the area of end-of-life care discussion is how to involve the family members. Many participants articulated their LST preferences but failed to invite their family members to the conference. Most of them explained that their family members were upset when they tried to express their preferences to them. The participants were often told by their family members to live in the present and that it is meaningless to think too much about the “future” as one cannot control it. We believe that both the notion of protectiveness and the right of self-determination stem from patient-centered beneficence, although they are expressed as two extremes in terms of information disclosure and decision-making authority. The positive experiences reported in this chapter shed light on how to engage older people in end-of-life care talk; however, it is naive to think that the myth of protectiveness can be banished right away. The culture of open discussion can only be fostered through an orchestrated effort of the healthcare providers in the nursing homes as well as public education and social policy. Nevertheless, it is not within the scope of this chapter to provide an extended discussion of how the Chinese in both mainland China and the diaspora, with their various cultural contexts, approach end-of-life issues. Further research would be needed to identify the pattern of end-of-life care discussion in other Chinese communities and how it has been influenced by ancient Chinese values.
REFERENCES Beauchamp, T.L. & Childress, J.F. (2001). Principles of biomedical ethics (5th ed.). New York: Oxford University Press. Chan, H.Y.L. & Pang, S.M.C. (2007). Quality of life concerns and end-of-life care preferences of aged persons in long-term care facilities. Journal of Clinical Nursing, 16, 2158-2166. Ebell, M.H., Becker, L.A., Barry, H.C. & Hagen, M. (1998). Survival after in-hospital cardiopulmonary resuscitation: A meta-analysis. Journal of General Internal Medicine, 13(12), 805-816. Finucane, T.E., Christmas, C. & Travis, K. (1999). Tube feeding in patients with advanced dementia: A review of the evidence. Journal of the American Medical Association, 282, 1365-1370. Gadow, S. A. (1980). Existential advocacy: Philosophical foundation of nursing. In S.F. Spicker & S. Gadow (Eds.), Images and ideals: Opening dialogue with the humanities (pp. 79-101). New York: Springer. Gan, Z. (1995). Critical biography of Sun Simiao [Sun Simiao ping zhuan]. Nanjing: Nanjing University Press. In Chinese. Hospital Authority Hong Kong. (2002). Guidelines on life-sustaining treatment in the terminally ill. Hong Kong: Hospital Authority Head Office. Hui, E., Ho, S.C., Tsang, J., Lee, S.H. & Woo, J. (1997). Attitudes toward life-sustaining treatment of older persons in Hong Kong. Journal of the American Geriatrics Society, 45(10), 1232-1236. Law Reform Commission of Hong Kong. (2006). Report on substitute decision-making and advance directives in relation to medical treatment. Hong Kong Government. Legge, J. (1971). Confucian analects, the great learning and the doctrine of the mean. New York: Dover.
Pang, S.M.C. (1999). Protective truthfulness: The Chinese way of safeguarding patients in informed treatment decisions. Journal of Medical Ethics, 25, 247-253. Pang, M.C.S. (2003). Nursing ethics in modern China: Conflicting values and competing role requirements. Amsterdam-New York: Rodopi. Pang, S.M.C., Tse, C.Y., Chan, K.S., Chung, B.P.M., Leung, A.K.A., Leung, E.M.F. & Ko, S.K.K. (2004). An empirical analysis of the decision-making of limiting life-sustaining treatment for patients with advanced chronic obstructive pulmonary disease in Hong Kong, China. Journal of Critical Care, 19(3), 135-144. Pang, S.M.C., Chan, K.S., Chung, B.P.M., Lau, K.S., Leung, E.M.F., Leung, A.W.K., Chan, H.Y.L. & Chan, M.F. (2005a). Assessing quality of life of patients with advanced chronic obstructive pulmonary disease in the end of life. Journal of Palliative Care, 21(3), 180-187. Pang, M.C., Leung, W.K., Pang, L.Z. & Shi, Y.X. (2005b). Prognostic awareness, will to live and health care expectation in patients with terminal cancer. Chinese Medical Ethics, 18(5), 28-31. Pang, M.C., Volicer, L. & Leung, W.K. (2005c). An empirical analysis of making life-sustaining treatment decisions for patients with advanced dementia in the United States. Chinese Journal of Geriatrics, 24(4), 300-304. Pang, M.C., Wong, K.S., Dai, L.K., Chan, K.L. & Chan, M.F. (2006). A comparative analysis of Hong Kong general public and professional nurses’ attitude towards advance directives and the use of life-sustaining treatment in end-of-life care. Chinese Medical Ethics, 3(107), 11-15. Qiu, R. (1982). Philosophy of medicine in China (1930-1980). Metamedicine, 3, 35-73. Unschuld, P.U. (1979). Medical ethics in Imperial China. Berkeley: University of California Press.
Yao, C. (1939). Methods of nursing the sick [Bing ren kan hu fa]. Shanghai: Shang wu yin shu guan. In Chinese.
KEY TERMS

Advance Care Planning (ACP): This refers to an ongoing communication process exploring the patient's end-of-life care wishes and discussing these wishes with surrogates and healthcare professionals. At the end, the patient can record his or her treatment preferences by means of advance directives, but the focus of the communication is to ensure that all three parties (patient, surrogates and healthcare professionals) understand the personal values underpinning the patient's care wishes and preferences.

Advance Directives: This refers to documentation of a person's will regarding his/her preferences for life-sustaining treatment at the end of life. According to the recommendation of the Law Reform Commission of Hong Kong (2006), this document would be signed by the person with a doctor and another person as witnesses. The document would ensure that if or when such a medical situation were to occur, the medical doctor would follow the person's advance directive in making medical decisions.

Customary Belief: This refers to mental acceptance of a proposition which has been established for a long time, but such a belief does not necessarily hold the truth of a fact and is not necessarily grounded in evidence.

End-of-Life: This refers to the period when an individual experiences disability or disease that progressively worsens until death. In other words, it is a transitional stage prior to death. The worsening of the condition may be due to aging, or to the advanced stage of a chronic illness or terminal disease. Death is not imminent, but it is approaching and not distant.

Frail: This refers to a decline in physiological reserves resulting in increased vulnerability to adverse outcomes, such as falls, injuries, dependence, disease, institutionalization and death.

Life-Sustaining Treatment (LST): This refers to all treatments which have the potential to postpone the patient's death, and includes, for example, cardiopulmonary resuscitation, artificial ventilation, blood products, pacemakers, vasopressors, specialized treatment for particular conditions such as chemotherapy and dialysis, antibiotics when given for a potentially life-threatening infection, and artificial nutrition and hydration.

Surrogate: This refers to a person assigned authority to make decisions in relation to medical treatment on behalf of the patient if he/she becomes mentally incompetent.
Chapter XXII
Community Education in New HIV Prevention Technologies Research Busi Nkala Chris Hani Baragwanath Hospital, South Africa
ABSTRACT

An estimated 39.5 million people are living with HIV worldwide. There were 4.3 million new infections in 2006, with 2.8 million (65%) of these occurring in sub-Saharan Africa, and with important increases in Eastern Europe and Central Asia, where there are some indications that infection rates have risen by more than 50% since 2004. In 2006, 2.9 million people died of AIDS-related illnesses (UNAIDS, 2006). The continued increase in new HIV infections is a cause for concern, and it is imperative that more innovative ways of combating infection are found soon. There is an enormous body of evidence that HIV is transmitted mainly through sexual contact. There is also undisputed evidence of other contributing factors, such as extreme poverty, survival sex, gender inequality, lack of education, fatalism and religious barriers. This chapter seeks to support the need for more research into new technologies and innovative ways of dealing with the spread of HIV, and suggests that researched communities be effectively involved. Involving communities in finding solutions will help, in that research protocols and health programmes will take into account the cultural acceptability of the new technologies and systems, and will ensure that recipients of health services become effective agents of change. The chapter seeks to highlight the fact that, if recipients are involved in all stages of the development of health programmes, including technologies, we may begin to see changes in how new technologies are taken up, or a shift toward technologies that are acceptable. There are various suggested and implemented ways of achieving protection for individuals and communities, such as community involvement, community participation and community education (Collins, 2002; Gupta, 2002); this chapter focuses on community education and a proposal for a community principle.
INTRODUCTION

The world is currently faced with challenges brought about by the increase in diseases and the complexity involved in disease management. There are differences in the nature of diseases and the associated challenges between the developing and the developed world. This chapter focuses on the HIV epidemic in a developing country. HIV is a very complex disease which must be dealt with in a correspondingly complex manner; among these complexities are the different strains in different regions and the disease's socio-economic linkages. Counseling, testing, and use of the male and female condom remain the only currently available technologies proving to be good options for HIV prevention. However, there is continuing evidence of ongoing transmission of HIV despite active promotion and distribution of condoms. While the reasons why effective technologies have not clearly interrupted transmission in public health settings are complicated, it is the argument of this chapter that when these technologies were introduced into the health service, the communities were not well prepared for the lifestyle change which comes with condom use and behavioural adjustment. This makes the continued search for a safe and effective HIV-preventive technology seem reasonable. Thus it is imperative that the scientific community pursue new ways of preventing, identifying and treating HIV. This effort requires a great expansion in what is known and what can still be learnt in order to deal with the situation. However, there is a need to streamline how these new technologies are brought into the health care service. The expansion of knowledge and the introduction of new technologies start at the research stage, and the expansion of knowledge cannot be divorced from the advancement of technology. Some of the technologies currently under research are the diaphragm (cervical barrier), microbicides (topical vaginal barriers) and vaccines (systemic barriers). The ongoing HIV epidemic raises the
need for improved intervention strategies through improved knowledge. Research forms an integral part of the development of technologies that can be applied to health problems in order to improve health. Improved interventions require improved technology, which in turn may raise new ethical challenges. With the expansion of knowledge through the advancement of technology comes additional responsibility for recognizing and dealing with ethical issues (Goldner, 2000). The conduct of research is guided by codes and regulations (international, national and local) which were developed in response to previous ethical lapses. Such guidelines are revised from time to time to keep pace with social change and advancement in science. A reliance on guidelines alone is not sufficient to bring about ethically sound research. Community involvement in research is a critical aspect of ensuring that community concerns are taken into consideration. Biotechnology is relevant to the health needs not only of rich nations, but also those of the world's poor. This chapter focuses on techniques for preparing communities for the change that is brought about by the investigation or application of new technologies. Appropriate community engagement can facilitate the cultural acceptability of change and thus ensure that researched communities become effective advocates for change within their respective societies. The purpose of writing this chapter is to:

• Highlight the importance of biotechnology in improving HIV prevention in developing countries.
• Offer guidance and influence the development of community education in research.
BACKGROUND

Biotechnology has been seen as a way in which health can be improved throughout the world. Some scholars have strongly advocated for more
research into technologies that could help improve health in developing countries (Daar, 2002). Research is concerned with the advancement of science and thus with improving knowledge (Goldner, 2000), and technology is part of the knowledge we seek to improve. In the field of HIV, a growing number of promising new HIV prevention technologies are in different stages of clinical trials. There is also a growing need for human beings to be prepared for these technologies, which bring about changes and adjustments. Clinical trials are conducted on these new technologies with the hope, and the perceived potential, that they will reduce the burden of HIV/AIDS around the world (Global HIV Working Group, 2006). The following promising approaches to HIV prevention are currently in clinical trials:

• Microbicides: Several efficacy trials are currently underway. A number of newer-generation products are in early-stage trials.
• Cervical barriers: An efficacy trial of the diaphragm in South Africa and Zimbabwe has been conducted to assess whether the diaphragm can provide additional protection for women against HIV and other sexually transmitted infections, but it showed no benefit of adding the diaphragm to the provision of condoms (Padian, 2007).
• HIV vaccines: There are 30 vaccine candidates in clinical trials, including two in advanced efficacy or “proof-of-concept” trials.
• Male circumcision: This procedure involves removal of the foreskin of the penis (Lie et al., 2006). A highly publicized study conducted in South Africa reported its results in 2005, showing that circumcised men were 60% less likely than uncircumcised men to become infected with HIV (Cohen, 2005). Two additional trials have been reported and showed very similar levels of efficacy (Rennie, 2007).
• Herpes suppression and pre-exposure prophylaxis with antiretrovirals are among the other HIV prevention approaches currently in trials (Global HIV Working Group, 2006; UNAIDS, 2006).
These trials, and other work not included in this chapter, are very important in that, if they prove these technologies to be effective, there will be the potential for great improvements in health. The burden of disease, the importance of finding solutions to developing-world health issues, and easy access to populations in the developing world justify research being conducted in developing countries (Guidelines for GCP, 2000). In recent years there has been an intensification of research activities in the context of HIV/AIDS (Health Research Policy in South Africa, 2001; Ethical Consideration, 2000) and other infectious diseases. A concern is that large economic disparities exist in communities affected by HIV/AIDS in the developing world. Such disparities may make individuals vulnerable and therefore open to exploitation. According to UNAIDS, vulnerable populations are “communities having some or all of the following characteristics: limited economic development; inadequate protection of human rights and discrimination on the basis of health status; inadequate community/cultural experience with understanding of scientific research; limited availability of health care and treatment options; limited ability of individuals in the community to provide informed consent” (UNAIDS, 1999). Vulnerability to exploitation in medical research has raised the challenge of extending protection from individual human research participants to communities (Weijer, 1999). The very point of departure is the recognition that communities should be treated as equal partners in decision making on research that impacts their lives. Through partnerships, community capacity
building, education and skills development, real benefits to communities can be achieved. The process enhances respect for different values and customs while reaching agreement about what can be changed and what can be left out (Morioka, 1994). Communities can only be considered equal partners in research if they are empowered to understand research and to appreciate that the development of new technologies will bring about positive changes for the population. Historically there has been greater concern for balancing the demands of science against the rights of individuals in the conduct of research. Yet individuals choosing to participate in research make choices that impact the well-being of the larger community or society. Ethical guidelines typically invoke moral autonomy, which places an onus on the individual to go beyond self-interest and acknowledge the larger social aspect of moral autonomy (Khan, 1991). While protection is currently ensured by ethics approval and informed consent, this should not be done merely because it is required by codes. It is important for the predominant ethical theory in research (Beauchamp, 2001; Medical Research Guideline, 1993; CIOMS, 1993) to be widely known, interpreted and reviewed. Three principles are particularly relevant to biomedical research involving human participants: respect for persons, beneficence and justice. As originally articulated in the Belmont Report, lack of respect involves refusing to accept a person's judgments, denying individuals the freedom to act on their considered judgments, and withholding information necessary to make considered judgments when there are no compelling reasons to do so. Respect for the judgment of participants, based on the fact that they have been given information, can be further optimized when there is ongoing education. Closely linked to respect for the individual is the principle of beneficence. The Belmont Report specifies two rules: do no harm, and maximize possible benefits while minimizing possible harms. Given the need to avoid wrongs and harms, assuming that participants understand what is asked of them, without attempting to educate them, will wrong them; this may translate into social, psychological, economic or even physical harm. The principle of justice states that harms and benefits should be balanced and that the benefits of the research must outweigh the harms. CIOMS Guideline 8 states that research involving human subjects must be responsive to the country where the research is conducted, with special mention of underdeveloped communities. Some scholars have proposed an additional principle, “respect for communities”, and with the rise of stem cell research there is greater advocacy for looking into the protection of communities. Some guidelines highlight the need for community engagement and considerations (CIOMS, 1991); Australia and Canada have special clauses that consider the protection of Aboriginal communities (Tri-Council Statement, 1998; McNeil, 1993).
DISCUSSION

Technology development is done first and foremost to improve the health and lifestyle of the community. Whatever the nature of a development, it will impact how people live and die. Such development, while positive for preventive health care, comes with its own ethical challenges. By involving and educating the communities throughout these stages of development, the communities are better prepared to deal with these life changes and technological developments. Community education also needs to highlight the fact that development has to happen, giving examples of how the entire world around us is changing because of new technology. It is also necessary to caution that technological advancement can be used to the detriment of the very society it is trying to improve; however, it is beyond the scope of this chapter to discuss the negative procedures that can result from the use of technology. The need to
conduct education on research is greater than the challenges and limitations noted above. Recommendation 2.2 of the US National Bioethics Advisory Commission (NBAC), that host country representation should have a strong voice in determining whether a proposal is appropriate, is echoed in the emphasis from both the local and international community that research conducted in a country must be responsive to the health needs of those being engaged in research (US National Bioethics Commission, 2001). This can be realized when communities participate as equal partners in all stages of research and not merely in protocol development. This means that communities need to be empowered such that they can recognize health issues within their communities and proactively approach scientists, research institutions and relevant government departments to highlight problems and identify research questions that need to be answered. Communities need to be consulted prior to protocol development rather than consulted merely to legitimize the process. There are two broad bases for embarking on community education: practical and substantive. As per NBAC there are two types of ethical requirement. The first is procedural: to ensure that legitimate bodies assess whether standards of ethical conduct are complied with. If individuals and communities are empowered to make decisions about participation and are able to ask questions, they can give input into research priority setting and can participate as the suggested equal partners. The second is substantive, where the education consists in interpreting the research principles of respect for persons, beneficence and justice (Belmont Report, 1979), which provide the analytic framework for understanding many, though not all, of the ethical issues in research. These principles need to be further informed by the perceptions and experiences of the community. Education and the establishment of dialogue will help achieve the following: (1) raise awareness about research and development, (2) improve protection, (3) build trust between researchers,
public and policy makers, (4) promote open dialogue, (5) build capacity of research participants and community representatives, and (6) increase accountability to the general public of organizations (research institutions and hospitals) involved in research. Interaction between the public and the research community will help advance the ethical principles used in human participants research (CIRCARE, 2002).

• Awareness raising: It is widely documented that research participants, when approached by their doctor or health care provider asking them to participate in research, often cannot differentiate between health care provision and research. Education will serve to inform the public that research is continuously being conducted. The education needs to highlight the importance of research, namely that research is the pursuit of knowledge in the best interest of science and society, which may not always be compatible with the best interest of the patient, research participant or community (Levin, 1983). By sharing information with the public about the research projects that are taking place and research that may be conducted, individuals and communities are better informed to make decisions, be it about participating in research or being involved as representatives of the community or advocates. When potential research participants are aware of the distinction between participating in research and receiving health care (Goldner, 2000), they are empowered and do not just blindly comply with the request to participate in research.
• Protection: Any new technology, including drugs, is thoroughly researched in the laboratory, then in animals, and then in human beings. Throughout these stages issues of safety and effectiveness are established (Deciphering AIDS Vaccine, 2006). Prior to conducting research, several measures are taken to ensure that any possible benefits and risks of trial participation are identified; this includes review of protocols by research ethics committees (RECs) and/or institutional review boards (IRBs). The codes, guidelines and research review committees are in place to ensure that human participants are protected from exploitation and abuse. Researchers act in the name of science and humankind; they may not be objective judges of their own work's scientific merit and ethics (Snow, 1998). Communities that have received education will have representation that is better positioned to inform the design of the protocol such that individuals are protected from stigma and abuse. Individuals themselves can be better equipped to understand that there are risks and benefits involved in research, and can be in a better position to ask the researcher about these. Education will help inform about the processes followed to ensure that individuals and communities are protected. If problems are experienced, they can then be in touch with researchers, advocacy groups, community representatives and even directly with ethics review boards.
• Building trust: The scientific and clinical research communities have a lot of bad press to overcome (Vazquez, 1992). There is a need to share information about past research experiences, both bad and good. Education may assist in clarifying misconceptions and explaining the things that may not have gone the way that was expected. In interactive educative sessions the participants may also voice what they expect. The education messages, just like the guidelines, are continuously being updated; this will help update what the public knows and help the research community gain insight into the public's perspective. If there is a proactive way of sharing information with the public, the public will better understand that research is in most cases done in good faith; this will help build trust between the research community and the researched communities.
• Openness: Closely linked to trust is openness. Openness about the processes and procedures in research, to ensure that the research is conducted ethically, allows those invited to participate to engage fully. The community representatives and advocates can be better equipped to think about issues and to assess what input they can make, how it will influence decisions and how it will impact on the actual conduct of research. On the other hand, individuals will have a much better understanding of how decisions are arrived at, what their contribution means to research and how best they can use themselves to answer the research question. Transparency, which is brought about by openness, legitimizes the research and the research process (Simelela, 2002).
• Capacity building: Research capacity development has been highly selective, serving the interest of a limited part of the population. Research participants and their representatives were never in the picture until recently, with HIV/AIDS research. With the demand by advocacy groups to be involved in protocol planning, design and implementation (Collins, 2002), it became more than necessary that these groups were educated about research, with the education taking into consideration ethical issues as understood by community members and as documented in the ethics guidelines. The more people who receive the education, the larger the pool from which informed community representatives and advocates can be drawn. Education about research and ethics will promote equity between the marginalized disciplines and groups. The participants can also be empowered to decide whom they want as their representatives or advocates, and there will be an increase of informed participants who can volunteer to take up this position, especially where it is difficult to identify community representatives. Well-informed potential participants and communities will be involved in all stages of decision making as true partners and not as mere tokens or an afterthought (Tan-Ud, 2002). Their involvement will not depend solely on information from researchers, as is currently the problem with lay members of ethics committees, who rely on the opinion of study investigators regarding when and on what grounds study procedures should be changed or consent amended (Dickens, 2001).
• Accountability: When the public is aware of the processes involved in the conduct of research, they understand the issues of feedback and publication better. Their expectations of how results of the study are going to be handled will be informed by the understanding gained from research education. Also, an informed public will be aware when procedures in the protocol which they were part of designing are not followed, for example misleading recruitment messages. They will recognize this and rightfully demand an explanation. The researchers will be accountable to the general public either directly or through their representatives, the representatives will be accountable to those they represent, and the ethics committee will also be informed of misconduct by any of the parties. Accountability includes giving feedback to the communities about the research which they approved, accommodated and supported by participating (Dickens, 1991).
Education can achieve the above if it is delivered to the intended recipients and the recipients are empowered to enter into dialogue at all stages: from recognition of health issues and identification of research questions, through working toward answering those questions, to influencing policy to ensure that products found to be safe and effective are placed in the health care delivery system. The main aspect is educating participants so that, prior to any interaction with a researcher, they are prepared for the fact that research is different from health care provision. This will maximize both protection (against coercion and undue influence) and voluntariness (entering into research voluntarily, with adequate information). This is also in line with CIOMS Guideline 3, which states that the investigator has an obligation with regard to informed consent and that participants must be given an opportunity to ask questions. It is only possible to ask questions when both the potential participants and the researchers have an understanding of the issue at hand. It is currently not well defined how much participants understand about engaging in a non-health-care form of interaction when they volunteer to participate. Avoiding harm requires learning what is harmful, and it has been argued that this cannot be left in the hands of researchers and research ethics committees alone. An educated community may give insight into how a proposed research project may wrong or harm individuals or the community, and educated individuals may give their perspective on how they see the research impacting their lives. Educated communities are able to assess, together with researchers, the reasons why some communities are selected over others for the conduct of research. Failing to educate participants denies them the opportunity to take part in weighing their own and societal benefits and harms. In an African context, respect for communities will involve a practice of understanding founded in humanity, commonly described by the concept of “ubuntu”. Ubuntu emphasizes that only by understanding the actions and feelings of other people in the social group can all human
behaviour, fortune and misfortune be understood. This principle will help researchers understand the communities where they intend to conduct research: as strongly as individual autonomy is held as a principle, researchers also need to understand individuals in light of their actions and feelings towards others in their social groups. Given that health concerns in this context are socio-economic and cultural in nature, adoption of this principle can enhance the relationship between researchers and communities. The principle will bring about this realization and reinforce communal involvement. It is the principle that inspires individuals to open themselves to others and to learn of others as they learn of themselves (Ubuntu). Disease is one of the social factors which contribute to the struggle for change. Adopting this principle in the ethics of research and health service provision will emphasize that researchers need to learn from the researched or targeted communities while also helping these communities learn from them. Dialogue between community representatives, the general public and the research community can best be achieved when communities have a clearer understanding of the issues involved. The current situation is such that power and knowledge are concentrated in the hands of a privileged few. Dialogue should not be limited to the formation of community advisory boards (CABs) in AIDS research (such as HIV vaccine research) but should be adopted for all research projects and service provision programmes. CAB members, just like research ethics committee members, need to receive education on the ethics of research, and their education should not be limited to the protocol at hand. There is no documented research ethics education currently conducted for the general public; there is a need to start on a small scale. The Medical Research Council's South African AIDS Vaccine Initiative (SAAVI) is conducting education in the communities where vaccine research is being conducted. The teaching aims to educate communities about the process involved in vaccine research and development, and specifically about
clinical trials and clinical research, to enable them to make informed decisions at the community and individual level (SAAVI, 2007). The community education proposed here looks into research in general, not only vaccines. Targeting community structures, organizations, media and community representatives may form a base for wider community involvement.
CONCLUSION

Public education in the ethics of research and service provision has a role to play in contributing to understanding and improving the relationships between the research community and researched communities. While research has a role in responding to the advancement of science, it should also promote the human rights, and specifically the personal rights, of research participants. The need to share and exchange knowledge and skills with vulnerable populations can never be overemphasized. The three basic principles generally accepted in our cultural tradition, together with the additional principle of “ubuntu”, are particularly relevant to medical ethics involving human subjects, and can best be realized where individuals understand them and can interpret them for their own situation. Persons are to be treated in an ethical manner, not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being. If communities are involved throughout the research and development of products, hindrances to their use can be identified in time and dealt with. If communities are involved in the development of a product, they take ownership, and no one wants to waste what he or she believes is his or her own.
REFERENCES

Beauchamp, T.L. & Childress, J.F. (2001). Principles of biomedical ethics (5th ed.). New York: Oxford University Press.
Citizens for Responsible Care and Research (CIRCARE). (2002). A human rights organization. April 17.
Cohen, J. (2005). AIDS research: Male circumcision thwarts HIV infection. Science, 860.
Collins, C. (2002). AVAC announces appointment of new director of education and outreach: Ford Foundation makes two-year award to AIDS Vaccine Advocacy Coalition for community education and mobilization. June 25, New York. Retrieved from: avac.org/press releases.htm
Collins, C. (2002). Thai-WRAIR phase III HIV vaccine trial. AIDS Vaccine Advocacy Coalition. July 8. http://www.avac.org/index.htm
Daar, A.S., Thorsteinsdottir, H., Martin, D., Smith, A.C., Nast, S., & Singer, P. (2002). Top ten biotechnologies for improving health in developing countries. Nature Genetics, 32.
Deciphering AIDS Vaccine. (2006). An anthology of VAx and IAVI Report articles explaining key concepts in AIDS vaccine research and clinical trials. July 2006.
Dickens, B.M. (1991). Issues in preparing ethical guidelines for epidemiological studies. Law, Medicine and Health Care, 19(3-4), 175-183.
Dickens, B.M. (2001). The challenge of equivalent protection. Commissioned Paper.
Ethical conduct for research involving humans: Tri-Council Statement. (1998).
Ethical considerations for HIV/AIDS clinical and epidemiological research. (2000). Department of Health, HIV/AIDS/STD Directorate, Republic of South Africa.
Goldner, J.A. (2000). Dealing with conflict of interest in biomedical research: IRB oversight as the next best solution to the abolitionist approach. Journal of Law, Medicine and Ethics, 28, 379-404.
Guidelines for good practice in the conduct of clinical trials in human participants in South Africa. (2000). Clinical trials guidelines. Republic of South Africa: Department of Health.
Gupta, G.R. (2002). Assuring access to AIDS vaccine by talking to the experts. International Center for Research on Women. International AIDS Vaccine Initiative. A satellite meeting prior to the XIV International AIDS Conference, 6 July 2002, Barcelona.
Health research policy in South Africa. (2001). Republic of South Africa: Department of Health.
International ethical guidelines for biomedical research involving human subjects. (1993). CIOMS.
International guidelines for ethical review of epidemiological studies. (1991). CIOMS, Geneva.
Khan, K.S. (1991). Epidemiology and ethics: The people's perspective. Law, Medicine and Science, 19(3-4), 202-206.
Levin, R. (1983). Informed consent in research and clinical practice: Similarities and differences. Archives of Internal Medicine, 143, 1229-1231.
Lie, R.K., Emmanuel, E.J., & Grady, C. (2006). Circumcision and HIV prevention research: An ethical analysis. Lancet, 368, 522-525.
McNeil, P.M. (1993). The ethics and politics of human experimentation. School of Medicine, University of New South Wales. Cambridge University Press.
Medical research guidelines on ethics for medical research. (1993). http://www.mrc.ac.za/ethics/epidemiolgical.htm
Morioka, M. (1994). Toward international and cross-cultural bioethics. Proceedings of the Third International Bioethics Seminar in Fukui, Eubios Ethics Institute, 293-295.
New approaches to HIV prevention: Accelerating research and ensuring future access. (2006). Global HIV Prevention Working Group, August 2006. http://www.paho.org
Padian, N.S., van der Straten, A., Ramjee, G., Chipato, T., de Bruyn, G., Blanchard, K., Shiboski, S., Montgomery, E.T., Fancher, H., Cheng, H., Rosenblum, M., van der Laan, M., Jewell, N., & McIntyre, J. (2007). Diaphragm and lubricant gel for prevention of HIV acquisition in southern African women: A randomized controlled trial. Lancet, doi:10.1016/S0140-6736(07)60950-7. www.thelancet.com
Rennie, S., Muula, A.S., & Westreich, D. (2007). Male circumcision and HIV prevention: Ethical, medical and public health tradeoffs in low-income countries. Journal of Medical Ethics, 33, 357-361.
Simelela, N. (2002). The South African government on HIV/AIDS. Science in Africa.
Smith, W. (1992). A process: Framework for teaching bioethics. Woodrow Wilson National Fellowship Foundation.
Snow, B. (1998). Consenting adults: The challenge of informed consent. Bay Area Reporter, June.
South African AIDS Vaccine Initiative (SAAVI). (2007). The community and the scientist: A synergistic relationship. http://www.saavi.org.za
Tan-Ud, P. (2002). Community involvement in the Thai phase III trial. Thai Network of People Living with HIV/AIDS. International AIDS Vaccine Initiative. A satellite meeting prior to the XIV International AIDS Conference, 6 July 2002, Barcelona.
The Belmont Report, OHSR. (1979). Ethical principles and guidelines for protection of human subjects. Department of Health, Education, and Welfare, USA.
Ubuntu. Umuntu ngumuntu ngabantu. A South African peace lesson. Social Studies, Grades 7-12. http://www.ivow.net/ubuntu.html
UNAIDS. (1999). Gender and HIV/AIDS: Taking stock of research programmes. Joint United Nations Programme on HIV/AIDS, Geneva, Switzerland. http://www.unaids.org
UNAIDS/WHO. (2002). AIDS epidemic update: December. Geneva: UNAIDS/WHO. ISBN: 92-9173-253-2.
UNAIDS. (2006). Report on the global AIDS epidemic, executive summary. A UNAIDS 10th anniversary special edition. http://data.unaids.org
US National Bioethics Advisory Commission. (2001). Ethical and policy issues in international research: Clinical trials in developing countries. Vol. I, Report and recommendations: Executive summary, i-xv.
Vazquez, R. (1992). A discussion of community concerns. National Cooperative Vaccine Development Groups for AIDS Meeting, August 1992. AIDS Research and Human Retroviruses, 9(1). Mary Ann Liebert Inc.
Weijer, C., Goldsand, G., & Emanuel, E.J. (1999). Protecting communities in research: Current guidelines and limits of extrapolation. Nature America Inc. (Commentary). http://www.nhrmrc.gov.au
KEY TERMS

Cervical Barriers: The diaphragm is a device that covers the cervix. It has been used for decades as a method of preventing pregnancy, and is an approved contraceptive when used with spermicide.
Community: A community is a social group of organisms sharing an environment, normally with shared interests. In human communities, intent, belief, resources, preferences, needs, risks and a number of other conditions may be present and common, affecting the identity of the participants and their degree of cohesiveness.

Community Advisory Board (CAB): A group of individuals, generally made up of no more than 20 people, who serve as primary liaisons between the community and the trial researchers. Often a senior scientist or physician and/or other member of the trial staff will attend CAB meetings on a regular basis, a sign indicative of the CAB's importance in the trial process.

Community Education: A process whereby learning is used for both individual and community betterment. It is characterized by: involvement of people of all ages; the use of community learning, resources and research to bring about community change; and the recognition that people can learn through, with and for each other to create a better world (Canadian Association for Community Education).

Community Involvement: A process whereby communities work collaboratively with the research team in decision-making, problem-solving and the implementation of projects and programmes.

Community Participation: One of the key ingredients of an empowered community. It entails active citizen involvement in all aspects of strategic plan development and the implementation of projects and programmes.

Microbicides: Topical substances such as gels or creams that could be applied to the vagina or rectum to reduce HIV transmission.

HIV Vaccines: Preventive vaccines enhance the body's immune defenses, enabling the immune system to fight off diseases that it cannot naturally control.
Chapter XXIII
The Public / Private Debate: A Contribution to Intercultural Information Ethics Makoto Nakada University of Tsukuba, Japan Rafael Capurro Stuttgart Media University, Germany
ABSTRACT

In this article we give an overview of the range and characteristics of intercultural information ethics (IIE), focusing on the public/private debate in the so-called information age. IIE is a relatively newly emerging field which addresses a variety of issues, such as similarities and differences of views on the public/private spheres in different cultural and social traditions, the comparative analysis of moral norms of communication in global information network(s), and the Seken-Shakai-Ikai trichotomy as a specific typology of structures underlying today's Japanese information society. We examine these problems, in particular the public/private debate, from a perspective in which cultural differences arise against the underlying dimension of sharing a common world with others, and with special reference to the differences between Japanese and Western culture(s).
INTRODUCTION

Intercultural Information Ethics (IIE) deals with the impact of information and communication technology (ICT) on different cultures, as well as with how specific ICT issues are understood from different cultural traditions. The main purpose of
this chapter is to consider the range and characteristics of IIE, focusing on the public/private debate as one of the most crucial issues in the so-called information age. IIE is a relatively newly emerging field which includes a variety of problems, such as similarities and differences of views on the public and private spheres in different cultural and
social traditions; the comparative analysis of moral norms of communication in global information network(s) (Capurro, 2006a; Capurro, Frühbauer, & Hausmanninger, 2007; Hongladarom & Ess, 2007; Sudweeks & Ess, 2004); differences in the justification of privacy as an intrinsic good in Western countries and as an instrumental good in Asian countries (Ess, 2005); the cultural and historical backgrounds behind the difference between direct speech and indirect speech in the 'Far East' and the 'Far West' (Jullien, 1982, 1985, 1995; Capurro, 2006a); and the 'Seken-Shakai-Ikai' trichotomy as a typology of structures of the life-world in the Japan of the information age, depending on different perspectives on the values and meanings of this world (Nakada & Tamura, 2005). In the authors' view these problems are crucial for our lives in a common world in which we face a danger reflecting split attitudes towards universalism and cultural relativism. We also believe that the consideration of these problems is tightly related to the understanding of our own existence, self-identity and mutual human relations. Therefore the analysis of cultural differences concerning, for instance, the public/private debate should not be divorced from attention to the perspective of a common shared world. In this respect, Heidegger's ontological conception of 'being-in-the-world' ('In-der-Welt-sein') (Heidegger, 1976) seems to us crucial for a transcultural understanding of what being public means in an ontological or structural sense, as a basis for an intercultural dialogue on the ethical issues of the information society (Nakada & Capurro, 2007). In this chapter we explore various conditions affecting people's understanding of public/private-related problems on the basis of the world as a shared openness, with special regard to the differences between Japanese and Western culture(s).
THE PUBLIC/PRIVATE DEBATE FROM AN IIE PERSPECTIVE

The Problematization of Privacy

What is privacy? What are the relations between 'the public' and 'the private'? Contrary to the popular belief that concrete and stable definitions of 'the public' or 'the private' are beyond doubt, most scholars in the fields of IE (information ethics) and CE (computer ethics) would admit that the definitions in these respects remain, to some or to a great extent, ambiguous. In fact, the recent debates on the meanings of 'privacy' or on the relations between 'the public and the private' in leading academic journals such as Ethics and Information Technology seem to reflect this ambiguity of 'the public' and 'the private'. These recent debates include: (1) debates on the nature of the values that make privacy highly valued ('are the values of privacy intrinsic or instrumental?') (Johnson, 1994; Rachels, 1975; Moor, 1989; Moor, 1997); (2) debates on the validity or usefulness of the concept of privacy in the information age ('is the term "personal security" better than the ambiguous term "privacy"?') (Thompson, 2001); (3) debates on the validity of the presuppositions (such as subjectivity or individualism) behind 'privacy' in the age of everyday surveillance through IT (Lyon, 2001). These debates about 'privacy' or 'the public and the private', reflecting the ambiguous meanings of these concepts, underline the necessity of discussions related to IE. According to our own definition, IE means the problematization of information morality (Capurro, 2006b). In the process of 'problematization' we try to doubt or reinterpret something that is at least superficially clear or undoubted. The debates cited above are concrete examples of such problematization, particularly with regard to 'privacy'. As one of the authors (Capurro) suggested elsewhere (Capurro, 1995, pp.97-114), the birth of philosophy
in Greece is related to the problematization of the concept of message or angelia (Capurro, 2003). Our understanding of ethics as the problematization of morality follows Foucault's discussions of parrhesia (Foucault, 1983). What we can extract from them is one of our fundamental presuppositions with regard to the definition and necessity of IE, namely that it arises when a given information morality becomes problematic. In this sense, there is no doubt about today's problematization of privacy, along with the problematization of IT and of the information society altogether, in the face of various problems such as everyday surveillance (Lyon, 2001), the reconsideration of the meaning of the human body for initiating and maintaining inter-subjective bonds among people in cyberspace (Anderson, 2000, p.153) and the like.
The Problematization of Privacy from an IIE Perspective

We believe that the necessity of the problematization of privacy is due, along with the problems cited above, to concerns arising between people with different socio-cultural backgrounds, particularly in online communities. Within these artificial realms, IT, particularly the Internet, is believed to enable people to "transcend the spatial and temporal limitations of his or her body, and, to some extent, his or her cultural milieu with its parochial structures, perspectives, prejudices, and social hierarchies" (Anderson, 2000, p.1). In this respect, Charles Ess, one of the organizers of the conferences on 'Cultural Attitudes Towards Technology and Communication' (CATaC), rightly pointed out the importance of intercultural dialogues as follows: "Given the global scope and influences of information technologies, such a global dialogue is critical—especially if an information ethics is to emerge that respects and fosters those elements of specific cultures that are crucial to their sense of identity. In particular, as we will see, 'privacy' intersects with basic conceptions of the human person as refracted through notions of
individuality and the larger collective." (Ess, 2005, p.1). Ess's remarks on this point have a lot in common with our own definition of IIE as dealing with the comparative analysis of norms and morality of communication (Capurro, 2006a).
PRESENCE OR ABSENCE OF PRIVACY IN THE 'FAR EAST'?

Privacy in the 'Far East': Traditions of Collectivism

It is one of our fundamental assumptions that people's attitudes towards 'privacy' in Japanese, Chinese and other Asian cultures differ (or are supposed to differ) from those of 'Western' cultures in several basic respects. First of all, the value placed on the concept of privacy, or the evaluation of the right to privacy, in the 'Far East' appears to be comparatively low or to remain an ambiguous issue (Nakada & Tamura, 2005). Secondly, as several scholars in Asia point out, Asian cultures seem to lack the recognition of privacy as something intrinsically good (Lü, 2005, p.12). There seem to be various important reasons behind these situations in Asia, including people's orientation to 'collectivism'. According to Lü Yao-Huai, due to the drastic and rapid changes in contemporary Chinese society, some of today's dominant ideas, including 'free individualism' and 'privacy', are quite different from those of the immediate past (Lü, 2005, pp.7-8). However, in spite of these changes in regard to privacy and human relations, Lü insists, most of the Chinese scholars discussing and positively evaluating 'privacy' still remain within the traditions of collectivism. In fact, these scholars usually point out that maintaining the right to privacy is not only a problem concerning 'the individual', but also an important problem concerning 'the society' (Lü, 2005, pp.11-12). The situation is similar in Thailand in fundamental respects. According to Krisana Kitiyadisai,
in spite of the strong influence of informatization, economic expansion and globalization upon Thai culture, and despite Thailand's move towards a modern, industrialized and cosmopolitan outlook, many aspects of Thai culture, including people's understanding of the meanings of privacy, still remain within cultural traditions that lead to respect for social harmony, social order, family-oriented life, and 'keeping one's (social and status-related) face'. Behind these cultural traditions lies Thailand's long history influenced by feudalism, family systems and scarcity of personal life space, along with Buddhism (Kitiyadisai, 2005, pp.17-19). As these scholars suggest, we have to take into consideration the meanings of individualism, and the relations between collectivism and individualism, when comparing positions on privacy in the 'Far East' and the 'Far West'. According to Kitiyadisai, people in Thailand are aware of the necessity of avoiding invasion of someone's privacy, just as people in 'Western' culture(s) are. But this is because of the importance of keeping, and not insulting, someone's social face, or because of the common tendency to avoid interference with private affairs, which would amount to a denial of the master's or lord's hierarchical position within a household (Kitiyadisai, 2005, p.18). Esyun Hamaguchi, a Japanese scholar dealing with collectivism in Asian culture(s), insists that most of the unquestioned criticisms of 'weak individualism' in Japan reflect a rather stereotyped presupposition, influenced by 'Westernized views', that leads to an uncritical 'worship' of rationalization, objectivism, individualism, the division of the human mind into reason and the senses or into reason and the emotions, and other values imported from modernized 'Western' cultures (Hamaguchi, 1993). While doubting this sort of presupposition and the uncritical 'direct' import of 'Western' concepts, Hamaguchi tries to draw our attention to a collectivism that is not biased by the unquestioned acceptance of 'Western' values. In these respects, his understanding of collectivism
in Japan has similarities to and differences from the views of the two scholars cited above. Hamaguchi's evaluation of Asian collectivism, and particularly of collectivism in Japan, is fundamentally affirmative. He says that Japanese culture and society in the modernized era cannot be explained adequately or fully without a consideration of collectivism as consisting of "principles or basic values leading to high evaluations of mutual human relations, mutual reliance and regard for inter-personal relations not as a means to some purpose but as a valuable end in itself" (Hamaguchi, 1993).
Privacy and the Equilibrium between Collectivism and Individualism in Japanese Culture

The data from recent research surveys conducted in Japan by one of the authors (Nakada) in 2005 may support the discussions of the scholars cited above, namely the important position of collectivism in the 'Far East'. (The studies cited here are '2005G' and '2005S'. The former was designed as a quota sample of 500 survey monitors aged 20 to 50; '2005S' surveyed 459 undergraduate students.) Table 1 shows the percentages of respondents who gave affirmative responses to views or opinions related to 'collectivism' and 'individualism'. These figures clearly suggest that there is a certain equilibrium between collectivism and individualism in the minds of Japanese people. This finding is in accordance with the discussions of the Asian scholars cited above in fundamental respects. But at the same time, when we examine this equilibrium between collectivism and individualism in a more detailed way (through factor analysis), we get two different types of 'collectivism' along with a set of 'individualism': 'belief in collectivism' and 'other-oriented type' (see Table 1). This means that in Japan 'collectivism' carries more complicated or broader meanings than might generally be imagined.
Table 1. Individualism and collectivism in Japan

(1a) Doing your best for other people is good for you: 73.8% (73.7)
(1b) The important thing for maintaining good social conditions is for us to be kind to each other and to put the comforts of the others before one's own convenience: 86.0% (87.6)
(2a) I usually adjust my thinking to the expectations of the others around me: 38.8% (45.6)
(2b) I usually prefer adjusting my roles to the anticipations of the others around me to persisting in showing my abilities: 31.8% (34.5)
(3a) We can get better results when we don't depend upon the others' judgments and when we determine to do something by ourselves: 24.4% (26.3)
(3b) To assert one's demands and desires is very important for social life: 42.4% (45.8)

(The percentages are the combined figures for 'strongly agree' and 'somewhat agree'. The figures without brackets show the findings of '2005G'; those within brackets show the findings of '2005S'.)
Concerning the relations between 'privacy' and the equilibrium of 'collectivism' and 'individualism', our study results yielded interesting findings: 'individualism' and the 'other-oriented type' have almost no relation to 'concerns about the violation of privacy' ('abuse of security cameras', 'abuse of information on private e-mail addresses' and others), while 'belief in collectivism' has relatively stronger relations with 'concerns about the violation of privacy'. These findings show that a naive distinction between individualism and collectivism might lead us to misunderstandings, at least in Japan. (Concerning '2005G', '2005S' and related research, see: Nakada, 2005; 2006; 2007.)
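To make concrete the kind of factor analysis invoked above, the following minimal sketch (in Python, chosen here only for illustration) shows how Likert-type responses to items such as those in Table 1 can be reduced to a small number of latent factors like 'belief in collectivism', 'other-oriented type' and 'individualism'. This is not the authors' actual procedure: the randomly generated response matrix, the item labels and the choice of three factors are all assumptions standing in for the 2005G/2005S data, which are not reproduced in this chapter.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 2005G data: 500 respondents answering
# the six items (1a)-(3b) on a 4-point agreement scale (1-4).
responses = rng.integers(1, 5, size=(500, 6)).astype(float)

# Extract three latent factors; with real survey data, items loading
# together on one factor would be read as one underlying attitude
# (e.g., 1a/1b as 'belief in collectivism', 2a/2b as 'other-oriented').
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

items = ["1a", "1b", "2a", "2b", "3a", "3b"]
for k, loadings in enumerate(fa.components_, start=1):
    print(f"factor {k}:", {i: round(w, 2) for i, w in zip(items, loadings)})

With the random data above the loadings are of course meaningless; the point is only the shape of the analysis, in which items that correlate strongly in respondents' answers cluster on a common factor, allowing a distinction such as the one drawn here between two types of 'collectivism'.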
Privacy in Diverse Spaces of Meanings: Seken-Shakai-Ikai in Japan

As our research data show, at least in Japan, 'concerns for privacy' have practically no relation to 'individualism'. This means that the generally accepted (Western) dichotomy between the public and the private, as well as the undoubted
presupposition about the relations between privacy and individualism, might be influenced by an unconscious cultural bias. When we try to find a set of 'factors' or 'values' that might be related to 'belief in collectivism', we get a list of views, attitudes, values and ways of feeling as shown in Table 2. As discussed elsewhere (Nakada & Tamura, 2005; Nakada, 2006; Capurro, 2005), some of these views, or ways of thinking and feeling, i.e., (5), (6), (7), (8) and (9), can be included in a network of meanings called Seken. In order to understand what we mean by this, we have to analyse this concept in its relation to Shakai and Ikai. Japanese culture in modern times consists of three different aspects or networks of meanings, namely Shakai, Seken and Ikai. Shakai is the modernized aspect of Japanese culture, which includes modernized worldviews and rationalized ways of thinking and feeling largely influenced by thoughts imported from Western cultures. Seken is the aspect of the world consisting of traditional and indigenous worldviews, or ways of thinking and feeling, derived from Buddhism, Confucianism, Shinto, Bushido (samurai morals and ethics), traditional views on nature, an orientation towards solidarity and so on.
Table 2. Various views, ways of thinking, and ways of feeling related with 'belief in collectivism' (data: 2005G)

(1) Acceptance of the view: 'I sometimes feel that to have friends with whom we can talk together about our private life, including memories of shame, sorrow or bad conscience, would be a wonderful experience.'
(2) Positive interest in politics.
(3) Positive interest in environmental problems.
(4) Denial of the view: 'Voters are powerless.'
(5) Acceptance of the view: 'Within our modern lifestyles, people have become too distant from nature.'
(6) Acceptance of the view: 'People will become corrupt if they become too rich.'
(7) Acceptance of the view: 'People have a certain destiny, no matter what form it takes.'
(8) Acceptance of the view: 'In our world, there are a number of things that cannot be explained by science.'
(9) Acceptance of the view: 'The frequent occurrence of natural disasters is due to the scourge of heaven.'
ism, Shinto, Bushido (samurai morals and ethics), traditional views on nature, orientation towards solidarity and so on. In fundamental respects, Seken is a shared imaginative world of meanings that includes morals, social ethics, criteria of value judgments, common sense, desirable aims of life, beliefs in respectable behaviors and the like. Compared to Shakai and Seken, we do not have enough research evidence about the third aspect, tentatively called Ikai. But we believe that consideration of Ikai as a 'hidden' but influential aspect of Japanese culture can provide us with deeper views on meanings and values lying outside Seken and Shakai as 'approved or justified' dimensions. One of the most important findings from the analysis of the data shown in Table 1 and Table 2 is the difficulty of understanding the meanings of privacy in Japanese culture if we simply 'borrow' the views from Western cultures, with their undoubted acceptance of 'individualism' and their uncritical distinction between the public and the private. As Table 1 and Table 2, along with the related findings, show, Japanese concepts of privacy are far more closely tied to 'belief in collectivism' than to 'individualism'. This collectivism, in turn, has strong or fairly strong relations with a wide range of meanings. To put it another
way, Japanese privacy lies within a set of meanings called Seken. Japanese people undoubtedly place a high value on privacy, but this high evaluation is based on a different value system from that of Western culture(s). As we stated above, information ethics deals with the problematization of morality in the information era. Our discussions on privacy provide a concrete example of such a problematization of information moralities.
THE PROBLEMATIZATION OF THE PUBLIC/PRIVATE DICHOTOMY IN THE 'FAR WEST'

Some Ambiguous Positions of Individualism and Privacy in the 'Far West'?

According to Bin Kimura, a Japanese psychiatrist, the structure of Japanese subjectivity is different from the Western one in that Japanese subjectivity is discontinuous and thus opposite to a classic Western view of subject and identity as something permanent and even substantial (Kimura, 1972). Discontinuous identity means that subjectivity is
the effect of a network of relations and situations (Capurro, 2005, pp. 37-38). Kimura's explanation of the characteristics of Japanese subjectivity is in accordance with our research data. It is also useful as an interpretation of Japanese privacy. But, on the other hand, Kimura's discussions might reflect the generally accepted presuppositions about a simplified difference between the 'Far West' and the 'Far East'. Our research data suggest that 'collectivism' in the 'Far East' includes a wider range of meanings than might usually be imagined. As the scholars cited above have pointed out, in accordance with some philosophers in the field of so-called post-modernism, 'privacy' and other fundamental concepts such as subjectivity and individualism are now within the scope of ethical problematization in the 'Far West' as well. In the following, we take a look at the range of debates about these problems in the 'Far West'.
Dwelling Places of People in the 'Far West'

If the dwelling place for Japanese people is threefold (Seken-Shakai-Ikai), we might say that the traditional Western dwelling has usually been imagined to be two-fold: the sensible world and the intelligible world. If we follow Immanuel Kant's concepts and terminology, we might say that people in the 'Far West' are dwellers in two worlds, i.e., the physical world and the world of 'ends in themselves' (Reich der Zwecke). While the former is strictly deterministic, the latter is the basis of what he calls 'human dignity' (Würde) as different from things that have merely a value (Wert). According to Kant, human dignity is grounded in the human capacity to go beyond our natural being (homo phaenomenon). This is because of the characteristic of human beings as 'rational beings' (homo noumenon) (Capurro, 2005, p. 38). This kind of postulate about a two-fold dwelling or dichotomy of places of meanings in the 'Far West' has, as we know, other expressions such as private vs. public
sphere, individual vs. community, autonomy vs. heteronomy, identity vs. difference. It seems that within these kinds of dichotomies there has been a constant tendency to give primacy to the individual rather than to the community. But at the same time it cannot be denied that there has been a set of different interpretations of these dichotomies in the history of debates about privacy or individualism in Western culture(s). In fact, the meanings of 'private' seem not to be as stable as we might usually expect, and concomitantly the distinctions between 'the public' and 'the private' seem rather blurred. Hannah Arendt's discussions provide us with a useful outline of this problematization (Arendt, 1983). According to Arendt, the sphere of privacy as a 'sphere of intimacy' arises (in Western civilization) in Modernity (Arendt, 1983, p. 38). In Antiquity, privacy was seen as a situation of deprivation of political activities, which were considered the highest ones. We no longer 'hear' this meaning of privacy because modern individualism brought a positive valuation of the private sphere. This conception of privacy was not in tension with the public sphere but with the social one. Jean-Jacques Rousseau was the first to conceive a theory of privacy as intimacy. Rousseau's fight was not against the government but against society, which was supposed to deteriorate human hearts. Society and the 'intimacy of the heart' are much more difficult to determine than the political sphere because they are more 'subjective'. Modern individuality is born out of this romantic rebellion of the hearts against society (Nakada & Capurro, 2007). The political principle of social equality that conceives the nation as built by equal individuals is less important in this regard than the view of society as a big family that arises as a substitute for the family that came into decline. The conception of equality now refers to a situation in which there is a despotic power, that of the family head, under which all members are 'equal'. The
problem is, according to Arendt, that this monarchic principle turns out to be substituted by 'nobody' in the case of the economic sphere as well as in the salons of 'fine society'. But this transformation was in no way less despotic, since 'nobody' could be identified as the incarnation of the family head or, to put it paradoxically, 'nobody' was a neutral or non-personal incarnation of such a head. A special form of this despotism of 'nobody' is bureaucracy. Society excludes 'action' and allows only 'behavior' according to rules that prohibit spontaneous actions as well as high achievements. Arendt describes the development of modern society as a process in which people are substituted by bureaucracy or the 'domination of nobody' (Arendt, 1983, p. 45). Modern society has a tendency to expand into the sphere of the private household. This leads to the paradox that although there is a positive view of this sphere compared with Antiquity, society as representing the unity of humankind tends to dominate it. Arendt defines the 'public realm' as 'the common'. 'Public' has two different meanings. Firstly, it refers to what everybody can see and hear of what comes out of another's intimate life, his/her feelings. There are some kinds of things, such as love as distinct from friendship, that, following Arendt, are not appropriate to appear in such brightness. The decline of the public realm might lead to a retreat into the intimate sphere of 'one's own four walls' (Arendt, 1983, p. 51). The second meaning of 'public' is the world itself insofar as it concerns what is common to all of us and different from the place we call our 'private property'. It (the common world) is the world as a product of human interactions or of what is 'in-between us'. It is the world in which we live together and in which things are literally between us, connecting and separating us from each other at the same time. It is easy to see the influence of Heidegger's 'In-der-Welt-sein' in this second meaning of 'public' as the common place in which we originally
live. ‘In-der-Welt-sein’ means, from this perspective, being public in an ontological or structural sense. We, as humans, are structurally worldly or public beings sharing together a common world of meaning or an open semantic and pragmatic network ‘in-between’ us. A mass society is, according to Arendt, difficult to tolerate because it has a tendency to even physically discard what is ‘in-between’ us.
On the Relation between Our Being in the World and IIE

Arendt's inquiry into the relations between the public and the private makes clear that the positions of privacy and individualistic subjectivity are (and have been) far more ambiguous in the 'Far West' than might usually be imagined. This problematization urges our consideration of the characteristics of our being in this world, which is originally, as Medard Boss following Heidegger shows, a being in the world with others, dealing with common things and sharing the world-openness (Boss, 1975). We believe that the discussions about the relation between individualism and collectivism in the 'Far West' as well as in the 'Far East' need to be related to this ontological structure of human existence as a basis for intercultural dialogue in information ethics. According to Kurt Goldstein (Goldstein, 1934) and Merleau-Ponty (Merleau-Ponty, 1942 and 1945), the patient with aphasia and agnosia whom Goldstein and Merleau-Ponty analysed in their books is unable to do many things: to imitate a soldier's salute; to understand metaphors; to understand verbal orders and to obey them (for example, to touch some part of his body in response to someone else's verbal commands); to take a walk without a particular purpose; to classify things with abstract rules. And according to the Japanese psychiatrist Takeshi Utsumi (who follows the tradition of S. Freud), patients with schizophrenia cannot relate things (for example, their experiences) at two separate points: the present
time and the past. According to Utsumi, for 'ordinary' people (those not afflicted by mental illness), the meanings of things are written at two different points of time. To put this another way, one (if one is not schizophrenic) can see things from two different aspects: the present and the past. Utsumi says that one's subjectivity lies in this divided space between the present and the past (Utsumi, 2005). If we follow Foucault's interpretation in one of his earlier articles, these phenomena of mental illness are due to a contradictory combination of the patients' retreat into the isolated private sphere ('privacy') and fallenness (Verfallen) into the inauthentic world. To put it another way, the phenomena of the patients analysed by Goldstein and Foucault are cases of 'fallenness' into the worst individualistic (isolated) subjectivity and at the same time into the worst objectivity (Foucault, 1954). Here we can combine these explanations by Foucault with our own interpretations, by repeating the central contents of one author's (Capurro's) discussions of this world: the world as a common place and as world-openness (Capurro, 2006b). According to Heidegger's analysis in 'Being and Time' (Heidegger, 1976), Being itself is groundless, making possible all theoretical understanding and practical orientation. In this sense, we can also refer to the term 'unmarked space' used by Niklas Luhmann following George Spencer Brown (Luhmann, 1987; Spencer Brown, 1973), which points to the original indeterminacy of Being (Capurro, 2006b). What Medard Boss, following Heidegger, calls 'world openness' is this 'unmarked space' of our existence as an open and finite context of given possibilities, past, present and future ones, in their partial and socially mediated disclosure (Boss, 1975). Human existence is characterized by its 'being-outside', sharing implicitly or thematically with others the 'sense' and 'meaning' of things in changing contexts. The concept of communication addresses this original sharing together of a common world.
Communication using artificial media presupposes both phenomena, namely our openness to Being as the 'unmarked space' and the process of making concrete or ontic differences. Our bodily existence differs, as Medard Boss remarks (Boss, 1975, pp. 271-285), from the mere physical presence of things in that we exist as temporal and spatial three-dimensional beings exposed to the indeterminateness of Being. The limits of my bodily existence are, paradoxically speaking, identical to those of my world-openness (Boss, 1975, p. 278). The source of morality as the possibility of going beyond our self-interest can be found in this de-centered nature, which manifests itself also in our being able to give only a finite response to the necessities of bodily life. Even if we can discuss the possible moral status of digital agents (Floridi & Sanders, 2002), it is clear that we have to take into consideration the characteristics of our world understood at the ontological level as the precondition enabling the alleged moral status of digital agents in the so-called infosphere to be realized. In these respects, that is, 'the world as openness' and 'the world as a common place sharing implicitly or thematically with others the "sense" and "meaning" of things', we can distinguish two types of discussions in IIE, namely one at the ontological or structural level and another at the ontic or concrete level. The discussions on IIE at the ontic level deal, for instance, with the concrete differences we make when we separate the public from the private sphere. The discussions at the ontological level deal with the underlying conceptual and existential presuppositions regarding our awareness of our being in the world, the interpretation of our bodily and social nature, or the view of our finitude.
Problematization of Privacy in the Information Society

Human existence is characterized by its 'being-outside', sharing implicitly or thematically with
others the 'sense' and 'meaning' of things in changing contexts (Boss, 1975). The intercultural discussions about privacy, or about the relations between the public and the private, need to be conducted, if they are to be productive, on the basis of interpretations of this common ontological presupposition. Following Foucault, the common world is fundamentally an ambiguous place where the intentions of the individualistic self and the intentions of others meet. This meeting is not fixed but changeable, depending upon various conditions or situations, such as (a) the fallenness (Verfallen) of our existence into the inauthentic world, (b) the presuppositions of our understanding of the characteristics of this world, which might be influenced by socio-cultural situations and traditions or by our ways of living in this world, and (c) the possibilities of human relations affected by information technology and specific types of computer-mediated communication. We have already dealt with some of the topics related to (a) and (b). In the rest of this chapter, we will pay attention to (c), discussing some phenomena tied to the potential influence of blogs or SNS, as well as commercialized media, upon privacy or upon the relations between the public and the private. Most TV advertisements in Japan today (and probably in the 'Far West' too) rarely compel people to understand their contents in a specific way. Instead, what the producers and senders of ads do is provide people with half-digested sources (for example, 'a face of a famous actress' and 'a picture of a bottle of perfume of a famous brand') and 'the space of metaphorical combination of matters' (the space where people metaphorically combine the things presented in ads). People are not forced to understand the meanings of ads in the way intended by the senders and producers, but once they are interested in the things presented in ads, they are involved in the process of completing a half-constructed structure. People can interpret and
understand the meanings of ads in 'original' ways as they wish, but at the same time their interpretation and understanding occur only within the space of metaphorical understanding presented by TV or by the senders of commercial advertisements. This means we are facing the phenomenon of a blurring of the distinctions between the realm of 'me'/'the private' and 'others'/'the public'. The phenomena of blurring the distinctions between 'the private' and 'the public' are also problematized with regard to blogs or SNS. Mixi, a typical type of SNS, is a very popular form of communication for Japanese youth these days. One of the most remarkable characteristics of mixi is the way access to one's blog-like diary is limited. The writer of that diary can decide the range of readers, i.e., who can read his/her diary. In addition, membership in mixi is granted only by introduction from existing members. These kinds of conditions of affiliation or of permission of access to one's diary characterize the fundamental nature of communication within this networked community, particularly with regard to anonymity. It seems that what stimulates readers' interest in mixi's contents rests upon a certain spirit of Musi (denial of the egocentric self). It is clear that some kind of narcissism influences the contents of mixi's blog-like diaries, but at the same time the evaluations of mixi's contents seem to depend on some kind of Musi. To put it another way, the majority of the writers of mixi's blog-like diaries, as well as the majority of writers of blogs in Japan, try to make the contents of their diaries or blogs 'transparent (pure)', or free from arbitrary interpretation of the things they encounter in everyday life, in spite of the fact that these diaries or blogs are filled with the writers' personal experiences. But a paradox arises here. In spite of the characteristic atmosphere of purity or Musi that fills Japanese blogs, this atmosphere appears to be 'invaded' by commercialism in several respects. First, mixi itself is managed by a commercial company. Secondly, the members of mixi can share
personal reviews of books, movies or music, and they can also order these personally reviewed books, movies or pieces of music through Amazon or similar companies. In addition, the writers of these blogs seem to borrow 'the key concepts used for describing moods' or 'the styles of narrating one's experiences' from commercialized popular songs: 'Look at me! I am here!' This phrase is a cliché used very often in Japanese popular songs (J-pop). One of the most popular themes in J-pop is a combination of a feeling of 'disappearance of one's original or authentic place' and of sharing the 'pity' or 'sorrow' coming from this disappearance. 'Look at me! I am here!' is often used to express this kind of feeling. It seems that the majority of the writers of blogs are influenced by this 'popular' theme of commercialized 'popular' songs (Nakada, 2005). According to one of the authors (Capurro), these phenomena are part of a broader problematization arising from the paradoxical nature of 'networked individualities' (Capurro, 2005, pp. 40-41). We say 'paradoxical' because this concept is somehow an oxymoron, i.e., the two terms contradict, or seem to contradict, each other. As already stated, our sharing a common world with others is an original dimension of human existence prior to the modern idea of an individual separated from the 'outside world'. Due to global networking, the modern subject conceived as an autonomous individual is changing. But the fact of being 'networked' is, we believe, not enough to describe the present (and future) world situation. There is a paradoxical tension between the ontological or structural sharing of a common world and the ontic possibilities arising particularly from global digital networking, the former being a condition of possibility of the latter. We seem to be networked and divided at the same time, and we change the concepts and conceptions of private and public on the basis of new ontic experiences. We are now in a situation in which people want to make public many of the things (private letters, for instance, now called blogs) that one
hundred years ago were considered private. Of course, the essential point is consent, but it seems as if people do not care much even about this, particularly with regard to what state power or private companies are doing with their data. We have a new global form of this controversy that cannot be solved only on the basis of modern political philosophy, which is nation-oriented. It is clear that IT has brought not just technical changes: it influences the new definition (where to draw the line?) of what is public and what is private, and it conditions our different kinds of 'housing' in a common world. According to Heidegger's interpretation, modern technology 'un-conceals' a new possibility of our relation to the world that was not previously there, as in the case of, say, medieval technique (Heidegger, 1953). Heidegger calls this new kind of technological relation between man and world a 'challenging' one, in which everything, including ourselves, becomes an object capable of being manipulated. In our opinion, modern information technology is deeply ambiguous and, contrary to what Heidegger thought, it un-conceals or makes possible a 'weak' or 'networked' subject which is at the same time an object of all kinds of 'relations' and (commercial) exchanges. It is a commodified subject, but it is also a subject that re-presents himself/herself in an open space of relations, sharing 'techno-logically' with others a common world.
FUTURE TRENDS

There is an ongoing debate on the philosophical foundations of IIE as well as on specific topics. The philosophical debate deals with concepts such as 'infosphere' and 'digital ontology' (Capurro, 2008; Floridi & Sanders, 2002; Capurro, 2006b). The impact of ICT on different cultures, as well as cultural perspectives on specific ICT, is the object of recent studies focused particularly on the comparison between Europe, the USA and some
Pacific states such as Japan, China and Thailand. The dialogue with African cultures started in 2007 (Capurro, 2007). Latin America, the Arabic world, Eastern Europe and other cultures of the Pacific region are underrepresented. The public/private debate is only one main topic of IIE today. Other topics are: intellectual property, online communities, governmentality, gender issues, mobile phones, health care, and the digital divide (Capurro, 2008; Hongladarom & Ess, 2007). Due to the introductory character of this chapter we have chosen this topic from a Japanese/Western perspective. We are aware that the 'Western perspective' is partial, as there are other representatives of traditional Western thinking with regard to the public/private debate, or to the more general question of individualism vs. collectivism, such as Karl Marx or Max Weber, who are not mentioned in this contribution. We are also aware that there are other important topics dealing with the comparison of the 'Far West' and the 'Far East', such as 'direct speech' and 'indirect speech' (Capurro, 2006a), or the comparison of Seken-related meanings in Japan with collectivism and other fundamental values in other Asian cultures. The intercultural dialogue in the field of information ethics has just begun, along with this variety of topics.
CONCLUSION

Karl Baier, a specialist in the dialogue with Buddhism and particularly with the Kyoto School (a group of Japanese scholars reflecting on the relation between Eastern and Western thought), has recently pointed out the importance of the place of 'moods', or more precisely of 'basic moods', for a common world disclosure underlying cultural differences (Baier, 2006). This topic is developed by Baier following the analysis of Klaus Held. According to Held, there is a transcultural experience, common to all human beings, that concerns our awareness of the common world. This expe-
rience is based on the awareness of our finitude, which includes the awareness of the uniqueness of the world as well as of our own existence. We experience our finitude in the mood of 'Angst', and the open possibilities arising from the fact of being born in the moods of 'astonishment' and 'shyness'. This insight into the role of moods corresponds to the generally accepted meaning of the Japanese Seken. According to Mineo Hashimoto, the concept of 'Ukiyo', understood as life in this transitory world accepted with resignation (Ukiyo being thus another name for Seken), is the hidden source of Japanese attitudes towards the meanings of various activities in everyday life, including labor and work (Hashimoto, 1975). According to Hashimoto, the traditional Japanese view of this world, as expressed in the concept of 'Ukiyo', is fundamentally associated with a basic mood through which people can see their lives in this vulgar world from a more sacred or purified standpoint. In fact, as our empirical research shows, the minds of Japanese people are filled with certain kinds of moods supposedly tied to 'views on destiny' or 'denial of material wealth', although this does not necessarily mean the overall submergence of Japanese minds into pessimistic moods. If our presuppositions are adequate, as we believe they are, the ontological perspective that arises from our common being in the world enables us to see this deeper stratum common to different cultures, allowing a productive dialogue between them without neutralizing their differences. As we suggested above, the relations between the private and the public spheres need to be analyzed on the basis of this deeper stratum of common awareness of the contingency and uniqueness of the world and of human life.
REFERENCES

Anderson, T. (2000). The body and communities in cyberspace: A Marcellian analysis. Ethics and Information Technology, 2, 153-158.
Arendt, H. (1983). Vita Activa oder vom tätigen Leben. München: Piper.
Baier, K. (2006). Welterschliessung durch Grundstimmungen als Problem interkultureller Phänomenologie. Daseinsanalyse, 22, 90-109.
Boss, M. (1975). Grundriss der Medizin und der Psychologie. Bern: Huber.
Capurro, R. (1995). Leben im Informationszeitalter. Berlin: Akademie Verlag.
Capurro, R. (2003). Angeletics – A message theory. In H. H. Diebner & L. Ramsay (Eds.), Hierarchies of Communication (pp. 58-71). Karlsruhe: ZKM Verlag. Retrieved October 1, 2006, from http://www.capurro.de/angeletics_zkm.html
Capurro, R. (2005). Privacy: An intercultural perspective. Ethics and Information Technology, 7, 37-47.
Capurro, R. (2006a). Ethik der Informationsgesellschaft: Ein interkultureller Versuch. Beitrag zur Tagung 'Shapes of Things to Come', Humboldt-Universität, Berlin (15-17 February 2006). Retrieved October 1, 2006, from http://www.capurro.de/db.htm (printing in Japanese translation).
Capurro, R. (2006b). Towards an ontological foundation of information ethics. Ethics and Information Technology, 8, 175-186.
Capurro, R. (2007). Information ethics for and from Africa. International Review of Information Ethics, (7).
Capurro, R. (2008). Intercultural information ethics. In K. E. Himma & H. T. Tavani (Eds.), Handbook of Information and Computer Ethics. Wiley & Sons (forthcoming).
Capurro, R., Frühbauer, J., & Hausmanninger, T. (Eds.) (2007). Localizing the Internet: Ethical aspects in intercultural perspective. München: Fink Verlag.
Ess, C. (2005). Lost in translation? Intercultural dialogues on privacy and information ethics (Introduction to special issue on Privacy and Data Privacy Protection in Asia). Ethics and Information Technology, 7, 1-6.
Floridi, L., & Sanders, J. W. (2002). Mapping the foundationalist debate in computer ethics. Ethics and Information Technology, 4, 1-9.
Foucault, M. (1954). Maladie mentale et personnalité. Paris: Presses Universitaires de France. Japanese translation by Nakayama, G. (1997). Seishin shikann to paasonarithi. Tokyo: Chikuma-syobou.
Foucault, M. (1983). Discourse and truth: The problematization of parrhesia. Presentation at the University of California at Berkeley, October-November. Retrieved October 1, 2006, from http://foucault.info/documents/parrhesia/
Goldstein, K. (1934). Der Aufbau des Organismus. Haag: Martinus Nijhoff.
Hamaguchi, E. (1993). Nihongata moderu to wa nanika (About the characteristics of Japanese culture and society). Tokyo: Shinyosya.
Hashimoto, M. (1975). Ukiyo no sisou (Ways of thought based upon views on this transitory world with resignation). Tokyo: Koudansya.
Heidegger, M. (1976). Sein und Zeit. Tübingen: Niemeyer Verlag.
Heidegger, M. (1953). Die Frage nach der Technik. In Gesamtausgabe Band 7. Frankfurt am Main: Vittorio Klostermann Verlag.
Hongladarom, S., & Ess, C. (Eds.) (2007). Information technology ethics: Cultural perspectives. Pennsylvania: Idea Group.
Jullien, F. (1995). Le détour et l'accès. Stratégies du sens en Chine, en Grèce. Paris: Grasset.
Jullien, F. (1985). La valeur allusive. Paris: Presses Universitaires de France.
Jullien, F. (Ed.) (1982-). Revue extrême-orient extrême-occident. Presses Universitaires de Vincennes.
Kimura, B. (1972). Hito to hito to no aida. Tokyo: Koubundo. German translation by E. Weinmayr (1995). Zwischen Mensch und Mensch. Strukturen japanischer Subjektivität. Darmstadt: Wissenschaftliche Buchgesellschaft.
Kitiyadisai, K. (2005). Privacy rights and protection: Foreign values in modern Thai context. Ethics and Information Technology, 7, 17-26.
Lü, Y.-H. (2005). Privacy and data privacy issues in contemporary China. Ethics and Information Technology, 7, 7-15.
Luhmann, N. (1987). Soziale Systeme. Frankfurt am Main: Suhrkamp.
Lyon, D. (2001). Facing the future: Seeking ethics for everyday surveillance. Ethics and Information Technology, 3, 171-181.
Johnson, D. (1994). Computer ethics (2nd ed.). Englewood Cliffs, New Jersey: Prentice Hall.
Merleau-Ponty, M. (1942). La structure du comportement. Paris: Presses Universitaires de France.
Merleau-Ponty, M. (1945). Phénoménologie de la perception. Paris: Presses Universitaires de France.
Moor, J. (1989). How to invade and protect privacy with computer ethics. In C. C. Gould (Ed.), The Information Web (pp. 57-70). Boulder: Westview Press.
Moor, J. (1997). Towards a theory of privacy in the information age. In J. van den Hoven (Ed.), Computer ethics: Philosophical enquiry. Department of Philosophy, Erasmus University.
Nakada, M. (2005). Are the meanings of Japanese popular songs part of Seken-Jinseikan meanings? Ronsyuu-gendaibunnka-koukyouseisaku, 1(1), 105-130.
Nakada, M. (2006). Privacy and Seken in Japanese information society: Privacy within Seken as old and indigenous world of meaning in Japan. In F. Sudweeks, H. Hrachovec, & C. Ess (Eds.), Cultural Attitudes towards Technology and Communication 2006 (pp. 564-579). Perth: Murdoch University.
Nakada, M. (2007). Japanese traditional values behind political interests in the information age: Japanese political minds within Seken as old and indigenous world of meaning in Japan. In T. W. Bynum, S. Rogerson, & K. Murata (Eds.), ETHICOMP 2007 (Globalization: Bridging the Global Nature of Information and Communication Technology and the Local Nature of Human Beings) (pp. 427-435). Tokyo: Meiji University.
Nakada, M., & Tamura, T. (2005). Japanese conceptions of privacy: An intercultural perspective. Ethics and Information Technology, 7, 27-36.
Nakada, M., & Capurro, R. (2007). The public/private debate between the 'Far West' and the 'Far East'. (Unpublished six-month e-mail debate between Rafael Capurro and Makoto Nakada. This debate provided the authors with the theoretical foundations and the empirical material underlying this chapter. It will be published on Capurro's website: http://www.capurro.de/)
Spencer Brown, G. (1973). Laws of form. New York: Bantam Books.
Sudweeks, F., & Ess, C. (Eds.) (2004). Cultural attitudes towards technology and communication. Perth: Murdoch University.
Thompson, P. (2001). Privacy, secrecy and security. Ethics and Information Technology, 3, 13-19.
Rachels, J. (1975). Why is privacy important? Philosophy and Public Affairs, 4, Summer, 323-333.
Utsumi, T. (2005). Tougoushityousyou no seishinryouhou kanousei ni tsuite (About the possibility of psychiatric treatment for schizophrenia). Seishin-Ryouhou, 31(1), 9-18.
KEY TERMS

IIE (Intercultural Information Ethics): The problematization of topics such as the relations between the public and the private in the information era, from ontological viewpoints and also from views motivated by the comparison of people's ways of life in different cultures.

Ikai: The hidden or forgotten aspect of Japanese culture from which negative meanings tied with evils, disasters, crimes, and impurity emerge – along with freedom and the sources of energy related to art and spiritual meanings.

Ontological Perspective of IIE: The perspective on intercultural information ethics as arising from our common being in the world with others, sharing the world-openness.

Seken: The indigenous aspect of Japanese culture reflecting Japanese attitudes towards nature, denial of material desires, destiny, purity of mind, and denial of selfishness.

Shakai: The modernized aspect of Japanese culture, strongly influenced by 'Western' culture(s) of the modern era.

The IIE Public/Private Debate: The debate about different interpretations of the relations between the public and the private spheres as arising from different cultural perspectives, particularly with regard to individualism and collectivism.
Chapter XXIV

Ethical, Cultural and Socio-Economic Factors of Software Piracy Determinants in a Developing Country: Comparative Analysis of Pakistani and Canadian University Students

Arsalan Butt, Simon Fraser University, Canada
Adeel I. Butt, Simon Fraser University, Canada
ABSTRACT

Consumer software piracy is widespread in many parts of the world. P2P-based websites have made it easier to access pirated software, which has resulted in an increased emphasis on the issue of software piracy in both the software industry and the research community. Some factors that determine piracy include poverty, cultural values, ethical attitudes, and education. Earlier empirical studies have looked at software piracy as an intentional behaviour. This study explores the demographic, ethical and socioeconomic factors that can represent software piracy as a social norm among a developing country's university students. The authors have conducted a comparative analysis of university students from Pakistan and Canada, two countries that differ economically, socially, and culturally. The results of the study indicate that software piracy behaviour differs between the two groups of students, but that there are also some similarities. Future research directions and implications are also presented.
If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of everyone, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density at any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Invention then cannot, in nature, be a subject of property. -Thomas Jefferson

Congress shall have power … to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries. -The Constitution of the United States, Article 1, Section 8, 1788
INTRODUCTION

Ethicist Richard Mason (1986) identified four main ethical issues of the information age: privacy, accuracy, property and accessibility. It has been suggested that Mason's work was very significant in the field of Management Information Systems ethics (Freeman & Peace, 2005). Mason (1986)
considered intellectual property (IP) "as one of the most complex issues we face as a society" (p. 9). Mason identified bandwidth as the real threat in the digital world and viewed it as a scarce and fixed commodity at the time. However, with the rapid progress of hardware and software technology, bandwidth has increased immensely, making peer-to-peer (P2P) technology possible and e-file sharing a matter of a few mouse clicks. According to Husted (2000), knowledge and information are now more important factors in a national economy than the traditional physical assets that used to indicate economic well-being. Therefore, the protection of intellectual property (IP) has received increased attention in the recent past. Intellectual property refers to "the results of intellectual activity in the industrial, scientific, literary or artistic fields" (Forester & Morrison, 1990, p. 31). A government plays its role in protecting the rights of owners by preventing unauthorised use of this intellectual property for a limited period of time (Seyoum, 1996), using measures such as copyrights, trade agreements and patents. Legality aside, there are ethical and moral issues that have arisen from the use of software and its unauthorised copying at both the consumer and commercial levels. The concept of Technoethics1 deals with such aspects of technology. "It designates that portion of ethics which deals with questions arising from technological development and activities. More precisely, technoethics deals with moral questions governing or resulting from the conception, production, distribution and the use of artifacts or technological systems" (Findeli, 1994, p. 50). In this chapter's context, technoethics or ethics refers to the moralities and ethical values presumed or perceived with the use and copying of commercial software. Software is a form of intellectual property, and its unauthorized duplication is a crime. Nevertheless, illegal copying of software occurs at high rates in various parts of the world, and in environments such as universities,
businesses, and government the behaviour has become socially acceptable (Sims, Cheng, & Teegen, 1996; Cheng, Sims, & Teegen, 1997; Hinduja, 2001; Christensen & Eining, 1991). Cheng et al. (1996) and Husted (2000) suggested that economics plays a vital role in determining software piracy. Other studies have, however, shown that low national income and low personal incomes are not the only reasons for which software is pirated. Swinyard, Rinne, and Kau (1990) observed that attitudes towards software piracy are affected by cultural standards and customs. Hence, "the neglect of culture as an explanation of software piracy seems odd given the fact that cultural values have such a significant impact on a wide array of business practices in different countries" (Husted, 2000, p. 200). Thus, this chapter will not only look into the relationship between economic factors and software piracy, but will also reflect on the cultural and ethical values and social norms that affect the trends of software piracy amongst students.
RESEARCH QUESTION

This study focuses on software piracy amongst university students – specifically with regard to its occurrence over the Internet, sharing (copying or borrowing) software on physical media such as floppy disks and CD-ROMs, and buying pirated software from retail outlets2. This research explores the question of whether software piracy behaviour among university students of a developing country can be conceptualized in terms of social and cultural norms rather than in terms of intentions, as it has been described (for piracy amongst university students) in most of the literature (e.g. Kwong & Lee, 2002; Rahim, Seyal & Rahman, 2001; Limayem, Khalifa & Chin, 1999; Tang & Farn, 2005; Gopal, Sanders & Bhattacharjee, 2004).
RESEARCH JUSTIFICATION

Empirical studies have been done on the subject of software piracy in different countries such as Saudi Arabia (Al-Jabri & Abdul-Gader, 1997), Thailand (Kini, Ramakrishna & VijayaRama, 2003; Leurkittikul, 1994), the People's Republic of China (Wang, Zhang, Zang & Ouyang, 2005) and India (Gopal & Sanders, 1998). Other empirical studies include those done by Gan and Koh (2006); Ki, Chang and Khang (2006); and Mishra, Akman and Yazici (2006). Although Husted (2000) and Proserpio, Salvemini and Ghiringhelli (2004) included Pakistan as one of the countries in their respective analytical studies, empirical studies on the software piracy issues of Pakistan do not exist in the literature. A comparative study can provide a means of highlighting differences and possible similarities of software piracy determinants between a developed and a developing country. Canada was therefore chosen for this purpose, as it contrasts culturally and economically with Pakistan. Moreover, there has not been any recent Canadian scholarly literature3 in this context. This research can therefore help fill a part of that void, and the results can provide a better understanding of a developing country's software piracy issues, which can help policy makers address the problem more effectively.
LITERATURE REVIEW

Intellectual Property and Its Protection

According to Merriam-Webster's Dictionary of Law, intellectual property (IP) is a "property that derives from the work of the mind or intellect". Basically, IP "refers to a legal entitlement which sometimes attaches to the expressed form of an idea, or to some other intangible subject matter" (Wikipedia, 2006, para. 1). Moreover, IP law "regulates the ownership and use of creative
works, including patent, copyright and trademark law" (Nolo, 2006). Simply put, intellectual property is a realization of someone's idea or thought. Composed music, lyrics, paintings, published written work, and software are the intellectual property of the artists or professionals who produced or developed them. Several authors, however, still debate the justifiability of intellectual property laws (Siponen, 2004; Hettinger, 1989; Ladd, 1997; Stallman, 1995; Weckert, 1997). Although a detailed discussion of their arguments is beyond the scope of this research, it is important to have a broad view of some of the important concepts. There are several IP protection organizations across the world, such as the Business Software Alliance (BSA), the Canadian Alliance Against Software Theft (CAAST), the International Intellectual Property Alliance (IIPA), the Software & Information Industry Association (SIIA), and the World Intellectual Property Organization (WIPO). Many countries have their own intellectual property laws protecting the rights of individuals and organizations alike. The Copyright Act of Canada (Department of Justice Canada, 2006) provides protection to intellectual property in Canada. It considers computer programmes (or software) to be literary works and therefore has several provisions controlling their unauthorized copying and distribution. Similarly, computer programmes are considered literary works under the Copyright Ordinance, 1962 of Pakistan. A significant amendment to this ordinance, the Copyright (Amendment) Act, 1992, addressed the copyright issues of computer software in more detail than the original Copyright Ordinance. However, some laws are also considered debatable. For example, the Digital Millennium Copyright Act (DMCA) is usually seen as a controversial law approved by the U.S. Congress in 19984 (SearchCIO.com, 2006). With society's transition to a digital world, copyright protection has become an important
area of IP law (Blanke, 2004). It is evident from the discussion above that intellectual property rights hold immense importance in today's world, but the justification of IP rights and laws continues to be debated among subject experts. "On one hand are those who believe that anything they conjure up, anything that transforms an idea into form, is intellectual property. On the other are the individuals who believe just as passionately that the entire notion of intellectual property is at best a farce, at worst just another way to suck profits out of the ether" (Gantz & Rochester, 2004, p. xxiii). For example, Hettinger (1989), Ladd (1997), Stallman (1995) and Weckert (1997) view software as an intangible commodity and therefore favour its copying. Weckert regarded intellectual property rights concerning software as unjustifiable, and Hettinger holds similar views on IP rights and their protection. Hettinger suggested that patents and trade secrets are more difficult to justify than copyrights, which "restrict only copying an expression of an idea" (Hettinger, 1989, p. 52). Siponen, however, argues that "it is fair and just for people to claim financial rewards for their creations" (Siponen, 2004, "Concluding remarks" section) and that respecting IP laws and rights is necessary for society to live in harmony.
Software Piracy

Sims et al. (1996) define software piracy as "the illegal copying of computer software" (p. 839). Copying software is easy and can be carried out in many forms. Moores (2003) identified the common forms of software piracy as counterfeiting, Internet piracy, and softlifting. He further noted that "counterfeiting and Internet piracy both involve creating bootlegged copies of licensed software for sale or distribution. Internet piracy makes use of the Internet to distribute the software, and has become a particular concern for vendor organizations" (Moores, 2003, p. 208). Softlifting is also a very common type of software piracy among businesses that install single-user licensed
software on multiple machines (Rahim, Seyal, & Abdul-Rahman, 2001; Simpson, Banerjee, & Simpson, 1994). Another kind of software piracy involves software installation by retailers onto the hard disk drives of customers’ personal computers (PCs) in order to encourage the sale of hardware.
Can Software Piracy be Justified?

As is the case with intellectual property, issues surrounding software piracy are debated as well. For example, the worldwide software piracy figures reported by the BSA are cited by almost every published article on software piracy.5 However, many authors consider BSA's methodology for calculating the levels of software piracy and the amount of reported monetary losses incurred to be highly controversial (Locklear, 2004; The Economist, 2005). Even IDC (www.idc.com), an organization which has worked for the BSA on several of the latter's world software piracy reports, has commented that the conclusions presented in the 2004 BSA study were exaggerated (Locklear, 2004). Some authors, however, have argued that software piracy increases the popularity of the product itself. Slive and Bernhardt (1998) suggested that "a software manufacturer may permit limited piracy of its software. Piracy can be viewed as a form of price discrimination in which the manufacturer sells some of the software at a price of zero" (p. 886). A similar opinion was voiced by Microsoft Founder, Chairman, and Chief Software Architect Bill Gates in 1998. Gates reportedly said, "Although about three million computers get sold every year in China, people don't pay for the software. Someday they will, though. And as long as they're going to steal it, we want them to steal ours. They'll get sort of addicted, and then we'll somehow figure out how to collect sometime in the next decade" (CNN.com, 2001). This may also suggest that software manufacturers can allow initial piracy of their product as a strategy
to enter or monopolize the market by making consumers attached to one particular piece of software (as could be the case for Microsoft's operating system Windows), since the purchasing power of the average consumer does not allow him/her to purchase the legal product at full price. Bill Gates faced a lot of criticism for his comments, as Microsoft itself is probably the strongest advocate of the anti-software-piracy campaign, the company's products being widely pirated all around the world. Givon, Mahajan and Muller (1995) stated that "software piracy permits the shadow diffusion of a software parallel to its legal diffusion in the marketplace, increasing its user base over time. Because of this software shadow diffusion, a software firm loses potential profits, access to a significant proportion of the software user base, opportunities for cross-selling, and marketing its other products and new generations of the software. However, shadow diffusion may influence the legal diffusion of the software. Software pirates may influence potential software users to adopt the software, and some of these adopters may become buyers" (p. 1). LaRue (1985) also suggested that software publishers could eventually benefit by adopting a shareware marketing strategy for their software. This strategy is currently adopted by many software manufacturers, and as Karon (1986) noted, the idea of such marketing strategies has also been supported by the president of an education software firm who believes that some pirates may eventually buy their products due to value-added benefits. Slive and Bernhardt (1998) also stated that piracy of software by home users can be viewed as a price discrimination strategy by the manufacturer (selling software for free) which will eventually increase the demand for the software by business users. According to The Linux Information Project [LINFO] (2006), critics of the concept of software piracy argue that the terminology associated with this concept is deliberately manipulated by the major commercial software developers. "That
Figure 1. Can software piracy be justified?
is, use of the term piracy itself is also highly controversial in a software context" (The Linux Information Project [LINFO], 2006, "Inappropriate Terminology" section, para. 1). And "this is also because it implies that people or organizations that create or use copies of programs in violation of their [end user licensing agreement] EULAs are similar to pirates. Pirates are violent gangs that raid ships at sea in order to steal their cargoes and rob their crews; they also frequently injure or kill the crews and sink their ships. Critics of this terminology claim that it was chosen for its dramatic public relations value rather than because of any relationship to the traditional use of the word" (LINFO, 2006, "Inappropriate Terminology" section, para. 2). Another factor that has been shown to be directly associated with computer abuse is the Robin Hood Syndrome (Forester & Morrison, 1990; Perrolle, 1987; U.S. Department of Justice, National Institute of Justice, 1989a, 1989b). Harrington (2002) describes the Robin Hood syndrome as "the belief that harming a large organization to the benefit of an individual is the right behavior" (p.
180). In her study of software piracy, Harrington found that people high in Robin Hood Syndrome are more likely to pirate software, as this syndrome allows an "individual to neutralize ethical judgments about software piracy and copy software offered for sale by large organizations" (p. 181). The Robin Hood Syndrome could be applied in the context of developing countries as well, where software piracy is justified on the grounds "that it is unfair to charge prices in low income countries that are comparable to those in the higher income countries, and thus virtually unaffordable by most citizens and many businesses in such countries" (LINFO, 2006, "Reasons and Justifications For" section, point 4). The purpose of this research is not to justify software piracy in a developing country such as Pakistan or in any other part of the world. The authors believe that software piracy in any form is illegal and unethical behaviour. However, it is equally important to stress that, depending upon the circumstances, individuals either inevitably have to indulge in this behaviour, for reasons that will be discussed later, or they have the
option of pirating software, that is to say, they do it because they can. Another important clarification that needs to be established at this point is that this research mainly focuses on individual piracy rather than commercial piracy (organizations producing pirated software on a large scale for sale), which is done purely for profit. Commercial piracy, however, is a crucial element that creates a piracy-facilitating environment in a society and will therefore be discussed where relevant. But as stated earlier, the focal point of this research is individual piracy by university students.
Software Piracy in Pakistan and Canada

Developed nations such as the U.S. or Canada have anti-piracy policies and organizations to control unauthorized publishing or copying of software. However, "developing countries are passive in addressing computer ethics in general and intellectual property rights in particular" (Al-Jabri & Abdul-Gader, 1997, p. 335). Al-Jabri and Abdul-Gader also suggested that developing countries lack interest groups that combat software piracy. However, it has also been observed that even the presence of these organizations and the existing copyright laws of
the country cannot make a significant difference in developing countries (IIPA, 2004). According to the IIPA (2006), Canada also falls short in meeting the objectives laid down in the WIPO Copyright Treaty (WCT) and the WIPO Performances and Phonograms Treaty (WPPT). The IIPA notes that the Canadian government introduced Bill C-60 in order to comply with these treaties, but the bill eventually died as a result of the call for federal elections in November 2005. The IIPA further points out that "Canada remains far behind virtually all of its peers in the industrialized world with respect to its efforts to bring its copyright laws up to date with the realities of the global digital networked environment. Indeed, most of the major developing countries have progressed further and faster than Canada in meeting this challenge" (IIPA, 2006, p. 2). As per the IIPA's recommendation, Canada remains on its Watch List of countries. As indicated in Figure 2, software piracy rates have remained nearly constant over the past few years in both Pakistan and Canada.
Figure 2. Software piracy levels in Pakistan & Canada (Source: BSA 2005, 2007). [Line chart of piracy rates by year for Pakistan and Canada; the plotted values did not survive extraction.]

RESEARCH MODEL

Theory of reasoned action (TRA) suggested by Fishbein & Ajzen (1975) and theory of planned
behaviour (TPB), later developed by Ajzen (1991), have been used extensively in the literature to explain software piracy behaviour and intentions. Both of these theories look at behaviour as an intentional act. While it is true that the literature on software piracy (a significant portion of which is based on TRA and TPB) has helped in understanding various aspects of the matter, there have been no empirical studies to show that software piracy can be conceptualized as an unintentional behaviour, or as a behaviour that is the product of the social and cultural environment within which it is carried out. The model developed for this research6 is shown in Figure 3. It includes social norms as one of the variables. The basic structure of this model has been adopted and modified from a model used by Proserpio et al. (2004). Their model was based on a multi-causality approach to determine software piracy factors in 76 countries (including Pakistan and Canada) and is therefore appropriate for this research.
RESEARCH HYPOTHESES

Economic Development

According to Marron and Steel (2000), intellectual property rights encourage novelty and economic growth. In a study of the relationship between national economy and piracy levels, they concluded that a strong inverse correlation existed between piracy rates and the income level of a country. Bezmen & Depken (2006) also found a negative relationship between software piracy and income, tax burdens, and economic freedom. It is therefore hypothesized that:

H1: Income will have an inverse influence on the piracy intention of subjects.

H2: The high price of original software will have a direct influence on the piracy intention of subjects.
Culture

Culture has been defined as "the collective programming of the mind which distinguishes the members of one group or category of people from another" (Hofstede, 1997, p. 260); yet culture is a very broad concept, and has little power if it is used as a residual category (Child, 1981). Marron & Steel (2000) and Shin et al. (2004) concluded that collectivistic culture is to blame for high software piracy rates. Therefore, social norms and culture will be taken into consideration, and the following is hypothesized:
Figure 3. Software piracy behavior model (Source: Butt, 2006)
H3: Social/Cultural norms will have a direct influence on the piracy behaviour of subjects.
Other Piracy Facilitating Factors

Besides the effects of social and cultural norms and a poor economy on software piracy, the availability of pirated products is a very important factor that could be significantly related to higher piracy rates. For instance, Rainbow Market in Karachi and Hafeez Centre in Lahore are two of the biggest and best-known hardware and software malls in Pakistan. Each of these malls comprises hundreds of retailers selling pirated software. The Canadian software market's situation is totally different: there are few, if any, retailers openly selling pirated software. However, according to a CAAST news release (CAAST, 2005), forty-seven percent of surveyed Canadian students admitted to pirating software. Moores & Dhaliwal (2004) suggested that the high availability of illegal software, the absence of legal punishments and the high cost of legal software reflect the high piracy rates of the regions being studied. Simpson et al. (1994) also included legal factors in the piracy model they proposed, and found that these factors have an effect on the ethical decision process, which leads to the actual piracy behaviour. The following is therefore hypothesized with regard to availability and legal factors:

H4: There will be a positive influence of the availability of pirated software on the intent of subjects.

H5: Legal enforcement will have a negative influence on the piracy intent of subjects.

H6: Legal enforcement will have a positive influence on social norms.

Previous studies have also suggested that gender is an important demographic factor that affects
one’s intention to pirate software. In a survey of moral intentions towards software piracy, Kini et al. (2003) found that males were more inclined towards pirating software, and similar results were proposed by Higgins and Makin (2004). Therefore, the following is considered as the null hypothesis for gender. H7: There will be no difference between males and females regarding their software piracy behaviour. “People’s perceptions of a particular behaviour are shaped by the existing value system of the society” (Lau, 2003, p. 234). The decision-making model proposed by Jones (1991) suggested that an individual’s attitude toward ethical issues will affect the individual’s ethical judgement and then their ethical behaviour. It is therefore hypothesized that: H8: There will be a direct relationship between attitudes towards piracy and the piracy behaviour of subjects. The discussion that has been presented so far in this research elaborates on the fact that current literature regards piracy behaviour as intentional. To conform to the current literature, the following final hypothesis is made. H9: Intention to pirate software will have a positive influence on the actual piracy behaviour of subjects.
RESEARCH METHODOLOGY

Site Selection

As is the case with many research projects, this study had limited resources in terms of time and money. The sites for the study were therefore chosen with these constraints in mind.
For the Canadian part of the study, the authors' home university was chosen. For the Pakistani part, the city of Lahore was selected, since it has one of the biggest pirated software markets in Pakistan as well as several IT institutions; five universities in Lahore were included.
Sampling Characteristics

Students were chosen as the target population in order to conform to the existing research, most of which is based on samples of college and university students. Students at both undergraduate and graduate levels from information technology and computer science departments were included in this study.
Pilot & Actual Studies

A self-administered survey instrument/questionnaire was developed. The questionnaire consisted of closed-ended questions that collected demographic details about the research participants, plus 31 items, each rated on a 5-point Likert scale, assessing respondents' attitudes towards the ethical, economic and demographic implications of software piracy. Negatively worded items were included to detect response patterns. Various items in the questionnaire were adopted from the current literature, including Moores & Dhaliwal (2004), Siegfried (2004) and Al-Jabri et al. (1997). Based on feedback from the pilot study (conducted in Canada), minor changes were made to the format and content of the questionnaire, and it was also modified to make it suitable for Pakistan. In Pakistan, hard copies of the questionnaire were distributed simultaneously in four classrooms and one computer laboratory at each of the five universities. The questionnaire at the university in Canada was administered through the Internet, using a secure program written in PHP/CGI to capture responses.
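To illustrate the scoring step implied by the mention of negatively worded items, such items are conventionally reverse-coded before composite scores are formed. A minimal Python sketch with hypothetical item names (the actual instrument's items are not reproduced here):

```python
import pandas as pd

# Hypothetical responses to three 5-point Likert items;
# 'q2_neg' stands for a negatively worded item.
responses = pd.DataFrame({
    "q1": [4, 5, 3],
    "q2_neg": [2, 1, 3],
    "q3": [5, 4, 4],
})

# Reverse-code the negatively worded item on a 5-point scale:
# 1 <-> 5, 2 <-> 4, 3 unchanged.
responses["q2"] = 6 - responses["q2_neg"]

# Composite score for this (hypothetical) item group.
responses["composite"] = responses[["q1", "q2", "q3"]].mean(axis=1)
print(responses)
```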
DATA ANALYSIS

Descriptive Statistics

The online survey conducted at the university in Canada returned 208 responses, of which 196 were usable. Most of the Canadian respondents were under the age of 26 (n=172, 88%).7 There were 122 (62%) male and 74 (38%) female respondents. The survey at the five Pakistani universities returned 365 responses, of which 339 were usable. As in the Canadian data, most of the respondents were under the age of 26 (n=325, 96%). There were 221 (65%) male and 118 (35%) female respondents.
Hypotheses Testing

To test the hypotheses, the questionnaire items in both the Pakistani and Canadian questionnaires were grouped together to make the statistical tests feasible. The groupings (Table 1) were made based on (1) face validity, i.e. interpretability; (2) factor loadings; and (3) reliability (Cronbach's alpha). Structural Equation Modelling (SEM) with LISREL was used to test the relationships between these groups/variables. Based on composite scores, the 'Norm-attd' group in the Pakistani data was further split into 'socnorm' (social norms) and 'attit' (attitude). In the SEM for the Pakistani group, a latent variable called 'sociomor' was created, composed of 'socnorm' and 'attit'.
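Endnote 8 notes that only item groups with a Cronbach's alpha of at least .70 were retained. As a point of reference, alpha can be computed directly from raw item scores; the following is a minimal sketch with made-up data, not the study's actual items:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Made-up 5-point Likert responses (rows = respondents, cols = items).
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # a group is retained if alpha >= .70
```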
Fitting Data on Structural Models

"In addition to fit statistics, structural equation modelling (SEM) produces estimates for partial regression coefficients (referred to as path coefficients), standardized regression coefficients and estimates of squared multiple correlations" (Wagner & Sanders, 2001, p. 165). LISREL was used to fit the data to a structural equation model.
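For readers unfamiliar with LISREL's conventions, the structural (latent-variable) part of such a model is usually written as follows; this is the generic textbook form, not the specific equations fitted in this study:

```latex
% Generic LISREL structural equation (illustrative)
\begin{equation}
  \eta = B\eta + \Gamma\xi + \zeta
\end{equation}
% eta   : endogenous latent variables (e.g., intent, piracy behaviour)
% xi    : exogenous latent variables (e.g., price, legal, availability)
% B     : path coefficients among endogenous variables
% Gamma : path coefficients from exogenous to endogenous variables
% zeta  : structural disturbances; the squared multiple correlation (R^2)
%         for each eta reports the variance the model explains
```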
Table 1. Composite variables in the study8

Pakistani Study – Variable (Group) Names | Canadian Study – Variable (Group) Names
Availability | Availability
Legal | Legal
Intent | Intent
Norm-attd (i.e. norms and socially or culturally mediated attitudes grouped together) | Ethical beliefs and Attitudes
Price | Price
Piracy Behaviour | Piracy Behaviour
- | Norms
The resulting path coefficients for Pakistan and Canada are shown in Figure 4.9 The structural models developed in this research represent the direction and strength of the independent variables' relationships with the dependent variables. In the Pakistani model, the price factor (price → piracy behaviour, 0.03) does not seem to have any effect at all on piracy behaviour, therefore rejecting hypothesis 2. Price also has a fairly strong negative relationship (-0.47) with the intent variable, thus contradicting hypothesis 1. In the Canadian model, price has very weak relationships with intent (0.01) and piracy behaviour (0.02), therefore rejecting hypotheses 1 and 2 for Canadian students. Legal issues have a fairly strong influence on intent (0.54) and a weaker influence (although in the right direction) on the sociomor variable (0.40) in the Pakistani model; hypotheses 5 and 6 are therefore accepted and rejected, respectively. The Canadian legal construct has a weak relationship with intent (0.07) and a fairly strong relationship with ethical beliefs and attitudes (0.46), thus rejecting hypothesis 5 but accepting hypothesis 6. The gender → piracy behaviour paths in both models have very weak coefficients, indicating no difference in piracy behaviour between males and females, thus accepting hypothesis 7. The availability of pirated software has a very small effect on the intent of Pakistani (0.13) and Canadian (0.23) students, rejecting hypothesis 4. Intentions in the Pakistani model have a strong negative relationship (-0.76) with piracy behaviour; hypothesis 9 is therefore rejected. The sociomor construct, despite having a positive relationship (0.39) with piracy behaviour, is not strong enough to support hypotheses 3 and 8 in the Pakistani model. As far as the Canadian model is concerned, the norms variable has a strong influence (0.52) on ethics and attitudes toward piracy but a negligible effect on piracy behaviour, with a path coefficient of 0.05. Intent, on the other hand, shows a relationship in the correct direction with piracy behaviour (though not a very strong one), with a path coefficient of 0.34.
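The structural models were estimated in LISREL, which is commercial software. For readers who wish to experiment with a comparable specification, the open-source Python package semopy accepts lavaan-style model syntax; the sketch below re-expresses a simplified version of the Canadian path structure described above. The variable names, the file composites.csv, and the exact set of paths are illustrative assumptions, not the study's actual specification.

```python
import pandas as pd
from semopy import Model  # open-source SEM library (pip install semopy)

# Simplified paths paraphrasing the Canadian model described above:
# legal, price and availability feed intent; norms feed ethical
# beliefs/attitudes; behaviour is regressed on the remaining variables.
MODEL_DESC = """
intent ~ legal + price + availability
attitudes ~ norms + legal
behaviour ~ intent + attitudes + norms + gender + price
"""

# Hypothetical data frame of composite scores, one column per
# variable named above (gender coded 0/1).
data = pd.read_csv("composites.csv")

model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # path estimates analogous to Figure 4
```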
DISCUSSION

This research has focused on the cultural dimension of software piracy and its effect on the behaviour of university students. Two structural models that incorporate social and cultural norms, economic conditions, ethical attitudes towards piracy and the availability of pirated software have been developed and tested. Since Canada and Pakistan are culturally and economically different, they were chosen to provide a contrasting view of the software piracy phenomenon. The IT industry in Pakistan is progressing, though not at a rapid pace. Still in its infancy, it effectively began with the introduction of
Figure 4. Pakistani and Canadian SEMs
the Internet in 1996. The analysis of economic factors (the high price of legal software and low income) in this study provides a rationale for the Pakistani government's reluctance to enforce intellectual property rights. Despite being aware of rampant software piracy, governments of countries such as Pakistan are also mindful of the economic conditions of the mass population. People (students, in the context of this research) in developing countries need cheap access to resources (software) in order to keep up with the rapid pace of technological advancement in the developed world. It can therefore be assumed that governments of developing countries recognize this and remain reluctant to enact and enforce strict IP protection laws. The empirical evaluation provides support that social norms and positive attitudes towards piracy are correlated with the actual piracy behaviour of Pakistani students.10 This finding is similar
to that of Proserpio et al. (2004), Seale et al. (1998) and Al-Jabri & Abdul-Gader (1997). Intentions proved to be stronger predictors of the piracy behaviour of Canadian students, a finding that conforms to the literature, which regards piracy behaviour as intentional. The results indicate that software piracy behaviour in Pakistan cannot be regarded as purely intentional. It should rather be conceptualized as a consequential behaviour resulting from various elements, with customs or social norms being the strongest of them all. This also indicates that in two culturally different countries, the conditions responsible for creating a piracy-favouring environment are essentially different. Due to a lack of IP-related awareness (unlike in the developed world), the culture of copyright infringement is so deeply rooted in Pakistani society that students buy or share pirated software without even realizing that their action might
be considered illegal and/or unethical. It is an established norm: a custom; the way the act is normally carried out by a member of the society. Gopal and Sanders (1998) correctly identified the need for a behavioural model of software piracy activity that would help software publishers gain insight into the behavioural dynamics of software pirates. However, as they also found that their economic model was appropriate for the U.S. but not for India, caution should be practiced in all future research that attempts to study piracy behaviour. This study was an exploratory, cross-cultural investigation of piracy in two very different cultures. The applicability of Western constructs such as 'attitudes' and 'intentions' to collectivist societies must always be critically examined. Future research should look at the questions left unanswered by this study. Subjects from more countries should be included in future cross-country studies of software piracy behaviour so that the results could be generalized not only to the wider student population but also to the population at large. The authors of this chapter feel it necessary to reflect on the fact that the discovery or invention of a new technology cannot be blamed for its eventual uses. Albert Einstein reportedly said, "My name is linked to the atomic bomb in two different ways. Almost fifty years ago I discovered the equivalence of mass and energy, a relationship which served as the guiding principle in the work leading to the release of atomic energy. Secondly, I signed a letter to President Roosevelt, stressing the need for work in the field of the atomic bomb. I felt this was necessary because of the dreadful danger that the Nazi regime might be the first to come into possession of the atomic bomb" (Nathan & Norden, 1960, p. 569). Einstein, however, considered signing the letter a "great mistake" (Tobar-Arbulu, 1984). Technology itself can therefore not be blamed for its eventual uses.
FUTURE TRENDS

Findeli (1994) correctly argued that technoethical issues have continually arisen from unparalleled developments in technology. One could argue that such issues can act as catalysts that invoke other socio-political and ethical dilemmas. It is not only in software or computer technology that further development will raise ethical questions; other walks of life such as medicine, engineering and journalism can pose equally grave issues: human cloning, embryonic stem cell research, and the inappropriate use of media to shape public opinion during conflicts (e.g. in a war) are only a handful of examples. It is only a matter of time before such developments give rise to new, unforeseen technoethical issues.
CONCLUSION

This study has found that there is more than one way of understanding piracy behaviour across different countries. Although a poor national economy plays a substantial role in software piracy rates, culture is also part of the equation. This study has also suggested that software piracy behaviour in a developing country such as Pakistan should not be conceptualized in terms of intentions alone. Caution should be practiced in all future research that attempts to study piracy behaviour, as the applicability of Western constructs such as 'attitudes' and 'intentions' to collectivist societies must always be critically examined. Subjects from more countries should be included in future cross-country studies of software piracy behaviour so that the results can be generalized not only to the wider student population but also to the population at large. There is also a lack of longitudinal research, as well as of research into other forms of electronic piracy, such as the availability of pirated e-books on the Internet. Future research could therefore address both of these domains.
REFERENCES

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211.

Al-Jabri, I. & Abdul-Gader, A. (1997). Software copyright infringements: An exploratory study of the effects of individual and peer beliefs. Omega, 25, 335-344.

Bezmen, T. L. & Depken, C. A. (2006). Influences on software piracy: Evidence from the various United States. Economics Letters, 90, 356-361.

Blanke, J. M. (2004). Copyright law in the digital age. In Brennan, L. L. & Johnson, V. E. (Eds.), Social, ethical and policy implications of information technology (pp. 223-233). Hershey, PA: Idea Group Inc.

Business Software Alliance (BSA). (2005). Second annual BSA and IDC global software piracy study. Retrieved January 7, 2006 from http://www.bsa.org/globalstudy/upload/2005-Global-StudyEnglish.pdf

Business Software Alliance (BSA). (2007). Fourth annual BSA and IDC global software piracy study. Retrieved July 27, 2007 from http://www.ifap.ru/library/book184.pdf

Butt, A. (2006). A cross-country comparison of software piracy determinants among university students: Demographics, ethical attitudes & socio-economic factors. In Emerging trends and challenges in information technology management: Proceedings of the 2006 Information Resources Management Association International Conference. Hershey, PA: Idea Group Publishing.

Canadian Alliance Against Software Theft (CAAST). (2005). Software piracy runs rampant on Canadian university campuses. Retrieved Nov 1, 2005 from http://www.caast.org/release/default.asp?aID=139
Cheng, H. K., Sims, R. R. & Teegen, H. (1997). To purchase or to pirate software: An empirical study. Journal of Management Information Systems, 13(4), 49-60.

Child, J. (1981). Culture, contingency and capitalism in the cross-national study of organization. Research in Organization Behavior, 3, 303-356.

Christensen, A. L. & Eining, M. M. (1991). Factors influencing software piracy: Implications for accountants. Journal of Information Systems, 5(Spring), 67-80.

CNN.com. (2001). Microsoft in China: Clash of titans. Retrieved March 19, 2006 from http://archives.cnn.com/2000/TECH/computing/02/23/microsoft.china.idg/

Department of Justice Canada (2006). Copyright Act, R.S., 1985, c. C-42. Retrieved March 4, 2006 from http://laws.justice.gc.ca/en/C-42/index.html

Findeli, A. (1994). Ethics, aesthetics, and design. Design Issues, 10(2), 49-68.

Fishbein, M. & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.

Forester, T. & Morrison, P. (1990). Computer ethics: Cautionary tales and ethical dilemmas in computing. Cambridge, MA: The MIT Press.

Freeman, L. A. & Peace, A. G. (2005). Revisiting Mason: The last 18 years and onward. In Freeman, L. A. & Peace, A. G. (Eds.), Information ethics: Privacy and intellectual property (pp. 1-18). Hershey, PA: Idea Group Inc.

Gantz, J. & Rochester, J. B. (2004). Pirates of the digital millennium. Upper Saddle River, NJ: Prentice Hall.

Gan, L. L. & Koh, H. C. (2006). An empirical study of software piracy among tertiary institutions in Singapore. Information & Management, 43, 640-649.
Givon, M., Mahajan, V. & Muller, E. (1995). Software piracy: Estimation of lost sales and the impact on software diffusion. Journal of Marketing, 59(1), 29-37.
International Intellectual Property Alliance (IIPA). (2006). 2006 special 301: Canada. Retrieved March 21, 2006 from http://www.iipa.com/rbc/2006/2006SPEC301CANADA.pdf
Gopal, R. D. & Sanders, G. L. (1998). International software piracy: Analysis of key issues and impacts. Information Systems Research, 9(4), 380-397.
Jones, T. M. (1991). Ethical decision making for individuals in organizations: An issue contingent model. Academy of Management Review, 16(February), 366-395.
Gopal, R. D., Sanders, G. L. & Bhattacharjee, S. (2004). A behavioral model of digital music piracy. Journal of Organizational Computing and Electronic Commerce, 14(2), 89-105.
Karon, P. (1986). Software industry groups set sail against pirates in academe. PC Week, 9 December, 62.
Harrington, S. J. (2002). Software piracy: Are Robin Hood and responsibility denial at work? In Salehnia, A. (Ed.), Ethical issues of information systems (pp. 177-188). Hershey, PA: IRM Press.

Hettinger, E. C. (1989). Justifying intellectual property. Philosophy and Public Affairs, 18(1), 31-52.

Higgins, G. E. & Makin, D. A. (2004). Does social learning theory condition the effects of low self-control on college students' software piracy? Journal of Economic Crime Management, 2(2), 1-22.

Hinduja, S. (2001). Correlates of internet software piracy. Journal of Contemporary Criminal Justice, 17(4), 369-382.

Hofstede, G. (1997). Cultures and organizations: Software of the mind. New York: McGraw Hill.

Husted, B. W. (2000). The impact of national culture on software piracy. Journal of Business Ethics, 26, 197-211.

International Intellectual Property Alliance (IIPA). (2004). 2004 special 301 report: Pakistan. Retrieved November 20, 2005 from http://www.iipa.com/rbc/2004/2004SPEC301PAKISTAN.pdf
Ki, E., Chang, B. & Khang, K. (2006). Exploring influential factors on music piracy across countries. Journal of Communication, 56, 406-426.

Kini, R. B., Ramakrishna, H. V. & Vijayaraman, B. S. (2003). An exploratory study of moral intensity regarding software piracy of students in Thailand. Behavior & Information Technology, 22(1), 63-70.

Kwong, T. C. H. & Lee, M. K. O. (2002). Behavioral intention model for the exchange mode internet music piracy. Proceedings of the 35th Annual Hawaii International Conference on System Sciences - Volume 7, 191. Washington, DC, USA.

Ladd, J. (1997). Ethics and the computer world: A new challenge for philosophers. ACM Computers & Society, 27(3), 8-13.

Lau, E. K. W. (2003). An empirical study of software piracy. Business Ethics, 12(3), 233-245.

LaRue, J. (1985). Steal this program. Library Software Review, 4(5), 298-301.

Leurkittikul, S. (1994). An empirical comparative analysis of software piracy: The United States and Thailand. Unpublished doctoral dissertation, Mississippi State University.

Limayem, M., Khalifa, M. & Chin, W. W. (1999). Factors motivating software piracy: A longitudinal study. International Conference on Information
Systems (pp. 124-131). Association for Information Systems.

Locklear, F. Z. (2004). IDC says piracy loss figure is misleading. Retrieved March 21, 2006 from http://arstechnica.com/news.ars/post/200407194008.html

Mason, R. O. (1986). Four ethical issues of the information age. MIS Quarterly, 10(1), 5-12.

Marron, D. B. & Steel, D. G. (2000). Which countries protect intellectual property? The case of software piracy. Economic Inquiry, 38, 159-174.

Mishra, A., Akman, I. & Yazici, A. (2006). Software piracy among IT professionals in organizations. International Journal of Information Management, 26(5), 401-413.

Moores, T. T. (2003). The effect of national culture and economic wealth on global software piracy rates. Communications of the ACM, 46(9), 207-215.

Moores, T. T. & Dhaliwal, J. (2004). A reversed context analysis of software piracy issues in Singapore. Information & Management, 41, 1037-1042.

Nathan, O. & Norden, H. (1960). Einstein on peace. New York: Simon and Schuster.

Nolo. (2006). Intellectual property. Retrieved March 19, 2006 from http://www.nolo.com/definition.cfm/Term/519BC07C-FA49-4711-924F-D1B020CABA92/alpha/I/

Perrolle, J. (1987). Computers and social change: Information, property, and power. Belmont, CA: Wadsworth Publishing.

Proserpio, L., Salvemini, S. & Ghiringhelli, V. (2004). Entertainment pirates: Understanding piracy determinants in the movie, music and software industries. The 8th International Conference on Arts & Cultural Management. Retrieved January 7, 2006 from http://www.hec.ca/aimac2005/PDF_text/ProserpioL_SalveminiS_GhiringhelliV.pdf

Rahim, M. M., Seyal, A. H. & Abd. Rahman, M. N. (2001). Factors affecting softlifting intention of computing students: An empirical study. Journal of Educational Computing Research, 24(4), 385-405.

Seale, D. A., Polakowski, M. & Schneider, S. (1998). It's not really theft: Personal and workplace ethics that enable software piracy. Behavior & Information Technology, 17, 27-40.

SearchCIO.com. (2006). Digital Millennium Copyright Act. Retrieved March 19, 2006 from http://searchcio.techtarget.com/sDefinition/0,290660,sid19_gci904632,00.html

Seyoum, B. (1996). The impact of intellectual property rights on foreign direct investment. Columbia Journal of World Business, 31(1), 50. Elsevier Science Publishing Company, Inc.

Siegfried, R. M. (2004). Student attitudes on software piracy and related issues of computer ethics. Ethics and Information Technology, 6, 215-222.

Simpson, P. M., Banerjee, D. & Simpson Jr., C. L. (1994). Softlifting: A model of motivating factors. Journal of Business Ethics, 13(6), 431-438.

Sims, R. R., Cheng, H. K. & Teegen, H. (1996). Toward a profile of student software piraters. Journal of Business Ethics, 15, 839-849.

Siponen, M. (2004). A justification for software rights. ACM Computers and Society, 34(3), 3.

Slive, J. & Bernhardt, D. (1998). Pirated for profit. Canadian Journal of Economics, 31(4), 886-899.

Stallman, R. (1995). Why software should be free. In Johnson, D. & Nissenbaum, H. (Eds.), Computers, ethics & social values (pp. 190-199). Englewood Cliffs, NJ: Prentice Hall.
Swinyard, W. R., Rinne, H. & Kau, A. K. (1990). The morality of software piracy: A cross-cultural analysis. Journal of Business Ethics, 9(8), 655-664.

Tang, J. & Fam, C. (2005). The effect of interpersonal influence on softlifting intention and behavior. Journal of Business Ethics, 56, 149-161.

The Copyright Ordinance, 1962 (of Pakistan). Retrieved November 11, 2005 from http://www.pakistanlaw.com/Copyright_Ordinance_1962.php

The Economist (2005). Dodgy software piracy data. Retrieved March 21, 2006 from http://www.economist.com/research/articlesBySubject/displayStory.cfm?story_ID=3993427&subjectid=1198563

The Linux Information Project (LINFO). (2006). The "software piracy" controversy. Retrieved February 27, 2006 from http://www.bellevuelinux.org/software_piracy.html

Tobar-Arbulu, J. F. (1984). Plumbers, technologists, and scientists. Research in Philosophy and Technology, 7, 5-17.

Tobar-Arbulu, J. F. (n.d.). Technoethics. Retrieved September 23, 2007 from http://www.euskomedia.org/PDFAnlt/riev/3110811103.pdf

U. S. Department of Justice, National Institute of Justice (1989a). In J. T. McEwen (Ed.), Dedicated computer crime units. Washington, DC: U.S. Government Printing Office.

U. S. Department of Justice, National Institute of Justice (1989b). In D. B. Parker (Ed.), Computer crime: Criminal justice resource manual (2nd ed.). Washington, DC: U.S. Government Printing Office.

Wang, F., Zhang, H., Zang, H. & Ouyang, M. (2005). Purchasing pirated software: An initial examination of Chinese consumers. Journal of Consumer Marketing, 22(6), 340-351.
Weckert, J. (1997). Intellectual property rights and computer software. Journal of Business Ethics, 6(2), 101-109.

Wikipedia - The free encyclopaedia. (2006). Intellectual property. Retrieved March 16, 2006 from http://en.wikipedia.org/wiki/Intellectual_property
KEY TERMS

Culture: "The collective programming of the mind which distinguishes the members of one group or category of people from another" (Source: Hofstede, 1997).

Individualism and Collectivism: An individualistic society is one in which the ties between individuals are loose: everyone is expected to look after him/herself and his/her immediate family. On the collectivist side, we find societies in which people from birth onwards are integrated into strong, cohesive in-groups, often extended families (with uncles, aunts and grandparents) which continue protecting them in exchange for unquestioning loyalty. The word 'collectivism' in this sense has no political meaning: it refers to the group, not to the state. Again, the issue addressed by this dimension is an extremely fundamental one, regarding all societies in the world (Source: www.Geert-Hofstede.com).

Intellectual Property: Results of intellectual activity in the industrial, scientific, literary or artistic fields (Source: Forester & Morrison, 1990).

P2P: Peer-to-peer (P2P) file sharing is a system of sharing files directly between network users, without the assistance or the interference of a central server (Source: www.tech-faq.com).

Software Piracy: Unauthorized duplication of computer software for personal and/or commercial purposes. The types of software piracy addressed in this chapter are listed below (Source: www.siia.net):
a. Softlifting: Occurs when a person purchases a single licensed copy of a software program and loads it on several machines, in violation of the terms of the license agreement. Typical examples of softlifting include "sharing" software with friends and co-workers and installing software on home/laptop computers when the license does not allow it. In the corporate environment, softlifting is the most prevalent type of software piracy, and perhaps the easiest to catch.
b. Hard-Disk Loading: Occurs when an individual or company sells computers preloaded with illegal copies of software. Often this is done by the vendor as an incentive to buy certain hardware.
c. CD-R Piracy: The illegal copying of software using CD-R recording technology. This form of piracy occurs when a person obtains a copy of a software program, makes one or more copies, and redistributes them to friends or for resale. Although there is some overlap between CD-R piracy and counterfeiting, with CD-R piracy there may be no attempt to pass off the illegal copy as a legitimate one; it may have hand-written labels and no documentation at all.
d. Internet Piracy: The uploading of commercial software (i.e., software that is not freeware or public domain) onto the Internet for anyone to copy, or the copying of commercial software from any of these services. Internet piracy also includes making available or offering for sale pirated software over the Internet, for example through an auction site, IM, IRC or a warez site. Incidences of Internet piracy have risen exponentially over the last few years.
ENDNOTES

1. For a detailed discussion on technoethics, see Tobar-Arbulu (n.d.) and Findeli (1994).
2. These types of software piracy are defined at the end of this chapter.
3. There was only one Canadian empirical (scholarly) study found in the literature (see Limayem et al., 1999). That study, however, relied on only 98 research participants and therefore cannot be considered very extensive.
4. For more discussion on the controversial aspects of the DMCA, see http://www.eff.org/IP/DMCA/20020503_dmca_consequences.pdf and http://chronicle.com/free/v48/i47/47b00701.htm
5. This chapter also cites BSA figures but does not rely on any one of them as a major argument.
6. It is important to emphasize here that the model shown in Figure 1 presents a very basic structure which represents the theoretical base of this research.
7. All percentages are rounded off.
8. Only factors with Cronbach's alpha >= .70 were retained.
9. Positive paths between two variables indicate a positive relationship between them, and vice versa. The closer the coefficient is to 1, the stronger the relationship. The authors feel the need to emphasize that the structural models were modified until an acceptable fit was achieved. The models were modified because (1) some of the independent variables had significantly non-normal distributions and (2) the relatively small sample size in the Canadian case made parameter estimation in more complex models more difficult. This is due to the under-identification problems that often arise when the number of degrees of freedom (a function of sample size and the number of free parameters) is small. In SEM, structural coefficients between observed variables and latent variables, and between latent variables and other variables, are parameters to be estimated. For these reasons, as well as for parsimony, a less complex SEM model was adopted.
10. The absence of very strong relationships in the SEMs can be attributed to the small sample sizes in both studies. Nonetheless, the direction of the relationships supported the underlying hypothesis: intentional vs. unintentional piracy behaviour.
Chapter XXV
Nanoethics:
The Role of News Media in Shaping Debate

A. Anderson, University of Plymouth, UK
S. Allan, Bournemouth University, UK
A. Petersen, Monash University, Australia
C. Wilkinson, University of the West of England, Bristol, UK
ABSTRACT

Recent evidence on genetically modified crops, cloning and stem cell research suggests that the news media play a significant role in shaping wider agendas for public debate about 'the rights and wrongs' of newly emergent technologies. This may prove to be especially pertinent to nanotechnologies, which currently register low public visibility and yet are predicted by many scientists, policymakers and other stakeholders to have far-reaching implications in the years ahead. This chapter, drawing upon data from the authors' British study on nanotechnologies and news production, examines how the press may influence the terms of public debate about such ethical issues as the dangers posed by particular applications, who has access to the technologies, and who is likely to benefit or be disadvantaged by developments. Efforts to enhance public deliberation about the ethical implications of nanotechnologies, it is argued, must attend to the complex ways in which journalists mediate between contending claims about their benefits and risks.
INTRODUCTION

The field of nanotechnologies poses a significant challenge for ethics and governance, especially in relation to how information about technological innovation is communicated during early phases of development. Levels of public knowledge of the substantive issues raised by nanotechnologies, including possible benefits and risks, will likely depend on how this information is portrayed in the media, how widely it is disseminated, and under what circumstances. At present, applications of nanotechnologies have achieved relatively low public visibility, with their attendant media representation subject to fairly routine forms of negotiation amongst interested stakeholders. In the years ahead, however, it is anticipated that questions regarding how pertinent issues are configured in and by the media are likely to be increasingly important for the formation of public knowledge about (and responses to) these technologies. This chapter discusses the role of the newspaper press in shaping portrayals of nanotechnologies, recognising as we do that this process influences competing agendas for public debate about its ethical implications during a period of growing scientific and policy interest in this field. It focuses particular attention on the role of stakeholders in this process, such as with regard to their efforts to affect the ways in which journalists frame the relevant issues. Informing this chapter's discussion are findings drawn from our case study into the production and portrayal of nanotechnology news by British newspapers.a The study was conducted in the wake of widespread media coverage – and intense public controversy – about bovine spongiform encephalopathy (BSE) and genetically modified (GM) food and crops (see also Allan, 2002), accompanied at the time by growing concerns among science groups and policymakers about a public backlash against virtually any emergent technology associated with scientific uncertainty.
We argue that the ways in which the significance of nanotechnology risks is recurrently framed during the early stages of the technologies' growing public visibility are likely to be crucial to how citizens understand and subsequently respond to them, whether they believe the benefits outweigh the risks, and whether they trust experts and information concerning ethical issues. The dynamics of news reporting figure only rarely in discussions about the ethical implications of new technologies; yet given the significance of the media in influencing the terms of public debate about technology issues, we contend that they deserve a more central place in such deliberations in the future. The discussion begins with a brief account of what we mean by 'nanotechnologies', their convergence with other 'new' technologies, and some of their current applications. Next, it outlines certain key concerns highlighted in the Royal Society and Royal Academy of Engineering (2004) report Nanoscience and Nanotechnologies: Opportunities and Uncertainties. On this basis, the chapter proceeds to situate nanotechnologies in relation to the literature on representations of science and technology so as to enable comparisons to be made with earlier biotechnology controversies. Questions raised include: what, if anything, is novel about the framing of nanotechnologies? In particular, what part do the media play in shaping public discourse about the ethical implications of their applications? It is against this backdrop that the chapter presents and discusses relevant findings drawn from its case study. In so doing, it considers several pressing implications for the study of nanoethics, as well as future avenues of research.
PUBLIC AWARENESS AND TRUST

Nanotechnology, the design and manipulation of matter at the level of atoms and molecules, is, according to Friends of the Earth, "…set to be one of the defining issues of our time" (Friends of the
Earth website, 2006). It is neither new nor a single technology; rather, it involves the fusing together of elements of chemistry, physics, biology and materials science (Wood et al., 2007); hence the commonly adopted plural 'nanotechnologies'. Cutting across a number of different fields of science, the very definition of 'nanotechnology' is itself the subject of considerable debate, since older technologies are increasingly being repackaged with a nano-prefix in the fierce competition to attract funding. While the field is still in its infancy, there are currently in excess of 1,400 nanotech-related companies and over 400 nano-based consumer products available on the market, including sun creams, cosmetics, razors, condoms, toothbrushes and toothpastes, wound dressings, food and beverages, food containers, air purifiers, fridges, computer hardware, stain-resistant clothing, self-cleaning glass and window sealants (Project on Emerging Nanotechnologies, 2007). At the same time, however, there are serious concerns about a lack of adequate regulatory control (Michelson & Rejeski, 2006). Moreover, the convergence of nanotechnologies with related technologies, such as biotechnology, artificial intelligence, and information and communication technologies, is deemed likely to generate new configurations of risk.
Through critical reflection on the scientific and engineering enterprise itself, the values and core ideals of nanotechnology need to be made explicit, so that a more sustained form of ethical reflection can be developed which might accompany the research at the most preliminary stages. This…is probably the single most important development that needs to take place…Without this, nanotech will continually struggle with the kind of polarization of scientific and ethical analysis that characterize other areas of science, and, especially in the more controversial areas of nano, this is likely to provoke a public debate like that associated with GMOs and nuclear technology, where there is poor integration of the realities of scientific practice and the public perception of what is at issue. (cited in Jotterand 2006: 663-664)

In the case of nanotechnologies, there are particular ethical concerns over the possible health hazards of nanoparticles (these can enter the body via the lungs, intestinal tract and skin, although the skin acts as a more complex barrier (Hoet et al., 2004)). To date, relatively little research on the health and safety implications of nanotechnologies has been undertaken. The potential misuse of nanotechnologies in medical research raises many of the same ethical issues arising from stem cell research. While acknowledging the risks, advocates argue that nanotechnologies could bring about a range of medical benefits, including better targeted drug delivery and super-hard, light implants. Communication and information systems could be improved by faster micro-processors alerting people in developing countries about new outbreaks of disease (Hunt, 2006). From an environmental point of view, nanotechnologies are seen to carry both risks and benefits. There is considerable uncertainty about the effects of nanoparticles on the environment, but some emerging evidence suggests that they may cause biological harm (Colvin, 2003; Preston, 2006). On the other hand, proponents argue that nanotechnologies could have a range
of environmental benefits through, for example, enhancing cleaner energy technologies, such as wind and solar power, to increase environmental efficiency. Ethical concerns also arise over privacy issues (e.g., the use of the technologies in surveillance, or more advanced medical records databases made possible by faster microprocessors) and intellectual property issues (e.g., conflicts over ownership rights between industry and academic research departments). The development of an information-tracking nano-device in the human body could raise fresh concerns over the potential misuse of people's personal information. Military applications, in particular, are likely to contribute to concerns, especially regarding the potential deliberate misuse of nanotechnologies for purposes of military aggression. In this regard, questions arise over where the line is drawn between offensive and defensive military objectives. Moreover, the potential for a 'nano-divide' is considered to raise further issues, given that developing countries are competing on unequal terms and applications are likely to exacerbate the gap between the rich and the poor. Responding to these debates, some bioethicists, together with environmental pressure groups, have called for a moratorium on all research until the apparent gap between the science and ethics is closed (Mnyusiwalla et al., 2003; ETC, 2003). Meanwhile, as nanotechnologies are steadily becoming more commonly used in everyday products (including sun creams, cosmetics, sports items, paints, cars, clothing and computer hard drives), the portrayal of the risks and benefits is emerging as a key issue for public debate. A number of recent science reports grant a privileged role to the news media where the public understanding of nanotechnologies is concerned (HM Government, 2005; Nanoforum, 2004; RS/RAE, 2004). However, these reports tend to ignore the framing processes that influence public debate; indeed, in some cases the media are simply viewed as public relations conduits for science.
This neglect of the media occurs in a context where, as noted above, there is very low public awareness of nanotechnologies to begin with. In the case of the research conducted as part of the UK's Royal Society and Royal Academy of Engineering Enquiry, it was found that only 29% of respondents had heard of the term and only 19% could provide a definition (RS/RAE, 2004: 64). The benefits associated with nanotechnologies were judged by the authors of the Royal Society and Royal Academy of Engineering report to be considerable, including advancements in disease diagnosis and drug delivery, engineering and environmental sustainability. Possible risks were viewed as 'non-unique' with regard to health and safety, though 'uncertain' in relation to the toxicity of (and exposure to) nanoparticles and nanotubes. The report stated that some applications, especially in the medium (5-15 years) and the longer term (over 20 years), are likely to generate significant social and ethical concerns (2004: 56). It was acknowledged that some of these will involve governance issues, such as 'who will decide and control developments and who will benefit from their exploitation' (2004: 81). The report highlighted the prospect of a potential nano-divide, suggesting that there are significant risks that some short-term developments could simply benefit the wealthy and powerful. Indeed, a number of commentators on this report expressed concerns that a public backlash could result, especially were the media to amplify the possible risks in an irresponsible manner (see International Risk Governance Council, 2006; Mills and Fleddermann, 2005; Renn and Roco, 2006). Although there have been attempts in the UK to assess and incorporate public attitudes and perceptions in an 'upstream' manner (Involve, 2006; Kearnes et al., 2006; Wilsdon and Willis, 2004), the contribution of such strategies to policymaking remains unclear. Given the report's acknowledgement of the potential of the media to undermine trust (2004: 62), which many scientists and science bodies see as a key issue in
relation to nanotechnology, it is surprising that so little attention is given to the dynamics of media influence in the UK. Similarly, studies conducted in the US indicate a lack of public awareness of what nanotechnology entails in scientific terms (Bainbridge, 2002; Cobb and Macoubrie, 2004; Gaskell et al., 2005; Kahan et al., 2007; Renn and Roco, 2006; Waldron et al., 2006). Tellingly, however, these same studies raise media-related concerns when they suggest that US publics tend to be more optimistic about potential applications compared with Europeans. Should stakeholders fail to engage with the ethical issues, Sandler and Kay (2006) maintain, they may end up impeding development in certain areas of nanotechnology. In this context, the involvement of diverse publics in decision-making is crucial, given that the values at stake in nanotechnologies need to be discussed and debated by expert and non-expert alike. In their words: …knowledge of what can and cannot be done, and of what is and is not required to do it, is quite different from knowledge of what ought and ought not be done. What ends should be prioritized, how resources should be allocated in pursuit of those ends, and constraints on how those ends ought to be pursued are ethical and social questions to be addressed in the public sphere, not economic and technological ones to be worked out in boardrooms or laboratories. They depend on value judgments and conceptions of the good regarding which business acumen and scientific knowledge afford no special privilege or insight. So while scientists and industry leaders may be “elite” in their knowledge of the science and business of nanotechnology, this status does not imply that they are “elite” with respect to the SEI [social and ethical issues] associated with nanotechnology, and it in no way justifies marginalizing the SEI concerns raised by researchers and the concerned public. (Sandler and Kay, 2006: 679-680)
Sandler and Kay’s (2006) intervention is part of an emergent set of investigations into the moral and ethical dimensions of nanotechnologies, several of which recognise the important ways in which different types of institutions – economic, political, cultural as well as scientific – mediate between these contending perceptions (see also Baird and Vogt, 2004; Cameron, 2006; Gordjin, 2003; Grunwald, 2005; Jotterand, 2006; Khushf, 2004; Mnyusiwalla et al., 2003; Roco, 2006; UNESCO, 2006; Weil, 2001). In our judgement, the importance of the news media amongst these institutions is deserving of much closer attention than it has typically received to date. Press coverage, we suggest, is not a simple reflection of different public attitudes and cultural values; rather, it has been demonstrated to play an important role in framing scientific issues and contributing to agenda-building, especially where there is marked uncertainty over the implications at stake for society. By framing nanotechnologies in particular ways, the press helps to set the preferred terms in which questions about possible problems and potential solutions are defined (Scheufele, 2006). These include, for example, who is likely to own and have access to these technologies? Moreover, who is likely to be advantaged or, alternatively, suffer discrimination as a result of their use? In contextual terms, it is worth noting that several of the biotechnology controversies that preceded nanotechnology has led to increasing recognition among scientists and policymakers that public acceptance of their work cannot be taken for granted. As Kulinowski (2006) observes, the enthusiastic emphasis upon revolutionary benefits associated with nanotechnologies, the “wow factor”, is bound to create controversy and could quickly turn into a “yuck” response. In particular, salutary lessons have been learnt from the intensive consumer backlash against GM foods, widely referred to in the national press as “Frankenstein foods”. This illustrates how the deficit model of the public understanding of sci-
ence posed a major barrier for institutions which had responsibility for assessing and regulating the technology. Indeed, there remains the very real possibility that controversy in one issue realm may spill over into another, thereby undermining trust in the regulatory processes. Public attitudes, it follows, may become less optimistic if the risks associated with nanotechnologies gain greater visibility. Macoubrie's (2006) study of informed adults' attitudes to nanotechnologies in the US found concerns focused on long-term environmental and health effects, as well as the apparent lack of adequate regulation. This preliminary research suggests 'industry and government appear to be creating a disaffected public through what individuals expressed as "a rush to profit before risks are known"' (Macoubrie, 2006: 235). This and other related research suggests that as nanotechnologies acquire a higher profile within press coverage, the ensuing debate may become much more sharply polarised, possibly reflecting a shift from predominantly scientific frames to increased concern with ethical implications (see Anderson et al., 2005; Te Kulve, 2006). To the extent that ethical issues have been incorporated into funding initiatives in the early stages of nanotechnology research (see Hunt, 2006; Nano Ethics Bank, 2007), critics have been heartened. However, it is more typically the case that nanotechnologies tend to be framed in science reports and surveys in more narrow, administratively calculative terms. As Wilsdon and Willis (2004) observe, ethical issues have usually been couched through a utilitarian approach to 'benefits' and 'risks'. This diverts attention away from unforeseen or unknown effects, by their nature difficult to anticipate, and neglects to give publics a practicable opportunity to express their perceived social and ethical concerns. Such an approach assumes that identified problems can simply be tackled through enhanced regulatory procedures and further research, thereby overlooking the social complexity of nanotechnolo-
gies, the exercise of power in problem definition and the vested interests involved in particular 'framings' of issues (see Petersen et al., 2007). In the case of nanotechnologies, there is a real opportunity to address social and ethical issues early in the process of technology development and to open up the debate to involve a multitude of perspectives.
NEWS REPORTING: SOME ETHICAL CONSIDERATIONS

The news media play an important role in defining what may be regarded as 'social and ethical issues' in relation to nanotechnologies. Just as science and technology operate within a social context and are to some degree value-laden, so is the process of news-making (Anderson, 1997; Edgar, 1992; Kieran, 1998). Newspaper reports are inevitably selective, reflecting numerous factors such as: time and space constraints; the nature and type of social relations that journalists develop with their sources; news values; editorial policy, ownership and ideological cultures; the extent of policy activity in the field; advertising pressure; and a journalist's own personal knowledge of the topic and perspective on it (Allan, 2002; Anderson, 1997). Journalists tend to be guided by the ideal of objectivity, together with professional commitments to fairness and balance, leading to an emphasis on verifying the 'facts', often by approaching authoritative sources before searching out conflicting perspectives. Part of the professional orientation of journalists is to cultivate 'trustworthy', 'credible' and 'legitimate' sources in the field, not least to safeguard their integrity as reporters. This 'strategic ritual of objectivity' (Tuchman, 1978) tends to advantage powerful institutional stakeholders in the battle to gain prominence as sources within science reporting (Stocking, 1999). On a routine basis they gain material from a variety of sources, including press conferences,
press releases, information and public relations officers, professional society meetings, scientific journals, and interviews (Nelkin, 1995: 105). However, journalists often lack the time, means or expertise to seek out independent verification of facts (Dunwoody, 1999; Miller & Riechart, 2000), sometimes being overly dependent on prepackaged information over which they have little control (Goodell, 1986; Petersen, 2001; Logan, 1991; Manning, 2001). A tendency to accept press releases on their own terms often leads general reporters, in particular, to make exaggerated claims, a problem acknowledged in the development of nanotechnology communication guidelines for reporters in the US (Bell, 2006). Complex scientific debates are often simplified and given a human interest slant in the press in an attempt to make them more appealing to readers. Metaphors and popular analogies are used extensively in news reports to assist in this process of simplification; to connect with what readers already know (Petersen, 2001). The extensive use of fictional imagery, such as 'designer babies' in reports about embryonic stem cell research (Petersen et al., 2005), or 'nano-subs' and 'nanobots' in coverage of nanotechnologies, however, serves to blur the lines between 'fact' and 'fiction'. Scientists often take exception to such imagery, seeing it as 'distorting' or 'misrepresenting' 'the science facts' (Petersen et al., in press), notwithstanding that scientists themselves employ such imagery in their own work (Petersen, 1999). News reports may also hype the issues and raise false expectations about nanotech through visual images as well as news reports and features (Losch, 2006; Pitt, 2006). In previous biotechnology controversies a 'cycle of hype' has been identified (especially evident in the popular press), with significantly more attention devoted to benefits compared to risks (Brown, 2003; Caulfield, 2004b). To some extent this reflects funding pressures on both scientists and industry; indeed, the source of the spin can often be identified as originating from
scientists and research institutions themselves (Caulfield, 2004a). Increasing public relations pressure often results in sensationalised news stories that make exaggerated claims. Placing the blame solely on the news media ignores the contribution of a wider range of factors, including news sources themselves (Kitzinger, 2006). As sources, scientists potentially exert a significant degree of control over the news production process. Indeed, a considerable amount of material is source generated (some estimates suggest this accounts for half or more of newspaper stories), so scientists are able to strategically package news items for journalists (Nisbet & Lewenstein, 2002: 362). Although journalists may select the topic for a news item, scientists have the opportunity to help define the boundaries from which story choices are made (Dunwoody, 1999: 63). Industry also exerts a significant influence on the news reporting of nanotechnologies, and lessons from past controversies over emergent technologies have underlined the importance of attending to public perception (e.g., see Meili, 2005). Commercial pressures encourage newspaper organisations to connect themes that are likely to be familiar and relevant to readers. Issues deemed to be controversial are usually viewed as especially 'newsworthy' (Hansen, 1994). The factors discussed above serve to 'frame' stories in ways that are likely to relate to readers' interests, but sometimes appear to non-journalists to 'distort' or 'misrepresent' science and technology. By controlling the timing of news releases, and by choosing particular language and metaphors, scientists may seek to shape public discourse and influence the direction of policy. With controversial issues, scientists have an opportunity to play a significant role in shaping understandings of science, especially through their ability to adjudicate between contending voices of expertise seeking to define what is at stake. They may turn conflicting opinion and uncertainty to their advantage, not least by using the opportunity to highlight the validity of their own work and the
uncertainties of others with whom they disagree (Dunwoody, 1999: 73). This has led some writers to suggest that journalism and science occupy two separate cultures, composed of distinctly different views and aims. The corollary is that science reporting involves a process of translating or popularising science fact for lay readers/audiences. However, critical research on science reporting suggests a more complex set of interaction processes. Important in this regard is the need to avoid overlooking the ways in which scientists, and other stakeholders, may attempt to control news at various stages of its production. This may occur, for example, through the regulation of the flow of information (e.g., press releases, news conferences, news 'leaks'), and the promotion of preferred imagery, language and rhetorical devices. Also, any assumed separation between 'science' and 'popularization' needs to be resisted, especially when it denies the input of popular views into the research process and the simplification that is an intrinsic part of scientific communication (Hilgartner, 1990: 523-4; Lewenstein, 1995). Journalists, it is important to acknowledge, are often specialists in their field, and some are trained scientists themselves. Risk communication studies often tend to assume that the media influence the public in a simple, direct, linear manner. However, there is little evidence to support early thinking that media messages were akin to a hypodermic needle injected into audiences who respond in predictable patterns. Research highlights the complexity of media effects: publics react to the same media texts in a multitude of contradictory ways, reflecting a range of factors such as their cultural capital, extent of prior knowledge, and their own personal experience. The Social Amplification of Risk Framework (SARF) is based on the idea that problems arise from the distortion of expert knowledge as it is transmitted to publics, leading to false or exaggerated perceptions (Anderson, 2006; Hornig Priest, 2005a). However,
this is overly simplistic, since it tends to assume a linear flow of messages between media and audiences. As Murdock et al. (2003) observe, it is not simply a case of examining how much coverage is devoted to particular issues or social actors. It is vital to consider the discursive parameters within which this coverage is inflected, and the extent to which power may be exercised to suppress as well as to publicise controversial issues. A media-centric focus tends to overlook the behind-the-scenes struggles between competing news sources, all of whom are vying for media attention to varying degrees. While limited short-term media-framing effects in relation to nanotechnologies have been found by some experiment-based studies (e.g., Cobb, 2005), it is likely that longer-term, more subtle effects are more significant (Hornig Priest, 2005b). The national press has been shown to play a formative role in framing scientific issues and contributing to agenda-building, especially where there is considerable uncertainty over potential risks (Nisbet & Lewenstein, 2002; Nisbet et al., 2003). Frames, in this context, may be conceptualised as 'principles of selection, emphasis and presentation composed of little tacit theories about what exists, what happens, and what matters' (Gitlin, 1980: 6). News frames set up antagonistic patterns whereby complex scientific debates often become simplified and individualised through aligning news sources in a manner that accentuates their differences. Where there are considerable uncertainties about long-term implications and risks, issues of credibility and trust become crucial in making sense of competing truth claims. Stigma, which occurs when the potential of a technology becomes tainted or blemished by discourses of risk, is remarkably difficult to overcome once it takes root (Gregory et al., 2001; Wilkinson et al., 2007). A series of studies in the US, Canada and the UK suggests that publics have little trust in government or industry to manage risks (Michelson & Rejeski, 2006). Given that nanotechnology is in the early stages of public
visibility, citizens are only just beginning to make sense of the implications, benefits and risks. Research suggests that, to a large extent, the news media are tending to rely upon ready-made scripts drawn from previous reporting of biotech controversies (see, for example, Faber, 2005; Friedman and Egolf, 2005; Gaskell et al., 2005; Lewenstein et al., 2004; Lewenstein, 2005; Stephens, 2004, 2005). Related studies have drawn attention to the initial hyping of new technologies in the national press, partly brought about by the growing commercial pressures placed on scientists in university research centres (see Caulfield, 2004a). In relation to nanotechnologies, a study undertaken by the authors, to which we now turn, highlights how the news media may emphasise certain aspects of nanotechnologies, particularly the science and the beneficial applications, and neglect other key issues, including the attendant social and ethical implications.
THE STUDY

Our study focuses on how developments associated with nanotechnologies were framed in the British newspaper press during the formative period of debate, when a number of related issues began to gain public salience. These include the extensive coverage of Prince Charles' purported comments on the so-called 'grey goo' scenario, which preceded (and likely had some influence on the establishment of) the aforementioned Royal Society/Royal Academy of Engineering (2004) enquiry.[2] The study sought to gain fresh insights into how the initial terms of debate were established and how scientists and journalists saw their respective roles in the news production process. Here we focus on the findings of the content analysis, though these were supplemented by an e-mail survey followed up with in-depth interviews with scientists and journalists (see Petersen et al., in press; Wilkinson et al., 2007).[3]
The Methods

The sample of national newspaper coverage included all pertinent news items over the period April 1, 2003 to June 30, 2004. A pilot study of three newspapers established that the keywords 'nanotechnology', 'nano', 'grey goo', and 'nanobot/nanorobot' were sufficient to identify relevant news items. For the main study the sample included ten national daily newspapers and eight national Sunday newspapers, in order to establish a comparative evidential basis. The daily newspapers were The Times, The Guardian, The Daily Telegraph, The Independent, The Financial Times, The Daily Mail, The Daily Express, The Daily Mirror, The Sun and The Daily Star. The Sunday newspapers were The Sunday Times, The Observer, The Sunday Telegraph, The Independent on Sunday, The Mail on Sunday, The Sunday Express, The Sunday Mirror and The News of the World. In order to ensure that the sample was comprehensive, news items were identified via the Lexis Nexis Professional and NewsBank Newspapers UK search facilities. This resulted in a total of 344 newspaper articles. Each item was analysed using a coding schedule, which recorded quantitative details including newspaper, date, page number, author attribution, and sources cited or referred to. It also enabled qualitative aspects of the data to be examined. Each news item was coded for its leading news 'frame' and 'tone', with the help of a schedule developed by Stephens (2004) in his analysis of the newspaper framing of nanoscience and nanotechnologies in the US. In addition, we also sought to identify additional frames specific to the UK context. A sample of 12 per cent of the newspaper articles was coded separately to ensure inter-coder reliability. Electronic copies of each news item were stored using an N5 database so that relevant news articles could be kept in their entirety, but also categorised according to their dominant news frame for the purposes of analysis. We also took hard copies of each item, which allowed us to gain a 'feel' for the original article, something that can be obscured when using electronic versions alone (Hansen et al., 1998).
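The chapter reports that 12 per cent of the articles were double-coded to check inter-coder reliability, but does not say which statistic was used. The following Python sketch is therefore purely illustrative, using hypothetical frame labels rather than the study's actual data; it shows one common approach, computing raw percentage agreement alongside Cohen's kappa, which corrects for chance agreement:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Raw inter-coder agreement over the jointly coded reliability sample."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement (Cohen's kappa) for two coders."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if each coder assigned frames at their marginal rates.
    p_expected = sum((freq_a[f] / n) * (freq_b[f] / n)
                     for f in set(coder_a) | set(coder_b))
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical frame codes for five items from the double-coded sample.
coder_1 = ["science_fiction", "business", "science_fiction", "risks", "funding"]
coder_2 = ["science_fiction", "business", "science_discovery", "risks", "funding"]

print(percent_agreement(coder_1, coder_2))  # 0.8
print(cohens_kappa(coder_1, coder_2))       # 0.75
```

Kappa discounts the agreement two coders would reach simply by assigning frames at their marginal rates, which matters when one or two frames dominate the coding, as in Table 1 below.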
Table 1. News frames

News Frame | Frequency
Science Fiction and Popular Culture (e.g., reviews of books, TV, films, radio) | 55 (16%)
Scientific Discovery or Project (e.g., molecular motors, micro-aircraft, computing) | 54 (16%)
Business Story (e.g., Polaron flotation, Nanotechnology Index) | 52 (15%)
Prince Charles Interest (e.g., specific focus on perceived attitude/concern of Prince Charles) | 39 (11%)
Other (e.g., competitions, interviews) | 33 (9.5%)
Social Implications and Risks (e.g., discussions of regulation, impact on developing countries) | 32 (9%)
Funding of Nano (e.g., DTI funding pledges, Research Council announcements) | 27 (8%)
Educational or Career Advice (e.g., closure of Chemistry departments, City and Guilds reports) | 26 (7.5%)
Medical Discovery or Project (e.g., 'nanofabricated' joints, 'nanoshell' targeted cancer treatments) | 20 (6%)
Celebratory (e.g., Royal Society of Chemistry Awards, The Foundation for Science and Technology Dinner) | 6 (2%)
Total | 344 (100%)
Findings and Analysis

The content analysis found that coverage of nanotechnologies during the sample period was concentrated in a relatively small number of elite newspapers. Eighty-six per cent (n=296) of the articles originated from the ten sampled daily newspapers and 14 per cent (n=48) from the eight sampled Sunday newspapers. The great majority of articles (n=255, or 74 per cent) featured in the elite press (mostly broadsheets), which have relatively low circulation figures, while the remainder (n=89, or 26 per cent) featured in the more popular (i.e., mostly high-circulation) newspapers. The Guardian carried the greatest number of articles (n=81, or 24 per cent) of the daily newspaper coverage. This was followed by The Times (n=65, or 19 per cent), The Financial Times (n=47, or 14 per cent), The Independent (n=36, or 10 per cent), and The Daily Telegraph (n=26, or 7 per cent).
Our study found that the newspaper press during this period tended to emphasise the benefits of nanotechnologies and to focus on the scientific and science-fictional aspects of developments, while providing relatively little coverage of the risks, implications and uncertainties of nanotechnologies. The social implications and risks of nanotechnologies appeared as a content theme in only nine per cent (n=32) of the coverage (see Table 1). Stephens (2005) examined a sample of US and non-US newspaper coverage of nanotechnologies over the period 1992 to 2004 and found a tendency for non-US newspapers to emphasise social and ethical implications to a greater extent than US newspapers. However, this may reflect the particular inflection that The Guardian gave to nanotechnologies over the period, given that this was the principal newspaper used in the comparison. Our study, which included all the principal national UK daily newspapers, found that frames were far from uniformly spread across the press; there was considerable variation, reflecting the newspapers' different ideological positions and readerships (see Anderson et al., 2005).
The highest proportion of all articles in our study featured a science-related frame, which indicates strong news interest in the scientific implications or applications of nanotechnologies. However, the publication of approximately equal numbers of articles with the 'science fiction and popular culture' frame and the 'scientific discovery or project' frame indicates some uncertainty about whether nanotechnologies can be most adequately contextualised within the realms of science fiction or science fact. The 'business story' frame was similarly prominent, which suggests strong news interest in economic implications. News frames were also found to vary over time. The highest single number of news items appeared during the first quarter, April to June 2003. Following this, the coverage declined, but remained at a reasonably steady level for the rest of the period. By far the greatest number of the news items on 'social implications and risks' appeared during the first quarter, from April to June 2003 (20 of 32, or 62 per cent). This phase was dominated by Prince Charles's comments on nanotechnology, which were interpreted by some newspapers as lending support to the 'grey goo' scenario, although overall his input gained little credibility. Rather than drawing attention to possible social, environmental and ethical issues, this 'celebrity' involvement instead meant that coverage alternated between a political tone (surrounding unnecessary royal intervention in a scientific matter) and a humorous tone (surrounding Prince Charles's ability to comment). Our analysis of the attribution of benefits and risks found that the greatest number of news items presented benefits as outweighing risks. However, a significant number of articles featured concerns over the uncertainties about potential benefits or risks:

• Benefits outweigh risks (n=132)
• Risks outweigh benefits (n=38)
• Risks/benefits need to be weighed, but unclear if a benefit or risk (n=56)
• Technical limits to progress, not limits associated with ethical, legal or social implications (n=16)
• Not applicable (n=102)
Again there were some interesting variations between newspapers. Most of the sampled newspapers devoted more coverage to benefits outweighing risks, whereas the mid-market Daily Express and Daily Mail were found most often to offer a balance of risks and benefits. The benefits and risks identified covered a diverse range of factual and fictional imagery, from existing market products to the 'grey goo' scenario, nano-subs and nanobots. Issues surrounding nanoparticle safety featured in 15 of the 32 articles focused on social implications and risks, all of which came from the 'quality' newspaper coverage. Interestingly, in the accompanying interviews with journalists and scientists involved in coverage, it was the issue of nanoparticle safety that was most frequently identified as an area warranting further research, and more sustained media attention (Wilkinson et al., 2007). Our study also revealed that scientists were dissatisfied with much of the coverage, especially that involving references to 'grey goo', 'nano-submarines', 'nanobots', and so on, which they saw as 'misrepresenting' the science, although they did acknowledge that some media (e.g., the BBC) provided a 'better' or more 'accurate' portrayal of issues (Petersen et al., in press). In summary, during this fifteen-month period of study, media coverage was found to be largely limited to relatively few ('elite') newspapers with a restricted readership, and nanotechnologies were found to be portrayed in a generally positive light, with relatively little coverage of social and ethical implications and risks. The high coverage of science-fictional aspects (e.g., references to films, TV, radio programmes) may, however, suggest growing public curiosity about the implications of
nanotechnologies and growing receptivity to discussion about their applications and impacts. Although it is difficult to know what impact this coverage has engendered, the limitation of reporting to the 'elite' newspapers, and a particularly narrow framing of issues, may serve to restrict public discourse about the social and ethical implications of nanotechnologies and about the direction of the science itself. A mostly uncritical portrayal of science (as pre-social and separate from 'ethical' considerations) and the relative absence of attention to implications and risks are, we contend, likely to work against reflection on the moral 'rights and wrongs' of developments. Although this study included only the British newspaper press, and covered a relatively short period of time, it highlights some of the ways public perceptions of problems and solutions may be influenced. If publics are to have a say about the direction of nanotechnology developments, and in so doing be able to properly assess the benefits and risks of particular innovations, then new, more meaningful conceptions of dialogue must be secured.
NANOETHICS AND THE MEDIA IN THE FUTURE

While it can be argued that it is not the role of journalists to provide a forum for debate about the social and ethical implications of new technologies, precisely how news coverage influences public perceptions of the benefits and risks of technologies needs to be addressed as a matter of pressing concern. In our view, there needs to be greater recognition of how news is socially produced and how different stakeholder groups, their respective positions stratified by power relations, strive to establish their preferred representations of issues in ideological terms. In the UK, controversies surrounding GM crops and food (like the fiasco surrounding GM Nation), along with the BSE crisis, have profoundly affected scientists' and policymakers' approaches
to engaging ordinary citizens. The consequent ramifications for nanotechnologies have not been adequately acknowledged. A perceived decline in public trust in science and its regulatory institutions has shaped responses in a number of technology fields, a dilemma that becomes especially acute once it is recognised that nanotechnology is a field largely defined by its attendant uncertainties. Establishing trust, or managing mistrust, has become a key issue in policy development and risk assessment, but not to a sufficient extent in this context. Familiar questions concerning how the news media frame the significance of emergent technologies are being re-shaped by nanotechnologies. The ways in which the parameters of debate are established, particularly in relation to the 'rights and wrongs' of developments, will be of critical importance for future policy developments and for how publics respond. Anxiety about the risks associated with nanoparticles, for example, could lead to the stigmatisation of nanotechnology, a condition that is very difficult to overcome once established. We suggest that as nanotechnologies acquire a higher profile within press coverage, and there is little doubt that they will do so in the years to come, the debate is likely to become more polarised, possibly reflecting a shift from predominantly scientific frames to a greater concern with social and ethical implications. This partly reflects journalistic news values where controversial science is concerned (conflict, drama and novelty being perceived as necessary to keep interest in issues alive), as well as an increasing range of social actors engaging in intense debate as nanotechnology's public policy significance becomes more apparent. In the UK context, a number of engagement activities with a focus on nanotechnology are presently coming to fruition (Involve, 2006; Smallman and Nieman, 2006). Deliberations over their impact provide an opportunity to reflect on the potential contribution of the news media to future policy making, and thereby invite a thoughtful consideration of how journalists may better convey the uncertainties
and risks inherent in all technologies in an ethically responsible manner.
REFERENCES

Allan, S. (2002). Media, risk and science. Buckingham and Philadelphia: Open University Press.

Anderson, A. (1997). Media, culture and the environment. London: UCL Press.

Anderson, A. (2006). Media and risk. In Walklate, S. & Mythen, G. (eds.), Beyond the risk society (pp. 114-31). Maidenhead: Open University Press.

Anderson, A., Allan, S., Petersen, A. & Wilkinson, C. (2005). The framing of nanotechnologies in the British newspaper press. Science Communication, 27(2), 200-220.

Bainbridge, W. S. (2002). Public attitudes towards nanotechnology. Journal of Nanoparticle Research, 4(6), 561-70.

Baird, D. & Vogt, T. (2004). Societal and ethical interactions with nanotechnology. Nanotechnology Law and Business, 1(4), 391-396.

Bell, T. E. (2006). Reporting risk assessment of nanotechnology: A reporter's guide. http://www.nano.gov/html/news/reporting_risk_assessment_of_nanotechnology.pdf

Brown, N. (2003). Hope against hype – accountability in biopasts, presents and futures. Science Studies, 16(2), 3-21.

Cameron, N. (2006). Nanotechnology and the human future: Policy, ethics, and risk. Annals of the New York Academy of Sciences, 1093(Dec), 280-300.

Caulfield, T. (2004a). Nanotechnology: Facts and fictions? Health Law Review, 12(3), 1-4. http://www.law.ualberta.ca/centres/hli/pdfs/hlr/v12_3/12-3-01%20Caulfield.pdf
Caulfield, T. (2004b). Biotechnology and the popular press: Hype and the selling of science. Trends in Biotechnology, 22(7), 337-39.

Cobb, M. D. (2005). Framing effects on public opinion about nanotechnology. Science Communication, 27(2), 221-39.

Cobb, M. D. and Macoubrie, J. (2004). Public perceptions about nanotechnology: Risks, benefits and trust. Journal of Nanoparticle Research, 6(4), 395-405.

Colvin, V. L. (2003). The potential environmental impact of engineered nanoparticles. Nature Biotechnology, 21(10), 1166-1170.

Dunwoody, S. (1999). Scientists, journalists, and the meaning of uncertainty. In Friedman, S. M., Dunwoody, S. & Rogers, C. L. (eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 59-80). Mahwah, NJ: Lawrence Erlbaum.

Edgar, A. (1992). Objectivity, bias and truth. In Belsey, A. & Chadwick, R. (eds.), Ethical issues in journalism and the media (pp. 112-29). London: Routledge.

ETC Group (2003). The big down: Atomtech – Technologies converging at the nano-scale. Winnipeg, Canada: ETC Group.

Faber, B. (2005). Popularizing nanoscience: The public rhetoric of nanotechnology, 1986-1999. http://people.clarkson.edu/~faber/pubs/nano.tech.tcq.3.0.doc

Friedman, S. M. and Egolf, B. P. (2005). Nanotechnology, risks and the media. IEEE Technology and Society Magazine, 24(4), 5-11.

Friends of the Earth (2006). Nanomaterials, sunscreens and cosmetics: Small ingredients, big risks. FoE, May. http://www.foe.org/camps/comm/nanotech/nanocosmetics.pdf

Gaskell, G., Ten Eyck, T., Jackson, J. & Veltri, G. (2005). Imagining nanotechnology: Cultural
support for technological innovation in Europe and the United States. Public Understanding of Science, 14(1), 81-90.

Gitlin, T. (1980). The whole world is watching: Mass media in the making and unmaking of the new left. Berkeley: University of California Press.

Goodell, R. (1986). How to kill a controversy: The case of recombinant DNA. In S. Friedman, S. Dunwoody & C. Rogers (eds.), Scientists and journalists: Reporting science as news (pp. 70-181). New York: Free Press.

Gordijn, B. (2003). Nanoethics: From utopian dreams and apocalyptic nightmares towards a more balanced view. Paris, France: UNESCO. Available at http://portal.unesco.org/shs/en/ev.php-URL_ID=6603&URL_DO=DO_TOPIC&URL_SECTION=201.html

Gregory, R., Flynn, J. and Slovic, P. (2001). Technological stigma. In Flynn, J., Slovic, P. and Kunreuther, H. (eds.), Risk, media and stigma: Understanding public challenges to modern science and technology (pp. 3-8). London: Earthscan.

Grunwald, A. (2005). Nanotechnology – A new field of ethical inquiry? Science and Engineering Ethics, 11, 187-201.

Hansen, A. (1994). Journalistic practices and science reporting in the British press. Public Understanding of Science, 3, 111-34.

Hansen, A., Cottle, S., Negrine, R., & Newbold, C. (1998). Mass communication research methods. London: Macmillan.

Hilgartner, S. (1990). The dominant view of popularization: Conceptual problems, political uses. Social Studies of Science, 20(3), 519-539.

HM Government (2005). The government's outline programme for public engagement on nanotechnologies. August 2005, HM Government in consultation with the Devolved Administrations.
http://www.ost.gov.uk/policy/issues/programme12.pdf

Hoet, P. H. M., Bruske-Hohlfeld, I. & Salata, O. V. (2004). Nanoparticles: Known and unknown health risks. Journal of Nanobiotechnology, 2(12).

Hornig Priest, S. (2005a). Risk reporting: Why can't they ever get it right? In Allan, S. (ed.), Journalism: Critical issues (pp. 199-209). Maidenhead and New York: Open University Press.

Hornig Priest, S. (2005b). Commentary: Room at the bottom of Pandora's box: Peril and promise in communicating nanotechnology. Science Communication, 27(2), 292-99.

Hunt, G. (2006). The global ethics of nanotechnology. In Hunt, G. & Mehta, M. (eds.), Nanotechnology: Risk, ethics, law (pp. 183-95). London: Earthscan.

International Risk Governance Council (2006). White paper on nanotechnology risk governance, June. Geneva: International Risk Governance Council.

Involve (2006). The nanotechnology engagement group. Policy Report One. March 2006. London: Involve.

Jotterand, F. (2006). The politicization of science and technology: Its implications for nanotechnology. Journal of Law, Medicine & Ethics, Winter, 658-666.

Kahan, D. M., Slovic, P., Braman, D., Gastil, J. & Cohen, G. L. (2007). Affect, values, and nanotechnology risk perceptions: An experimental investigation. In Nanotechnology risk perceptions: The influence of affect and values. Washington, DC: Woodrow Wilson International Center for Scholars, Project on Emerging Nanotechnologies.

Kearnes, M., Macnaghten, P. & Wilsdon, J. (2006). Governing at the nanoscale: People, policies and emerging technologies. London: Demos.
Khushf, G. (2004). The ethics of nanotechnology: On the vision and values of a new generation of science and engineering. In National Academy of Engineering, Emerging technologies and ethical issues in engineering (pp. 29-56). Washington, DC: National Academies Press.

Kieran, M. (1998). Objectivity, impartiality and good journalism. In Kieran, M. (ed.), Media ethics (pp. 23-36). London: Routledge.

Kitzinger, J. (2006). The role of media in public engagement. In Turney, J. (ed.), Engaging science: Thoughts, deeds, analysis and action (pp. 45-9). London: Wellcome Trust. http://www.wellcome.ac.uk/doc_WTX032706.html

Kulinowski, K. (2006). Nanotechnology: From "wow" to "yuck"? In G. Hunt & M. D. Mehta (eds.), Nanotechnology: Risk, ethics and law (pp. 13-24). London: Earthscan.

Lewenstein, B. V. (1995). Science and media. In S. Jasanoff, G. E. Markle, J. C. Petersen, & T. Pinch (eds.), Handbook of science and technology studies. Thousand Oaks: Sage.

Lewenstein, B. V. (2006). What counts as a social and ethical issue in nanotechnology? In Schummer, J. & Baird, D. (eds.), Nanotechnology challenges: Implications for philosophy, ethics and society (pp. 201-16). London: World Scientific.

Lewenstein, B. V., Radin, J. & Diels, J. (2004). Nanotechnology in the media: A preliminary analysis. In M. C. Roco & W. S. Bainbridge (eds.), Societal implications of nanoscience and nanotechnology II: Maximizing human benefit. Report of the National Nanotechnology Initiative Workshop, December 3-5, 2003, Arlington, VA. Washington, DC: National Science & Technology Council and National Science Foundation.

Logan, R. A. (1991). Popularization versus secularization: Media coverage of health. In L. Wilkins & P. Patterson (eds.), Risky business: Communicating issues of science, risk and public policy (pp. 43-59). New York: Greenwood.
Losch, A. (2006). Anticipating the futures of nanotechnology: Visionary images as means of communication. Technology Analysis and Strategic Management, 18(3/4), 393-409.

Macoubrie, J. (2006). Nanotechnology: Public concerns, reasoning and trust in government. Public Understanding of Science, 15(2), 221-41.

Manning, P. (2001). News and news sources: A critical introduction. London: Sage.

Meili, C. (2005). The "ten commandments" of nano communication – or how to deal with public perception. Innovation Society. http://www.innovationsgesellschaft.ch/images/publikationen/Ten%20Commandments%20of%20Nano-Communication.pdf

Michelson, E. S. & Rejeski, D. (2006). Falling through the cracks: Public perception, risk and the oversight of emerging technologies. IEEE Report. http://www.nanotechproject.org/reports

Miller, M. M. and Riechert, B. P. (2000). Interest group strategies and journalistic norms: News media framing of environmental issues. In S. Allan, B. Adam & C. Carter (eds.), Environmental risks and the media (pp. 45-54). London: Routledge.

Mills, K., & Fledderman, C. (2005). Getting the best from nanotechnology: Approaching social and ethical implications openly and proactively. IEEE Technology and Society Magazine, 24(4), 18-26.

Mnyusiwalla, A., Daar, A. S. & Singer, P. A. (2003). Mind the gap: Science and ethics in nanotechnology. Nanotechnology, 14, R9-R13. http://www.utoronto.ca/jcb/pdf/nanotechnology_paper.pdf

Murdock, G., Petts, J. & Horlick-Jones, T. (2003). After amplification: Rethinking the role of the media in risk communication. In Pidgeon, N., Kasperson, R. E. & Slovic, P. (eds.), The social amplification of risk (pp. 156-78). Cambridge: Cambridge University Press.
Nano Ethics Bank (2007). Nano ethics bank. Retrieved May 17, 2007, from http://hum.iit.edu/NanoEthicsBank/index.php

NanoForum report (2004). Benefits, risks, ethical, legal and social aspects of nanotechnology, June. http://med.tn.tudelft.nl/~hadley/nanoscience/references/nanoforum1.pdf

Nelkin, D. (1995). Selling science: How the press covers science and technology. New York: W. H. Freeman and Company.

Nisbet, M. C. & Lewenstein, B. V. (2002). Biotechnology and the American media: The policy process and the elite press, 1970 to 1999. Science Communication, 23(4), 359-91.

Nisbet, M. C., Brossard, D. & Kroepsch, A. (2003). Framing science: The stem cell controversy in an age of press/politics. Harvard International Journal of Press/Politics, 8(2), 36-70.

Petersen, A. (1999). The portrayal of research into genetic-based differences of sex and sexual orientation: A study of "popular" science journals, 1980-1997. Journal of Communication Inquiry, 23(2), 163-182.

Petersen, A. (2001). Biofantasies: Genetics and medicine in the print news media. Social Science and Medicine, 52, 1255-1268.

Petersen, A., Anderson, A. & Allan, S. (2005). Science fiction/science fact: Medical genetics in fictional and news stories. New Genetics and Society, 24(3), 337-353.

Petersen, A., Anderson, A., Wilkinson, C. & Allan, S. (2007). Nanotechnologies, risk and society: Editorial. Health, Risk and Society, 9(2), 117-124.

Petersen, A., Anderson, A., Allan, S., & Wilkinson, C. (in press). Opening the black box: Scientists' views on the role of the mass media in the nanotechnology debate. Public Understanding of Science.
Pitt, J. (2006). When is an image not an image? In Schummer, J. & Baird, D. (eds.), Nanotechnology challenges: Implications for philosophy, ethics and society (pp. 131-41). London: World Scientific.

Preston, C. (2006). The promise and threat of nanotechnology: Can environmental ethics guide us? In Schummer, J. & Baird, D. (eds.), Nanotechnology challenges: Implications for philosophy, ethics and society (pp. 217-48). London: World Scientific.

Project on Emerging Nanotechnologies (2007). Project on emerging nanotechnologies. Retrieved May 17, 2007, from http://www.nanotechproject.org/

Renn, O. and Roco, M. C. (2006). Nanotechnology and the need for risk governance. Journal of Nanoparticle Research, 8(2), 153-91.

Roco, M. C. (2006). Progress in governance of converging technologies integrated from the nanoscale. Annals of the New York Academy of Sciences, 1093, 1-23.

Roco, M. C. and Bainbridge, W. S. (eds.) (2001). Societal implications of nanoscience and nanotechnology. Dordrecht: Kluwer Academic Publishers.

RS/RAE (2004). Nanoscience and nanotechnologies: Opportunities and uncertainties. London: Royal Society & Royal Academy of Engineering.

Sandler, R. and Kay, W. D. (2006). The national nanotechnology initiative and the social good. Journal of Law, Medicine & Ethics, Winter, 675-681.

Scheufele, D. A. (2006). Messages and heuristics: How audiences form attitudes about emerging technologies. In J. Turney (ed.), Engaging science: Thoughts, deeds, analysis and action (pp. 20-25). London: The Wellcome Trust. http://www.wellcome.ac.uk/doc_WTX032706.html
Smallman, M. & Nieman, A. (2006). Small talk: Discussing nanotechnologies. November 2006. London: Think-Lab.

Stephens, L. F. (2004). News narratives about nano: How journalists and the news media are framing nanoscience and nanotechnology initiatives and issues. Paper presented to the Imaging and Imagining Nanoscience and Engineering Conference, University of South Carolina, March 3-7.

Stephens, L. F. (2005). News narratives about nano S&T in major U.S. and non-U.S. newspapers. Science Communication, 27(2), 175-99.

Stocking, S. H. (1999). How journalists deal with scientific uncertainty. In S. M. Friedman, S. Dunwoody, & C. L. Rogers (eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 23-41). Mahwah, NJ: Lawrence Erlbaum.

Te Kulve, H. (2006). Evolving repertoires: Nanotechnology in daily newspapers in the Netherlands. Science as Culture, 15(4), 367-82.

Tuchman, G. (1978). Making news. New York: Free Press.

UNESCO (2006). The ethics and politics of nanotechnology. Paris, France: UNESCO. http://unesdoc.unesco.org/images/0014/001459/145951e.pdf

Waldron, A. M., Spencer, D., & Batt, C. A. (2006). The current state of public understanding of nanotechnology. Journal of Nanoparticle Research, online publication.

Weil, V. (2001). Ethical issues in nanotechnology. In M. C. Roco & W. S. Bainbridge (eds.), Societal implications of nanoscience and nanotechnology (pp. 244-251). Dordrecht, Netherlands: Kluwer.

Wilkinson, C., Allan, S., Anderson, A. & Petersen, A. (2007). From uncertainty to risk? Scientific and news media portrayals of nanoparticle safety. Health, Risk and Society, 9(2), 145-157.

Wilsdon, J. & Willis, R. (2004). See-through science: Why public engagement needs to move upstream. London: Demos.

Wood, S., Jones, R. & Geldart, A. (2007). Nanotechnology: From the science to the social. A report for the ESRC. http://www.esrc.ac.uk/ESRCInfoCentre/Images/ESRC_Nano07_tcm618918.pdf
KEY TERMS

Convergence: The multiple ways in which nanotechnologies will combine with other technologies in the future.

Deficit Model of Science Communication: The notion that public scepticism towards science and technology is mainly caused by a lack of adequate knowledge about science, and that this can be overcome by providing people with sufficient information to fill this gap.

Ethical, Legal and Social Issues (ELSI): The ethical, legal, and social issues associated with the Human Genome Project.

Framing: The processes of selection and emphasis through which journalists adjudicate over competing truth claims.

Nano-divide: The potential for nanotechnologies to increase the gap between rich and poor countries and reinforce global inequalities.

Nanoparticle: Particles of less than 100 nm (nanometres) in diameter that exhibit new or enhanced size-dependent properties compared with larger particles of the same material.

Nanotechnologies: At its simplest, the manipulation of matter at the level of atoms and molecules. However, it is an umbrella term describing nanoscale research conducted in a range of different disciplines, each with its own
particular methods, underlying values and scientific discourses.

News Values: Journalists' typically taken-for-granted assumptions about what constitutes a newsworthy story.

Social Amplification of Risk Framework (SARF): A model developed in the late 1980s by North American scholars that aimed to provide an integrative theoretical framework for a fragmented range of risk perspectives in the growing field of risk communication and risk perception. It sought to explain why certain risk events, defined by experts as relatively non-threatening, come to attract considerable socio-political attention (amplification), while other risk events, defined as posing a greater objective threat, attract relatively little attention (attenuation).

Stigma: When the potential of a technology becomes tainted or blemished by discourses of risk.

Upstream Public Engagement: The problematising of social and cultural dimensions of scientific knowledge, and proper consideration of public views and values, at an early stage in the development of an emerging technology.
ENDNOTES

1. This chapter draws on the findings of an ESRC funded project examining the production and portrayal of news on nanotechnologies in the British press (RES-000-22-0596).
2. Prince Charles intervened in the debate in April 2003, expressing concerns about potential risks associated with nanotechnologies. This was characterized by some newspapers as a forthcoming 'grey goo' crisis, although it should be noted that the Prince did not actually use this phrase. The grey goo scenario refers to the notion that self-replicating nanobots could run amok, consuming the biosphere and turning all matter into 'goo'.
3. For a more detailed overview of the findings of the media analysis, see Anderson et al. (2005).
Chapter XXVI
Computing and Information Ethics Education Research

Russell W. Robbins, Marist College, USA
Kenneth R. Fleischmann, University of Maryland, College Park, USA
William A. Wallace, Rensselaer Polytechnic Institute, USA
ABSTRACT

This chapter explains and integrates new approaches to teaching computing and information ethics (CIE) and researching CIE education. We first familiarize the reader with CIE by explaining three domains where information ethics may be applied: information ownership, information privacy, and information quality. We then outline past and current approaches to CIE education and indicate where research is necessary. Research suggestions for CIE education focus upon developing a deep understanding of the relationships between students, teachers, pedagogical materials, learning processes, teaching techniques, outcomes, and assessment methods. CIE education exists to enhance individual and group ethical problem solving processes; however, these processes are not yet fully understood, making research necessary. We then discuss CIE education research results to date and suggest new directions, including applying insights from the field of learning science as well as developing dynamic computing and information tools. Since these tools are dynamic and interactive, they will support collaboration, iteration, reflection, and revision that can help students learn CIE.
INTRODUCTION

The primary purpose of this chapter is to indicate the need for research regarding the education of computing and information professionals about ethics. We start by describing computing and information ethics (CIE) in the abstract, and then provide three concrete examples of contested CIE issues. We then discuss the progress to date in developing and implementing CIE pedagogy and materials. In our last section, we describe new research directions for CIE pedagogy and explore how computing and information technology can support CIE teaching and learning. We also discuss research results that can enhance our ability to apply computing and information technology to support CIE education.
COMPUTING AND INFORMATION ETHICS EDUCATION: FOCI

CIE foci include concerns about who [using computers] should create, provide, own, access, use, transform, manage, or govern information. Foci also include considering consequences of creating, providing, owning, accessing, using, and transforming information (Bynum 1985, Johnson 1985, Moor 1985, Mason 1986, Wiener 1954), as well as discussions about the rights and responsibilities of individuals, groups, and societies as they interact with information. Finally, CIE foci include issues of equity, care, and virtue as information is used to transform our world. Does anyone who creates a computer program have the right to accrue economic benefits related to use of that program? Should the program be owned by society? What best serves the individual and society? Does an economically disadvantaged youth from an urban area have a right to use the Internet in order to learn? If so, what responsibilities do governments, corporations, not-for-profits, you as an individual, or we as a society have to provide this access? What responsibility
do we have to support access to information for individuals in China? Alternatively, is the Chinese government's censorship of the Internet appropriate? How can a multinational corporation based in the US support the right to earn a living in an information economy for young [non-emigrant] Indian citizen software engineers while concurrently maintaining its commitments [for its US employees] and [to its stockholders]? When are the social benefits derived from use of private personal information appropriate? These are just some of the questions considered in CIE. Is there one answer to each question? Or are there multiple answers for different people in various situations, using different techniques and criteria? To introduce the reader to information ethics, and its importance to society and individuals, we now discuss three currently unresolved CIE issues: (1) who should control procedural information (i.e., software), that is, information ownership; (2) who should control personal information, that is, information privacy; and (3) information (i.e., software and data) quality.
Information Ownership

"In the information age, there may be no more contentious issue than the scope of ownership rights to intellectual property" (Spinello and Tavani 2004, p. 247). Intellectual property (IP) is an idea, invention, process, or other form of property created by use of the mind or intellect; alternatively, IP is the right to control the tangible or virtual representation of those forms of property. The argument for the ethical appropriateness of intellectual property and its legal supports (e.g., trade secrets, trademarks, copyrights, patents) runs as follows. Software is invaluable to our information economy. The development of unique software that solves problems is intellectually involved and time-consuming, and therefore very expensive. This first unit of software is very expensive, while copies of that software, after development, are
very inexpensive, due to the low cost of media and hardware such as compact discs (CDs) and personal computers (PCs). If return on investment (supported by control of leasing or sale of the software) is not provided by governments, in the form of copyrights, patents, etc., individuals and corporations will not invest in developing new software, damaging the economy and depriving society of the benefits of new software products. However, a new paradigm is developing. Software can be developed [with the same quality as that developed under intellectual property protections] by individuals and corporations interested in robust, need-fulfilling software, as opposed to an interest in software predominantly as a commercial product leading to corporate revenue. This paradigm is known as open source development (Mockus et al. 2002). One argument for open source follows. Software essentially provides a service to those using it. Intellectual property laws focus companies on the development of software, as opposed to the support of software. However, the continuing support of software (bug fixing, upgrades, training) is essential to software fulfilling the service that it was leased or purchased to perform. Because legal mechanisms related to intellectual property, such as patents and copyrights, ensure economic benefits to software companies via the non-introduction of similar products and services, software companies have no incentive to provide strong support (in terms of help desks, bug fixes, new versions, etc.). Because these organizations have no incentive, they typically pay only lip service to support and service after the sale (Quinn 2006c, p. 188). As a result, companies that paid for software may not be able to use that software to its fullest extent. Further, the benefits of open source code development are as follows. Users are able to improve the software by fixing bugs, creating new functions, or determining how it might otherwise be used. Because users are involved, more people are
working on the code, and when more people work on the code, it evolves more quickly. Since there are no copyright or patent infringements, users do not need to worry about the conflict between an act of software piracy that may seem ethical (e.g., sharing a copy, or allowing another person to install one's licensed version of the software, perhaps in pursuit of another objective) and breaking the law (e.g., infringing upon a copyright). Finally, in the open/community source paradigm, since the user base "owns" the software, as long as the user base is interested in maintaining and evolving the software, the software remains available. With commercial software, if the company that "owns" the software and licenses it to users decides to stop evolving or supporting it, the user [individual or company] is caught in a difficult position (Quinn 2006c, pp. 189-190). Some organizations, for the reasons just cited, have embraced open source. The Sakai community (a consortium of universities and colleges) has developed educational software to support electronic learning. These universities found that available commercial software products did not fulfill their educational needs (i.e., the software was buggy, did not evolve quickly, and was not adequately supported). Their success has been so evident that some of the participating universities are considering starting open source projects for their financial systems (Baron 2006). The question of who should "own" software remains open.
Information Privacy “Our challenge is to take advantage of computing without it taking advantage of us” (Moor 1997). CIE not only considers who should control software (procedural information) and reap its benefits; it also considers who should control personal information, and who may be hurt when it is possessed/used by the wrong parties. Consider the ubiquity of our ability to capture, store, and study information: Commercial and
government databases; data mining; radio frequency identification; transportation, mall, and cell phone cameras; key, membership, and credit cards; GPS; ATMs; cookies; thumb print readers; and facial recognition systems. Information professionals are called upon to support this infrastructure daily. How they support this infrastructure is not simply a technical matter but also an ethical one. Where is surveillance appropriate? How should surveillance targets be notified? Should data be combined? How can prospective personal damage be forecast? When information is used, and people are hurt, who is responsible? While some research has been performed on balancing a person's need to control information about themselves with society's need for commerce and security, the questions are still open (Hodel-Widmer 2006). The globalized economy and multicultural society create further disagreements regarding privacy (Capurro 2005). Society needs to handle information about individuals carefully. While the academic, political, and legal arenas help with privacy-related strategy and policy, the operational aspects of privacy fall squarely on information professionals.
Information Quality

The final example of a CIE issue discussed here involves questions about the quality of procedural information (i.e., software) or declarative information (i.e., data). Considering the quality of software includes understanding its levels of robustness, performance, recoverability, reliability, flexibility, configurability, reusability, and portability. Dimensions of data quality include accuracy, reliability, believability, objectivity, relevancy, timeliness, completeness, volume, interpretability, understandability, consistency, and conciseness (Fisher et al. 2006, p. 43). It has been shown that poor quality information systems software has caused loss of life; one example is the Therac-25 electron accelerator accidents (Birsch 2004). Consider the following:
Between 1985 and 1987, six accidents involving Therac-25 machines occurred in the United States and Canada. These accidents produced large patient overdoses that resulted in four deaths and two serious injuries. The accidents were caused by a series of factors including two errors in the software. (Birsch 2004, p. 241)

It has also been shown that poor quality information systems data has caused loss of life; one example is the USS Vincennes/Iran Air Flight 655 disaster, in which 290 passengers were killed (Fisher and Kingma 2001). On July 3, 1988, a team aboard the USS Vincennes, an anti-air warfare cruiser, using the Aegis battle management system, on its captain's order, shot down Iran Air Flight 655 (an Airbus passenger aircraft), believing it was an attack aircraft. Fisher and Kingma (2001) explain how data quality (specifically, inconsistent data) played a role in the disaster. One major issue was that the software recycled a set of identifiers for aircraft presented within the user interface. The identifier TN4474 was used twice: once for the Airbus (Flight 655) and once for an attack aircraft (an A-6 Intruder). Flight 655 was subsequently assigned TN4131. The captain asked for the status of TN4474 and was told it was an attack aircraft, but the crew had been discussing Flight 655. The Vincennes crew subsequently shot down Flight 655. The data were inconsistent, leading to confusion as to which aircraft should be targeted. Another data quality issue in the USS Vincennes/Iran Air Flight 655 disaster was that the data the crew and captain received were incomplete. White half-diamond "dots" indicated hostile aircraft; white half-circle "dots" identified friendly aircraft. White lines projecting from the dots indicated course and speed. Using the relative length of these white lines to indicate speed precluded using relative length to indicate aircraft size; an Airbus is much larger than an attack aircraft. Note that the data identified by Fisher and Kingma (2001) as being of poor quality were also manipulated by software that itself had quality problems.
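Viewed as a software design matter, the recycling of track numbers is an instance of a general data-quality hazard: identifiers are returned to a free pool while stale references to them may still be in circulation. The Python sketch below is purely illustrative (it is not the Aegis system's actual logic, and the class and method names are hypothetical); it simply shows how a recycled identifier lets a status query silently answer about the wrong aircraft:

```python
class TrackTable:
    """Toy track store that recycles identifiers from a small fixed pool."""

    def __init__(self, id_pool):
        self._free_ids = list(id_pool)
        self._tracks = {}

    def assign(self, aircraft):
        track_id = self._free_ids.pop(0)
        self._tracks[track_id] = aircraft
        return track_id

    def release(self, track_id):
        # Returning the identifier to the pool is the hazard: any stale
        # reference to track_id now resolves to whatever reuses it next.
        del self._tracks[track_id]
        self._free_ids.insert(0, track_id)

    def status(self, track_id):
        return self._tracks.get(track_id, "unknown")

table = TrackTable(["TN4474", "TN4131"])
tid = table.assign("Airbus A300 (Flight 655)")  # assigned TN4474
table.release(tid)                              # flight re-designated later
table.assign("A-6 Intruder")                    # TN4474 recycled
# An operator still holding the old identifier now gets the wrong answer:
print(table.status("TN4474"))                   # -> "A-6 Intruder"
```

A common mitigation is to make identifiers globally unique (e.g., monotonically increasing) rather than recycled, so that stale references fail loudly instead of quietly resolving to a different entity.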
These three sample CIE foci (information ownership, privacy, and quality) can be affected by information professionals or users, and through them, society. Other CIE domains exist, such as the digital divide (van Dijk 2005). Finally, future computing and information professionals will face new types of CIE problems. They may resolve issues that arise when humans become more dependent upon embedded-within-the-human artificial systems that enable life. Current examples include cochlear implants, cardiac defibrillators and pacemakers, and robotic limbs. It is easy to imagine more information technology continuing to join the human body. As this occurs, it seems likely that individuals will use information ethics in order to deal with issues raised by medically-oriented, information-intensive extensions/replacements. This is not the only potential CIE domain. If and when autonomous machines surface, CIE will become especially important (Moor 2006). Thus, as the speed of [information-oriented] technological change increases, developing an understanding and appreciation of, and solutions to, CIE issues is clearly of significant and increasing importance to society. As a result, it is very important that the educational experiences of computing and information professionals include sufficient preparation in methods to consider CIE issues.
COMPUTING AND INFORMATION ETHICS EDUCATION: APPROACHES

Why Is Computing and Information Ethics Education Important?

Teaching courses in CIE as part of information technology, information systems, information studies, information science, information management, information policy, and computer science
programs is essential to deepen and broaden the ethical perspectives of computing and information professionals. Computer science graduates develop software. Information technology graduates administer software. Information systems graduates design/install software. Information science graduates support search for and access to information. Information management students enable their organizations' use of information, and information policy graduates suggest best uses and enablers of information. To make our discussion more tangible, we next present a potential ethical scenario for one type of information professional (a software engineer):

Imagine that you are a computing or information professional with a consulting business that helps other companies develop new software products. You have just accepted a contract with a software company, based in Greece, to assist in the development of a software product. Early on, you discover that their software appears to violate the patent of a rival company, based in India, whose software you are also involved in developing. How can you resolve this situation to protect the interests of both of your clients without violating (1) the non-disclosure agreements that you signed with each of these companies, (2) your professional code of ethics, or (3) your own personal sense of integrity?[1]

This situation presents the reader with a potential ethical conundrum related to intellectual property within an international context. Unfortunately, computing and information professionals are often ill-equipped in practice to deal with such scenarios. For example, Chuang and Chen (1999) found that in four different countries, including the US, computing and information professionals do not have sufficient educational opportunities to learn about CIE. It is important that students not only have the opportunity to learn about CIE, but also that they are exposed to a wide range of ethical perspectives (Clarkeburn et al. 2003).
Computing and information professionals must be provided not only with technical training but also with ethics education to prepare them to deal with CIE issues such as information ownership, privacy, and quality within an international context.
What Has Been Done?

CIE education has been built upon the work of pioneers such as Walter Maner (1978/1980), Terrell Bynum (1985), James Moor (1985), and Deborah Johnson (1985). Walter Maner, when teaching a medical ethics course, realized that the computer could affect ethical decisions in medicine (Bynum 2001). He subsequently self-published A Starter Kit for Teaching Computer Ethics in 1978. As Maner marketed and made his starter kit known, he convinced Terrell Bynum of the efficacy and importance of computer ethics. In 1983, Terrell Bynum, as the editor of the journal Metaphilosophy, held an essay contest aimed at a special issue focused upon computer ethics. The winner of the essay competition was James Moor, with "What is Computer Ethics?" (Bynum 2001). Also, in 1985, Deborah Johnson wrote the first textbook in computer ethics, entitled Computer Ethics. Since the founding of CIE (then, and sometimes still, called "computer ethics") education, an increasing number of scholars have contributed knowledge. In 1991, the Association for Computing Machinery (ACM) and the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE-CS) published Computing Curricula 1991. Within the "Underlying Principles" section, and further in its "Social, Ethical, Professional Issues" sections, it suggested that:

Undergraduates also need to understand the basic cultural, social, legal, and ethical issues inherent in the discipline of computing…Students also need to develop the ability to ask serious questions about the social impact of computing and to evaluate proposed answers to those questions. Future practitioners must be able to anticipate
the impact of introducing a given product into a given environment. Will that product enhance or degrade the quality of life? What will the impact be upon individuals, groups, and institutions?... Future practitioners must understand the responsibility that they will bear, and the possible consequences of failure…To provide this level of awareness, undergraduate programs should devote explicit curricular time to the study of social and professional issues. (Computing Curricula 1991, Section 5.3)

In 1994, the National Science Foundation funded a three-year Project on the Integration of Ethics and Social Impact Across the Computer Science Curriculum. Its first report (Huff and Martin 1995, Martin et al. 1996a) described a conceptual framework for undergraduate CS ethics education and identified ethical principles and skills that a computer science undergraduate should incorporate and develop. The second report (Martin et al. 1996b) outlined a complete knowledge area that should be taught to undergraduate CS students; this knowledge area described coverage of five educational units that addressed the principles and skills identified in Report 1. It also suggested strategies for incorporating the material into a CS curriculum, for example via a course on computers and society or in a capstone course, and spoke to actual techniques that could be used by instructors in the classroom. In its third report (Martin 1999, Martin and Weltz 1999), the group presented in-depth models of how to integrate the material developed throughout the project across the undergraduate CS curriculum. Another group that developed goals and standards for CIE education was the Social and Professional Issues Knowledge Area Focus Group of the ACM/IEEE Computing Curricula 2001 effort. While Computing Curricula 1991 was an initial step towards suggesting discussions of the social context of computing, responsibilities of the computing professional, and intellectual
property, Computing Curricula 2001 identified and suggested many more facets of CIE, such as making and evaluating ethical arguments, identifying assumptions and values, and using moral philosophic theories such as utilitarianism or the Categorical Imperative. It also discussed CIE application areas such as information privacy, computer crime, and so on. At the same time, as these projects were developing CIE pedagogy, non-computer-science, information-oriented schools began considering information ethics (Carbo and Almagno 2001, Froehlich 1992, Hauptman 1988, Koehler 2003, Kostrewski and Oppenheim 1980, Samek 2005, Severson 1997). Management scientists identified major information ethics issues. For example, Richard Mason (1986) described four ethical issues of the information age: privacy, accuracy, property, and accessibility. Others began work on how organizational information systems provide the environment for ethical dilemmas, and made this work available to educators (Paradice and Dejoie 1988, Mason et al. 1995). In addition to this work, many other resources were developed. They include websites such as ComputingCases.org, the Online Ethics Center for Engineering and Science (http://onlineethics.org), and Ethics In Computing (http://ethics.csc.ncsu.edu/); organizations such as the Electronic Frontier Foundation, the Centre for Computing and Social Responsibility, and the International Society for Ethics and Information Technology; conferences such as ETHICOMP and Computer Ethics: Philosophical Enquiry; and journals such as Journal of Information Ethics and Ethics and Information Technology.
Where Are We Now?

CIE educators and researchers have made significant progress in CIE education delivery. The most recently published study on computer ethics education involved interviews with representatives from fifty of the two hundred four-year
undergraduate computer science programs in the United States (Quinn 2006a). Thirty percent of these departments meet their need to provide CIE education by including discussions of social and ethical issues of computing within courses not primarily centered upon CIE, such as introductory or capstone courses (Quinn 2006a, p. 336). Fifteen percent of the departments require students to take an ethics course from another department (usually philosophy). Fifty-five percent of the departments require CS majors to take a social and ethical issues of computing course from the CS department; in most cases this is an upper-level course worth three credits (Quinn 2006a, p. 336). Unfortunately, many respondents indicated that the reason they include ethics within the curriculum is to satisfy accreditation requirements, not an earnest belief in the importance of CIE education (Quinn 2006a). Another recent study involved a survey returned by sixty of the 167 schools that teach Management Information Systems or Computer Information Systems (Towell et al. 2004). This group includes primarily "information systems" programs, as compared to computer science or information science programs. The top methods used to teach CIE in these schools, though not necessarily in standalone courses, were: case studies (56.3%), personal experiences (54%), readings (41.4%), codes of ethics (28.7%), guest speakers (18.4%), and research papers (10.3%) (Towell et al. 2004, p. 294, Figure 1). In terms of the distribution of ethics within the curriculum, the responses were: minimal coverage (40.2%), focused in a few courses (24.1%), woven throughout courses (16.1%), focused in a single course (10.3%), and largely ignored (6.9%). Finally, in the field of library and information science, a recent study of forty-nine graduate programs accredited by the American Library Association found that only sixteen actively taught courses focused primarily on information ethics issues (Buchanan 2004). These courses addressed "major ethical issues such
as privacy, censorship, aspects of intellectual freedom, and the information rich and poor" (Buchanan 2004, p. 56). In terms of teaching methods, "most classes embraced a multi-method approach of lecture, discussion, and case analysis, with the readings as the basis for departure" (Buchanan 2004, p. 58).

Other issues have also been identified:

• Some computing and information faculty do not feel qualified to teach CIE (Quinn 2006b).
• Courses outside of the CS or IS department (e.g., in philosophy) often do not provide opportunities for computing and information students to practice applying ethics to CIE issues (Quinn 2006b).
• Where specialized courses in CIE do exist, the focus is typically on basic moral philosophies such as the Categorical Imperative or utilitarianism, codes of ethics, and a few quintessential CIE cases (Quinn 2006b).

However, CIE issues can be resolved using many complementary approaches: normative philosophical theories, specialized normative philosophical theories (Smith 2002), descriptive models (Robbins 2005), finding and using previous similar cases (McLaren 2006), referencing, interpreting, and using professional, industry, or corporate codes of ethics or conduct (Smith 2002), and referring to one's values, experience, and perspective while exercising complex problem solving (Robbins and Hall 2007). Research about engaging students should continue. Yet another area for further research is the use of computing and information technology to teach CIE.
COMPUTING AND INFORMATION ETHICS EDUCATION: RESEARCH DIRECTIONS

Pedagogical Research

To ensure that we provide the highest quality of CIE education to future computer and information professionals, we need to create innovative approaches to CIE education and empirically
evaluate their effectiveness. For example, Huff and Frey (2005) integrate research from moral psychology with ethics pedagogy. One interesting aspect they outline recognizes that there may be two types of cognitive processes. One [unconscious, intuitive] process is quick to suggest action, but modifies itself very slowly, and the result of the process is a firm, internalized feeling about an issue (Huff and Frey 2005). Another [conscious, reasoning-oriented] process is slow relative to its counterpart, but can adapt to changing circumstances. Individuals in moral dilemmas apply both: first the intuitive process, leading to a judgment, followed by conscious reasoning. Huff and Frey further indicate that adjustment of the intuition (e.g., learning) is better obtained via analogical, metaphorical, and narrative argument than by rule-based reasoning. Based on their understanding of moral psychology research, Huff and Frey (2005, pp. 395-396) suggest that ethics education, primarily through cases, sometimes with role-play, should support students in:

1. Mastering knowledge about applying basic and intermediate ethical concepts
2. Taking the perspective of the other as well as generating novel solutions to problems
3. Learning how to sense problems
4. Adopting professional standards
5. Becoming part of an ethical community with other individuals
Keefer (2005) reports a study of the processes used by students and ethicists (experienced ethical problem solvers) as they resolve posed ethical dilemmas, presented as cases. Keefer studied how students resolved the problems and identified three methodologies: application of a principle or rule, study of the consequences of alternatives, and use of role-specific obligations when considering the problem. The study found that students who developed complex answers used role-specific obligations exclusively or primarily, sometimes in combination with principle/rule or consequence-based analyses. It also found that students who used principles/rules
and consequence-based analyses perceived the problem as a decision in which one alternative was to be chosen to the exclusion of other options, and that students who used the role-specific obligation technique created multi-dimensional solutions to problems. Finally, it found that ethicists almost exclusively used role-specific obligations to develop their answers. These ethicists also developed solutions that were stronger than those of the students. This work shows that the development and use of cases in education should be careful to support the development of multi-dimensional solutions and to help students think about professional role-specific obligations.

Weber (2007), based upon experiences in the classroom, suggests that pedagogy should vary based upon students' learning styles and personality traits. One categorization of learning styles is the following: Visual, Aural (or auditory), Read/Write, and Kinesthetic. Visual learners learn by viewing graphical representations; Aural learners learn best in lectures; Read/Write learners take in and process information by reading and writing; Kinesthetic learners need to "do" the task (Lujan and DiCarlo 2006). Weber also suggests that learners be part of the process of selecting what is studied or how to learn. He further emphasizes the need to actively support inductive learning in the student, but with subtlety by the instructor: instructors should not provide answers, but instead let students know when there may be incomplete analyses. Weber also suggests transparent pre- and post-measurement so that positive, reflection-supporting feedback can occur. Finally, Loui (2005) provides four examples of good practice: parsing a large problem into "bite-size" pieces, adjusting the presentation of material based on interactions with students, using active learning, and facilitating student cooperation.
What Pedagogical CIE Education Research Still Needs to Be Done?

To achieve this new type of learning, CIE education researchers should apply (and empirically
validate within the CIE education domain) findings from the field of learning science (Bransford et al. 1999, Sawyer 2006). This view of learning has developed as a result of advances in what is known about memory and knowledge, problem solving and reasoning, meta-cognitive processes such as explaining to oneself, and community and culture (Bransford et al. 1999). This new science has developed, and continues to develop, suggestions for learning and transfer, the design of learning environments, and effective teaching. In terms of learning and transfer, these researchers have found that helping students understand how they learn itself helps those students learn (Bransford et al. 1999: Learning and Transfer). Learning environments have been shown to be more successful when (1) educators know what the learner knows and believes prior to teaching and learning, (2) the curriculum is very structured, (3) assessments tied to learning goals are performed frequently to provide learner feedback, and (4) a learning community exists in which norms that reflect the importance of learning, high standards, and reflection and revision are fostered (Bransford et al. 1999, Sawyer 2006). It is now known that effective teachers "understand in a pedagogically reflective way; they must not only know their own way around a discipline, but must know the 'conceptual barriers' likely to hinder others" (McDonald and Naso, 1986, p. 8, in Bransford et al. 1999, Chapter 7).

In order to develop an improved environment for CIE education, it is important to understand the relationships between the pedagogical materials used, the teaching techniques used to transfer information and skills as students learn, and the outcomes of CIE education. Many researchers have provided reports of their experiences with materials and techniques. Many teaching materials exist (e.g., ComputingCases.org, the Online Ethics Center for Engineering and Science, Ethics in Computing). Techniques have been suggested: these include students playing roles and teachers using the
blackboard heavily, using surveys to involve students, and managing discussions carefully (Quinn 2006a, 2006b, 2006c; Greening et al. 2004; Muskavitch 2005). Outcomes have been provided by curricula standards and by groups (Computing Curricula 1991, 2001; Project ImpactCS Reports 1, 2, 3). Outcome assessment methods have begun to be reported (Sindelar et al. 2003). However, what has not happened in the CIE education research literature is the development of an understanding of the relationships between types of students, kinds of teachers, pedagogical materials, teaching techniques, learning processes, and outcomes. Integrated knowledge such as this could support an instructor who is attempting to help a student (for example, one who values openness) learn how to consider a particular issue (e.g., information privacy), using the best technique (perhaps iterative role play with fellow students), with a case that shows the potential societal benefits of using personal information, where questions of privacy may arise.

To support the development of this knowledge, it is important to understand the processes used by people as they solve ethical problems. Better yet, it is important to understand how different types of people (e.g., those with varying values) use different processes. Unfortunately, very little is known about the processes that people generally, or students particularly, use when they solve ethical problems. Keefer (2005) has done some of the first work in this area by extending Rest's Four Component Model (1986) to seven components. These processes, however, are related to values and other personal characteristics. If we know an individual's characteristics, we can anticipate which processes are likely to be preferred (Robbins 2005). If we know which ethical problem solving processes are preferred, then we may be able to enhance those processes or teach new ones. Robbins and colleagues identified 22 relationships between value types and reasoning heuristics (which support processes), 28 relationships between reasoning heuristics and decisions, and 18 relationships between value types and decisions within a particular ethical dilemma. Values are "concepts or beliefs about desirable behaviors or end states that transcend specific situations, guide selection or evaluation of behavior or events, and are ordered by relative importance" (Schwartz and Bilsky 1987, p. 551). Reasoning heuristics are rules of thumb used by individuals who are considering an ethical dilemma. Decisions, in this context, are actual components of an overall solution. Because of space limitations, these relationships cannot be described here. This work is important because it shows that this kind of information can be captured, so that teachers and students can eventually begin to understand why other individuals approach resolving ethical dilemmas as they do.

Further, Fleischmann and Wallace have examined the role of human values in computational modeling. Specifically, this study focuses on how human values influence and are influenced by the use and design of computational models, how human values shape and are shaped by professional and organizational culture, and how these values determine and are determined by the success of the computational modeling process and of computational models as products. This project builds upon earlier collaborations on the importance of transparency in the design and use of computational models (Fleischmann and Wallace 2005) and the ethics of modeling (Wallace and Fleischmann 2005), as well as twenty years of research by Wallace (1994) on the ethics of modeling. The findings of this ongoing study (Fleischmann and Wallace 2006, in press), which adopts a case study approach to data collection and analysis, demonstrate the importance of understanding ethics and values for a specific subset of computing and information professionals. Further, these results demonstrate that current educational efforts are not sufficient, as computational modelers (a type of computing and information professional) feel that they have
not had adequate opportunities for ethics education in their past experiences in computing and information degree programs.
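To make concrete what capturing such value-heuristic-decision relationships might look like, consider the following minimal sketch, written in Java (the language this chapter later mentions in connection with dynamic ethics-education systems). The value types, heuristics, and decision components shown are hypothetical placeholders invented for illustration; they are not the actual relationships identified by Robbins and colleagues.

```java
import java.util.List;
import java.util.Map;

// A minimal, hypothetical representation of empirically captured links
// between value types, reasoning heuristics, and decision components.
// All entries are illustrative placeholders, NOT the actual
// relationships reported by Robbins and colleagues.
public class ValueHeuristicMap {

    // value type -> reasoning heuristics it tends to activate
    static final Map<String, List<String>> VALUE_TO_HEURISTICS = Map.of(
        "openness",    List.of("consider stakeholders' perspectives"),
        "security",    List.of("minimize worst-case harm", "follow the code of ethics"),
        "achievement", List.of("weigh costs against benefits")
    );

    // reasoning heuristic -> decision components it tends to support
    static final Map<String, List<String>> HEURISTIC_TO_DECISIONS = Map.of(
        "consider stakeholders' perspectives", List.of("consult both clients"),
        "minimize worst-case harm",            List.of("disclose the conflict"),
        "follow the code of ethics",           List.of("recuse from one contract"),
        "weigh costs against benefits",        List.of("renegotiate the contracts")
    );

    // Trace the heuristics and candidate decisions associated with a value type.
    public static void main(String[] args) {
        String value = "security";
        for (String h : VALUE_TO_HEURISTICS.getOrDefault(value, List.of())) {
            System.out.println(value + " -> " + h + " -> "
                + HEURISTIC_TO_DECISIONS.getOrDefault(h, List.of()));
        }
    }
}
```

Even a toy representation like this suggests how a tutoring system could surface, for a given student profile, the heuristics and candidate decisions most worth reflecting upon.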
Research Towards Using Innovative Technology to Support CIE Education

CIE education need not occur only through traditional means; it may also be supported using modern computing and information technology. Some computer programs have been developed to help students and others understand and appreciate ethics (e.g., Goldin et al. 2001, Gotterbarn 2004, Maner 1998, McLaren 2006, Sherratt, Rogerson and Fairweather 2005, van der Burg and van de Poel 2005). For example, the Interactive Computer Ethics Explorer (Maner 1998) leads the user through questions and shares summary information regarding answers from previous participants. The Professional Ethics Tutoring Environment (Goldin et al. 2001) helps a student analyze a case in a guided, step-by-step manner. Agora (van der Burg and van de Poel 2005) provides structure to ethics courses through a portal that instructors can use to present cases that are then considered by students. Finally, Sherratt, Rogerson and Fairweather (2005) built a case-based aiding system for information and computing students that uses a decision tree paradigm and values for ethical domains, professional duties, and personal moral duties to find similar cases that students can use when analyzing an ethical dilemma.

Robbins and colleagues developed and assessed a decision support system that helps a student consider an ethical problem from new and different perspectives (Robbins, Wallace, and Puka 2004). The primary goal of the research was to show that information technology can be used to support ethical problem solving, as opposed to building a theoretically correct or complete ethics decision support system. The decision aid is web-based and provides content that summarizes and simplifies five example moral philosophies
(selected to provide a wide range of perspectives): the ethic of care (Gilligan 1977), egoism (Rand 1964), virtue ethics, the Categorical Imperative, and utilitarianism. Other approaches, such as those offered by religious leaders, or the many philosophers not represented, could at some future point be included within decision aids of this type. The intent was to make the simplified philosophies, as presented, transparent, as suggested by Fleischmann and Wallace (2005), in order to make the philosophic theories more accessible. The specific ethical dilemma used was the Pharmanet case (Chee and Schneberger 1998), which asks a reader to consider how to handle the prospective implementation of a widely accessible database of pharmacy prescription records. This research demonstrated that web-based ethics decision aids can be built and used, and can improve the decision-making of students confronted with case-based dilemmas in a laboratory environment (Robbins et al. 2004); it also showed that the relationships among value types, ethical ideology dimensions, reasoning criteria, and decisions made when resolving an ethical dilemma can be studied and understood (Robbins 2005).
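As a rough illustration of the kind of structure such a web-based aid might employ, the sketch below presents a single case through five simplified philosophical lenses, echoing the five philosophies listed above. The one-line prompts are illustrative simplifications written for this sketch; they do not reproduce the content or architecture of the Robbins, Wallace, and Puka system.

```java
import java.util.Map;

// A minimal sketch of a decision aid that, like the system described
// above, presents one case through several simplified philosophical
// lenses. The one-line prompts are illustrative simplifications, not
// the actual content of the Robbins, Wallace, and Puka aid.
public class EthicsDecisionAid {

    static final Map<String, String> LENSES = Map.of(
        "Ethic of care",          "Who is vulnerable here, and what do our relationships demand?",
        "Egoism",                 "Which option best serves your own rational long-term interest?",
        "Virtue ethics",          "What would an honest, courageous practitioner do?",
        "Categorical Imperative", "Could the rule behind this act be willed as a universal law?",
        "Utilitarianism",         "Which option produces the greatest net benefit overall?"
    );

    public static void main(String[] args) {
        String dilemma = "A widely accessible database of pharmacy prescription "
                       + "records is about to be implemented. How should it be handled?";
        System.out.println("Case: " + dilemma + "\n");
        // Present each perspective in turn, prompting reflection rather
        // than prescribing an answer (transparency over authority).
        LENSES.forEach((name, prompt) -> System.out.println(name + ": " + prompt));
    }
}
```

The design choice worth noting is that the aid prompts with questions rather than verdicts, which is one way to keep the simplified philosophies transparent to the user.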
New Directions

While each of these information systems makes significant contributions to computer-based ethics education, new directions remain.

• Direction 1: Since these systems tend to have static HTML-based interfaces that do not vary based on the user's actions, one fruitful direction would be the development of a dynamic system with the potential to embed more interactivity through the use of a more complex programming language such as Java. One of the most important benefits of a dynamic system is that it supports interaction and iteration, which in turn support reflection and revision of knowledge.

• Direction 2: Most of these systems are built solely for individual use and decision-making, leaving opportunities for designing systems that allow groups of users to collaborate, or for individual users to interact with intelligent agents, which could demonstrate how ethical decision-making is a social process.
As scientists continue to study learning, new research procedures and methodologies are emerging that are likely to alter current theoretical conceptions of learning, such as computational modeling research. The scientific work encompasses a broad range of cognitive and neuroscience issues in learning, memory, language, and cognitive development...The research is designed to develop explicit computational models to refine and extend basic principles, as well as to apply the models to substantive research questions through behavioral experiments, computer simulations, functional brain imaging, and mathematical analyses. These studies are thus contributing to modification of both theory and practice. New models also encompass learning in adulthood to add an important dimension to the scientific knowledge base. (Bransford et al. 1999, Chapter 1)

In these veins, and beginning to address Direction 1 (static → dynamic user responses), Robbins and Wallace have designed, developed, verified, and validated a computational model of a group of software agents solving an ethical problem (Robbins 2005, Robbins and Wallace 2007), based on the study of groups of students solving ethical problems as well as students solving problems in isolation. For these studies the software agent paradigm was used in tandem with the concept of practical reasoning (Wallace 2001). In the context of artificial intelligence, an agent is a computer system that is capable of autonomous action and that interacts with other agents. Resolving ethical
problems requires considering criteria, autonomy, and, sometimes, interaction. Software agents can consider criteria, act autonomously, and interact with other agents (virtual or human). This research demonstrated that a software agent can mimic the practical reasoning-based resolution of an ethical dilemma (Robbins 2005), and the model contained agents based upon actual people who were studied empirically. These types of agents could be part of the ethics education systems of the future. The multi-agent work of Robbins and Wallace particularly supports interaction with a group of individuals (Direction 2: individual → social ethical problem solving), because multiple agents support multiple dynamic user profiles, multiple virtual actors, and so on.

Supporting Directions 1 and 2 concurrently, Fleischmann demonstrates that human values play an important role in the design and use of two additional information technologies: educational simulations (Fleischmann 2003, 2004, 2005, 2006a, 2006b, 2007a) and digital libraries (Fleischmann 2007b, 2007c; Jaeger and Fleischmann 2007). This research identified online versus face-to-face use and individual versus group use of educational simulations as important areas for additional research, emphasizing the importance of face-to-face collaboration in educational simulation use (Fleischmann 2005). Still other research areas may be integrated with CIE education research to support Directions 1 and 2. For example, towards fulfilling Direction 1 (static → dynamic user responses), TRUTH-TELLER (McLaren 2006) compares two cases and helps users identify similarities and differences, while SIROCCO (McLaren 2006) analyzes the relationships between the facts of cases and general principles and outputs a list of possibly relevant ethics codes and cases. Harrer, McLaren, and colleagues (2006) have also recently approached learning by individuals in groups by focusing on supporting a group of students solving a problem collaboratively, using a software-based cognitive tutor, fulfilling Direction 2 (individual → social ethical problem solving).
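To ground the idea that software agents can consider criteria, act autonomously, and interact, the following minimal sketch shows two agents independently scoring one option in a dilemma against their own weighted criteria and then pooling their judgments. The criteria, weights, and aggregation rule are hypothetical placeholders; this is not the practical-reasoning model of Robbins and Wallace, only a toy of the same general shape.

```java
import java.util.List;
import java.util.Map;

// A minimal sketch of software agents that "consider criteria, act
// autonomously, and interact." Criteria, weights, and the pooling rule
// are hypothetical placeholders, not the Robbins and Wallace model.
// Requires Java 16+ (records).
public class MultiAgentEthicsSketch {

    record Agent(String name, Map<String, Double> weights) {
        // Each agent autonomously scores an option against its own
        // weighted criteria (a crude stand-in for practical reasoning).
        double evaluate(Map<String, Double> optionScores) {
            return weights.entrySet().stream()
                .mapToDouble(e -> e.getValue() * optionScores.getOrDefault(e.getKey(), 0.0))
                .sum();
        }
    }

    public static void main(String[] args) {
        // One option in an ethical dilemma, rated on two criteria.
        Map<String, Double> disclose = Map.of("harmAvoided", 0.9, "dutyKept", 0.8);

        List<Agent> group = List.of(
            new Agent("consequence-oriented agent", Map.of("harmAvoided", 1.0, "dutyKept", 0.2)),
            new Agent("duty-oriented agent",        Map.of("harmAvoided", 0.2, "dutyKept", 1.0)));

        // Interaction, reduced here to pooling individual judgments.
        double groupScore = group.stream()
            .mapToDouble(a -> a.evaluate(disclose))
            .average().orElse(0.0);

        group.forEach(a -> System.out.printf("%s: %.2f%n", a.name(), a.evaluate(disclose)));
        System.out.printf("group judgment: %.2f%n", groupScore);
    }
}
```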
CONCLUSION

In a world that is becoming increasingly dependent upon information and the computing devices that support this information, it is important that individuals who create, provide, own, access, use, transform, manage, or govern information are knowledgeable about and act according to proper moral considerations. Ethics includes considering the rights of others, the consequences of one's actions, and fairness; it also includes caring and fulfilling one's duties in the best manner possible. CIE education is relatively young, having been around for about a quarter century. It can be improved by developing an integrated understanding of the inputs, processes, and outputs of learning as well as of ethical problem solving. Further, CIE education can be improved by using new computing and information-based technologies.
REFERENCES

Beagle, D.R., Bailey, D.R. & Tierney, B. (2006). The information commons handbook. Neal-Schuman Publishers.

Birsch, D. (2004). Moral responsibility for harm caused by computer system failures. Ethics and Information Technology, 6, 233-245.

Bransford, J.D., Brown, A.L. & Cocking, R.R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Buchanan, E.A. (2004). Ethics in library and information science: What are we teaching? Journal of Information Ethics, 13(4), 51-60.
Burmeister, O.K. (2000). HCI professionalism: Ethical concerns in usability engineering. In Selected papers from the second Australian institute conference on computer ethics, Canberra, Australia, pp. 11-17.

Bynum, T.W. (Ed.) (1985). Computers and ethics. Basil Blackwell. (Published as the October 1985 issue of Metaphilosophy.)

Bynum, T.W. (2001). Computer ethics: Its birth and future. Ethics and Information Technology, 3, 109-112.

Capurro, R. (2005). Privacy: An intercultural perspective. Ethics and Information Technology, 7, 37-47.

Carbo, T. & Almagno, S. (2001). Information ethics: The duty, privilege and challenge of educating information professionals. Library Trends, 49(3), 510-518.

Cerqui, D. (2002). The future of humankind in the era of human and computer hybridization: An anthropological analysis. Ethics and Information Technology, 4, 101-108.

Chang, C., Denning, P.J., Cross II, J.H., Engel, G., Sloan, R., Carver, D., Eckhouse, R., King, W., Lau, F., Mengel, S., Srimani, P., Roberts, E., Shackelford, R., Austing, R., Cover, C.F., Davies, G., McGettrick, A., Schneider, G.M., & Wolz, U. (December 15, 2001). Computing Curricula 2001, Report of the ACM/IEEE-CS Joint Curriculum Task Force. http://www.computer.org/portal/cms_docs_ieeecs/ieeecs/education/cc2001/cc2001.pdf. Also at http://www.sigcse.org/cc2001/

Chee, E., & Schneberger, S. (1998). British Columbia's PHARMANET project. University of Western Ontario - Richard Ivey School of Business.

Chuang, C.P. & Chen, J.C. (1999). Issues in information ethics and educational policies for the
coming age. Journal of Industrial Technology, 15, 1-6.

Clarkeburn, H.M., Downie, J.R., Gray, C., & Matthew, R.G.S. (2003). Measuring ethical development in life sciences students: A study using Perry's developmental model. Studies in Higher Education, 28, 443-456.

Coleman, K.G. (2001). Android arete: Toward a virtue ethic for computational agents. Ethics and Information Technology, 3, 247-265.

Consequences of Computing: A Framework for Teaching; Project ImpactCS Report 1. http://www.seas.gwu.edu/~impactcs/paper/toc.html

Fisher, C. & Kingma, B.R. (2001). Criticality of data quality as exemplified in two disasters. Information and Management, 39, 109-116.

Fisher, C., Lauria, E., Chengalur-Smith, S., & Wang, R. (2006). Introduction to information quality. MIT Information Quality Program.

Fleischmann, K.R. (2003). Frog and cyberfrog are friends: Dissection simulation and animal advocacy. Society and Animals, 11(2), 123-143.

Fleischmann, K.R. (2004). Exploring the design-use interface: The agency of boundary objects in educational technology. Doctoral Dissertation, Rensselaer Polytechnic Institute, Troy, NY.

Fleischmann, K.R. (2005). Virtual dissection and physical collaboration. First Monday, 10(5).

Fleischmann, K.R. (2006a). Boundary objects with agency: A method for studying the design-use interface. The Information Society, 22(2), 77-87.

Fleischmann, K.R. (2006b). Do-it-yourself information technology: Role hybridization and the design-use interface. Journal of the American Society for Information Science and Technology, 57(1), 87-95.
Fleischmann, K.R. (2007a). Standardization from below: Science and technology standards and educational software. Educational Technology & Society, 10(4), 110-117.

Fleischmann, K.R. (2007b). Digital libraries and human values: Human-computer interaction meets social informatics. Proceedings of the 70th Annual Conference of the American Society for Information Science and Technology, Milwaukee, WI.

Fleischmann, K.R. (2007c). Digital libraries with embedded values: Combining insights from LIS and science and technology studies. Library Quarterly, 77(4), 409-427.

Fleischmann, K.R. & Wallace, W.A. (2005). A covenant with transparency: Opening the black box of models. Communications of the ACM, 48(5), 93-97.

Fleischmann, K.R. & Wallace, W.A. (2006). Ethical implications of values embedded in computational models: An exploratory study. Proceedings of the 69th Annual Conference of the American Society for Information Science and Technology, Austin, TX.

Fleischmann, K.R. & Wallace, W.A. (in press). Ensuring transparency in computational modeling: How and why modelers make models transparent. Communications of the ACM.

Froehlich, T.J. (1992). Ethical considerations of information professionals. Annual Review of Information Science and Technology, 27.

From Awareness to Action: Integrating Ethics and Social Responsibility across the Computer Science Curriculum; Project ImpactCS Report 3. http://www.seas.gwu.edu/~impactcs/paper3/toc.html. Retrieved June 12, 2007.

Garson, G.D. (2000). Social dimensions of information technology: Issues for the new millennium. Hershey, PA: Idea Group.
Gilligan, C. (1977). Concepts of the self and of morality. Harvard Educational Review, 481-517. (Reprinted in M. Pearsall (Ed.), Women and values: Readings in recent feminist philosophy. Belmont, CA: Wadsworth.)

Goldin, I.M., Ashley, K.D., & Pinkus, R.L. (2001). Introducing PETE: Computer support for teaching ethics. Proceedings of the Eighth International Conference on Artificial Intelligence and Law (ICAIL-2001). Association for Computing Machinery, New York.

Gotterbarn, D. (1992). A capstone course in computer ethics. In Bynum, T.W., Maner, W., & Fodor, J.L. (Eds.), Teaching computer ethics. First presented at and published in Proceedings of the National Conference on Computing and Values, New Haven, CT.

Gotterbarn, D. (2004). An ethical decision support tool: Improving the identification and response to the ethical dimensions of software projects. Ethicomp Journal, 1(1).

Greening, T., Kay, J., & Kummerfield, B. (2004). Integrating ethical content into computing curricula. In Raymond Lister & Alison Young (Eds.), Conferences in Research and Practice in Information Technology, Vol. 30.

Harrer, A., McLaren, B.M., Walker, E., Bollen, L., & Sewall, J. (2006). Creating cognitive tutors for collaborative learning: Steps toward realization. The Journal of Personalization Research, 16, 175-209.

Hauptman, R. (1988). Ethical challenges in librarianship. Phoenix, AZ: Oryx Press.

Hodel-Widmer, T.B. (2006). Designing databases that enhance people's privacy without hindering organizations. Ethics and Information Technology, 8, 3-15.

Huff, C. & Frey, W. (2005). Moral pedagogy and practical ethics. Science and Engineering Ethics, 11, 389-408.
Huff, C. & Martin, C.D. (1995). Computing consequences: A framework for teaching ethical computing. Communications of the ACM, 38(12), 75-84.

Implementing the tenth strand: Extending the curriculum requirements for computer science, Project ImpactCS Report 2. Retrieved June 12, 2007 from http://www.seas.gwu.edu/~impactcs/paper2/toc.html

Jaeger, P.T. & Fleischmann, K.R. (2007). Public libraries, values, trust, and e-government. Information Technology & Libraries, 26(4), 35-43.

Johnson, D.G. (1985). Computer ethics (First Edition). Upper Saddle River, NJ: Prentice Hall.

Keefer, M.W. (2005). Making good use of online case study materials. Science and Engineering Ethics, 11, 413-429.

Koehler, W. (2003). Professional values and ethics as defined by "The LIS Discipline." Journal of Education for Library and Information Science, 44(2), 99-119.

Kostrewski, B.J., & Oppenheim, C. (1980). Ethics in information science. Journal of Information Science, 1(5), 277-283.

Loui, M.C. (2005). Educational technologies and the teaching of ethics in science and engineering. Science and Engineering Ethics, 11, 435-446.

Lujan, H.L., & DiCarlo, S.E. (2006). First year students prefer multiple learning styles. Advances in Physiology Education, 30, 13-16.

Maner, W. (1978/1980). Starter kit on teaching computer ethics. Self published in 1978. Republished in 1980 by Helvetia Press in cooperation with the National Information and Resource Center for Teaching Philosophy.

Maner, W. (1998). ICEE: Online ethical scenarios with interactive surveys and real-time demographics. Proceedings of an International Conference on the Ethical Issues of Using Information Technology, Erasmus University, Rotterdam, 25-27 March, 1998, 462-470.

Martin, C.D. (1999). From awareness to responsible action (part 2): Developing a curriculum with progressive integration of ethics and social impact. SIGCSE Bulletin, 31(4).

Martin, C.D. & Holz, H.J. (1992). Integrating social impact and ethics issues across the computer science curriculum. Information Processing 92: Proceedings of the 12th World Computer Congress, Madrid, Spain, September, Vol. II: Education and Society, pp. 239-245.

Martin, C.D. & Weltz, E. (1999). From awareness to action: Integrating ethics and social responsibility into the computer science curriculum. Computers and Society, 29(2).

Martin, C.D., Huff, C., Gotterbarn, D., & Miller, K. (1996a). A framework for implementing and teaching the social and ethical impact of computing. Education and Information Technologies, 1(1).

Martin, C.D., Huff, C., Gotterbarn, D., & Miller, K. (1996b). Implementing a tenth strand in the CS curriculum. Communications of the ACM, 39(12), 75-84.

Mason, R.O. (1986). Four ethical issues of the information age. Management Information Systems Quarterly, 10(1), 5-12.

Mason, R.O., Mason, F.M., & Culnan, M.J. (1995). Ethics of information management. Thousand Oaks, CA: Sage.

McLaren, B.M. (2006). Computational models of ethical reasoning: Challenges, initial steps, and future directions. IEEE Intelligent Systems, 21(4), 29-37.

Miller, K. (1988). Integrating ethics into the computer science curriculum. Computer Science Education, 1(1), 37-52.
Mockus, A., Fielding, R.T., & Herbsleb, J.D. (2002). Two case studies of open source software development: Apache and Mozilla. ACM Transactions on Software Engineering and Methodology, 11(3), 309-346.

Moor, J.H. (1985). What is computer ethics? In Bynum, T.W. (Ed.), Computers and ethics. Oxford: Basil Blackwell, pp. 266-275.

Moor, J.H. (1997). Toward a theory of privacy for the information age. Computers and Society, 27(3), 27-32. September 1997.

Moor, J.H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18-21.

Muskavitch, K.M.T. (2005). Cases and goals for ethics education: Commentary on "connecting case-based ethics instruction with educational theory." Science and Engineering Ethics, 11(3), 431-434.

Paradice, D.B. & Dejoie, R.M. (1988). Ethics and MIS: A preliminary investigation. Decision Sciences Institute Proceedings - 1988, Las Vegas, pp. 598-600.

Quinn, M.J. (2006a). On teaching computer ethics within a computer science department. Science and Engineering Ethics, 12, 335-343.

Quinn, M.J. (2006b). Case-based analysis. Proceedings of the Special Interest Group for Computer Science Education '06, March 1-5, Houston, TX.

Quinn, M.J. (2006c). Ethics for the information age (Second Edition). Boston, MA: Addison-Wesley.

Rand, A. (1964). The virtue of selfishness. New American Library, New York.

Rest, J.R. (1986). Moral development: Advances in research and theory. New York: Praeger Press.
Robbins, R.W. (2005). Understanding ethical problem solving in individuals and groups: A computational ethics approach. Doctoral Dissertation, Rensselaer Polytechnic Institute, Troy, NY.

Robbins, R.W. & Hall, D.J. (2007). Decision support for individuals, groups, and organizations: Ethics and values in the context of complex problem solving. Proc. 2007 Americas Conference on Information Systems. Association for Information Systems.

Robbins, R.W. & Wallace, W.A. (2007). Decision support for ethical problem solving: A multiagent approach. Decision Support Systems, 43(4), 1571-1587.

Robbins, R.W., Wallace, W.A., & Puka, B. (2004). Supporting ethical problem solving: An exploratory investigation. Proc. 2004 ACM SIGMIS Conference on Computer Personnel Research: Careers, Culture, and Ethics in a Networked Environment, pp. 134-143. ACM Press.

Samek, T. (2005). Ethical reflection on 21st century information work: An address for teachers and librarians. Progressive Librarian, 25, 48-61.

Sawyer, R.K. (Ed.) (2006). The Cambridge handbook of the learning sciences. Cambridge: Cambridge University Press.

Schwartz, S.H. & Bilsky, W. (1987). Towards a universal psychological structure of human values. Journal of Personality and Social Psychology, 53(3), 550-562.

Severson, R. (1997). The principles of information ethics. Armonk, NY: M.E. Sharpe.

Sherratt, D., Rogerson, S., & Fairweather, N.B. (2005). The challenges of raising ethical awareness: A case-based aiding system for use by computing and ICT students. Science and Engineering Ethics, 11, 299-315.
Sindelar, M., Shuman, L., Besterfield-Sacre, M., Miller, R., Mitcham, C., Olds, B., Pinkus, R., & Wolfe, H. (2003). Assessing engineering students' abilities to resolve ethical dilemmas. Thirty-Third ASEE/IEEE Frontiers in Education Conference, November 5-8, 2003, Boulder, CO.

Smith, H.J. (2002). Ethics and information systems. ACM SIGMIS Database, 33(3), 8-22.

Spinello, R.A. & Tavani, H.T. (2004). Readings in cyberethics (Second Edition). Sudbury, MA: Jones and Bartlett.

Tavani, H.T. (2004). Ethics and technology: Ethical issues in an age of information and communication technology. Hoboken, NJ: Wiley.

Towell, E., Thompson, J.B., & McFadden, K.L. (2004). Introducing and developing professional standards in the information systems curriculum. Ethics and Information Technology, 6, 291-299.

Tucker, A.B. (Editor and Co-chair), Barnes, B.H. (Co-chair), Aiken, R.M., Barker, K., Bruce, K.B., Cain, J.T., Conry, S.E., Engel, G.L., Epstein, R.G., Lidtke, D.K., Mulder, M.C., Rogers, J.B., Spafford, E.H. & Turner, A.J. (December 17, 1990). Computing Curricula 1991, Report of the ACM/IEEE-CS Joint Curriculum Task Force. Published in 1991 by ACM Press (New York) and IEEE Computer Society Press (Los Alamitos, CA). Web: http://www.acm.org/education/curr91/homepage.html

van der Burg, S. & van de Poel, I. (2005). Teaching ethics and technology with Agora, an electronic tool. Science and Engineering Ethics, 11, 277-297.

van Dijk, J.A.G.M. (2005). The deepening divide: Inequality in the information society. Thousand Oaks, CA: Sage.

Wallace, W.A. (Ed.) (1994). Ethics in modeling. New York: Elsevier Science.

Wallace, W.A. & Fleischmann, K.R. (2005). Models and modeling. In Carl Mitcham (Ed.), Encyclopedia of science, technology, and ethics. Macmillan Reference.

Weber, J.A. (2007). Business ethics training: Insights from learning theory. Journal of Business Ethics, 70, 61-85.

Wiener, N. (1954). The human use of human beings: Cybernetics and society (Second edition, revised). New York: Doubleday Anchor.
KEY TERMS

Computing and Information Professionals: Individuals who [using computers] create, provide, own, access, use, transform, manage, or govern information.

Computing and Information Ethics: Resolving issues about the rights and responsibilities of individuals, groups, and societies as they interact with information. It includes issues of equity, care, and virtue as information is used to transform a world that is very dynamic, interconnected, (d)evolving, multi-cultural, economically-disparate, and increasingly dependent upon information.

Computing and Information Ethics Education: Processes that help computing and information professionals learn computing and information ethics.
Computing and Information Ethics Education Research: Processes that discover knowledge towards improving computing and information ethics education.

Information Ownership: A domain within computing and information ethics. Arguments for various owners of information are considered. Current discussions surround whether software written by an individual or group (procedural information) should be owned exclusively by that individual or group as intellectual property, or whether software should be owned cooperatively among users, companies, or society at large.

Information Privacy: A domain within computing and information ethics. Arguments about when information about specific persons should be public are considered. A current discussion focuses on trade-offs between personal information privacy and national security.

Information Quality: A domain within computing and information ethics. Computing and information professionals must consider the quality of software or data they are creating, providing, owning, accessing, using, transforming, managing, or governing. Low quality software or data have led to loss of human life.
ENDNOTE
1 This case excerpt was adapted from Burmeister (2000).
Chapter XXVII
The Ethical Dilemma over Money in Special Education Jennifer Candor Gahanna Lincoln High School, USA
ABSTRACT

The allocation of resources for assistive technology does not have to result in a gap between general and special education. This case study illustrates how a school district can respond to this ethical dilemma with the philosophy that special education technology is money well spent for all education, general as well as special needs. This chapter will discuss the ethical dilemma of funding assistive technology for general education and special education. It will explore the issues of ethnicity, social attitudes, and the socio-economic factors regarding technology and special education. It will also examine the tools of technology that provide a bridge to close the learning gap in special education and, finally, the benefits that the bridge provides to the special education population and general education.
The secret of education lies in respecting the pupil. -- Ralph Waldo Emerson (Emerson, 1844/2006, p. 251)

In the twenty-first century, respecting the pupil, that is, making learning an equitable, accessible, and intellectually blossoming experience, is increasingly difficult, especially when that student is a special education student. The concern has to do with funding. In the work of assistive
technology, school administrators struggle over allocating resources between general education and special education. More often than not, it's the general education students who receive the bulk of the resources, and the special education students who "lack respect" (McDonald [Weblog]) in receiving resource assistance. The result is ethically unfair treatment of students. In this chapter, I'll argue that allocating resources for assistive technology doesn't have to result in a gap between general and special education. In fact, in my school district, we've responded
to this ethical dilemma with the philosophy that special education technology is money well spent for all education, general as well as special needs. Drawing on the case study of my school district, this chapter will discuss the ethical dilemma of funding assistive technology for general education and special education. It will explore the issues of ethnicity, social attitudes, and the socio-economic factors regarding technology and special education. It will also examine the tools of technology that provide a bridge to close the learning gap in special education. And finally, I will discuss the benefits that this bridge offers to the special education population and general education.

In past decades, the United States has created laws making it possible for people with physical disabilities to access any building and every public space, allowing them to be independent of others. Those laws are designed to protect a minority of people with extreme needs. But what about education? Educators would agree that their goal for all students is to create independent learners. Isn't special education, where a minority of students have high learning needs, similar to the situation of the minority of people with physical disabilities? Shouldn't special education students get the same level of access to educational technology that general education students get? How can this population benefit from the use of technology in the classroom? For clarification, special education refers to students who receive educational assistance because they have been medically diagnosed with a disability and qualify for that assistance. The modified education they receive, either in a resource room (a room the student goes to for extra assistance) or a special education classroom (with other special education students), involves techniques, exercises, and subject matter designed for students whose learning needs cannot be met by the standard school curriculum (American Heritage® Dictionary, n.d., online). These special education students differ from general education students because of the modification they receive; general
education students take core school curriculum classes without any assistance (Wikipedia, n.d., online). In other parts of the world, the term Special Educational Needs (SEN) reflects a similar connotation. Special education students who can work on their own will do so if given the proper tools to be independent learners. As an educator, I have dubbed this the "I do it myself" philosophy, and technology can support that approach. As a result, many immeasurable benefits can accrue from the use of technology in the special education field. For example, technology can lead students to look at education more positively, facilitate students' willingness to be more open to new educational challenges, enlarge the circle of peers who can raise their academic level, and increase their self-esteem (Johnston & Cooley, 2001, p. 88). Consequently, the ethical dilemma facing schools lies in determining how funds should be distributed between the general education population and the special education population. Perhaps there is no ethical dilemma. Maybe the solution lies in leveling out the distributions so as to benefit both groups.
THE BARRIERS TO EFFECTIVE ASSISTIVE TECHNOLOGY

Perception is a huge problem, and technology has a huge perception issue. One of the core problems is that technology is viewed as an "add-on" in various forms by school districts. In the book The Promise of Technology in Schools: The Next 20 Years, authors Charles Stallard and Julie Coker (2001) explain that when Instructional Technology (IT) is "viewed as an add-on to traditional process of running schools, it will be among the first thing to be cut when budgets become tight" (p. 49). Stallard and Coker define instructional technology as the use of computers, compact discs, interactive media, modems, satellites, and teleconferencing to support learning in the classroom (p.
48). They go on to say that the funding of public education over the past twenty years has affected how IT has been developed in public education: "State initiatives tend to be spotty. Different legislatures place different levels of importance on the presence of IT in schools, and funding has been unpredictable" (pp. 48-49). I can speak on a personal level to this issue. Over the past five years I have compiled and overseen the special education budget for technology in my school district (one with 11 schools and more than 7,000 pupils, about 18 percent in special education programs). We are the last department to receive funding, and it varies yearly. Funding for education in our district is uneven, so spending for technology is going to be even more erratic, which is a difficult hurdle in developing long-term technology plans. Indeed, what my district experiences is replicated across the United States. Between 1990 and 2002, U.S. expenditure for public elementary and secondary education increased by 43%, according to federal records (The National Public Education Financial Survey [NPEFS], Common Core of Data [CCD], and National Center for Education Statistics [NCES]), during a period in which the Internet, personal computers, and a huge wave of additional digital educational technologies became available (St. John, Hill & Johnson, 2007, p. iii). According to the National Center for Education Statistics (NCES), total revenue per pupil in fiscal year 1990 was $7,219, and it increased to $8,810 by 2002 (p. iv). That means that over the 12-year period, per-pupil revenue increased by only $1,591, or roughly $133 per pupil per year. Investment in education in the United States comes largely from three contributors: the federal, state, and local governments. Disbursements at all of these levels vary from state to state and from local government to local government, and federal spending also varies with different national initiatives. My school district
receives its funding for special education mainly at the federal and state levels. An example of this is the federal national initiative to support students with disabilities, the Individuals with Disabilities Education Act (IDEA), passed in 2004. This federal program authorizes aid at the state and local levels for special education students and related services for children with disabilities, including students with learning disabilities (National Center for Learning Disabilities, http://www.ncld.org/content/view/274/321/). In my own experience of dealing with the State of Ohio over the special education budget, the amount of money that our school district receives varies greatly from year to year. Thus, we have to negotiate an unpredictable financial barrier due to the instability of funding at either the federal or state level (Johnston & Cooley, 2001). A good year often results from local districts making generous investments in their schools' technology. For example, my school district's budget for technology increased by over 100% in 2006 from what was spent in 2004. This increase was not due to state funding: Ohio's state spending per pupil in 1990 was a very low $3,258 and increased to $4,336 by 2002 (St. John, Hill & Johnson, 2007), roughly a 2.5% increase in spending in each of the twelve years, and far below national expenditure levels. My school district, Gahanna-Jefferson Public School (GJPS), a small suburban district just outside a major metropolitan city, nevertheless experienced above-average spending levels because it is more dependent on local (district-level) funding. According to the Common Core of Data (CCD), which compiles data for public school districts, my district spent $9,384 per pupil in 2003-2004 (National Center for Education Statistics [NCES], fiscal graph). The local portion of this funding comes largely from property taxes, which collected an average of $6,665 per pupil in the district. The district also receives funding from the state and federal governments. Ohio gave an average of $2,468 per
pupil in my district (compared to average state spending of $4,336 per pupil, according to the NCES 2004 data). These statistics offer a very clear example of how funding at the federal level is imbalanced; my district received only $251.00 per pupil from the federal government in 2003-04 (NCES, fiscal graph), leaving the local level to absorb most of the cost of public education in our community. Yet another deterrent to getting technology firmly into classrooms has to do with short-term equipment and software funding policies among individual school districts. According to Johnston & Cooley (2001), the purchasing policies of school budgets are not designed with technology in mind. Computers have a short life span, while budget funding is developed from bond levies over a multi-year span. This results in outdated equipment before the property tax levy is paid off. (The property tax levy is attached to the tax an owner pays for the property they own within the community.) My school district, for example, did not pass a school levy for seven years. For that reason, budgets for each department did not increase over that seven-year period. Another problematic situation is the "one-shot-deal" (Gura & Percy, 2005, p. 11). This situation comes about because a teacher, principal, or administrator has won a grant or received funding from a private donor, creating a "let's fix it now" mentality. The thinking is that with "one fell swoop" the twenty-first century of technology will arrive (Gura & Percy, p. 11). A foreseeable problem with this "one-shot-deal" is who will fix the technology when it breaks, and who will pay to replace it when it is out of date. My school district has experienced this same problem. Using grant money, my special education director and I updated software used by special education students. Since that time, we have not received funding to replace or update that software, despite more recent versions becoming available. In summary, funding for educational technology can often be the last thing funded by the
school, the district’s available funds are at the mercy of local, state, and federal funding sources, and schools tend to plan for technology on a short term rather than a long-term basis. If technology is an add-on, then technology for special education classrooms/ individual students is often an add-on to the add-on. This is evident through decisions often made by a school district’s technology director, who is mandated to serve the students. As noted above, they are on a limited and irregular budget that often results in serving the needs of the majority, which can be in direct conflict with special education. As Johnston and Cooley (2001) have observed, the technology director will make unilateral decisions, such as which platform, which instructional applications to use, without thinking about those in special need. One possibility (unfortunately not in my school district) would be to find alternative purchasing approaches, for instance, using Open Office to replace the Microsoft Office suite; this would lead to huge financial savings, because it is free. Money that is saved could thus be redirected into special education (or SEN) soft/hardware. Regardless, the importance of funding assistive technology has never been more critical. Educators are increasingly making connections between technology use and educational success among special education students (e.g., Boutin & Chinien, 1998; Engman, 1989; Friedenberg, 1999). In the case of at-risk students--students who are not achieving curriculum standards, are behaving in ways that interfere with their learning, technology has made school more enjoyable, made students feel more successful, and provided at-risk students incentive to stay in school (Cardon, 2000). Malcom (1988) warns that schools that deny technology access to at-risk students could further expand the gap between at-risk students and the general student population: At-risk students who come to our schools in 2020 to be educated will have the differences separating them from advantaged members of society
magnified or diminished by what happens in their classrooms. And what happens in tomorrow's classrooms will depend on the actions of today's researchers, educators, and policymakers. Nowhere is that likely to be more pronounced than with the use of technology in education. (p. 217) Malcom concludes that the school setting is truly the only place that can meet the needs of disadvantaged students, because those students have limited resources elsewhere. From my experience, I would agree with Malcom that disadvantaged students often lack a strong support network, either from family or within the school system, which can result in a circle of mediocrity. A circle of mediocrity, from my experience, is a cycle in which a student lacks any special skill; this lack of skill prevents them from exploring other skill areas and results in self-perpetuating stagnation: a loop of second-rate work with no exit. Acquiring technology skills, however, can provide an exit. It can enhance student engagement, productivity, and motivation, create environments for critical thinking and experimentation, and turn students into independent learners (Malcom, pp. 224-226). "Technology can help provide authentic learning environments, better opportunities for collaboration, and interesting, innovative learning environments," write Wheeler et al. (1999) concerning at-risk students. "Technology also allows students easier access to information and more efficient ways to organize and display information." As Malcom (1988) concurs, "students seem to flourish when they have immediate feedback to sustain their motivation" (p. 219). Motivation is essential, as it allows the student to take educational risks, to challenge themselves in subjects in which they have little prior knowledge or confidence. Malcom (1988) suggests that students' success is tied to the motivation they bring to the school environment, as such motivation "allows students to find out what they want to know and gives them knowledge that
is important to have, if it is nonjudgmental, if it permits multiple modes of expression, then it is likely to make the student want to face the challenge of acquiring new knowledge" (p. 223). In addition, Malcom argues that technology might aid the at-risk student who has been discouraged by negative classroom situations, creating a benefit for both teacher and student. Johnston and Cooley (2001) also agree that computer-based teaching is very valuable to the at-risk population. Malcom contends that when technology has been developed to aid the disadvantaged student, that process should be ongoing (Malcom, p. 217); I would hypothesize that the benefit extends to all students. But who is defined as "at-risk"? Malcom (1988) and others have pointed out that teachers often make false assumptions about students and their learning abilities. Often, these assumptions are due to language, culture, or a student's family income (pp. 216-218). Students are all too often tracked as "at-risk" or special education when they actually have high learning potential. But the disparity can be more income-based than anything else; Malcom notes that "minority children in our schools are disproportionately poor as are the schools they attend" (Malcom, 1988, p. 216). Malcom reveals that the academic track (the educational road a student travels in school) for Black and American Indian students who take three or four years of mathematics in high school is unlike that of White students taking the same number of years of mathematics at a higher level. Malcom's finding that mathematics placement is disproportionate between ethnic groups is mirrored in special education, where African American students represent a disproportionate number of students requiring assistance despite representing less than 15% of the school population. Indeed, the disproportionate number of students of color being assessed as requiring special education is growing. According to Dr. Jonathan D. Becker (2004), African
American youths aged 6 through 21 account for 14.8% of the general population but 20.2% of the special education population. Beyond this, in 10 of the 13 disability categories (http://www.nichcy.org/pubs/ideapubs/lg1txt.htm), the percentage of African American students equals or exceeds the resident population percentage (Becker, 2004, para. 3). However, a more remarkable fact that Becker cites from a publication of the National Association for the Advancement of Colored People (NAACP) "is the finding that the risk for being labeled 'mentally retarded' increases for blacks attending schools in districts serving mostly middle-class or wealthy white students" (Becker, para. 5). Becker also describes a cycle of misses, in which "miscategorization leads to misplacement, and misplacement leads to misinstruction" (Becker, para. 6). Those misses result in an inability to contribute as a high-functioning member of society, and their significance is failure. When it comes to technology, girls may also be considered "at-risk." Studies have shown that girls and boys interact differently when it comes to a computer. Dr. Michelle Johnston and Dr. Nancy Cooley (2001), in their book Supporting New Models of Teaching and Learning Through Technology, suggest that gender-based differences persist and that the differences will lead to future economic imbalance. Part of the problem is perception; girls often view computers as boring, redundant, and isolating (pp. 46-47). A computer job is perceived as sitting at a computer screen all day long without social interaction. The good news is that the gap can be closed by encouraging girls to pursue higher levels of mathematics and science courses in middle and high school (Ettenheim, Furger, Siegman & McLester, 2000). One of the major responsibilities that I have as the special education technology consultant is to bridge the communication gap between the technology department and the special education department. I struggle to ensure that the special
education software we use is not in conflict with the district's operating system or other widely used software applications. In my district, if special education software is in conflict with the operating system, we simply cannot use it, regardless of any student's need. One solution (not used in my school district) to this barrier could be to run a virtual PC or a Linux operating system alongside the district's standard platform, increasing the durability of the network operating system and allowing different software to be available on each operating system. Another possible solution could be a key server: instead of buying the software for each machine, the district buys a certain number of licenses (e.g., 30), and when a user wants to access the software, the key server registers them on the network and lets them use the software if a license is available. This solution saves the district money and increases the number of applications that are available to staff and students.
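To make the key-server idea concrete, the short Python sketch below shows the basic checkout logic such a server performs. It is a minimal illustration under our own assumptions (the KeyServer class, its method names, and the "ReadingSoftware" example are hypothetical), not the protocol of any actual license-server product.

# Minimal sketch of floating-license ("key server") checkout logic.
# Hypothetical illustration only; real key-server products have their
# own protocols and administration tools.

class KeyServer:
    def __init__(self, app_name, total_licenses):
        self.app_name = app_name
        self.total_licenses = total_licenses
        self.checked_out = set()  # machine IDs currently holding a license

    def request_license(self, machine_id):
        """Grant a license if one is free (or already held by this machine)."""
        if machine_id in self.checked_out:
            return True
        if len(self.checked_out) < self.total_licenses:
            self.checked_out.add(machine_id)
            return True
        return False  # all licenses in use; the user must wait

    def release_license(self, machine_id):
        """Return a license to the pool when the user closes the software."""
        self.checked_out.discard(machine_id)


# Example: a 30-seat license shared across an entire district network.
server = KeyServer("ReadingSoftware", total_licenses=30)
if server.request_license("lab-pc-07"):
    print("License granted; launching software.")
else:
    print("All 30 licenses are in use; please try again later.")

The design point is that the district pays for the peak number of simultaneous users rather than for every machine, which is what makes the approach cheaper than per-machine purchasing.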
CASE STUDY

The importance of technology cannot be minimized when it comes to education. Digital technology offers tools that allow students to participate on a more level playing field in the classroom (Cardon, 2000). Technology for special education and at-risk students is important (Malcom, 1988). One of the biggest problems with technology in schools is funding: technology is often regarded as an add-on, and special education technology tools are viewed as an add-on to the add-on (Johnston & Cooley, 2001). There is a solution: educational technology and software that can benefit both special education and general education students. These points are grounded in the technology plan that I created for the Gahanna-Jefferson Public Schools in Ohio as the Special Education Technology Consultant. This case study links the need for technology tools that satisfy the general student population with tools that also stimulate the learning of special education students.
The Tools that Bridge Technology

Digital technology offers tools that can allow students to participate on a more level playing field in the classroom (Malcom, 1988, p. 222). In support of students learning with technology, I like to use the analogy that technology is not driving the car, learners are. Taking control of the car symbolizes the student taking control of their learning outcome. That analogy is very simple to understand. However, it gets more complicated when other decisions need to be made before the car is in the students' hands. Who gets to decide what car to use? Who gets to guide its use, and to determine its routes? And who gets to decide who can drive? For technology to succeed in the school environment, someone does have to take control of the car! Who best fits the description of the driver: the school board, the principal, or the teacher? The role that a building principal takes with regard to technology is very important. Johnston and Cooley (2001) emphasize that the principal must guide not only: ...the improvement of teaching and learning, but also the implementation of technology plans, the empowerment of teachers, and the quality of professional development necessary to support instructional decision-making. Principals must ensure that technology is used effectively, both to elevate teaching and to enhance the learning of all students. (p. 41) For the past five years I have been teaching, training and implementing assistive technology in my school district for special education. The support that I receive from my 11 building principals is crucial to the special education department's success. When I first started my job I created three-year, five-year and ten-year growth plans for the implementation of special education technology across the district. Initially, I did not have the support of building principals.
But the old advertising slogan "it only takes one" is very true, since I just needed one model. Of my eleven building principals, the first one to provide support and encouragement was a middle school principal. Of the three middle schools, his had the largest special education population. Despite this large population of special education students, this principal made an important breakthrough in the way he approached technological improvement. He saw the benefits of finding and allocating interdisciplinary software and hardware that served both his general education students and his special education students. He was convinced to lead his staff in training and implementation, and as his staff felt more comfortable with the tools, they became energized. A huge benefit for him was that, since he was willing to take the risk of implementing new technology, his building was the first to receive it. From that point forward, other principals viewed my support in implementing assistive technology for special education as a great thing. It made sense to them, because my goal was always to incorporate general technology needs with special education technology needs. It would be a great imbalance to give the building principals all the credit; the special education teachers were vital in effectively using the technology in their classrooms. Early on I made a promise to the special education teachers to make their lives easier with technology. It has been five years since I made that promise, and I would say that the vast majority of the special education staff believes we accomplished that goal. Of course, the integration of assistive technology has been one of trial and error in my school district. According to Stallard and Cocker (2001), "successes have come only after a long process of trial and error, a practice that is still in wide use" (p. 56). However, I support their recommendation of an assistive technology growth plan. Yet, I contend from my own experience that an assistive technology plan should not only include the needs of special education students, but also the needs of
at-risk students in a way that individualizes their education environment. I would even take this idea of an assistive technology plan one step further to include the needs of all general education students as well. The general education teachers also endorsed my promise, because they could see the benefits of technology rippling into their classrooms. The more they supported the idea of technology, the more seamless it became to the students (both special and general education). Technology is available that meets all of those goals, but for technology to be effectively utilized and accepted by end-users it needs to be implemented in a cohesive and planned manner. A key element of the assistive technology plan is professional development. Johnston and Cooley (2001) summarize the fundamental need to train the teaching staff. With the curriculum benchmarks in mind, technology can be woven into the classroom setting without disrupting the learning process. This has been the experience in my district. District principals acknowledge that the special education teachers are now more engaged and attuned to the needs of their students, and now offer a variety of methods to address student needs. We achieved this through three professional development methods. The first method is the traditional teacher in-service: the school district provides substitute teachers to cover the day so that the special education teachers can attend and receive training on the technology. In the second method, I go into the classroom and use the technology with the students and teacher, based on a lesson the teacher and I have prepared together, weaving the technology into the content area. The teachers love this approach because they see first-hand how easy it is to use the technology, and they are not losing time with their students by having to be out for training. I get an added benefit with this approach: when the students see me walk into their classroom they know they will be doing something with technology, and they actually cheer on my entrance. This allows me to
further understand the needs of the students within the district; I can see what they do and where they need assistance with the technology. The third method for professional development is an online Moodle course. Moodle is an open-source software package for creating online courses, similar to commercial systems such as WebCT and Blackboard (a good starting point for understanding the educational use of such systems is Pullen and Cusack's (2007) "Schooling without borders: Using ICT for flexible delivery of education," in Green & Sigafoos (Eds.), Technology and teaching: A casebook for educators). This approach to staff development aids those teachers who are self-motivated and want immediate access to classroom technology (Stallard & Cocker, 2001). School districts that offer online courses help their staff undertake professional development through a structured yet flexible style of learning, where the learner can learn anywhere and at any time. Because this form of professional development is offered online to our staff at GJPS, the teachers can refer to the courses at any time, streamlining their use of technology when preparing the next day's lesson or unit. What return on this investment in assistive technology can we expect? One result is knowledge gain, which can be demonstrated through the data collection tools that are part of the educational technology. As Johnston and Cooley (2001) clarify, data-driven learning for students is authentic learning, where they have taken the responsibility to manage their own learning. Consistent with the goal of authentic learning, data collecting provides a summary of the lesson, making formative information more instantaneous. "Formative information functions as immediate feedback so that learning materials, instructional strategies, learning goals and objectives, and support resources can be realigned with the results that have already been produced" (Stallard & Cocker, 2001, p. 89). The goal of this
approach is to identify and review areas of need daily. This quick feedback method is ideal for special education students. Benjamin Bloom identified six levels of understanding in his Taxonomy of Educational Objectives (Williams, Howell & Hricko, 2006), ranging from understanding the simplest forms of facts to analyzing, synthesizing, and evaluating complex tasks. The levels are: (1) Knowledge, (2) Comprehension, (3) Application, (4) Analysis, (5) Synthesis and (6) Evaluation (Bloom, Hastings & Madaus, 1971, pp. 271-273). Most relevant to this case study is Bloom's concept of mastery learning, which aims to allow all learners to thrive (Bloom et al., 1971). Bloom's goal was to bridge the learning gap, not widen it. Still, the return on the community's investment in education is lost if the student is not engaged in the learning process. Engaged learning helps learners to help not only themselves but also the community to which they belong (Johnston & Cooley, 2001). Given my experience in this case study, I found it easier to maintain a consistently high level of motivation when technology was woven into the daily curriculum. One of the advantages of technology is its perpetual state of improvement. A vast majority of people have benefited from the simplicity of technology, such as plug-and-play installation (Gura & Percy, 2005). Some tools that have helped to bridge that gap in my school district are Kurzweil (software), iPods (hardware), and the Classroom Performance System [CPS] (both hardware and software). My case study demonstrates that technology is a tool that can be used by both general education and special education, and these three tools can work together in seamless integration.
Kurzweil

Kurzweil (http://www.kurzweiledu.com) is comprehensive reading, writing and learning
software for any struggling reader, including individuals with learning difficulties such as dyslexia or attention deficit disorder, or those who are learning English as a second language. A student can have material such as a worksheet, a test or a textbook scanned in and then read it on a computer. This software allows the student to make changes to the material they see on the computer screen: they can modify the text using the program's tools, such as post-it notes and voice notes recorded for the teacher, as well as highlighting tools that aid in organizing. Reading the material on a computer is an ideal situation for a student with ADHD (attention deficit hyperactivity disorder); the pulsing of the computer screen stimulates the eye muscles and allows the student the opportunity to work for longer periods of time without fatigue (Hoffman, 2005). One of the biggest advantages of this software is that it allows a student to become independent. This text-to-speech software enables learners to take responsibility for their learning. With this program, special education students can work alone without having a teacher, an aide or a friend read the material to them. Students can then be tested or measured on what they comprehend, opening up a new door for understanding material and moving the student towards the goal of being an independent learner. In addition to special education students, our at-risk and general education students use this software to aid comprehension and fluency. An example of general education use of this program was in foreign language classes. All the students were on a computer, using Kurzweil at various levels. Some of the students were using the software to outline the textbook, others were reading the textbook in the foreign language, and a few more were pinpointing their learning gaps by leaving voice notes for the teacher to explain what they did not comprehend. One feature that both general and special education teachers love about this software is its ability to convert tests or worksheets into MP3 audio files to be uploaded onto an iPod digital audio player. Consequently,
students can review the subject material on their own time in their own learning environment. An indirect benefit is that learning takes place outside the school environment, reinforcing the independent learner model.
iPods

Another tool that narrows the learning gap by supporting independent learning is the iPod. The simplicity of the iPod is its brilliance as an education tool. The growth of the iPod as an educational tool seems very natural to me because it is a form of technology that students use in their personal lives and are very familiar with. Thus, asking the students to incorporate a familiar piece of equipment was very easy. In my district we have used iPods in elementary, middle and high school classes. The iPod is a popular cultural icon as well; for once, being in the special education classroom was cool. The students in my district responded quite easily to this technology. The iPod has three levels of engagement: students hold the device (tactile), hear the recording (audio), and see the information on the iPod screen (visual), all in one sitting. This technology extends to the regular education classroom as well, because our special education students use the iPods for playing back teacher notes, viewing PowerPoint slides, taking tests, and listening to podcasts. This is technology on the go, what I like to think of as learning without walls. A student does not need to be at school to be engaged in the learning process with an iPod. When I presented my pilot program of incorporating the iPods, I referred to our student body as the iPod generation. Ironically, most of the teachers who started using the iPods had to be taught how to use them. The teachers felt quite embarrassed because they lacked the digital intuitiveness their students (even third graders) already had with the iPod. Significantly, though, special education teachers and students were taking the lead in this form
of technology within my district. The special education students were trained first on how to use the iPods in the special education classroom; as they grew more comfortable with their technology skills, we used them to train the general education students. The indirect benefit from this case study was monumental! The special education students showed amazing self-confidence when instructing the general education students. These students demonstrated Bloom's highest level of attainment. The special education students were the leaders, not the laggards. Jackie Deluna, in her book Infusing Technology into the K-12 Classroom (2006), describes how beneficial project-based learning is for students because it is hands-on learning, and the use of the iPods can do the same (Deluna, 2006, p. 60). Deluna explains the ease with which technology such as iTunes is available to anyone with a computer. Students can download and listen to topics from all around the world as well as post their ideas in a podcast on a variety of topics (Deluna, 2006, p. 81).
CPS

A third tool that can have an impact on the special education student's learning is the Classroom Performance System [CPS] (http://www.einstruction.com). We have successfully used CPS (also known as eInstruction) with both general and special education students. The students use remote control "clickers" to answer the teacher's questions anonymously. Their responses are collected as a group as well as individually by the teacher to pinpoint the learning gaps. As a result, a teacher can effectively use the data to assist the individual student in closing the learning gap. This form of assessment can be used at any time in the classroom and with any type of student. In our district, the remote controls were used at the elementary level in preparation for high-stakes standardized testing. The students' enthusiasm for using this device was remarkable. The teachers were able to use the pre-test questions to determine the
individual learning gaps. From there they were able to assist the individual student in closing the learning disparity. In this situation, students again would stand up and cheer with anticipation and excitement when this equipment was brought into the classroom. They wanted to demonstrate to themselves and to their fellow classmates that they were ready to become skilled learners. The CPS tool also made the identity of students as general education or special education invisible. The key to the success of this style of learning is engagement. The teacher creates a risk-free environment, where the student is comfortable expressing ideas and asking questions in order to be involved in the learning process (Deluna, 2006). The CPS software and hardware offer the teacher data on how each student is doing individually. Going back to Bloom's (1971) taxonomy levels, teachers can ascertain the thinking process of the student and pinpoint how the student reached those goals. Using CPS aids the teacher in reaching Bloom's highest level, appraising the logic process (Williams et al., 2006, p. 279). Gura and Percy (2005) sum up the need for technology as a tool: Technology is not too costly or too hard to learn how to use, nor is it terribly difficult to understand how it can enhance education. It will not rob us of valuable time; rather, it can free time up for important things. It will not harm youngsters nor force inappropriate materials or experiences on them. We would do well to take full advantage of it in the field of education. It is not something we can avoid (p. 16).
THE BENEFITS OF THOSE TOOLS & FUTURE TRENDS

Independent learning is a skill that will aid the special education student in all aspects of life. Is teaching independent learning a moral obligation of the teacher or a mere commitment that comes with the job, and how does technology assist in
achieving that benefit? Shirley Malcom (1988) reflects on this final thought about assistive technology: "If we have a tool that can help students overcome previous disadvantage and reach their educational potential, we have a moral obligation to give priority to this purpose" (p. 229). Ethical philosophy supports the same approach. John Stuart Mill, who promoted the theory of ethical utilitarianism in England during the nineteenth century, argued for such a moral imperative: one should act in a way that will "bring about the greatest good, or happiness, for the greatest number of people" (Leslie, 2004, p. 86). As demonstrated in the case study, we can make choices with technology that bring such "good" to the greatest number of people, not only general education but also special education students. One of the advantages of assistive technology tools is a willingness by students to perform at their best. Had the technology not acted as an equalizer, those special education students would not have faced the challenge of the regular education classroom. Stratham and Torell (1996) reported three findings with regard to assistive technology use in the classroom. The first finding contends that "when used appropriately computer technology stimulates increased teacher-student interaction and encourages cooperative learning, collaboration, problem solving and student inquiry skills" (Johnston & Cooley, 2001, p. 86). The second finding is that when educational technology is properly implemented, test scores increase (Johnston & Cooley, 2001, p. 86). The third point that Stratham and Torell make is that computer-based teaching is very effective for at-risk students (Johnston & Cooley, 2001, p. 86). Their article, Computers in the Classroom: The Impact of Technology on Student Learning, describes a comprehensive study published by Kulik and Kulik that collected data from more than 240 studies ranging from elementary school to college (most, however, focused on kindergarten to twelfth grade). The Kulik study (as cited in Stratham and Torell, 1996, Conclusion 1, para. 2) contends that, "In 81% of the
studies reported, students in the computer based instruction (CBI) classes had higher exam scores than did students who were taught by conventional methods without computer technology" (Stratham & Torell, 1996, Conclusion 1, para. 2). In addition, the students in the CBI classes outperformed students in the traditional classroom by 12 percentile points: the traditional students performed at the 50th percentile, while the students in the CBI classrooms performed at the 62nd percentile (Stratham & Torell, 1996, Conclusion 1, para. 2). Computer-based instruction is different from computer-supported teaching and learning because it requires a different level of engagement by the student in the classroom; the area where the two approaches overlap, however, is student contribution. Both programs compel the teacher to interact differently with the students (Stratham & Torell, 1996, Conclusion 2, para. 1), empowering the student through the use of technology to explore problem solving, creative thinking and teacher-student collaboration (Stratham & Torell, 1996, Conclusion 2, para. 3), and aiding students at risk (Stratham & Torell, 1996, Conclusion 4, para. 2). These approaches support the idea of technology resources that benefit both general education and special education students. The compelling argument of Stratham and Torell's third point, that at-risk students benefit greatly from computer-based teaching, is another link in building the bridge between general education and special education. Stratham and Torell (1996) found that at-risk students who used assistive technologies improved on standardized tests, revealing that positive gains "can be made in both reading and math, with students in the experimental computer-rich programs raising their scores from below average to average. Some of the highest gains in achievement consistently come when at-risk students are afforded the opportunity to access computer technology" (Stratham & Torell, 1996, Conclusion 4, para. 2).
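To help interpret the percentile figures just quoted, the short Python check below converts the 50th-to-62nd percentile shift into an approximate standardized effect size under a normality assumption. This is our own back-of-the-envelope illustration, not a calculation reported by Stratham and Torell (1996).

# Back-of-the-envelope check: what standardized effect size corresponds to
# moving the average student from the 50th to the 62nd percentile?
# Assumes normally distributed test scores; illustration only.
from statistics import NormalDist

control = NormalDist()        # control-group scores, standardized (mean 0, sd 1)
cbi_percentile = 0.62         # average CBI student's standing among control students

effect_size = control.inv_cdf(cbi_percentile)
print(f"Implied effect size: roughly {effect_size:.2f} standard deviations")
# Output: Implied effect size: roughly 0.31 standard deviations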
As stated earlier in the chapter, technology is a tool that can work to benefit both general education and special education students. Having students succeed in a community and contribute back is far preferable to having them drop out. The key to the success of this advantage is engagement. Schools need to be a place where assistive technology is at the core of learning and not an add-on. The evolving nature of education should piggyback on the consumer technology market that has already tapped into this digital population. The educational movement to incorporate technology is not new, but incorporating it into special education with added benefits to general education is the lesson to be learnt from this case study. Nevertheless, there are critics who contend that money spent on technology for the classroom is wasted. Jane Healy, in "The Mad Dash to Compute," argues that students must be prepared for rapid change over the next decade, "where old answers don't always work, where employers demand communication and human relations skills as well as the ability to think incisively and imagine creative solutions to unforeseen problems. Many of today's computer applications offer poor preparation for such abilities" (Healy, 1999, p. 4). She also contends that the funding for technology in the classroom might have been better spent on key areas of the curriculum (Healy, 1999, p. 3). Healy is disheartened by the technology bandwagon, worrying that too much emphasis is being placed on children being digital at such a young age (Healy, 1999, p. 1). Mindful of such criticisms, one still cannot ignore the immeasurable benefits of assistive technology for general and special education students in the classroom. Johnston and Cooley (2001) discuss significant areas of success not measured by a test score. The first indirect result of students' use of technology is a more positive attitude with respect to themselves and learning. My experience as the special education technology consultant in my district confirms this point. Over the years I have collected several thank you
notes from students who have blossomed with the use of assistive technology in the classroom. The second indirect result is student motivation to communicate to the teacher their new level of understanding, creating an open dialog that might extend to family as well as friends (Johnston & Cooley, 2001). As students grow more confident in the subject matter, they are acquiring what Bloom defines as mastery learning. This is a life skill that makes a more productive citizen, for family and community. These indirect results can be found in both general education and special education students. The gap between the two populations is thus narrowed, because both are seeking the same goals. The final benefit of technology in schools will be more choices. Stallard and Cocker (2001) argue that children from a very early age will have been exposed to the choices that technology brings, and that schools will have to keep up to succeed. "They (the children) will be used to choice in all aspects of their lives. Even if schools try, they will not be able to make these children fit or accept a model of education that offers only mass curriculum and instruction" (p. 121). Stallard and Cocker (2001) refer to these students as "digital" children (p. 121). The intuitive sense that students now have with regard to digital technology will only grow stronger, reinforced by parents and friends. Stallard and Cocker (2001) contend that by 2015 the educational experience for high school students will involve less face-to-face instruction and more online learning. My school district already offers a few courses online. However, the high school administration is looking into offering more online courses to respond to two areas of need: 1) non-traditional students and 2) overcrowding. Some students who dropped out have returned, but need an alternative to the traditional setting in which they failed. Consequently, they are now taking the courses online. This form of instruction requires the student to be very engaged in their learning process. Additionally, our high
schools are also overcrowded. As a result, flare-ups of discord can happen between students, and between students and teachers. Schools should represent a sanctuary for students to go to for a few hours each day, providing academic relief from some of the social problems that occur within our communities (Stallard & Cocker, 2001, p. 132). The online courses create an alternative environment that can help such schools cater to both special needs students and general education students.
CONCLUSION

Emerson's (1844/2006, p. 251) counsel to respect the pupil is essential to understanding the technology bridge between general education and special education. The learning gaps in education can be bridged, and with the aid of assistive technology, the playing field can be leveled for all students. The philosophy that special education technology is money well spent for all education, general as well as special needs, continues to be at the core of my outlook. Bridging the technology gap cannot be done with just one tool; it is a combination of multiple factors. School leadership by administrators, principals, special education directors or the special education teacher needs to be a part of the school environment if assistive technology for special education is going to impact the student population. That leadership team needs to understand the value of training the special education staff to use the technology so that it can be woven into everyday instruction, and not treated as just an add-on. In addition, the technology bridge needs to respond to all the different learning styles present in the student population. The tools that we use to build a technology bridge between general and special education in my district include Kurzweil, iPods and CPS. Each of these tools in its own unique way offers instruction or assessment on various levels. Furthermore, each of these
tools offers benefits that can be measured to show an increase in students' academic achievement and motivation. In many schools, there exists an ethical dilemma over money for special education. But, as educators, we have a moral obligation to provide assistive technology if it can increase a student's academic level of performance. As this case study points out, though, providing assistive educational technology can be a good for all students, special education and general education. In other words, we may not have an ethical dilemma at all.
REFERENCES

At-risk students. (n.d.). Wikipedia. Retrieved August 13, 2007, from Answers.com, http://www.answers.com/topic/at-risk-students

Becker, J.D. (Ed.) (2004). The overrepresentation of students of color in special education as a next generation, within-school racial segregation problem. Proceedings from ERASE Racism '04, Long Island, NY. Retrieved May 26, 2007, from http://www.eraseracismny.org/downloads/brown/speaker_comments/LIBrown_becker_comments.pdf

Blaisdell, M. (2006, March). In POD we trust. The Journal, 33, 30-36.

Bloom, B., Hastings, J., & Madaus, G. (1971). Handbook of formative and summative evaluation for student learning. New York: McGraw-Hill.

Boutin, F., & Chinien, C.A. (1998). Retaining at-risk students in school using a cognitive-based instructional system: Teachers' reflection in action. Journal of Industrial Teacher Education, 36(1), 62-78.

Cardon, P.L. (2000, Winter-Spring). At-risk students and technology education: A qualitative study. Journal of Technology Studies. Available from http://scholar.lib.vt.edu/ejournals/JOTS/Winter-Spring-2000/cardon.html
Deluna, J. (2006). Infusing technology into the K-12 classroom. Youngstown, NY: Cambria Press.

Emerson, R.W. (2006). On education. In L.A. Jacobus (Ed.), A world of ideas (7th ed.). (Original work published 1844)

Engman, L.R. (1989). School effectiveness characteristics associated with black student mathematics achievement. Focus on Learning Problems in Mathematics, 11(4), 31-42.

Ettenheim, S., Furger, L., Siegman, & McLester (2000). Tips for getting girls involved. Technology & Learning, 20(8), 34-36.

Friedenberg, J.E. (1999). Predicting dropout among Hispanic youth and children. Journal of Industrial Teacher Education, 36(3).

General education requirements. (n.d.). Wikipedia. Retrieved August 13, 2007, from Answers.com, http://www.answers.com/topic/general-education-requirements

Gura, M., & Percy, B. (2005). Recapturing technology for education: Keeping tomorrow in today's classrooms. Lanham, MD: Scarecrow Education.

Hager, R.M., & Smith, D. (2003). The public school's special education system as an assistive technology funding source: The cutting edge (2nd ed., pp. 35-45). Buffalo, NY: Author.

Healy, J.M. (1999, April). The mad dash to compute. School Administrator. Retrieved May 28, 2007, from http://findarticles.com/p/articles/mi_m0JSD/is_4_56/ai_77196005

Hoffman, E. (2005, October). Brain training against stress: Theory, methods and results from an outcome study. Retrieved May 21, 2007, from http://www.mentalfitness.dk/?download=Stress%20report%204.2.pdf

Johnson, F., Ave, E., & Hill, J. (2006, August). CCD data file: National public education financial survey FY 2004 (SY 2003-04). National
Center for Education Statistics. Retrieved May 30, 2007, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2006443

Johnston, M., & Cooley, N. (2001). Supporting new models of teaching and learning through technology. Arlington, VA: Educational Research Service.

Leslie, L. (2004). Mass communication ethics: Decision making in postmodern culture (2nd ed.). Boston, MA: Houghton Mifflin.

Malcom, S. (1988). Technology in education: Looking toward 2020. In R.S. Nickerson & P.P. Zodhiates (Eds.), Educating a diverse population (pp. 213-230). Hillsdale, NJ: Lawrence Erlbaum Associates.

Marra, T. (n.d.). Authentic learning. Retrieved August 14, 2007, from http://www-personal.umich.edu/~tmarra/authenticity/authen.html

McDonald, D. (2007, February 7). What's that on your chinn, Ron?? (Disability Activists Work Group: DAWG Oregon). Retrieved August 11, 2007, from http://dawgoregon.blogspot.com/2007/02/whats-that-on-your-chinn-ron.html

National Center for Education Statistics (NCES) (n.d.). Search for public school districts. Retrieved May 21, 2007, from http://nces.ed.gov/ccd/districtsearch/district_detail.asp?start=0&ID2=3904696

Pullen, D., & Cusack, B. (2007). Schooling without borders: Using ICT for flexible delivery of education. In V. Green & J. Sigafoos (Eds.), Technology and teaching: A casebook for educators. Hauppauge, NY: Nova Science Publishers.

Questions often asked by parents about special education services. (1999). Retrieved August 13, 2007, from http://www.nichcy.org/pubs/ideapubs/lg1txt.htm
Sandholtz, J., Ringstaff, C., & Dwyer, D. (1990). Student engagement: Views from technology-rich classrooms. Retrieved May 29, 2007, from http://www.apple.com/euro/pdfs/acotlibrary/rpt21.pdf

Second-rater, mediocrity. (n.d.). WordNet 1.7.1. Retrieved August 13, 2007, from Answers.com, http://www.answers.com/topic/second-rater-mediocrity

Special education. (n.d.). The American Heritage Dictionary of the English Language (4th ed.). Retrieved August 13, 2007, from Answers.com, http://www.answers.com/topic/special-education

Stallard, C.H., & Cocker, J.S. (2001). The promise of technology in schools: The next 20 years. Lanham, MD: Scarecrow Press, Inc.

St. John, E., Hill, J., & Johnson, F. (2007, January). An historical overview of revenues and expenditures for public elementary and secondary education by state: Fiscal years 1990-2002 statistical analysis report. National Center for Education Statistics. Retrieved May 21, 2007, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2007317

Stratham, D., & Torell, C. (1996). Computers in the classroom: The impact of technology on student learning. Retrieved May 29, 2007, from http://www.temple.edu/lss/htmlpublications/spotlights/200/spot206.htm

Wheeler, J.L., Miller, T.M., Halff, H.M., Fernandez, R., Halff, L.A., Gibson, E.G., & Meyer, T.N. (1999, November 1). Web places: Project-based activities for at-risk youth. Current Issues in Education, 2(6). Available from http://cie.ed.asu.edu/volume2/number6/

Williams, D., Howell, S., & Hricko, M. (2006). Online assessment, measurement and evaluation: Emerging practices. Hershey, PA: Information Science Publishing.
KEY TERMS

Assistive Technology (AT): According to the United States Assistive Technology Act of 1998, assistive technology (also called adaptive technology) refers to any "product, device, or equipment, whether acquired commercially, modified or customized, that is used to maintain, increase, or improve the functional capabilities of individuals with disabilities." Common computer-related assistive technology products include screen magnifiers, large-key keyboards, alternative input devices such as touch screen displays, over-sized trackballs and joysticks, speech recognition programs, and text readers. (Hager & Smith, 2003)

At-Risk Student: The term refers to students who fall into any of the following categories: ethnic minorities, academically disadvantaged, low socioeconomic status, and probationary students. These students do not get modifications on assignments or tests because they lack an IEP. (At-risk students, Wikipedia)

Authentic Learning: In this type of learning, materials and activities are framed around "real life" contexts in which they would be used. The underlying assumption of this approach is that material is meaningful to students and therefore more motivating and deeply processed. (Marra)

Circle Of Mediocrity: This cycle refers to somebody who lacks any special skill or flair and lacks a connection to technology and to their surroundings; someone who settles for second-rate work. (Second-rater, mediocrity, WordNet)

Disability Categories: Certain children with disabilities are eligible for special education and related services. The IDEA provides a definition of a "child with a disability." This law lists 13 different disability categories under which a child may be found eligible for special education and related services. These categories are: Autism, Deafness, Deaf-blindness, Hearing impairment, Mental retardation, Multiple disabilities,
Orthopedic impairment, Other health impairment, Serious emotional disturbance, Specific learning disability, Speech or language impairment, Traumatic brain injury and Visual impairment, including blindness. According to the IDEA, the disability must affect the child's educational performance. The question of eligibility, then, comes down to whether the child has a disability that fits in one of IDEA's 13 categories and whether that disability affects how the child does in school. That is, the disability must cause the child to need special education and related services. (Questions, 1999)

Engaged Learner: Highly engaged learners take an active role in meaningful tasks and activities, increasing responsibility for their own learning and demonstrating their understanding. They explore a variety of resources and strive for deep understanding through experiences that directly apply to their lives, promote curiosity and inquiry, and stimulate new interests. (Johnston & Cooley, 2001, p. 13)

General Education: The student body that does not have an Individual Education Plan (IEP) or a 504 plan (a legal document recognizing that a student needs extra assistance although they do not have a medical condition as defined by the United States federal government in the IDEA). These students attend and take the core school curriculum classes without any assistance. (General education requirements, Wikipedia)

Instructional Technology (IT): The use of technology (computers, compact disc, interactive media, modem, satellite, teleconferencing, etc.) to support learning. (Stallard & Cocker, 2001)

Mastery Learning: Also known as criterion-referenced instruction, in which students are evaluated as having "mastered" or "not mastered" specific criteria or learning objectives. (Bloom et al., 1971)
Special Education: The student body that is receiving assistance in education because they have been medically diagnosed with a disability and qualify for assistance. The modified education they receive, either in a resource room or special
education classroom, involves techniques, exercises, and subject matter designed for students whose learning needs cannot be met by the standard school curriculum. (The American Heritage Dictionary of the English Language)
Chapter XXVIII
Educational Technoethics Applied to Career Guidance Pilar Alejandra Cortés Pascual University of Zaragoza, Spain
ABSTRACT

Educational guidance should be set within a specific socio-historical context, which is nowadays characterized by the information society. From this starting point, we think that an ethical analysis of technology and of the means of communication, which individuals will have to deal with in their professional development, must be considered as content linked to professional guidance. This idea becomes more definite in the concept of educational technoethics, and it is studied from two parameters: the intrinsic values that technology and the means of communication include (the aim of technoethics) and their use as mediators of ethical values (the means of technoethics). Therefore, the proposal currently being implemented in the project "Observational Laboratory on Technoethics for Adults" (OLTA), as well as its implications for professional guidance, is concisely presented from both points of view. The present text is a review and update of a previously published article (Cortés, 2006).1
To Pedro, my brother and partner in athletics and life
INTRODUCTION

The information society entails lifelong training in general professional competencies and, in certain cases, in those specific to information and
communication technologies (ICTs). Of course, due to the resources they provide, technologies are being employed to search for employment and training, especially via web pages and online and computer programs. As part of its aims and contents, careers guidance therefore includes finding out about (knowledge guidance) and knowing how to use (skills guidance) technological resources and means of communication for work-related choices and adaptation (Cogoi,
Sobrado, Hawthorn and Korte, 2005; Hartley and Almuhaidib, 2007). In particular, the IAEVG's international competencies (2003) for educational and vocational guidance practitioners include career development and placement, both categories linked to careers guidance; and although these competencies suggest the use of computer and networked resources in this field (skills), our understanding is that the attitudinal and capacity component of ICTs that we are adding here could also feature. These relationships between ICTs and careers guidance are necessary, but the inter-relationship of a third component is proposed: ethical values. In view of the socio-contextual factors framing the current educative panorama, such as post-modern thinking and the knowledge and information society, it is necessary to study the triangle formed by careers guidance, education in values and technology. In other words, if we talk about the space formed by this trio of variables it is because society itself demands that we do so, and, in educational terms, we will need to come up with a response. In this respect, in our opinion the relationship between careers guidance, education in values, and technology involves two lines of study: the first involving the reinforcement of career values demanded by the present knowledge and technology society (Cortés, 2006), and the second dealing with technoethics as a component of careers guidance (advice on attitudes and capacities). That is, careers guidance has to intervene, assess, advise, programme or provide a response to a consultation in three directions: knowing about ICTs, knowing how to use ICTs and having the right attitude to ICTs.
TECHNOETHICS VERSUS EDUCATIONAL AND CAREER GUIDANCE

In this section we will consider the last of these directions, that is to say, academic and professional
guidance on the ethical contents entailed by the use of technologies; in other words, guidance on technoethics. We shall commence with educational technoethics, a concept we developed in previous works (Cortés, 2005a; 2006) and which here we also integrate within the careers guidance field. A significant part of the research undertaken with respect to educational technology and the means of social communication focuses on the 'what' and 'how' of their existence and use, but there is a lack of works that include an axiological dimension. Nevertheless, Grill (1997) argues that the first thing a professional should do is look for the 'why' of things from attitudinal perspectives, and states that technology in itself is not a problem; the problem is rather technopolism, understood as the ethical changes that become the cause of problems such as addictive behaviour at work vis-à-vis technology, or excessive pressure from the use of technology in work environments. The need to axiologically analyse educational technologies in careers guidance is stressed in order to meet full training and educational needs in society both at present and in the future. As Cortina (2001) states, there is a need for an ethic of co-responsibility to guide the current social process and that of IT globalization, so that technical progress serves human beings without foregoing an ethics of minimum values, represented for Cortina (1998) by freedom, solidarity, equality, responsibility and honesty. And it is true that technology and the means of social communication require an ethical analysis in order that they can be employed suitably and coherently, as emphasised by others including Hawkrinde (1991), Nichols (1994), Postman (1995), Sunstein (2003), and Ortega and García (2007). This should leave its mark on careers guidance processes, both those of educational centres, from lower to higher levels, and those of the family and other exo- and macrosystemic environments (Bronfenbrenner, 1979), in accordance with constructivist principles of careers guidance (Watson and McMahon, 2006).
Our argument is linked to the Science, Technology and Society2 (STS) line of research, which arose in opposition to the unidirectional technological model (+ science = + technology = + wealth = + well-being), because the latter does not correspond to a true conception of science and technology: social factors including moral values, professional interests, political pressures and economic determinants are inherently combined in the aforementioned process, with a considerable influence on scientific-technological development (Bloor and Henry, 1996). In this sense, López (2003) suggests, on the one hand, that humanistic information be provided to natural science and science students in the form of critical sensibility and, on the other, that knowledge on science and technology be offered to humanities and social science students. In this respect, our proposal is dual and inclusive: understanding the intrinsic values that technologies entail (end) and using them as mediators to transmit values (means). From this point onwards, different studies in both senses will be presented, complemented by research experience developed at two adult education centres by means of the "Observational Laboratory on Technoethics for Adults" (OLTA) project. A number of reflections on the issue dealt with will be included in the final part of the chapter. By end we mean that the technologies and mass media include within themselves a value-oriented connotation. However, the aim is not to employ an exclusively negative discourse on said end, akin to that which has normally featured (Nichols, 1987; Ward, 2003); the fact that the person himself or herself is ethically in charge of the technological also has to be taken into consideration (Bunge, 1974; Medrano and Cortés, in press). The main issue is that schools should address these ethos questions, as Katz (1992) denotes them, by means of the educational curriculum or in accordance with guidelines "to help us
understand and diffuse the inevitable conflicts in our practice of educational technology". Likewise, Braun (1992), Pruzan and Thyssen (1994), Postman (1995) and Bilbeny (1997) reflect on actions for working in school contexts with the aim of analysing the social revolution that computers are producing in human competencies. Pruzan and Thyssen (1994) propose that a code of values be agreed at each company, and that said idea be extrapolated and adapted to other contexts, for instance teaching, and more particularly careers guidance. We are interested in their work, and believe that an educational mediator, possibly the careers officer, would be an appropriate figure to consider conflicts between the values demanded by the knowledge society and those included within the curriculum for student development. Bilbeny (1997) also proposes a "revolution of the etemas", that is, a common moral minimum for the technological impact from a cognitivist and constructivist perspective, based on three principles: thinking for oneself, or the initiation of moral autonomy (moral point of view); imagining oneself in the other's place, or the commencement of reciprocity (ideal role-taking); and thinking in a way that is consequent with oneself, or the beginning of reflexivity (moral insight). In our opinion, the values that should continue to be upheld are those identified by Cortina (2001), quoted above, but via the three procedures suggested by Bilbeny, in other words, moral autonomy, empathy, and reflection, to which we would add a further one, to a certain extent implicit: commitment, regardless of our social and professional role, and even more so for educators (teachers, parents, monitors, careers advisors…). Bilbeny (1997) also suggests re-learning sensibility, that is, recuperating the emotional: empathy, sensibility, reflection and education in ICTs. Thus, "with the growth of the digital society our understanding of the moral has to radically change" (Bilbeny, 1997: 188). Bilbeny also considers the question of the spatial. If with the communication technologies and means of
communication there is an evolution from a close, presence-based and sensitive relationship to one which is distant, virtual and subjective, the latter has different relational and moral connotations. Consequently, interaction at a distance may offer people the chance to get to know different people, but the danger indicated by Sunstein (2003) is that in the end these individuals, by means of the aforementioned virtual media, end up seeking common spaces, thereby isolating themselves from other experiences and ideas that are more real. Sunstein (2003) stresses that some internet sites end up being very selective and very exclusive, and that this process could create a barrier, that is to say a digital divide or, as Castells (1997) puts it, a digital dividing line, which also exerts an influence on professional differences. For example, online guidance may reduce these differences, but we must remain aware of the ethical limitations of failing to establish a close and direct relationship between advisor and advisee (Mallen, Vogel and Rochlen, 2005). Up to this point, the conclusion of our analysis has been that technoethics, which in itself entails an ethical purpose, must be taken into account in current society and in educational environments by means of a commitment to and proposals for educational and professional guidance. On the other hand, guidance in technoethics as a means also implies that technological and information resources may themselves be transmitters and mediators of contents and activities of an axiological nature. Said posture accords with certain of the ideas of Ryan, Bednar and Sweeder (1999) on the need to cultivate the moral via educational technology, because of its motivational capacities, with the aim of safeguarding against vanity, a typical trait of American culture according to the authors, and against a morality exclusively sustained by reasoning. The authors present the project known as Social Projector Virtual Gatherings, in which they contend that morality currently involves a marriage between feelings and moral behaviour,
and, therefore, sympathy, duty, impartiality or justice, and self-control. The project has the goal of ensuring that justice operates in the lives of the students, and also that feelings of duty and sympathy interact in the practice of personal and social equity, thereby becoming general competencies (attitudinal and capacity-related) for all professional fields. In particular, for the practice of educational guidance, Ryan et al. (1999) propose four types of strategy: virtual assemblies for working on ethical topics, in which different people participate by means of virtual contact; social action via the internet, as a medium for searching for texts linked to humanity issues, community work or actions of solidarity; creation of IT simulations to address matters in which action is required and decisions must be taken as a professional, for example in the face of environmental problems; and lastly, use of video productions featuring “real” stories with an ethical basis to be used in role-playing. In addition, the Utah State Educational Technology Department has created a technological application program (Jensen, 1993) for middle and higher level students with the aim of providing academic and professional guidance in areas such as industrial technology and agriculture; business and marketing; and economics and health occupations. It includes 18 practical sessions, with an average duration of 40 minutes, with titles such as “What am I like?”, “Personal assessment”, “Making a decision”, “Decision and emotion” and “Real occupational case histories”. A further module, more in tune with our theme in the present chapter, is “The scale of values”, in which the student is offered information on how a person’s ethical development is constructed. There is a questionnaire on which social and ethical values (pleasure, power, recognition, morality, creativity, work etc.) are rated for their relevance to professional development; students are asked to order these values in terms of personal preference; and finally there are self-assessment questions on the session. The IQ Team project (Nevgi, Virtanen
and Niemi, 2006), developed by the Finnish Virtual University, offers a further proposal for student guidance and tutorials, in which it was noted that students’ collaborative skills improved. This is because virtual community environments, if used well (Allan and Lewis, 2006), foster lifelong competencies, as promoted by the Center on Education and Training for Employment3 (Ohio State University).
MAIN FOCUS

In our opinion, the proposals for incorporating the axiological in technological career guidance programmes are really interesting, but it is just as important to ensure guidance in the training of a professional who has to understand the whys and wherefores of technological factors in relation to his or her practices and attitudes (Gill, 1997). We agree with the idea of Repetto and Malik (1998) concerning the importance of learning to use the ICTs in an effective way, that is to say, via the development of ethical and quality standards. In this respect, the Association for Educational Communications and Technology (AECT) discusses professional ethics in relation to research in collaborative electronic environments (Ravitz, 1997), considering issues related to the proliferation of antisocial information (racist groups, child pornography…), as well as ethical practices on the web in three areas: respect for copyright, level of privacy (Cottone and Tarvydas, 2003; McCrickard and Butler, 2005) and level of accessibility of information (Lin and Kolb, 2006). These issues, all highly topical, might form a theme of ethical educational analysis in technical and humanities disciplines along the lines indicated above (López, 2003). Guidance in the knowledge society must be considered in two ways: ICT literacy and “literacy” in non-discrimination and equality of access to the ICTs. In this last sense, there is increasing
advocacy (Pantoja, 2004; Touriñan, 2004; Ortega, 2004; Rumbo, 2006) for the inclusion of learning to learn and learning to live together within the contents of ICT education, so as to train citizens able to seek information in a context of plurality and the democratization of said information. Thus, different electronic addresses can be found on the internet4 which, in our opinion, link the two concepts (technoethics as end and as means), given that they provide web-based analysis of the ethical in academic and work environments (cyber-constructivist perspective) (Luppicini, 2003). It is necessary for students to progress towards technological literacy, but without neglecting other types of learning which are more human and more social (Flecha and Rotger, 2004; Ortega and Chacón, 2007). A number of writers link the latter with professional responsibility or professional codes of conduct. As Martínez (2003) states, the education of a future professional must not be focused solely on problem-solving; he or she must also acquire a moral concern, in view of the fact that many of the decisions taken at work involve a conflict of values. An example is a conflict of interest involving standards, such as that in which a worker has to decide between respecting the confidentiality of work information or disseminating and/or reporting it. For his part, Pantoja (2004) mentions the dilemma between the modernization of professional systems and non-discrimination and equality of access to training. To this can be added the isolation of individuals, as opposed to the fostering of interpersonal relationships (as occurs in teleworking or networking). In summary, the inclusion of technology as both the objective and the contents of career guidance is defended from three standpoints (see Figure 1): knowing about the technologies, knowing how to use them and knowing how to analyse them critically, the aspect on which the final part of our foregoing discourse was based.
Figure 1. Relationship between knowledge, skills, attitudes and capacities in technology and career guidance. The figure relates technology and professional guidance through three components: guidance on knowledge (knowing the technologies), guidance on skills (knowing how to use them), and guidance on attitudes and capacities (knowing how to analyse them critically in ethical terms).

OBSERVATIONAL LABORATORY ON TECHNOETHICS FOR ADULTS (OLTA): FOR FUTURE TRENDS

All of this epistemological contextualization forms part of the genesis of the “Observational Laboratory on Technoethics for Adults” (OLTA)5 project, developed at two lifelong education centres from 2003 to 2005 under the coordination of the author of the present chapter. Most of the proposals in the field of Adult Education have been focused on distance training, on technologies as an extension of the memory, or on the internet for inter-generational and generational relations. However, in educational and professional guidance programmes for adults there is a lack of the type of approach defended here, which is more reflexive about resources and means of communication with social repercussions. The OLTA project was started as a contribution to a possible solution. Its aim is to teach skills for analysing, criticizing, choosing and reflecting on the new information and communication technologies by means of an axiological interpretation. During 2003-2004, the project developed a series of modules: “Gathering of prior knowledge and project presentation” (Cortés, 2005b), “Critical and value-oriented analysis of television and radio media instruments”, “Debate on the positive and negative aspects of the technologies”, “ICT-related dilemmas”, “ICT-based dramatization”, “The influence of marketing in professional success”, and “Internet in the world of work”.
During the 2004-2005 academic year, a multimedia platform was created, including a web page (http://usuarios.lycos.es/tecnoeticazgz), a CD and printed material featuring all project information and results, and study days were organized on “media credibility”, that is to say on the ethical code of communication and information professionals. In the future, we would like to extend this to more professions. We will now outline some very general conclusions from some of these modules. We would like to indicate in advance that the OLTA proposal received an average evaluation of 4.2, on a scale of 1 (very negative) to 5 (very positive), from the 72% of the adults consulted at the end of the project. In the first module we investigated the preconceptions of 150 adults (110 women and 40 men) aged between 18 and 63 (Cortés, 2005b). The following is an example of one of the questions that they answered: Do you think that the internet has changed the world of work and society? How? In accordance with the methodological proposal of Pascual-Leone (1978), the analysis identifies three types of preconceptions in the responses to that question, approximating to a greater or lesser degree to the most widely-held definition of the internet (Castells, 1997): detailed or globalised (the most ideal); imprecise or one-off (responses describing the ideal idea not in a complete way, but rather in terms of a partial understanding); and anecdotal or occasional (those describing the concept with only one item of ideal information and/or an example).
Thus, responses came closest to the anecdotal level of definition, including the idea that the internet “unites, although it divides people because it creates addiction” (40.8%), with protocols such as: the web facilitates making friends easily, aids communication, creates addiction and divides people. In second place, 50 participants (32.9%) offered no response, and in third place, 36 adults (23.7%) answered with contents close to the imprecise level, with only two responses at the detailed level. One man of 53 commented that “the internet brings people together at a distance while separating them where there is proximity. It helps you to find out about far away things, while we look into the distance for things we are ignorant of close to hand”. From this, we might infer that axiological reflection on the part of the respondents with regard to the influence of the internet was lacking or minimal. In our opinion, this analysis or diagnosis should form part of proposals for professional guidance and intervention, given that the internet is practically an essential working medium, increasingly known and used; given this data, more critical and axiological analysis of all its labour and personal repercussions should therefore be promoted. We found another example in the third module that could lead to a debate in the classroom, as follows: “Let us imagine that we are members of an assessment board deciding whether or not to give economic support to a research project on in vitro fertilization. Would we need to consider ethical factors or would we base our decision exclusively on scientific principles? Why or why not? And what would said ethical principles be if we were to take them into account?” A total of 79 students participated in this activity, 46 women and 33 men, between the ages of 18 and 56. The majority, 85%, were in favour of considering the aforementioned ethical aspects, but there was disagreement as to whether the economic aid had to be public or not, given that if it came from public funds, a first principle would be that the results should benefit the population as a whole. Another principle on
which there was relative agreement (65%) was that no human lives should be endangered. There was general agreement (70%) that science degrees at different universities should feature the study of subjects related to ethics in science. 75% of the adults awarded this debate a 5 (very positive) on a scale of 1 to 5, making it the module that received the highest score. Finally, in the last module, three professionals from the worlds of radio, television and journalism offered their viewpoints on how the mass media exercise their credibility function. We shall mention only two conclusions: the objectivity of the media is impossible, since it depends very much on the ideology of the publishing or communication group; but there should be certain standards within the professional code of conduct, agreed by everyone, whose elaboration should feature the participation of government spokespersons, viewers, listeners and/or readers. There was also a discussion, and a consensus was reached, that the principles set out by the government and the public and private television stations in the Agreement to promote self-regulation on television contents and children (2005)6 must be included in the career guidance and training of a future professional, in this case of the media, and in particular of television.
CONCLUSION: TOWARDS AN AXIOLOGICAL ANALYSIS OF CAREER GUIDANCE

Having reached the end, and based on the foregoing, we would like to conclude with some ideas that will need to be taken account of in formal educational fields in future. E-learning and e-guidance initiatives require this perspective (attitudinal and capacity guidance in relation to ICTs) in employment training, given that the world of work is evolving in accordance with socio-economic factors and technological advances. Careers guidance must be alive to these
movements, not only to ensure technically efficient professionals, but also to guarantee responsible citizens who know how to live together. Thus, we will be able to seek a balance between the existing socio-economic order and the democratic society that can be built through lifelong education (Pantoja, 2004; Rumbo, 2006). In addition, it is our understanding that the relationship between official responsibility and professional responsibility is crucial to understanding many of the latter’s features and limitations, and that the link with educational commitment is therefore essential. We have presented a real adult education initiative (OLTA), which, as described above, obtained a very satisfactory result from its recipients. Participating teachers also evaluated the project positively. On this point there are also other proposals for lifelong techno-ethical guidance at different levels of education, that is to say primary, secondary, post-secondary etc. We agree with Olcott’s (2002) suggestion that all technological training programmes must include a section on ethics and technology. Thus, in primary and secondary education, from our perspective, we propose that these themes be included as cross-cutting themes in curriculum subjects, for example by means of activities such as “the internet: a divide and a bridge”, to analyse the digital web frontier between those who do and do not have access to the world of the web, as well as the possibility to create links and disseminate information. Other examples might be its employment repercussions in the form of inclusion and exclusion; “ethics and nanotechnology”, with the aim of analyzing the limits of technologies in the study of the life sciences; or “the computer”, in order to consider the computer’s potential and limitations (computer games, IT programs…) in social relations, academic guidance and insertion at work. We believe that a good way to approach these themes is via the use of intervention strategies in values, including ethical dilemmas, group work, techniques for clarification of values, debates and
role-playing, in addition to making use of material resources, both printed and technological or audiovisual (videos, slides, educational forums, the internet…). Each profession has a code of conduct whose standards and principles should dictate practice. Therefore, in branches or modules of professional training and university courses, the contents of said code have to be studied alongside other more technical and conceptual questions and procedures. In part, this is included within the new European Higher Education Area, which holds that students should learn not only contents, but also attitudes, via participatory and personal competencies. It would be necessary to analyse this regardless of the specific training itinerary and occupational field, but in our case we are referring more specifically to people who use technological instruments and are immersed in telecommunications, for example engineers, scientists, administrators, computer programmers or the director of a television news programme; although of course also to the training of future educators and advisors, who need to know the code in order to be able to provide guidance on it. In this respect, a university experience presented in Cortés (2004) uses an educational seminar on the subject of the new technologies to ensure that students learn about technoethics activities for their future development as teachers. Martín (2006) also presents ten simulated cases on science, technology and society within the framework of ethical values. In this direction, it is noticeable that the influence of the world of technology and communication and of the life sciences is currently leading to many debates on professional codes of conduct. Camps (2003) points out that committees of experts are becoming increasingly necessary for debating and, eventually, adopting decisions on ethical factors related to protection, especially protection of individuals and the environment, as mentioned in the EULABOR (2005) report, for example. In any case, these themes cannot
just be dealt with in initial training, since they also cut across lifelong training, as in the case of adult education, which was the focus of the project discussed in the present chapter, OLTA. In said project, the idea was for the students, through the different modules, to evolve towards reflecting in different ways on the ethical repercussions of the use of ICTs and means of communication in their daily and working lives. It should be pointed out that OLTA was favourably evaluated by the students. In this way, two of the three levels of the ethical incorporation of ICTs, individual and institutional benefit, are reached; community benefit is missing (Riley, 2004). Certain of the didactic activities proposed here could be included in a digital portfolio of professional development (Milman and Kilbane, 2005), serving the person receiving guidance as a catalyst in his or her own process of autonomous assessment and acquisition of skills in the use of the ICTs. Thus, an essential study topic at present (Hargreaves, 1999; Metros and Woolsey, 2006) is the development of personal competencies in the new technologies in a critical and reflective way. We would therefore like to propose the line of investigation discussed in the present chapter and invite research on it,7 that is to say, on professional guidance in attitudes and capacities in relation to the ICTs.
REFERENCES

Acuerdo para el fomento de la autorregulación sobre contenidos televisivos e infancia. (2005). Gobierno de España y Televisiones españolas. Retrieved April 15, 2005, from http://www.cnice.mecd.es/tv_mav/n/f6_normativa.htm
Allan, B. & Lewis, D. (2006). The impact of membership of a virtual learning community on individual learning careers and professional identity. British Journal of Educational Technology, 37(6), 841-852.
Bilbeny, N. (1997). La revolución en la ética. Hábitos y creencias en la sociedad digital. Barcelona: Anagrama.
Bloor, D. & Henry, J. (1996). Scientific knowledge: A sociological analysis. London: Athlone.
Braun, J. (1992). Caring, citizenship, and conscience. The cornerstones of a values education curriculum for elementary schools. International Journal of Social Education, 7(2), 47-56.
Bronfenbrenner, U. (1979). The ecology of human development: Experiment by nature and design. Cambridge: Harvard University Press.
Bunge, M. (1974). Por una tecnoética. En Ética, ciencia y técnica. Buenos Aires: Editorial Sudamericana.
Cabero, J. (2001). Tecnología educativa. Diseño y utilización de medios en la enseñanza. Barcelona: Paidós.
Camps, V. (2003). Ética para las ciencias y técnicas de la vida. In A. Ibarra & L. Olivé (Eds.), Cuestiones éticas en ciencia y tecnología en el siglo XXI (pp. 225-244). Madrid: Biblioteca Nueva.
Castells, M. (1997). La era de la información. Economía, sociedad y cultura (3 vols.). Madrid: Alianza.
Cogoi, C., Sobrado, L., Hawthorn, R. & Korte, A. (2005). ICT skills for guidance practitioners: Final results of the research. Work group, CD-ROM, AIOSP Conferencia Internacional, Lisboa.
Cortés, P. A. (2004). Una mirada psicoeducativa de los valores. Seminario aplicado a las nuevas tecnologías de la educación. Zaragoza: Prensas Universitarias de la Universidad de Zaragoza.
Cortés, P. A. (2005a). Educational technology: As a means to an end. Educational Technology Review, 13(1), 73-90.
Cortés, P.A. (2005b). Las preconcepciones sobre la tecnoética en los adultos. Revista Mexicana de Psicología, 22(2), 541-552.
Cortés, P.A. (2006). Valores y orientación profesional: líneas de investigación e intervención actuales. Contextos Educativos, 8-9, 223-238.
Katz, S.B. (1992). The ethic of expediency: Classical rhetoric, technology, and the Holocaust. College English, 54(3), 255-275.
Cortina, A. (1998). ¿Qué son los valores y para qué sirven? Temas para el debate, 42, 20-22.
Lin, H. & Kolb, J. (2006). Ethical issues experienced by learning technology practitioners in design and training situations. Paper presented at the Academy of Human Resource Development International Conference (AHRD), Columbus, OH, Feb 22-26 (pp. 1168-1175).
Cortina, A. (2001). Alianza y Contrato. Política, ética y religión. Madrid: Editorial Trotta.
Cottone, R. R. & Tarvydas, V. M. (2003). Ethical issues in counseling (2nd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.
EULABOR (2005). European and Latin American systems of ethics regulation of biomedical research: Comparative analysis of their pertinence and application for human subjects protection. Madrid: EPSON Entreprise.
Flecha, R. & Rotger, J.M. (2004). Innovación, democratización y mejora de la docencia universitaria en el marco de la Sociedad de la Información. Contextos Educativos, 6-7, 159-166.
Gill, D. W. (1997). Educating for meaning and morality: The contribution of technology. Bulletin of Science, Technology & Society, 17(5-6), 249-260.
Hartley, R. & Almuhaidib, S.M.Y. (2007). User oriented techniques to support interaction and decision making with large educational databases. Computers & Education, 48(2), 268-284.
Hargreaves, A. (1999). Profesorado, cultura y postmodernidad. Madrid: Ediciones Morata.
Hawkridge, D. (1991). Challenging educational technology. ETTI, 28(2), 102-110.
International Association for Educational and Vocational Guidance (IAEVG, 2003). International competencies for educational and vocational guidance practitioners. Retrieved April 14, 2004, from http://www.iaevg.org/iaevg/nav.cfm?lang=4&menu=1&submenu=5
López, J.A. (2003). Ciencia, técnica y sociedad. In A. Ibarra & L. Olivé (Eds.), Cuestiones éticas en ciencia y tecnología en el siglo XXI (pp. 113-158). Madrid: Biblioteca Nueva.
Jensen, R. (1993). TLC career guidance: Currículo. Salt Lake City, Utah.
Luppicini, R. (2003). Towards a cyber-constructivist perspective (CCP) of educational design. Canadian Journal of Learning and Technology, 29(1). Retrieved April 26, 2006, from http://www.cjlt.ca/content/vol29.1/01_luppicini.html
Mallen, M., Vogel, D. & Rochlen, A. (2005). The practical aspects of online counseling: Ethics, training, technology, & competency. Counseling Psychologist, 33(6), 776-818.
Manosevitch, E. (2006). Democratic values, empowerment and giving voice: Children’s media discourse in the aftermath of the assassination of Yitzhak Rabin. Learning, Media & Technology, 31(2), 163-179.
Martín Gordillo, M. (2006). Controversias tecnocientíficas. Barcelona: Octaedro OEI.
Martínez, S. (2003). Ética de científicos y tecnólogos. In A. Ibarra & L. Olivé (Eds.), Cuestiones éticas en ciencia y tecnología en el siglo XXI (pp. 277-300). Madrid: Biblioteca Nueva.
Martínez, F. & Área, M. (2003). El ámbito docente e investigador de la tecnología educativa en España. Algunos apuntes para el debate. Paper presented at the meeting of the
area of Didactics and Scholastic Organization, University of Valencia, Valencia, Spain.
McCrickard, M.P. & Butler, L.T. (2005). Cybercounseling: A new modality for counselor training and practice. International Journal for the Advancement of Counselling, 27(1), 101-110.
Ortega Ruíz, P. (2004). La educación moral como pedagogía de la alteridad. Revista Española de Pedagogía, 227, 5-30.
Medrano, C. & Cortés, P.A. (2007). Teaching and learning of values through television. International Review of Education, 53(1), 5-21.
Metros, S. & Woolsey, K. (2006). Visual literacy: An institutional imperative. EDUCAUSE Review, 41(3), 80-81.
Milman, N.B. & Kilbane, C.R. (2005). Digital teaching portfolios: Catalysts for fostering authentic professional development. Canadian Journal of Learning and Technology, 31(3). Retrieved April 26, 2006, from http://www.cjlt.ca/content/vol31.3/milman.html
Nevgi, A., Virtanen, P. & Niemi, H. (2006). Supporting students to develop collaborative learning skills in technology-based environments. British Journal of Educational Technology, 37(6), 937-947.
Nichols, R. G. (1987). Toward a conscience: Negative aspects of educational technology. Journal of Visual/Verbal Languaging, 7(1), 121-137.
Nichols, R. G. (1994). Searching for moral guidance about educational technology. Educational Technology, 34(2), 40-48.
Olcott, D. (2002). Ética y tecnología: desafíos y elecciones inteligentes en una sociedad tecnoética. In D.E. Hanna (Ed.), La enseñanza universitaria en la era digital (pp. 225-243). Barcelona: EUB.
Ortega Carrillo, J.A. & García Martínez, F.A. (2007). Ética en los medios de comunicación e Internet: promoviendo la cultura de paz. In J.A. Ortega Carrillo & A. Chacón Medina (Coords.), Nuevas tecnologías para la educación en la era digital (pp. 331-353). Madrid: Ediciones Pirámide.
Ortega Carrillo, J.A. & Chacón Medina, A. (Coords.) (2007). Nuevas tecnologías para la educación en la era digital. Madrid: Ediciones Pirámide.
Pascual-Leone, J. (1978). La teoría de los operadores constructivos. In J. Delval (Comp.), Lecturas de psicología del niño (pp. 25-36). Madrid: Alianza.
Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage.
Pantoja, A. (2004). La intervención psicopedagógica en la sociedad de la información. Educar y orientar con nuevas tecnologías. Madrid: Editorial EOS.
Postman, N. (1995). Making a living. Making a life: Technology reconsidered. College Board Review, 176-177, 8-13.
Pruzan, P. & Thyssen, O. (1994). The renaissance of ethics and the ethical accounting statement. Educational Technology, 34(1), 23-28.
Ravitz, J. (1997). Ethics in scholarly communications: Intellectual property and new technologies. In National Convention of the Association for Educational Communications & Technology, 19th, Albuquerque.
Repetto, E., Rus, V. & Puig, J. (1994). Orientación educativa e intervención psicopedagógica. Madrid: UNED.
Repetto, E. & Malik, B. (1998). Nuevas tecnologías aplicadas a la orientación. In R. Bisquerra, Modelos de orientación e intervención psicopedagógica. Barcelona: Praxis.
Riley, J. (2004). Ethical drivers of the implementation of instructional technology. Paper presented at the Teaching Online in Higher Education Conference, Fort Wayne.
Ryan, F., Bednar, M. & Sweeder, J. (1999). Technology, narcissism, and the moral sense: Implications for instruction. British Journal of Educational Technology, 30, 115-128.
Rumbo, B. (2006). La educación de las personas adultas: un ámbito de estudio y de investigación. Revista de Educación, 339, 625-635.
Sunstein, C.R. (2003). República.com. Internet, democracia y libertad. Barcelona: Paidós.
Touriñan, J.M. (2004). La educación electrónica: un reto de la sociedad digital en la escuela. Revista Española de Pedagogía, 227, 5-30.
Ward, L. M. (2003). Understanding the role of the entertainment media in the sexual socialization of American youth: A review of empirical research. Developmental Review, 23, 347-388.
Watson, M. & McMahon, M. (2006). My system of career influences: Responding to challenges facing career education. International Journal for Educational and Vocational Guidance, 6, 159-166.
KEY TERMS

Digital Divide: The social, economic and political differences between communities that have information and communication technologies (with technological literacy, capacity and quality) and those that do not.

Observational Laboratory on Technoethics for Adults (OLTA): A project, developed at two lifelong education centres from 2003 to 2005, to teach skills for analysing, criticizing, choosing and reflecting on the new information and communication technologies by means of an axiological interpretation.

Science, Technology and Society (STS): A line of research which arose in opposition to the unidirectional technological model (+ science = + technology = + wealth = + well-being). STS analyses science and technology in context, attending to social aspects (values, policy, economy, etc.).

Technoethics as an End: The notion that technologies and the mass media themselves carry a value-laden, ethical connotation.

Technoethics as a Means: The implication that technological and informational means can be transmitters of contents and activities of an axiological kind.

Technoethics as an End and Means: A response is required from a doubly interwoven perspective, that is to say, as reflection and performance regarding the axiological purpose of technology (its end) and as an instrument to deal with attitudinal and ethical knowledge (its medium).

Technology and Professional Guidance: A triple analysis: guidance on knowledge of the technologies, guidance on skills, and guidance on attitudes and capacities.
ENDNOTES

1. Cortés Pascual, P.A. (2006). An analysis of careers guidance from the standpoint of educational techno-ethics. Revista Española de Orientación y Psicopedagogía, 17(2), 181-193. With the express authorisation of said publication.
2. On its web page, the Organization of Ibero-American States (http://www.oei.es/cts.htm) proposes different publications and projects linked with the STS line.
3. College of Education, The Ohio State University, 1900 Kenny Road, Columbus, OH 43210-1090. www.cete.org
4. The following are examples: Center for Accounting Ethics (University of Waterloo, Ontario) (http://accounting.uwaterloo.ca/ethics/index2.html), Centre for Applied Ethics (University of British Columbia, Vancouver) (http://www.ethics.ubc.ca) and Wharton Ethics Program (Wharton School, University of Pennsylvania, Philadelphia). Consulted in February 2007.
5. A project approved by the Aragón regional government (Spain) in the calls of 2003 and 2004, directed by Carlos Sanz, director of the Concepción Arenal centre (Zaragoza), and coordinated by the present writer. Isabel Segura (Teruel) also collaborated.
6. See http://www.cnice.mecd.es/tv_mav/n/f6_normativa.htm. Undertaken in Spain by the government.
7. The author of this chapter may be contacted directly at [email protected]

Chapter XXIX
The Scholarship of Teaching Engineering: Some Fundamental Issues

A.K. Haghi, The University of Guilan, Iran
V. Mottaghitalab, The University of Guilan, Iran
M. Akbari, The University of Guilan, Iran
ABSTRACT

In this book chapter, the authors summarize their retrospections as engineering educators over more than 20 years. Consideration is given to a number of educational developments to which the authors have contributed during their careers in academia, and to the contribution made to engineering and technological education. Increasing emphasis is being placed on establishing teaching and learning centers at the institutional level with the stated objective of improving the quality of teaching and education. The results of this study provide information for the revision of engineering curricula, the pedagogical training of engineering faculty and the preparation of engineering students for the academic challenges of higher education in the field. The book chapter provides an in-depth review of a range of critical factors liable to have a significant effect and impact on the sustainability of engineering as a discipline. Issues such as learning and teaching methodologies, the effect of e-development, and the importance of communications are discussed.
INTRODUCTION

Ernest Boyer states that:

…scholarship means engaging in original research, it also means stepping back from one’s investigation, looking for connections, building bridges between theory and practice, and communicating one’s knowledge effectively to students.1

Therefore, the scholarship of teaching engineering seeks to find effective ways to communicate knowledge to students. There is a growing realization that traditional instructional methods will not be adequate to equip engineering graduates with the knowledge, skills, and attitudes they will need to meet the demands likely to be placed on them in the coming decades, while alternative methods that have been extensively tested offer good prospects of doing so.2 Engineering is the profession in which knowledge of the mathematical and natural sciences, gained by study, experience and practice, is applied with judgment to develop ways to utilize economically the materials and forces of nature for the benefit of mankind. Engineering is a unique profession since it is inherently connected to providing solutions to some expressed demand of society, with heavy emphasis on exploiting scientific knowledge. In the real world, engineers must respond to sudden changes. The engineers of today, and of the decades ahead, must also be able to function in a team environment, often international, and be able to relate their technical expertise to societal needs and impacts.3 4 Yet we are only starting to make transformative changes in our educational system. Our educational challenge is itself a design challenge: making the “right” engineers for our nation’s future. The basis for the reform of engineering education is made up of unique experiences, traditions and everlasting values of specialist training at universities. Engineering educators have to focus on market demand
and stop defending the obsolescent and obsolete programmes. In order to prepare engineers to meet these new challenges, engineering training and education must be revised and modernized. Today’s engineer cannot be merely a technician who is able to design the perfect bridge or the sleek skyscraper. Today’s engineer must not only have a breadth and depth of expertise, but must be able to communicate effectively, provide creative solutions with vision, and adapt to ever-changing demands. Today’s engineer, like any other modern professional, must be someone who can see the big picture.
INTEGRATING “WHAT” INTO “WHY” IN ENGINEERING EDUCATION

A professional needs to recognize the “why” dimension as well as the “what” in order to provide wisdom and understanding. Also, for the profession to attract students, there needs to be an enhanced community respect for engineering. This can be assisted if we integrate a person-centered and nature-respecting ethic into engineering education.5 The urgent need to change the teaching methods of the current engineering education system was the reason for which the authors launched a new plan. The new plan envisaged changes in the curriculum to meet the demands of the industry, now facing strong competition as a consequence of recent technological changes. With this aim, the authors developed the courses considering the following issues:

1. “Why” not try replacing one quarter of the lectures with an online resource? As part of online resources, lecture-based courses are taught at many institutions using videotaped lectures, live compressed video, television broadcasts or radio broadcasts.6 7 8 9
2. The student can have an easier time communicating online as opposed to in a full classroom.10 In addition, the student is at the center of his or her information resources. The content is not delivered as a lecture for the students to hear, but rather as information for the students to use. Students are free to explore and learn through their successes and sometimes their failures. Instead of the lecture, students could spend time over a month working through some online materials, complete with self-tests, interactions, a mini-project or whatever. We can use the replaced lecture time to manage the online course and deal with queries. The presentation would be varied without too much change at once. “Why” not replace half the lectures with a simple online resource, and run smaller seminar groups in the time the lectures would have used (obviously not practical if you lecture to 200 students, but if you online-lecture to 40, this might give you a chance to do something different with them face to face)? When all of a course’s lectures, readings, and assignments are placed online, anyone with a computer and Internet access can use the course’s online resources from any location, at any time of day or night. One institutional goal of this movement toward computer-aided education based on online resources is to make higher education more economical in the long run through an “economy of scale.” If all lectures, syllabi, and assignments are digitized and put online, tutors could spend less of their time teaching a larger number of students, and fewer of those students would be on campus using the university’s resources.11 According to the results of a survey conducted at the University of Wisconsin-Madison, the vast majority of students took advantage of the fact that lectures were online to view material in ways that are not possible with
live lectures. When asked if it was easier to take notes and understand the material when viewing lectures as electronic resources than it would have been attending the same lecture live, almost two thirds (64%) of students agreed, either strongly (27%) or somewhat (37%).12 Online education of engineering students requires more than the “what”, the transfer of the knowledge and skills needed by the profession, because we should always keep in mind that:

•	A classical lecture is not the best way to present materials any more.
•	We may have to demonstrate something visual, which is difficult for everyone to see in a lecture.
•	We may have to work through a simulation or case study, which would be better done at a student's own speed.
•	We have to ask regular questions during the presentation of content, and we have to monitor responses.
FUNDAMENTAL ASPECTS

Traditional Learning versus Online Learning

There are a number of technologies whose integration into modern society has made dramatic changes in social organization. These include the Internet and its attendant promises of more democratic information access and distribution, including distance learning. A totally effective education and training environment, when applied to information technology instructional strategies that are enhanced by the world-wide web, will include factors that have long been identified as contributing to an optimal and multidimensional learning context: a personalized system of instruction.15 The ingredients of such a
system have long been known to contribute to an optimal learning environment for the individual student.16 Electronic-based distance learning is a potential lever that plays an extraordinary role in the creation and distribution of organizational knowledge through the online delivery of information, communication, education, and training.15 Numerous studies have been conducted regarding the effectiveness of e-learning. To date, only a few argue that learning in the online environment is not equal to or better than traditional classroom instruction. Web-based learning also provides educational access to many non-traditional students, but if it is applied to replace existing educational spaces, the potential effects of this replacement must be evaluated and assessed.16 17 18 However, e-learning is not meant to replace the classroom setting, but to enhance it, taking advantage of new content and delivery technologies to enable learning. Key characteristics of online learning compared to traditional learning are shown in Table 1.15 Several considerations must be taken into account for e-learning to be a beneficial investment and an effective knowledge management tool. The elements of the e-learning planning process include assessing and preparing organizational readiness (factors to consider before going online), determining the appropriate content (content that ties into the goals of knowledge management), determining the appropriate presentation modes (considering factors contributing to effective e-learning), and implementing e-learning (content and technology infrastructure considerations).
Tools and Technologies for an Optimal E-Learning Environment

In academic institutions, ideally, both students and faculty should be provided with an e-learning environment that is optimal for each to be inspired to do the best job possible. For the student, the objective is to accumulate and learn as much knowledge as his or her personal make-up will allow. The faculty must be stimulated and inspired to be outstanding faculty members for the students and simultaneously to grow professionally as rapidly as their personal abilities will allow. This is a complex environment to build, as each individual involved will have different needs, and hence different emphasis needs to be placed on the various components that make up the environment. No doubt it is impossible to provide the optimal e-learning environment for all individuals among the students and the faculty. It is important, however, to give considerable attention to this problem and to build the best e-learning environment possible with the financial resources available to the academic institution. Planning and faculty discussions make it possible to build e-learning and growth environments that are far
Table 1. The comparison of characteristics of traditional and online learning

Characteristics of online learning | Characteristics of traditional learning
E-learning should be interactive | Engages learners fully
E-learning should provide the means for repetition and practice | Promotes the development of cognitive skills
E-learning should provide a selection of presentation styles | Uses learners’ previous experience and existing knowledge
E-learning content should be relevant and practical | Uses problems as the stimulus for learning
Information shared through e-learning should be accurate and appropriate | Provides learning activities that encourage cooperation among members
better than would otherwise occur. Although e-learning is a growing trend in the education community, and given all its merits, institutions considering this option should be aware of the particulars of Web-based training. The following paragraphs describe some of the best-known tools and technologies for developing a sophisticated e-learning environment, which needs to be planned, implemented and maintained to promote an interactive online learning process.
Designed E-Learning Space

The e-learning space is where you can access materials and activities for each of your subjects, such as subject outlines, readings, lecture notes, assignments, quizzes and discussions. The e-learning space can be accessed from anywhere in the world using a web browser on a computer connected to the Internet.
E-Library Resource

The library provides a wide range of online resources and services to support your learning:

•	E-books: Electronic books (e-books) are electronic editions of titles covering a range of disciplines; they provide a valuable addition to your collection of resources. The e-book collection includes recent editions of titles. E-books are accessible at any time from any location.
•	Databases and electronic journals: Databases provide access to academic journals across a wide range of disciplines. Many databases contain full-text electronic journal articles, and one can use them to search for articles on a particular topic.
Videoconference

A videoconference is a conversation between people at different locations. Videoconferences use technology to exchange pictures and sounds of the participants at each location. There are two types of videoconference: a point-to-point videoconference and a multipoint conference. A videoconference is a two-way discussion rather than a one-way presentation. Involvement in a videoconference is most successful when participants actively engage and contribute in the videoconference conversation; prepare for the videoconference so that they are able to contribute to the conversation, ask questions and make comments; and are mindful of the fact that the other participant/s may be distant and that their contexts may not necessarily be the same as yours. It is important that participants clearly explain their comments and ideas.
Designing and Delivery of Online Activities

Development of web-based resources to accomplish online activities can be considered a key learning strategy. The overall aim is to go beyond technical skill towards a full examination of the teaching/learning issues in technology-enabled instruction. It consists of two general steps: (1) creating and using various media to design activities; and (2) delivering web-based material.

•	Using the Web to design online courses: Web-based tools provide information about the selection and use of Web-based media, such as text, audio, video, still images, animated graphics, applets, and scripts, to accomplish a number of different learning strategies. The overall aim is to assist in creating and using web-based media in ways that are appropriate for students’ learning. E-learning tools were developed to enable educational innovations everywhere by connecting people and technology. Several web-based companies, such as Blackboard, The Learning Manager, TopClass, Trainersoft Manager, and WebCT, are currently active in offering diverse tools to follow this mission. Their role is to improve the educational experience with Internet-enabled technology that connects students, faculty, researchers and the community in a growing network of education environments dedicated to better communication, collaboration and content.
•	Delivering online interactive courses: Extensive information is available about realities and successful practices in online course delivery, with particular attention paid to ways of encouraging interactivity as a key influence on students’ learning styles. Prominent emphases include strategies for managing the use of technology in research assignments, small group projects, discussions, and other activities that foster interaction. The object is to promote sound instructional strategies for the online learning environment by offering a range of perspectives from experienced practitioners.
In the following sections, the different components and influences that have an impact on the magnitude of students’ and faculty’s accumulation and handling of knowledge are discussed.
TEACHING CULTURE

The development of a suitable teaching culture is a prerequisite of the academic world, once the principle of “having learned to learn” has been promoted. On the other hand, the development of the concept of sustainable employability requires that the university train people to become future professionals with basic cognitive, social and affective competences, allowing them to achieve creative and effective professional performances in a quickly changing work environment.3 Boyer states that:
...without the teaching function, the continuity of knowledge will be broken and the store of human knowledge dangerously diminished.1

Teaching is not a process of transmitting knowledge to the student, but must be recognized as a process of continuous learning for both the lecturer and the student. The old adage, if you want to know something, teach it, certainly applies, but it needs to be extended.4 The most obvious and immediate effect of developing a suitable culture is improvement of the classroom environment and of classroom effectiveness. By describing some important concepts, the following sections address an effective strategy for the mutual relationship between learner and lecturer.
Practical Applications

Completing a thriving course involves connecting the program of study to real-life problems and to current events. This is often made possible by including realistic applications in the classroom, and the instructor can accomplish this by drawing on personal experience or by using student examples as sources of practical problems.19 Students who participate in certified work programs or research activity while in college can be a valuable source of such information. Since there are many situations that bring students into contact with realistic problems, all that is necessary is to take advantage of those activities in teaching situations. The following paragraph gives one example of this kind of partnership between industry and an academic institution. The Faculty of Engineering at the University of Guilan has a compulsory cooperative education program. Students are placed with an employer in their field of study (currently the school has partnerships with over 100 companies), and they alternate semesters between their worksite and the campus. Under this program, undergraduate students have some financial support from the university,
and they get practical experience by working for industry, for governmental agencies, or for public or private non-profit organizations. The connection with industry begins in the first year, when students are teamed with alumni who assist them with their semester writing projects. Through all of these experiences, students are often able to bring practical insight to classroom activities.
Learning Styles and Class Participation

Striking a balance between lecturing and engaging in alternative teaching techniques to stimulate students with various learning styles is an extremely attractive and challenging area. Learning style is a biologically and developmentally imposed set of personal characteristics that make some teaching (and learning) methods effective for certain students but ineffective for others. Models of learning style include the Myers-Briggs Type Indicator,20 Kolb’s Learning Style Model,22 23 the Felder-Silverman Learning Style Model,24 and the Dunn Learning Style Model.25 The following statements have been offered, based on current research on learning styles, to assure that every person has the opportunity to learn:26

a.	Each person is unique, can learn, and has an individual learning style.
b.	Individual learning styles should be acknowledged and respected.
c.	Learning style is a function of heredity and experience, including strengths and limitations, and it develops individually over the life span.
d.	Learning style is a combination of affective, cognitive, environmental, and physiological responses that characterize how a person learns.
e.	Individual information processing is fundamental to a learning style and can be strengthened over time with intervention.
f.	Learners are empowered by a knowledge of their own and others’ learning styles.
g.	Effective curriculum and instruction are learning-style based and personalized to address and honor diversity.
h.	Effective teachers continually monitor activities to ensure compatibility of instruction and evaluation with each individual’s learning style strengths.
i.	Teaching individuals through their learning style strengths improves their achievement, self-esteem, and attitude toward learning.
j.	Every individual is entitled to counseling and instruction that responds to his/her style of learning.
k.	A viable learning style model must be grounded in theoretical and applied research, periodically evaluated, and adapted to reflect the developing knowledge base.
l.	Implementation of learning style practices must adhere to accepted standards of ethics.
An instructor who strives to understand his/her own learning style may also gain skill in the classroom. Consider the question, “how does the way you learn influence the way that you teach?”

a.	Most instructors tend to think that others see the world the way they do, but viewing things from a different learning perspective can be useful. It is good practice to specifically consider approaches that accommodate different learners, and this is often easiest after an instructor learns about his/her own learning style.
b.	An instructor with some understanding of differences in student learning styles has taken steps toward making teaching more productive.19 It is also important for the instructor to encourage class participation, but one must keep in mind the differences in student learning style when doing so. Research has shown that there are dominant learning characteristics involved in the perception of information through concrete versus abstract experience.27 28 Some learners need to express their feelings, seek personal meaning as they learn, and desire personal interaction with the instructors as well as with other students. A characteristic question of this learning type is “why?” This student desires and requests active verbal participation in the classroom.
c.	Other individuals, though, best obtain information through abstract conceptualization and active experimentation. This learner tends to respond well both to active learning opportunities with well-defined tasks and to trial-and-error learning in an environment that allows them to fail safely. These individuals like to test information, try things, take things apart, see how things work, and learn by doing. A characteristic question of this learning type is “how?” Thus, this student also desires active participation; however, hands-on activity is preferred over verbal interaction.
d.	The instructor must have a sincere interest in the students. However, there is no single best way to encourage participation. Individual student differences in willingness to participate by asking questions often surface.29 30 Still, although the number of times an individual speaks up depends strongly on student personality qualities, a class where all are encouraged to enter into dialog is preferable, and opening the lecture to questions benefits all students.
ACTIVE LEARNING

An ancient proverb states: Tell me, and I forget; Show me, and I remember; Involve me, and I understand.
This is the basis for active learning in the classroom,31 and extensive research indicates that what people tend to remember is highly correlated with their level of involvement. It has been shown that students tend to remember only 20% of what they hear and 30% of what they see; by participating in a discussion or other active experience, however, retention may increase to as much as 90%.32
TEACHING METHODOLOGY

Cooperative learning is a formalized active learning structure that involves students working together in small groups to accomplish shared learning goals and to maximize their own and each other's learning. The work of Johnson, Johnson and Smith indicates that students exhibit a higher level of individual achievement, develop more positive interpersonal relationships, and achieve greater levels of academic self-esteem when participating in a successful cooperative learning environment.33 34 However, cooperation is more than being physically near other students, discussing material with other students, helping others, or sharing materials amongst a group, and instructors must be careful when implementing cooperative learning in the classroom. For a cooperative learning experience to be successful, it is essential that the following five elements be integrated into the activity:33 34

1. Positive interdependence: Students perceive that they need each other in order to complete the group task.
2. Face-to-face interaction: Students promote each other's learning by helping, sharing, and encouraging efforts to learn. Students explain, discuss, and teach what they know to classmates. Groups are physically structured (i.e., around a small table) so that students sit and talk through each aspect of the assignment.
3. Individual accountability: Each student's performance is frequently assessed, and the results are given to the group and the individual. Giving an individual test to each student, or randomly selecting one group member to give the answer, accomplishes individual accountability (a brief sketch of this random-selection device appears at the end of this subsection).
4. Interpersonal and small group skills: Groups cannot function effectively if students do not have and use the required social skills. Collaborative skills include leadership, decision making, communication, trust building, and conflict management.
5. Group processing: Groups need specific time to discuss how well they are achieving their goals and maintaining effective working relationships among members. Group processing can be accomplished by asking students to complete such tasks as: (a) list at least three member actions that helped the group be successful, or (b) list one action that could be added to make the group even more successful tomorrow. Instructors also monitor the groups and give feedback on how well the groups are working together, both to the groups and to the class as a whole.

When including cooperative learning in the classroom, the instructor should do so after careful planning. Also, the students may be more receptive to the experience if the instructor shares some thoughts about cooperative learning and the benefits to be gained by the activity.
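The random-selection device mentioned under individual accountability is easy to operationalize. The following short Python sketch is only an illustration added here; the group roster, the function name, and the use of a seed are invented for the example, and the chapter prescribes no particular tooling:

import random

# Hypothetical roster; any grouping of students would do.
groups = {
    "Group A": ["Ana", "Ben", "Chad"],
    "Group B": ["Dina", "Eli", "Fay"],
}

def select_spokespersons(groups, seed=None):
    """Randomly pick one member per group to give the group's answer."""
    rng = random.Random(seed)  # seeding makes the draw reproducible
    return {name: rng.choice(members) for name, members in groups.items()}

print(select_spokespersons(groups, seed=1))

Because any member may be called on, every student has an incentive to master the group's material, which is precisely the point of the individual accountability element.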
As a case study, in our classes the students are expected to do group work; in fact, much of the assessment takes the form of performance presentations in which two students from each group act as directors in a given week. The remaining group members play supporting roles, so it is important that they have reliable access to the directions in advance of each week's performance. Through on-line collaboration, the students can work on their joint presentations at a distance from each other. Tutors have to oversee the group interactions, both to check that groups are not copying from each other and to ensure that directions are posted on time, so tutors should help students to set the cooperative strategy up. It should be noted that there is a big difference from one module to another. In modules where the tutor has told the students that they will log on to the on-line discussion at a given time, and the tutor then replies and responds to the students' exchanges, the majority of students participate. There is an interesting balance: the students want to know you are there even if you are not directly participating in the discussion. We think the students want validation that the opinions they are expressing are appropriate, so if they feel that the tutor is in the background and will correct any major mistakes, they seem to be more open to learning from each other. As an important part of the cooperative learning process, our graduates should also be provided with an international platform as a foundation for their careers. With this in mind, language skills are essential for engineers operating in an increasingly global environment; it is therefore necessary to speak one or two foreign languages in order to be able to compete internationally. It is also necessary to include international subjects in an institution's programmes of study, to make programmes attractive to international students. Nevertheless, engineering education institutions still grapple with the fundamental concepts and ideas related to the internationalization of their activities and courses. Comprehensive studies concerning curriculum development and its methodology are essential in order to ensure that the mainstream of academic activities is not completely lost in the process of globalization. Research should be undertaken on a global engineering education curriculum, in order to identify fundamental issues and concerns and to devise and develop a proper methodology for curriculum development in an era of globalization.
TEACHING SPACE AND INSTRUMENTS

One key component of a productive learning environment is the space itself and the audiovisual resources within it. Audiovisual instruments without a well-designed space are inadequate. A highly efficient teaching and learning space should be designed around the requirement for sustained concentration by tutor and learner alike, and this level of effectiveness is achieved when good hearing and good viewing of the lecture material are considered in the classroom design. For good hearing, two basic criteria must be satisfied (a simple numerical check of both is sketched after Table 2):

1. A quiet background (i.e., control of noise intruding from traffic, adjacent classes, ventilation systems, etc.): Speech in the classroom must be heard over the prevailing background noise level. A convenient and easily measured descriptor is the speech-to-noise ratio (S/N). There is general agreement that the desired S/N ratio for speech recognition is at least 6 decibels for adults.
2. Control of reverberation and self-noise: Reverberation (commonly known as an echo) is defined as the persistence of sound in a room after the source has stopped. In a reverberant space, successive syllables blend into a continuous sound, through which it is necessary to distinguish the orderly progression of speech. The level at which this sound persists is determined by the size of the space, the speech level, and the interior finish materials. Reverberation time (the time it takes for a sound to die off) is measured in seconds, with a low value (around 0.5 seconds or less) being optimum for a classroom seating about 30 students. Reverberation can be controlled by the use of readily available sound-absorbing wall and ceiling materials that comply with building code requirements.

Student performance also improves in a good visual environment. In order to arrive at a good lighting concept, knowledge of the different tasks performed in classrooms is important: each task needs its own light conditions, and during the day there are a number of different visual tasks in a classroom, so the requirements for light quality are high. Students and teachers benefit from lighting that supports them optimally in their activities. The European norm EN 12464-1 gives requirements for illuminance for different class activities, as shown in Table 2.
Table 2. Overview of tasks in a classroom together with the requirements for illuminance

Task  Tutor                                     Student                          Illuminance
1     Writing on board                          Reading on board                 500 lux
2     Talking to students                       Paying attention to teacher      300 lux
3     Showing a presentation (slides,           Looking at the screen            10-300 lux
      PowerPoint, TV program)
4     Paying attention to working students      Writing, reading, drawing        300 lux
5     Coaching computer activities              Looking at the computer screen   50 lux
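The two acoustic criteria above lend themselves to a simple numerical check. The Python sketch below is an illustration added here, not part of the original text: the 6 decibel speech-to-noise target and the 0.5 second reverberation target come from the discussion above, while the Sabine approximation (RT60 = 0.161 V/A, with V the room volume in cubic metres and A the total absorption in metric sabins) and the example room figures are standard acoustics assumptions supplied for the example.

def reverberation_time(volume_m3, absorption_sabins):
    """Sabine's approximation of the RT60 reverberation time, in seconds."""
    return 0.161 * volume_m3 / absorption_sabins

def meets_acoustic_criteria(speech_db, noise_db, volume_m3, absorption_sabins):
    """True if S/N >= 6 dB and RT60 <= 0.5 s, the targets quoted above."""
    signal_to_noise = speech_db - noise_db  # level difference in decibels
    rt60 = reverberation_time(volume_m3, absorption_sabins)
    return signal_to_noise >= 6.0 and rt60 <= 0.5

# Example: a 180 m^3 classroom with 60 sabins of absorption,
# 60 dB speech over a 50 dB ventilation-noise floor.
print(round(reverberation_time(180, 60), 2))     # 0.48
print(meets_acoustic_criteria(60, 50, 180, 60))  # True

Adding sound-absorbing wall or ceiling materials increases the absorption term and therefore shortens the reverberation time, which is exactly the control strategy recommended under the second criterion.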
Considering these audio and visual factors is a preliminary step toward a highly valuable teaching environment. Traditional methods, such as writing the teaching material on whiteboards or blackboards, take a great deal of time and reduce tutor-student contact to its lowest level. Nowadays, fortunately, the majority of universities are equipped with audio-video (AV) devices such as opaque projectors, overhead projectors, document cameras, video projectors, DVD or video players, and computers to improve communication between tutor and students. These instruments produce many benefits, based on the following facts:

1. The employment of AV equipment minimizes the time spent writing up material in class and lets students focus completely on the subject matter without wasting time. All material should be given to students as hard copy in advance, so that the class can make full use of an interactive environment and students are not forced to transcribe teaching material in class.
2. Making eye contact while speaking to a group may be one of the most difficult aspects of giving a presentation, and using AV instruments gives the tutor the chance to maintain more eye contact with learners. Lack of eye contact creates a barrier between you and the audience; it makes you look untrustworthy, shifty, or unsure of yourself. Keeping eye contact helps hold students' attention in class, so that students follow along with the tutor and the main part of the understanding process is completed in class.
3. The existence of AV instruments creates a promising potential to employ multimedia for educational purposes. In general, films and videos possess a number of specific features which clearly distinguish them from other media like texts or pictures. Firstly, they comprise a mixture of different symbol systems, namely pictures, spoken words, and sounds, which are presented simultaneously. Secondly, with regard to pictorial quality, they capture not only the shape, colour, and layout of objects and scenes, but also their dynamic changes over time. Therefore, in comparison to other media, films and videos are characterized by a high degree of realism. In addition, a film director can record a given event simultaneously from multiple viewpoints, and can subsequently choose the best, "canonical" view for each part of the event. In contrast, everyday observers are typically restricted to their particular standpoint.
4. Educational databases containing many professional films, animations, and educational software packages have already been produced and can be downloaded from the World Wide Web for student use wherever AV instruments are available. Learning platforms also allow video-based live lectures to be attended via the internet in an interactive fashion.
INNOVATIVE EDUCATION FOR ECONOMIC PROGRESS

The economic future within Europe and worldwide increasingly depends on our ability to continue to provide improved living standards, which in turn depends on our ability to continuously add value to products and services. To achieve this, employers increasingly need to look to engineers throughout their organizations to come forward with innovative and creative ideas for improving the way business performs, and equally to take greater responsibility within their work areas through personal development on a life-long basis. To compete in rapidly changing global markets, it has become essential for organizations to recognize that they can no longer depend on an educated elite but, increasingly, must maximize the potential of the workforce within the organization by harnessing all available brain power to assure economic survival and success. Life-long learning will provide for the development of a more cohesive society, one much more able to operate within cross-cultural diversity. We expect steady growth in global engineering e-learning programs. The main drivers will be the strong industry interest in recruiting students who
have some understanding and experience of global industry, government funding agencies that are slowly developing support for global engineering education, rising faculty awareness, and high levels of student interest. It is not, however, easy to do, and resource issues will slow the growth rate. The following are the reasons why we think global engineering e-learning programs are good:

1. Preparing students for the global economy. This is necessary, and it will happen.
2. Everyone learns from the comparative method: global collaborations in education, research, and service make everyone smarter.
3. There are good research prospects through the education activities, such as optimizing virtual global teams, and through research collaborations that are a byproduct of the educational collaborations.
4. Building international and cross-cultural tolerance and understanding.
CONCLUSION

One key finding was that the students wanted a greater sense of community; they wanted more interaction with lecturers and the university. Many professors post a course syllabus, homework assignments, and study guides on the Web, and some ambitious faculty with large classes may even give exams online. But educators are not ready to plunge completely into the electronic learning environment and fully adopt simulated lab experiments or self-guided online instruction. One reason is the time it would take for faculty to learn how to use new computer programs and to develop online materials to replace their current versions. Another reason is that laboratory science requires intuitive observations and a set of skills learned by hands-on experience, none of which can be fully imitated in the digital realm. Simulations are a bridge from the abstract to the real, combining old technology with new technology to connect theory in the classroom with real-world experience in the lab. The purpose is to provide a creative environment to reinforce or enhance traditional learning, not to replace it. As students progress through the theory section, they can quiz themselves on what they have learned. For example, in a virtual lab section, students learn how to construct a data table by using chemical shifts and coupling patterns from spectra of common compounds. Errors entered by the students are automatically corrected and highlighted in red. Once the table is complete, the student is directed to select the molecular fragment associated with an NMR signal, and then to assemble the fragments to form the molecule. E-learning has great potential for engineering education. Broader use of e-learning will be driven by the next generation of students, who will have had exposure to e-learning programs in high school and will start to ask for similar systems at the undergraduate level. E-learning is also likely to be adopted more quickly for distance-learning courses, or for courses where lab costs or lack of lab facilities may be a factor, such as those for non-science majors or those offered by community colleges or high schools. It should be noted that the e-learning model adopted in one university cannot be the best model to follow in another college. The providers of distance learning may have to accept that there are limitations in all models of distance education. The best opportunity lies in identifying and offering the mode that suits the most students in a particular cultural and regional context at a particular point in time. There is still work to be done on distance learning, but the initial signs are positive. For all modes of delivery, learning is the active descriptor; distance is secondary. Nevertheless, the Website appears to be an effective supplementary tool for students with all learning style modalities. The correlations between course grade and Website usage were weakest for Active and Sensing students. This may be an issue that needs to be addressed through instructional design, to make the materials more
engaging for these particular modalities. Also, the relatively small sample of styles may have affected the results.
KEY TERMS

Audio-Visual: This refers to audio-visual media.

Cooperative Learning: This refers to a formalized active learning structure involving multiple learners.

E-Book: This refers to electronic books published online.

E-Journal: This refers to electronic journals published online.

E-Learning: This refers to online education and training.

Reverberation: This refers to the persistence of sound.

Traditional Learning: This refers to lecture-based learning where teachers provide instruction to students.

ENDNOTES

1. Boyer, E.L. (1990). Scholarship reconsidered: Priorities of the professoriate, special report. Stanford: Carnegie Foundation for the Advancement of Teaching.
2. Rugarcia, A., Felder, R.M., Woods, D.R., & Stice, J.E. (2000). The future of engineering education I: A vision for a new century. Chemical Engng. Educ., 34(1), 16-25.
3. Oprean, C. (2006). The Romanian contribution to the development of engineering education. G. J. of Engng. Educ., 10(1), 45-50.
4. Al-Jumaily, A., & Stonyer, H. (2000). Beyond teaching and research: Changing engineering academic work. G. J. of Engng. Educ., 4(1), 89-97.
5. Hinchcliff, J. (2000). Overcoming the anachronistic divide: Integrating the why into the what in engineering education. G. J. of Engng. Educ., 4(1), 13-18.
6. California State University Chico. Accessed 2002 October 2. http://rce.csuchico.edu/online/site.asp
7. Open University. Accessed 2002 October 2. http://www.open.ac.uk.ezproxy.uow.edu.au:2004
8. University of Idaho. Accessed 2002 October 2. http://www.uidaho.edu/evo
9. University of North Dakota. Accessed 2002 October 2. http://gocubs.conted.und.nodak.edu/cedp/
10. Purcell-Roberston, R.M., & Purcell, D.F. (2000). Interactive distance learning. In Distance learning technologies: Issues, trends and opportunities. Hershey, PA: IDEA Publishing Group.
11. Frances, C., Pumerantz, R., & Caplan, J. (1999, July/August). Planning for instructional technology: What you thought you knew could lead you astray. pp. 25-33.
12. Foertsch, J., Moses, G., Strikwerda, J., & Litzkow, M. (2002, July). Reversing the lecture/homework paradigm using e-TEACH Web-based streaming video software. Journal of Engineering Education, 91(3), 267-275.
13. Keller, F.S. (1968). Goodbye teacher... Journal of Applied Behavior Analysis, 1, 79-89.
14. Ferster, C.B., & Perrott, M.C. (1968). Behavior principles. New York: Appleton-Century-Crofts.
15. Wild, R.H., Griggs, K.A., & Downing, T. (2002). A framework for e-learning as a tool for knowledge management. Industrial Management + Data Systems, 102(7), 371-381.
16. e3L project website. (2005). http://e3learning.edc.polyu.edu.hk/main.htm
17. Coleman, D.J. (1998). Applied and academic geomatics into the 21st century. Proc. FIG Commission 2, XXI Inter. FIG Congress, 39-62. Brighton, UK.
18. Kolmos, A. (1996). Reflections on project work and problem-based learning. European J. of Engng. Educ., 21(2), 141-148.
19. Finelli, C.J., Klinger, A., & Budny, D.D. (2001, October). Strategies for improving the classroom environment. Journal of Engineering Education, 90(4), 491-501.
20. Myers, L.B., & McCaulley, M.H. (1985). Manual: A guide to the development and use of the Myers-Briggs Type Indicator. Palo Alto, CA: Consulting Psychologists Press, Inc.
21. Kolb, D.A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.
22. Kolb, D.A. (1981). Learning styles and disciplinary differences. In A. Chickering et al. (Eds.), The modern American college. San Francisco, CA: Jossey-Bass Publishers.
23. Felder, R.M. (1993). Reaching the second tier: Learning and teaching styles in college science education. Journal of College Science Teaching, 23(5), 286-290.
24. Dunn, R. (1990). Understanding the Dunn and Dunn learning styles model and the need for individual diagnosis and prescription. Reading, Writing and Learning Disabilities, 6, 223-247.
25. Dunn, R., Beaudry, J.S., & Klavas, A. (1989). Survey of research on learning styles. Educational Leadership, 46(6), 50-58.
26. Dunn, R. (1992). Learning styles network mission and belief statements adopted. Learning Styles Network Newsletter, 13(2), 1.
27. Kolb, D.A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.
28. Kolb, D.A. (1981). Learning styles and disciplinary differences. In A. Chickering et al. (Eds.), The modern American college. San Francisco: Jossey-Bass Publishers.
29. Baron, R. (1998). What type am I? Discover who you really are. New York: Penguin Putnam Inc.
30. Jung, C.G. (n.d.). Psychological types. In R.F.C. Hull (Ed.), The collected works of C.G. Jung, vol. 6. Princeton.
31. Mamchur, C. (n.d.). Cognitive type theory and learning style. Association for Supervision and Curriculum Development.
32. Dale, E. (1969). Audiovisual methods in teaching (3rd ed.). New York: Dryden Press.
33. Johnson, D.W., Johnson, R.T., & Smith, K.A. (1991). Active learning: Cooperation in the college classroom. Edina, MN: Interaction Book Company.
34. Johnson, D.W., Johnson, R.T., & Smith, K.A. (n.d.). Cooperative learning: Increasing college faculty instructional productivity. Washington, DC: The George Washington University, School of Education and Human Development.
Section IV
Emerging Trends and Issues in Technoethics
Chapter XXX
Which Rights for Which Subjects? Genetic Confidentiality and Privacy in the Post-Genomic Era

Antoinette Rouvroy, European University Institute, Italy
ABSTRACT

The aim of the present chapter is to elucidate the paradoxical position of the individual legal subject in the context of human genetics. It first discusses the assumed individual "right to know" and "right not to know" about genetic susceptibilities, predispositions and risks when genetic tests exist, and assesses the usual assumption according to which more information necessarily increases liberty and enhances autonomy. A second section is dedicated to the issues of confidentiality, intra-familial disclosure and familial management of genetic information. The idea is suggested that those issues challenge the fundamental liberal unit of the individual, traditionally understood as a stable, unitary, embodied entity.
INTRODUCTION

Notwithstanding the fears and expectations unleashed by the hype surrounding the "genetic revolution" initiated in the early nineties with the Human Genome Project, the so-called "new human genetics" has not transformed nor provided definitive elucidation of what it is to be human, but has undoubtedly shifted the locus of inquiry for characterising commonalities and variations among the human species. Focusing on "genes", the scrutiny has shifted from 'visible' superficial physiognomy and anatomy, from the layer of
physical appearance and expressed behaviours, and from 'incalculable' social, economic and environmental contexts, to the 'invisible' but locatable and 'calculable' internal, molecular milieu. What may the rights and duties of the individual subject be with regard to "his" newly accessible genetic information? Does the individual have a "right to know", a "right not to know", a "duty to know" or "liberty to know" about medically or otherwise meaningful features of his own genome? Given the shared nature of genetic information, how are those rights or liberties of the
subject to be weighed against competing claims by blood relatives interested in the same genetic information? Genetic information is the "locus" of intersection of a network of concurring and conflicting interests, and obfuscates the lawyers' predisposition to think in terms of dual relations of individual rights and correlative individual or collective duties. A second section will be dedicated to the intra-familial conflicts of interests in genetic information, and to the ensuing challenges this imposes on medico-legal norms such as the health provider's duty of confidentiality. What are the possibilities and implications of acknowledging the existence of a collective 'genetic subject' transcending individual embodiment? The subject of genetic information and of genetic privacy (the patient entitled to care and confidentiality in the patient-doctor relationship) is not even easily identifiable in the genetic context. Enabling the prediction of disease or the assessment of disease-risk with varying degrees of certainty, genetic information is of course important to the tested person,1 but may also be crucial to persons who share the same genetic inheritance and are virtually exposed to the same genetic risks. Those persons (blood relatives) may sometimes be recognized as having a legitimate and legally protected interest, one not usually raised, however, to the status of a right to force intra-familial disclosure, but requiring some procedural measures enhancing the patient's aptitude to reflect upon the interests of those third parties and to act "morally" towards them. The moral or legal character of the obligations owed by the individual directly concerned regarding disclosure of genetic information to family members is a controversial issue. Indeed, isn't the subject of genetic information the whole 'genetic group' or genetically-related family? The dual doctor-patient relationship seems prone to explode into a complexified network of relationships extending to the whole "genetic family". The duties owed by one person vis-à-vis his relatives when aware of the presence of specific familial genetic ailments
(Rhodes, 1998), or when asked to cooperate in a familial inquiry in order to establish the results of a genetic test required by one of the members of his family, are to be assessed, as well as the consequences of this potential collectivization of genetic rights for our representation of the liberal individual. Indeed, the extension of the medical doctor's duties towards members of the genetic group, and the related issue of intra-familial disclosure of genetic information, further challenge the exclusive control traditionally granted to the liberal individual over "his" personal information and biological material, and contradict current discourses about individual self-ownership and empowerment.
THE "RIGHT TO KNOW" AND THE "RIGHT NOT TO KNOW"

A usual argument favouring the "duty to know" over the "right not to know" is that genetic risk information positively reinforces the 'genetically informed' and 'genetically empowered' individual's autonomy. The argument appears particularly compelling as a major ethical and legal imperative of neoliberal societies is the respect and, where necessary, enhancement of individual autonomy. Being aware of one's genetic risks, it is assumed, allows individuals to better adapt their lifestyle and diet, adopting a preventative attitude in order to keep healthy.2 Yet the relationship between genetic information and individual autonomy is much more complex than usually assumed. What predictive genetic testing allows is the designation of patients in an anticipatory sense. Although in classical medical practice the quasi-contractual patient-doctor relationship arose because of observable symptoms, a genetic test may be offered to currently asymptomatic, healthy individuals. In the legal sphere, that shift is also felt as a disruption: what rights and obligations should the 'asymptomatic ill' be allocated by virtue of their status as 'genetically at risk'?
Genetic testing is closer to the notion of prognosis than to the notion of diagnosis. Most of the time genetic testing does not reveal a currently existing health problem in a symptomatic individual but rather reveals, for asymptomatic, healthy persons, a mere probability or a particular susceptibility to develop some illnesses for which preventive or curative strategies are most often not, or not yet, available. Notwithstanding the uncertainty characterising genetic predictions,3 what is new is the presentation of a clear genetic causal line (even if genetic make-up is only exceptionally the exclusive and sufficient cause) going from an identified locus in the genome to the phenotypic manifestation of the disease. 'The chanciness and luck that accompany present-day risk assessment will be replaced by the clear mark of genetic susceptibility in one's very identity', Jonsen noted (Jonsen, 1996: 10). A new dimension that genetic information introduces in the individual takes the form of his inescapably anticipated 'future self', which the genetically informed individual can no longer ignore. The relationship existing between 'genetic self-knowledge' and autonomy or liberty deserves new assessment in light of that new 'genetic condition' of self-experiencing. A seemingly universal intuition is that: the better informed the individual is, the more capable he is of making decisions in line with his own basic wishes, since he is more likely to succeed in realizing his wishes if the beliefs he acts from are well-founded. This makes autonomy a matter of degree: generally, the more information relevant to a decision one has when making it, the more autonomous it is. From this point of view it seems difficult to defend a general right to ignorance. (Radetzki, 2003: 110) Information about risks to oneself is usually considered to enhance individual liberty by allowing for better informed, and thus more rational, actions and choices. The perceived liberating virtues of information emancipating individuals
from the constraints that uncertainty imposes on their freedom reinforce the impression that uncertainty adversely affects the autonomous character of acts and choices. Given the potentially devastating psychological, familial, social and economic impacts of adverse genetic test results, though, respecting individual autonomy has been considered, both in Europe and in the United States, to imply the individual's right to know or not to know. By autonomy, I mean the rights and liberties necessary to individuals in order for them to live a life characterized as (in part at least) self-determined, self-authored or self-created, following plans and ideals - a conception of the good - that they have chosen for themselves.4 In the context of human genetics, respect for personal autonomy amounts, for example, to the recognition of a right of an individual to know or not to know about their genetic disorder. But this is contested, given that allowing a right not to know would allow people to remain uninformed, whereas autonomy is sometimes presented as requiring all the available information which may be pertinent in order to choose one's way of living. Rosamond Rhodes, for example, held that: if autonomy is the ground for my right to determine my own course, it cannot also be the ground for not determining my own course. If autonomy justifies my right to knowledge, it cannot also justify my refusing to be informed. (…) From a Kantian perspective, autonomy is the essence of what morality requires of me. The core content of my duty is self-determination. To say this in another way, I need to appreciate that my ethical obligation is to rule myself, that is, to be a just ruler over my own actions. As sovereign over myself I am obligated to make thoughtful and informed decisions without being swayed by irrational emotions, including my fear of knowing significant genetic facts about myself. (Rhodes, 1998b)
The relationship between information and autonomy is not as straightforward as it is usually conceived, however, especially when the information involved increases the range of predictable events, as is the case for genetic information. A belief in the positive relationship existing between information and freedom has been criticized, both generally and in the genetic context. Knowledge, especially risk-knowledge, if it allows the individual to take some preventive actions or decisions for himself or for others, also potentially impacts negatively on other ranges of opportunities and experiences by which the unknowing individual would have been tempted. What knowledge gives with one hand, it may well take back with the other. Isaiah Berlin, notably, drew attention to this: The growth of knowledge increases the range of predictable events, and predictability - inductive or intuitive - despite all that has been said against this position, does not seem compatible with liberty of choice. (…) if, in other words, I claim to have the kind of knowledge about myself that I might have about others, then even though my sources may be better or my certainty greater, such self-knowledge, it seems to me, may or may not add to the sum total of my freedom. The question is empirical: and the answer depends on specific circumstances. From the fact that every gain in knowledge liberates me in some respect, it does not follow (…), that it will necessarily add to the total sum of freedom that I enjoy: it may, by taking with one hand more than it gives with the other, decrease it. (Berlin, 2000)
develop the illness and is considered ‘medically useless and potentially psychologically harming’ given that no preventive or curative strategies currently exist for the disease at issue), has made some scholars and professional groups advocate that tests for unpreventable diseases such as Alzheimer’s disease should not be provided to patients, even at their request.5 What is seldom taken into account, moreover, in discussions about the right/duty to know and the right not to know is the probable pleiotropic character of the genetic mutations detected through genetic testing. Whereas it may be perfectly sensible to wish to gain information about one’s increased risk of developing a preventable disease, when the same mutation also indicates that one is at the pre-symptomatic stage of an incurable and unpreventable disease or that one is at increased risk of developing such a disease, the test is both clinically useful and potentially devastating. The APOEe4 genotyping testing is one of those pleiotropic tests: it provides information about the risk of both atherosclerosis (coronary artery disease) - a condition for which preventive measures such as cessation of smoking, low-fat diet, exercise, and avoidance of stress may decrease the risk and an increased risk of developing Alzheimer’s disease. Risk information about heart disease is medically useful as it allows early prevention. On the contrary, information about the risk of developing Alzheimer’s disease does not allow the patient to do anything about it, and a positive result may produce net adverse consequences.6 However the strong current presumption existing in favor of genetic transparency, both to oneself and towards others, contributes to concealing the fundamental and subtle ambiguities existing between information and truth (particularly the probabilistic truth of genetic risks identified through genetic tests) on the one hand, and between information and freedom or autonomy on the other. In fact rather than determinism, what is suggested by the availability of genetic services is
an extension of human agency and choice and a parallel decrease in the scope of luck and necessity. The use of new genetic diagnostic and prognostic tools does not initiate any so-called genetic revolution; it only intensifies an existing tendency to shift the responsibility for ill-health away from environmental, social and economic factors to the individual. Despite the claim - which may be partially true - that genetic counselling is fundamentally non-directive and that decisions about genetic risks are always left to individual choices,7 those choices become the precise medium through which a new form of governance is exercised, taking citizens' bodies as both vectors and targets of normalization. [T]he transformed non-directive ethos is based on the transmission of expert knowledge to create autonomous actors who, through the medium of choice, consent voluntarily to act responsibly. (…) In this practice a prominent social rationality emerges: to acquire knowledge about genetic risks and embark on preventive action comes to stand out as the right way of relating to oneself (taking personal responsibility for health), the family (saving lives of relatives) and society (maintaining a healthy population).8 The opposition often assumed to exist between choice and directiveness lacks operability when, instead of opposing directiveness in the name of respect for expressed individual choices, one acknowledges that individual choices, far from being given, natural and objective facts, result, as Foucault suggested, from the disciplines, that is, from the power immanent in the social field which makes up the individual. One may regret that an insistence on the positive impact of information on an individual's capacity to make decisions in line with his basic wishes is not accompanied by a critical assessment of the conditions under which those basic wishes are formed. Let us note here that the qualification of an individual's entitlements towards "his" genetic
information as either "rights" or "liberties", though somewhat neglected in current scholarship, is of immense practical importance: if taken seriously, the theory of fundamental legal relations means, for example, that a "right" to know could potentially imply a correlative "duty" for the state to make genetic testing part of the health benefits packages for those willing to know about their genetic make-up, whereas a "liberty" (or privilege) to know merely implies that no restriction can be imposed on an individual willing to get tested for genotypic traits, but not that he must be offered the test for free if he cannot pay for it.9 In this line of reasoning, the growing idea that individuals have, if not a legal, at least a moral "duty to know" about their genetic predispositions, susceptibilities and risks, and to participate in genetic research,10 may appear incongruent in a situation where existing genetic tests must for the most part be paid for by the individuals themselves. Another comment I would like to make here is that the assessment of public policies about genetic rights and liberties needs to take into account the somewhat contingent context of the current dominant mode of socio-economic and cultural interactions. More precisely, that those issues of individual rights towards genomic information appear so crucial today is inescapably related to the specific drives of neoliberal societies: "informational capitalism" and the "moralization of risks". Informational capitalism has been described by various critical scholars. Perri 6 (6, 1998, pp. 14-15) held that: what is distinctive about informational capitalism is that personal information has become the basic fuel on which modern business and government run and (…) the systematic accumulation, warehousing, processing, analysis, targeting, matching, manipulation and use of personal information is producing new forms of government and business (…).
According to Julie Cohen (2001): The use of personal information to sort and classify individuals is inextricably bound up with the fabric of our political economy (…). The conflation of information with certainty and projections with predictions is not confined to markets. The destruction of privacy is the necessary by-product of a particular set of beliefs about the predictive power of information that operate in both market and government spheres. Within this belief structure, nothing is random. Success in markets, politics, and policy-making can be predicted, and failure avoided, using the proper algorithms and the right inputs. In Cohen's view, the use of even partial or incomplete personal information or isolated facts about individuals to predict risks and minimize uncertainties is described as 'the hallmark of the liberal state and its constituent economic and political markets.' As for the "moralization" of risks, whereas the 'insurance society' had switched the focus from the subjective, moral notions of individual fault and responsibility to the objective notions of risk and solidarity, neo-liberal governance supposes a return from the 'insurance society' to the 'actuarial, post-Keynesian' society where '(...) acceptance of solidarity is (...) accompanied by a demand for control over personal behavior' (Rosanvallon, 1995). With the gradual substitution of selectivity for universality as a principle for the distribution of welfare benefits, discourses of personal empowerment, activation and responsibility induce individuals to assume personal responsibility for most adverse circumstances resulting from bad (brute) luck, for which they would have expected some compensation from the collectivity in a traditional welfare state (Handler, 2000; Handler, 2001). "Genetic risk" functions as a technology of the self, urging individuals to get the most information they can about their own genetic risk status, to act 'rationally and
responsively' to promote their and their relatives' health upon receiving information, and to take personal responsibility - rather than transferring their risk to a collective pool and awaiting relief from social solidarity - for the adverse outcomes should they fail to take advantage of the available predictive genetic information. To that extent, genetic testing may well become a privileged disciplinary tool of neoliberal governance, but it does not necessarily increase the liberty and autonomy of individuals.
CONFIDENTIALITY, INTRA-FAMILIAL DISCLOSURE, AND FAMILIAL MANAGEMENT OF GENETIC INFORMATION: SELECTED ISSUES

Although the horizon of neo-liberal governance is the "responsibilisation and empowerment of individuals", or their emancipation from the old welfare institutions, at the fundamental level of philosophical anthropology the genetic 'representational regime' induces fundamental perturbations in the liberal representations of the modernist sovereign subject on which, however, neo-liberal governance precisely relies. The liberal individual legal subject, understood as a stable, unitary, embodied entity acknowledged as the fundamental unit of liberal societies, does not match the 'subject' of genetic information, which transcends the boundaries of the individual both "over time", as has just been suggested, and "spatially", as intra-familial conflicts of interests with regard to genetic information make the very identity of the 'legal subject' and its equivalence with the unitary, embodied individual uncertain in the genetic context. The 'subject' of genetic information has even been identified as a transgenerational, collective, 'non-material genomic body' (Scully, 2005), an 'information structure' (Haraway, 1997),11 that overflows the traditional limits of material embodiment characterizing the unitary vision of the subject as the fundamental unit of liberal societies.
Both in space and in time, the 'subject', contemplated from a genetic point of view, overflows the boundaries of the individual legal subject.12 Taking inspiration from feminist scholarship, it might be useful to consider whether and to what extent the 'self' deserving legal protection exceeds the spatially identifiable, physically bounded subject (Karpin, 2005). Disembodied biological and informational samples collected in biobanks 'create' informational identities (Franko Aas, 2006) 'parallel' to the body and independent from the narratives through which individuals construct and keep their biographical identity. The new identities, as information structures, allow new types of surveillance practices which, because they do not immediately target embodied identities but merely the virtual identities composed in the dry language of electronic records, are not readily open to negotiation or contestation. Those issues will not be addressed in the present chapter, even though the questions raised by the superposition of virtual identity on embodied identity gain ascendancy in science and technology studies, given current developments in the field of information technologies with research projects in ambient intelligence and ubiquitous computing. It is enough for our present purpose to suggest that both the "genetic revolution" and the "information revolution" (involving profiling techniques, RFID tags, video surveillance, ambient intelligence, etc.), despite their apparent heterogeneity, may in fact raise intersecting challenges. Besides this superposition of a disembodied informational identity on the embodied self, the subject of genetic information and of genetic privacy, the patient entitled to genetic confidentiality, is not even easily identifiable in the genetic context. Enabling the prediction of disease or the assessment of disease-risk with varying degrees of certainty,13 genetic information is of course important to the tested person,14 but may also be crucial to persons who share the same genetic inheritance and are virtually exposed to the same genetic risks, namely his or her blood relatives. The nature of the duties owed by one person
vis-à-vis his relatives when aware of the presence of a specific familial genetic susceptibility, predisposition or ailment that may increase disease risk, or when asked to participate in a familial inquiry in order to allow the detection of genetic risks at the request of other members of the family, is highly controversial15 (Knoppers, 1998; Knoppers, 1998a; De Sola, 1994; Abbing, 1995; Apel, 2001; Rhodes, 1998; Takala and Häyry, 2000). Those persons (family members), "third parties" with regard to the doctor-patient relationship, may sometimes be recognized as having a legitimate and legally protected interest, one not usually raised, however, to the status of a right to force intra-familial disclosure, but requiring some procedural measures enhancing the patient's aptitude to reflect upon the interests of those third parties and to exhibit some sense of responsibility towards them. The limited scope of the present chapter does not allow a full discussion of the range of questions ensuing from possible conflicts of interests between the individual "tested" and interested "third parties" in the context of human genetics. Our ambition is merely to outline five major issues that challenge the traditional centrality of the individual legal subject in bioethics and biolaw.
Does a Child's Right to Know His/Her 'Genetic Identity' Trump His/Her Parent(s)' Right to Genetic Confidentiality?

The questions raised by the new human genetics in this regard are not absolutely novel. The question that the European Court of Human Rights had to confront in Odièvre v. France,16 for instance, involved a woman's claim to access confidential information concerning her birth (where her biological mother had decided to use the possibility of "anonymous delivery" offered by French law) and to obtain copies of any documents, public records or full birth certificates, whereas her biological mother had requested that the birth
Which Rights for Which Subjects?
be kept secret and had waived her rights with regard to the child. The French law governing confidentiality at birth prevented the daughter (claimant) from obtaining information about her natural family. The Court held that the French Law did not constitute, in that case, a disproportionate interference with the claimant’s right to privacy, but nevertheless acknowledged that the right to privacy (Article 8 of the European Convention on Human Rights) protects, among other interests, the right to personal development, and that matters relevant to personal development included details of a person’s identity as a human being and the vital interest in obtaining information necessary to discover the truth concerning important aspects of one’s personal identity. In the United Kingdom, the Administrative Court faced a comparable case in Rose v Secretary of State for Health and the HFEA17 involving the claim brought by two persons, one who had been conceived by artificial insemination in 1972 (before the Human Fertilisation and Embryology Act of 1990) and one born also from an artificial insemination procedure in 1996, both requesting disclosure of information about the respective donors. The Administrative Court held that Article 8 of the Human Rights Act of 1998 encompassed the right to respect of «genetic identity» entitling children born from in vitro fertilization procedures to receive information about their biological fathers. It is far from certain however that a person’s rights with regards to her «genetic identity» implies that she has a right to know about genetic test results relating to her blood relatives, nor that her right to know trumps the tested person’s right to confidentiality and privacy.
Is There a 'Duty of Genetic Beneficence' Towards Family Members?

Genetic information, given its collective, inherited character, may challenge the classical duties
of health practitioners: with regard to whom are they obliged to respect their obligations of confidentiality and beneficence? Moreover, one may wonder whether the patient's right to know may imply that his family members are obliged to collaborate in the testing procedure by themselves undergoing diverse tests or by answering the many questions which arise in the context of the familial inquiry. There are indeed other factors, specific to genetic information, which contribute to its shared character. For example, genetic knowledge about individuals may have to be supplemented by information obtained from relatives in order for such knowledge to be meaningful. This problem may be a partially temporary one, due to the fact that there is at present no direct test for the gene itself in many genetic conditions, and a marker or linkage test remains necessary. Linkage tests are now less frequently used; however, even where a direct gene test is available, it remains important to confirm the mutation in at least one other affected family member. This is especially important where, as is the case for most genetic disorders, there is more than one form of genetic mutation that causes the disorder (Bell, 2001). Complex dilemmas relating to the familial disclosure of genetic information are worsened by practical difficulties. The dispersion and atomisation of families, which is one of the major specificities of our times, sometimes renders it impossible to carry out research on relevant blood-related persons, from whom one would need to obtain information or to whom one would like to communicate certain information concerning their genetic risks. There is indeed something paradoxical in the attempts to reconstruct genetic families when, precisely, one increasingly witnesses the decomposition and recomposition of biological families several times per generation (Knoppers, 1998b). How should the twofold opposition between the right to confidentiality and privacy of some and the right to the protection of health of others, and between the right to know of some and the
right of others to remain ignorant of their genetic make-up be resolved?
Is There a Professional 'Duty to Warn' Family Members of Genetic Risks?

The availability of genetic testing challenges one of the most classical ethical rules governing the patient-doctor relationship: the rule of confidentiality. Because genetic disease is transmitted only by way of procreation, information about genetic disease is unique in that there is a propensity (highly variable) for the condition to be shared by members of a family who are biologically related. The issue of how individual patients and their doctors should act in relation to the knowledge that the patient has a genetic condition - specifically, whether the patient and/or the doctor should or must inform relevant members of the patient's family - is a looming area of medico-legal controversy. There is a tension between the existing legal and professional obligation of the health care professional to keep confidential any medical or otherwise personal information discovered in the context of a medical examination or consultation and his competing obligation to prevent harm to others. In the genetic context, the confidentiality duties of the physician and the privacy rights of the patient may conflict with a perceived duty to prevent harm to others. In the landmark case Tarasoff v. Regents of the University of California,18 the Supreme Court of California ruled that mental health professionals have a duty to provide adequate warning if a patient threatens the life of a third party during counseling sessions. The facts of the case were as follows. Prosenjit Poddar killed Tatiana Tarasoff. Two months earlier, he had confessed his intention to kill her to Dr. Lawrence Moore, a psychologist employed by the Cowell Memorial Hospital at the University of California at Berkeley. Tatiana's parents sued the Regents of the University of California on two grounds: the defendants' failure to warn the victim of the impending danger and their failure to bring about
Poddar’s confinement. The defendants argued in return that they owed no duty of reasonable care to Tatiana, who was not in any doctor-patient relationship with them. But the opinion of the Court was that: the public policy favouring protection of the confidential character of patient-psychotherapist communications must yield to the extent to which disclosure is essential to avert danger to others. The protective privilege ends where the public peril begins. Our current crowded and computerized society compels the interdependence of its members. In this risk-infested society we can hardly tolerate the further exposure to danger that would result from a concealed knowledge of the therapist that his patient was lethal. If the exercise of reasonable care to protect the threatened victim requires the therapist to warn the endangered party or those who can reasonably be expected to notify him, we see no sufficient societal interest that would protect and justify concealment. The containment of such risks lies in the public interest.
In the Case of Genetic Testing, Should the Relevant Health Professional Inform a Patient’s Relatives that They Could be Genetically at Risk Even in Cases where the Patient Does not Consent to Such Disclosure? In another case - Pate v.Threlkel,19 the Florida Supreme Court was asked by the plaintiff to recognise a genetic family as a legal unit. In that case, commenced in the early 1990’s, Heidy Pate claimed that Dr. J. Threlkel, the physician of Pate’s mother, Marianne New, was under the obligation to warn her mother that she suffered from a hereditary disease that placed her children (including the plaintiff) at risk of developing the same condition. At the time of the suit, the plaintiff had fallen ill and claimed that, had she been warned of her hereditary risk, her own condition would have been discovered earlier and might
Which Rights for Which Subjects?
have been curable. The court recognized a duty owed by the doctor to warn his patient’s child as well as the patient herself, but also stated that ‘To require the physician to seek out and warn members of the patient’s family would often be difficult or impractical and would place too heavy a burden upon the physician. Thus we emphasise that in any circumstances in which the physician has a duty to warn of a genetically transferable disease, that duty will be satisfied by warning the patient.’ However, in Safer v. Pack,20 the New-Jersey court went further and imposed a duty to directly warn the family members at risk. The facts were quite similar to the facts of Pate v. Threlkel: in 1990, Donna Safer was diagnosed with a hereditary form of colon cancer from which her father R. Batkin had died twenty-six years earlier. In 1992, Donna Safer brought a suit against Dr. G. Pack, her father’s former physician, asserting that he had provided her with negligent medical care, although Dr. Pack had never treated Donna or acted as her physician in any way. Donna Safer argued that the physician was obliged to warn those at risk that his patient’s condition was hereditary, so that they might have the benefits of early examination, monitoring, detection and treatment, and thus the opportunity to avoid the most baneful consequences of the condition. The Safer court, unlike the Florida Supreme Court in Threlkel, rejected a limited interpretation of the doctor’s duty to warn (a duty to warn his patient, but not to directly warn members of that patient’s family), and defined a broad duty to warn not only the patient, but also to directly warn those members of the patient’s family at risk of falling ill with the hereditary disease at issue. Judge Kestin explained: ‘Although an overly broad and general application of the physician’s duty to warn might lead to confusion, conflict or unfairness in many types of circumstances, we are confident that the duty to warn of avertable risk from genetic causes, by definition a matter of familial concern, is sufficiently narrow to serve the interests of justice.
Further, it is appropriate… that the duty be seen as owed not only to the patient himself but that it also extend beyond the interests of a patient to members of the immediate family of the patient who may be adversely affected by a breach of that duty'. Interestingly enough, the court considered that there was no essential difference between 'the type of genetic threat at issue here and the menace of infection, contagion or threat of physical harm… The individual at risk is easily identified, and the substantial future harm may be averted or minimised by a timely and effective warning.' The more far-reaching implications of that case remain controversial. They were even rejected by the New Jersey legislature in 1996,21 when a statute was passed for the purpose of protecting genetic privacy, which allows health care providers to warn relatives of those suffering from genetic disorders only if the patient has consented to such disclosure or after the patient has died. In Schroeder v. Perkel, the New Jersey Supreme Court22 had observed that a physician's duties may extend beyond the interests of the patient to members of the patient's immediate family who might be adversely affected by the physician's breach of duty. In that case, the court reasoned that the doctor's duty followed from the potential harm that might occur to the patient's parents were they to conceive a second child unaware that this child might also suffer from cystic fibrosis. In Molloy v. Meier,23 the Supreme Court of Minnesota held that 'A physician's duty regarding genetic testing and diagnosis extends beyond the patient to biological parents who foreseeably may be harmed by a breach of that duty.' The plaintiffs, Kimberly Molloy and her husband Robert Flomer, had a daughter with developmental retardation. Four years after their daughter's birth, K. Molloy consulted Dr. Diane Meier to determine whether her daughter's developmental retardation had a genetic cause. Dr. Meier ordered chromosomal testing of the child, including a test for fragile X
syndrome, a hereditary condition that causes a range of mental impairments. But the fragile X test was never performed, and the tests that were done revealed no genetic abnormality. After having been told by Dr. Reno Backus, another physician to whom K. Molloy brought her daughter, that the child was developmentally delayed with autistic tendencies of unknown origin, she asked about the risk of having another disabled child, as she intended to have a second child with her second husband, Glenn Molloy. Dr. Backus told her that the chances of having a second child with the same impairments were extremely remote. Unfortunately, her second child turned out to have the same condition. Two years after the birth of her second child, K. Molloy took a genetic test that identified her as a carrier for fragile X. Her two children were also identified as having fragile X syndrome. The Molloys sued Dr. Meier, Dr. Backus and another physician for negligence in the care owed to the first child and to themselves, claiming that the physicians had negligently told them that the first child did not have fragile X syndrome when in fact the child had never been tested for it. In a case decided in Italy, the Garante per la protezione dei dati personali allowed a woman to access her father's genetic data despite the latter's refusal of consent. The woman's request was motivated by her wish to make a fully informed reproductive decision by assessing the risk of transmitting a genetic disease that affected her father. The justification provided for the decision was the Garante's assessment that the woman's 'right to health' (health being defined by the Garante as including 'psychological and physical well-being') trumped her father's right to privacy.24 Other cases are imaginable where a patient's right to privacy and confidentiality would conflict with the interests of members of his/her family with regard to information about their own risk status. One may imagine, for example, the following situation: a woman aware of her strong
familial history of breast cancer decides to take a BRCA1 gene test. Although her mother never went for the test, her maternal grandmother had tested positive for the BRCA1 mutation. If the woman also tests positive for the mutation, it necessarily means her mother is also positive, since the woman can only have inherited the mutation through her (the short sketch at the end of this section makes this inference pattern explicit). Should the latter be warned of her increased risk? What if the tested woman does not want to disclose that information to her mother? At the opposite end of the spectrum, the case may arise where a family member who does not want to know that he/she is at risk may be forced to know: a positive test for Huntington disease performed on an unborn child indicates that the parent with a familial history of Huntington disease will certainly develop the disease. Predictive and prenatal testing are available for Huntington disease, but not all people at risk choose to have the test.25 Conflicts of interest may thus arise when, for example, a man with a family history of Huntington disease, and thus at a 50 per cent risk of developing the disease himself later in life, has conceived a child but does not wish to be tested and prefers not to know whether he will actually develop the illness. If the woman asks for prenatal genetic diagnosis, a positive test will indicate, with certainty, that the father will have Huntington's disease. Should the right not to know of the prospective father trump the paramount interest of the mother in knowing whether the unborn child is affected or not? The current global legal attitude regarding decisions to undertake prenatal genetic diagnosis is to respect the will of the mother, who is physically concerned by the test. She is the patient to whom medical doctors and genetic counsellors owe a duty of care (Tassicker et al., 2003). Yet the American Society of Human Genetics has already suggested that genetic information may be viewed as a 'family possession rather than simply a personal one'. A note explaining the suggestion even proposes a family-health model that regards the
physician’s patient as the entire family, where family is understood to refer to a genetic network rather than a social institution (American Society of Human Genetics, 1998). The Royal College of Physicians of the United Kingdom similarly suggested in 1991 that: because of the nature of genes, it may be argued that genetic information about any individual should not be regarded as personal to that individual, but as the common property of other people who may share those genes, and who need the information in order to find out their own genetic constitution. If so, an individual’s prima facie right to confidentiality and privacy might be regarded as overridden by the rights of others to have access to information about them. (Royal College of Physicians Committees on Clinical Genetics and Ethical Issues in Medicine, 1991) Even more radically, some scholars dismiss concerns about patient confidentiality by assuming the pre-eminence of the genetic family within which individual identity is subsumed by the identity of the whole. For example, R. Burnett writes that: [T]here is no need to consider confidentiality in the genetic context because, arguably, confidentiality is not sacrificed. Confidentiality is not in danger because, even assuming that policies in favour of confidentiality outweigh a duty to warn, a duty of confidentiality is not violated in the situation involving the warning of genetic diseases. (…) Now, with the introduction of genetic mapping, (…) the patient/physician relationship has been reconfigured to reflect the individual’s ties to his or her ancestors and descendants. (Burnett, 1999: 559) According to this comment, the right to privacy is just not applicable to genetic information in genetic family contexts. At the heart of the
ideological construction of the genetic family is the obliteration of privacy. This appears odd, especially to the extent that it suggests that a person's right to genetic confidentiality could not be opposed to any blood-related person, whether or not that person is part of his or her "social family". The European working party set up under Article 29 of Directive 95/46/EC on data protection (the so-called Article 29 Data Protection Working Party) acknowledged, in its Working Document on Genetic Data of 17 March 2004,26 that: a new, legally relevant social group can be said to have come into existence - namely, the biological group, the group of kindred as opposed, technically speaking, to one's family. Indeed, such a group does not include family members such as one's spouse or foster children, whereas it also consists of entities outside the family circle - whether in law or factually - such as gamete donors or the woman who, at the time of childbirth, did not recognise her child and requested that her particulars should not be disclosed - this right being supported in certain legal systems. The anonymity granted to the latter entities raises a further issue, which is usually dealt with by providing that the personal data required for genetic testing be communicated exclusively to a physician without referring to the identity of the relevant individual. One may fear that subordinating the individual's right to genetic confidentiality to the interests of the other members of the same "genetic group" would adversely affect the trust and confidence that should prevail in any doctor-patient relationship, with consequences detrimental to both individual and public health.
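Before turning to biobanking, the inference patterns behind the intra-familial scenarios above can be made concrete. The following minimal sketch is ours and purely illustrative; it assumes a single known familial mutation, autosomal dominant transmission, no new mutations, and, in the Huntington-type case, that the mutation cannot have come from the child's mother.

    # Why one relative's test result can disclose another's status.

    def mother_is_obligate_carrier(maternal_grandmother_positive: bool,
                                   woman_positive: bool) -> bool:
        """BRCA1-type case: if a woman and her maternal grandmother both carry
        the same familial mutation, the untested mother in between must carry
        it too, since the woman can only have inherited it through her."""
        return maternal_grandmother_positive and woman_positive

    def father_carrier_probability(fetus_positive: bool,
                                   prior: float = 0.5) -> float:
        """Huntington-type case: a man with an affected parent has a 50% prior
        risk. A positive prenatal test on his child proves the paternal line
        carries the mutation, so his risk becomes certainty; a negative test
        merely lowers the prior by Bayes' rule."""
        if fetus_positive:
            return 1.0
        return prior * 0.5 / (prior * 0.5 + (1.0 - prior))

    assert mother_is_obligate_carrier(True, True)      # mother's status revealed
    assert father_carrier_probability(True) == 1.0     # father's status revealed
    assert abs(father_carrier_probability(False) - 1/3) < 1e-9

The point of the sketch is that the disclosure is a matter of logic rather than of anyone's decision to reveal: once the two results exist, the intermediate relative's status follows, whatever the parties' rights to know or not to know.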
Individual v. Familial Consent in Biobanking

According to regulations in force in most countries, notably in Europe and the United States, the
establishment of a biobank or a genetic database requires the prior consent of each individual involved. In Iceland, however, individual consent has been presumed, each person being automatically included in the database unless she formally expresses her refusal. In Ragnhildur Guðmundsdóttir v. The State of Iceland,27 the Icelandic Supreme Court ruled that the Health Sector Database Act of 1998 did not comply with Iceland's constitutional privacy protections. The case involved the question of whether a woman could refuse to have health information about her deceased father included in the Health Sector Database. The court ruled that Ms. Guðmundsdóttir could not opt out of the database on behalf of her father, but that she could prevent the transfer of her father's medical records (especially those concerning his hereditary characteristics) because information about her could be inferred from such records. Moreover, the Court found that removing or encrypting personal identifiers such as name and address is not sufficient to prevent the identification of individuals included in the database, since they may still be identified by a combination of factors such as age, municipality, marital status, education and profession (the short sketch at the end of this section illustrates the point). The mere encryption of direct personal identifiers, and the various forms of monitoring entrusted to public agencies or committees, the court ruled, are not enough to comply with the Icelandic constitution's protection of privacy. This, according to the Court, required a change in the Health Sector Database Act of 1998 (Gertz, 2004a). Although the above-mentioned case appears quite isolated so far, the mushrooming of population biobanks makes it most probable that intra-familial disagreements relating to the inclusion of genetic material in biobanks will become more frequent in the future. For the purpose that concerns us here, the case is interesting to the extent that it shows how human genetics disrupts traditional views about the individual right to consent, and to withdraw consent, to research participation, and the extent to
which the law is forced to acknowledge the fact that the ‘subject’ of genetic information exceeds the individual liberal unit of the traditional legal subject.
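The court's observation about indirect identification is easy to reproduce. The following toy sketch is ours; the records and attribute values are invented. It shows how combining ordinary attributes can single out one record in a database from which names have been removed.

    # Quasi-identifier re-identification in a de-identified database.

    RECORDS = [
        # (age, municipality, marital_status, education, profession)
        (47, "Akureyri", "married", "university", "teacher"),
        (47, "Akureyri", "married", "university", "harbour pilot"),
        (63, "Reykjavik", "single", "secondary", "teacher"),
    ]
    KEYS = ("age", "municipality", "marital_status", "education", "profession")

    def matches(records, **known_attrs):
        """Return all records consistent with what an adversary already knows."""
        return [r for r in records
                if all(r[KEYS.index(k)] == v for k, v in known_attrs.items())]

    # Knowing only age and municipality still leaves two candidates...
    print(len(matches(RECORDS, age=47, municipality="Akureyri")))      # 2
    # ...but adding a rare profession singles out one individual.
    print(len(matches(RECORDS, age=47, municipality="Akureyri",
                      profession="harbour pilot")))                    # 1

This is the standard quasi-identifier problem in the data-protection literature: the more attributes can be linked, the smaller the matching set becomes, until encrypting names and addresses alone protects no one.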
Conclusion

The aim of the present chapter was to elucidate the paradoxical position of the individual legal subject in the context of human genetics. In particular, it has observed the discrepancy between the neoliberal idea of the subject as a unified, embodied, bounded, autonomous self enclosed in the present, and the somewhat 'disciplined' and at the same time 'collectivized' subject that results from the complexification of that notion in the "post-genomic genetic era". The first complexification that has been suggested results from the "risk anticipation" that predictive genetic testing imposes on the subject. Because genetic testing allows the identification of patients in an anticipatory sense, the dimension of otherness that genetic information introduces in the "self" takes the form of an inescapably anticipated 'future self' which the genetically informed individual can no longer ignore. Focusing on the debates regarding the individual's right to know and right not to know, the first section has been the occasion to criticize the presupposition that information about one's genetic risks increases one's liberty and/or autonomy. Another disruption that genetics imposes on the notion of the unified self as fundamental liberal unit results from the "collective nature" of genetic information, as the "patient/genetic data subject", in the post-genomic era, tends to become a "collective entity". Issues of confidentiality, intra-familial disclosure and familial management of genetic information, discussed in the second section, illustrate the disruptions undergone by the notion of the traditional legal subject in this regard.
Some authors, like Dolgin (2000), worry that the emerging notion of the genetic group, and the shift in the locus of privacy and identity from the autonomous individual to the genetic group, threatens long-standing Western values that depend upon the ideological centrality of autonomous individuality.28 Dolgin also warns that: The genetic family insists only on one thing - the recognition of biological information. In that, it upsets a society and a legal order committed to the position that autonomous choice can sustain a moral frame within which family life is distinguished from life in other social domains (…) At the edges of a broad commitment to freedom, and thus choice, the law faces the medicalized family, and begins to elaborate its variant: the genetic family. At least in the first instance, this family serves neither individualism nor choice. It reflects the amorality of DNA through which it is delimited, and to which it can be reduced. Unlike the notion of biogenic substance as traditionally defined, DNA is indifferent to the content of family life. This construct of family differs from others in abandoning even the pretense that contemporary families should be modeled on nostalgic images of traditional families within which, it is presumed, enduring love and absolute loyalty were assured. (Dolgin, 2000) Others (Karpin, 2005; Sommerville and English, 1999) insist that genetic challenges to individualist conceptions of the subject provide a beneficial opportunity to deconstruct the liberal myth of the self-sufficient, autonomous individual and to acknowledge the inherent interdependency of human beings. In any event, debates about genetic confidentiality, genetic privacy, and intra-familial management of genetic information provide a fresh opportunity to reassess our political and cultural conceptions of what it means to be a "subject" in the circumstances of our times.
References

Abbing, H D C R (1995). Genetic information and third party interests. How to find the right balance? Law and the Human Genome Review, 2, 35-53.

Agamben, G. (2006). Qu'est-ce qu'un dispositif? Paris: Rivages Poche / Petite Bibliothèque.

Andrews, L B (2000). The clone age: Adventures in the new world of reproductive technology. Owl Books.

Apel, S B (2001). Privacy in genetic testing: Why women are different. Southern California Interdisciplinary Law Journal, 11(1), 1-26.

Berlin, I (2000). From hope and fear set free. In Berlin, I, Hardy, H & Hausheer, R (eds), The proper study of mankind: An anthology of essays. Farrar Straus & Giroux.

Brown, N (2003). Hope against hype: Accountability in biopasts, present and futures. Science Studies, 16(2), 3-21.

Cohen, J E (2001). Privacy, ideology, and technology: A response to Jeffrey Rosen. Georgetown Law Journal, 89, 2029.

De Sola, C. (1994). Privacy and genetic data: Cases of conflict. Law and the Human Genome Review, 1, 173-185.

Dreyfuss, R C & Nelkin, D (1992). The jurisprudence of genetics. Vanderbilt Law Review, 45(2), 313-348.

Feminist Health Care Ethics Research Network (1998). The politics of health: Geneticization versus health promotion. In Sherwin, S (ed.), The politics of women's health: Exploring agency and autonomy. Temple University Press.

Foucault, M. (1975). Surveiller et punir. Paris: Gallimard.
Franko Aas, K. (2006). The body does not lie: Identity, risk and trust in technoculture. Crime, Media, Culture, 2(2), 143-158.

Gilbert, W (1993). A vision of the grail. In Kevles, D & Hood, L (eds.), The code of codes: Scientific and social issues in the human genome project. Harvard University Press.

Gulati, C (2001). Genetic antidiscrimination laws and health insurance: A misguided solution. Quinnipiac Health Law Journal, 4(2), 149-210.

Handler, J (2000). The third way or the old way. University of Kansas Law Review, 48, 800.

Handler, J F (2001). The paradox of inclusion: Social citizenship and active labor market policies. University of California, Los Angeles School of Law Research Paper Series, 01-20.

Haraway, D J (1997). Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and technoscience. Routledge.

Harris, J and Keywood, K (2001). Ignorance, information and autonomy. Theoretical Medicine and Bioethics, 22(5), 415-436.

Hedgecoe, A (2000). Narratives of geneticization: Cystic fibrosis, diabetes and schizophrenia. PhD thesis, University of London.

Hohfeld, W N (1923). Fundamental legal conceptions. New Haven, CT: Yale University Press.

Jacob, F (1976). La logique du vivant. Gallimard.

Jacob, F (1987). La statue intérieure. Seuil.

Karpin, I (2005). Genetics and the legal conception of the self. In Mykitiuk, R & Shildrick, M (eds), Ethics of the body: Postconventional challenges (Basic Bioethics). The MIT Press.

Knoppers, B M (1998). Professional disclosure of familial genetic information. American Journal of Human Genetics, 62, 474-483.

Knoppers, B M (1998a). Towards a reconstruction of the genetic family: New principles? IDHL, 49(1), 249.

Knoppers, B M, Godard, B & Joly, Y (2004). A comparative international overview. In Rothstein, M A (ed.), Genetics and life insurance: Medical underwriting and social policy (Basic Bioethics). The MIT Press.
Koch, L and Svendsen, M N (2005). Providing solutions - defining problems: The imperative of disease prevention in genetic counselling. Social Science and Medicine, 60, 823-832.

Lippman, A (1992). The geneticization of health and illness: Implications for social practice. Romanian J Endocrinol, 29(1/2), 85-90.

Lupton, D (1993). Risk as moral danger: The social and political functions of risk discourse in public health. International Journal of Health Services, 23(3), 425-435.

McTeer, M A (1995). A role for law in matters of morality. McGill Law Journal, 40, 893.

Novas, C and Rose, N (2000). Genetic risk and the birth of the somatic individual. Economy and Society, 29(4), 485-513.

Nys, H, Dreezen, I, Vinck, I, Dierick, K, Dequeker, E & Cassiman, J J (2002). Genetic testing: Patients' rights, insurance and employment. A survey of regulations in the European Union. European Commission, Directorate-General for Research.

O'Connell, K. (2005). The devouring: Genetics, abjection, and the limits of law. In Shildrick, M and Mykitiuk, R (eds.), Ethics of the body: Postconventional challenges. MIT Press.

O'Neill, O (2002). Autonomy and trust in bioethics (Gifford Lectures, 2001). Cambridge University Press.

Rosanvallon, P (1995). La nouvelle question sociale: Repenser l'État providence. Seuil.
6, Perri (1998). Private life and public policy. In Lasky, K & Fletcher, A (eds), The future of privacy: Public trust in the use of private information (Vol. 2). Demos.

Radetzki, M, Radetzki, M & Juth, N (2003). Genes and insurance: Ethical, legal and economic issues. Cambridge University Press.

Rhodes, R (1998). Genetic links, family ties and social bounds: Rights and responsibilities in the face of genetic knowledge. Journal of Medicine and Philosophy, 23(1), 10-30.

Rouvroy, A (2000). Informations génétiques et assurance: Discussion critique autour de la position prohibitionniste du législateur belge. Journal des Tribunaux, 585-603.

Rouvroy, A. (2007, in press). Human genes and neoliberal governance: A Foucauldian critique. London & New York: Routledge-Cavendish.

Scully, J L (2005). Admitting all variations? Postmodernism and genetic normality. In Shildrick, M & Mykitiuk, R (eds), Ethics of the body: Postconventional challenges (Basic Bioethics). The MIT Press.

Sherwin, S and Simpson, C (1999). Ethical questions in the pursuit of genetic information: Geneticization and BRCA1. In Thompson, A K & Chadwick, R F (eds), Genetic information: Acquisition, access, and control. Springer.

Stempsey, W E (2006). The geneticization of diagnostics. Medicine, Health Care and Philosophy, 9(2), 193-200.

Sutter, S M (2001). The allure and peril of genetic exceptionalism: Do we need special genetics legislation? Washington University Law Quarterly, 79(3).

Takala, T & Häyry, M (2000). Genetic ignorance, moral obligations and social duties. Journal of Medicine and Philosophy, 25(1), 107-113.

Tassicker, R, Savulescu, J, Skene, L, Marshall, P, Fitzgerald, L & Delatycki, M B (2003). Prenatal diagnosis requests for Huntington's disease when the father is at risk and does not want to know his genetic status: Clinical, legal, and ethical viewpoints. BMJ, 326, 331.

Teichler-Zallen, D. (1992). Les nouveaux tests génétiques et leurs conséquences morales. In Gros, F & Huber, G (eds), Vers un anti-destin? Patrimoine génétique et droits de l'humanité. O. Jacob.

ten Have, H (2001). Genetics and culture: The geneticization thesis. Medicine, Health Care and Philosophy, 4(3), 294-304.

Wachbroit, R S (1998). The question not asked: The challenge of pleiotropic genetic tests. Kennedy Institute of Ethics Journal, 8(2), 131-144.

Wolf, S (1995). Beyond genetic discrimination: The broader harm of geneticism. American Journal of Law and Medicine, 23, 345-353.

World Health Organization (2005). Genetics, genomics and the patenting of DNA: Review of potential implications for health in developing countries. Human Genetics Programme, Chronic Diseases and Health Promotion.
Key Terms

Confidentiality: A duty held by professionals towards their clients and patients whereby they are committed to keeping secret anything they learn in the course of their professional relationship with the client or patient.

Genetic Risk: Revealed and quantified by assessment of family history and/or by genetic testing, a genetic risk may, in exceptional cases, indicate with certainty that the individual will develop a specific disease, though at an unknown time, or,
most of the time, merely indicate that the individual may be particularly predisposed or susceptible to developing a specific disease if exposed to specific chemicals, foods, or lifestyle factors.

Informational Capitalism: A contemporary political, economic and cultural tendency to perceive personal information as a basic resource (just like energy), an essential input to the management of public and private enterprises, as the most reliable element on which to build safety-enhancement and efficiency strategies, and as a commodity, exchangeable on the "information market".

Legal Subject: Classically, the legal subject, as the central unit of liberalism, is perceived as a unified, fixed, embodied entity. Post-modern and post-conventional scholars have challenged that unitary vision of the subject. The "genetic paradigm" further questions the adequacy of the liberal vision of the subject for the law.

Privacy: As a fundamental human right, privacy encompasses both the right of the individual to control some aspects of the personality he projects on the world, and a right to freely develop his personality without excessive interference by the State or by others in matters that are of his exclusive concern.

Right Not to Know: Used in the context of genetic testing, an individual's right not to know refers to the right of an individual who has undergone a genetic test to refuse information about the full or partial test results.

Right to Know: Used in the context of genetic testing, an individual's right to know refers to the right of an individual who has undergone a genetic test to know the full test results if he so wishes. The right to know does not necessarily imply a right to benefit from genetic testing for free, nor a right to learn the result of a genetic test performed on a member of one's family, even if that person is genetically related.
Endnotes

1. The judicial relationship between that person and the genetic information produced by the tests is usually qualified in terms of individual rights.

2. The important financial support provided by the European Commission to research consortia such as the Public Health Genomics European Network (PHGEN), which prepares all relevant actors for the future integration of genomic insights into general public health policy, amplifies the general level of expectation that genetic information will indeed become central to managing individual and public health.

3. Depending on the patterns of inheritance of the genetic diseases studied - whether the illness is monogenic or not, whether it is monofactorial or not - the degree of certainty and accuracy of the conclusions drawn from genetic information regarding the future onset of an illness, or the probability of transmission of that illness to the offspring, will vary substantially. The matter is developed further later in the chapter.

4. See Onora O'Neill (O'Neill, 2002), recalling the wide variety of notions that scholars have associated with the concept of autonomy: Gerald Dworkin (Dworkin, 1988) lists liberty (positive or negative), dignity, integrity, individuality, independence, responsibility and self-knowledge, self-assertion, critical reflection, freedom from obligation, absence of external causation, and knowledge of one's own interest as concepts that have been equated with autonomy, while Ruth Faden and Thomas Beauchamp (Faden, 1986) hold that autonomy may also be defined as privacy, voluntariness, self-mastery, choosing freely, choosing one's own moral position and accepting responsibility for one's choices.

5. A correlation has been detected, for example, between the APOE4 genotype and a greater probability of developing Alzheimer's disease, but this is only one of the factors of the illness. See Wachbroit (1998). For more information about ApoE genotyping, see http://www.labtestsonline.org/understanding/analytes/apoe/test.html.

6. The World Medical Association, in its Declaration on the Human Genome Project, explicitly mentioned that 'One should respect the will of persons screened and their right to decide about participation and about the use of the information obtained.' (World Health Organization, 2005) Similarly, the Council for International Organizations of Medical Sciences, in its 1991 Declaration of Inuyama, held that 'voluntarism should be the guiding principle in the provision of genetic services'.

7. See Koch and Svendsen (2005).

8. See also Lupton (1993: 433): '[R]isk discourse as it is currently used in public health draws upon the fin de millennium mood of the late 20th century, which targets the body as a site of toxicity, contamination, and catastrophe, subject to and needful of a high degree of surveillance and control. No longer is the body a temple to be worshipped as the house of God: it has become a commodified and regulated object that must be strictly monitored by its owner to prevent lapses into the health-threatening behaviours identified by risk discourse', and Novas and Rose (2000, p. 507): '[G]enetic forms of thought have become intertwined within ethical problematizations of how to conduct one's life, formulate objectives and plan for the future in relation to genetic risk. In these life strategies, genetic forms of personhood make productive alliances and combinations with forms of selfhood that construct the subject as autonomous, prudent, responsible and self-actualising.'

9. For a clear analysis of jural opposites (Right/No-Right; Privilege/Duty; Power/Disability; Immunity/Liability) and jural correlatives (Right/Duty; Privilege/No-Right; Power/Liability; Immunity/Disability), see the foundational work of Wesley N. Hohfeld (1923).

10. The principle of 'solidarity', as endorsed by HUGO in its recent Statement on Pharmacogenomics (PGx): Solidarity, Equity and Governance (Genomics, Society and Policy, 2007, Vol. 3, No. 1, pp. 44-47), relies on the assumption that 'because of shared vulnerabilities, people have common interests and moral responsibilities to each other. Willingness to share information and to participate in research is a praiseworthy contribution to society'. It is complemented by a principle of 'equity', according to which 'to reduce health inequalities between different populations, and to work towards equal access to care is an important prerequisite for implementing genomic knowledge for the benefit of society.'

11. 'Most fundamentally, (…) the human genome projects produce entities of a different ontological kind than flesh-and-blood organisms (…) or any other sort of "normal" organic being (...): the human genome projects produce ontologically specific things called databases as objects of knowledge and practice. The human to be represented, then, has a particular kind of totality, or species being, as well as a specific kind of individuality. At whatever level of individuality or collectivity, from a single gene region extracted from one sample through the whole species genome, this human is itself an informational structure.' (Haraway, 1997, p. 247)

12. 'Genetic technologies, even as they seem to promise the perfectly delimited and controlled human body, break down and disrupt other boundaries. In learning how to control body boundaries, geneticists inevitably shift them, producing anxiety, horror, and disgust. Modernist definitions of the body, based on boundaries of self and other, human and animal, organism and machine (and the context of nature and culture) are disrupted by a blueprint that allows unprecedented interactions, swapping over, interference, and convergence of the subject. This is genetics at its most threatening, at least to a self-conception based on stable boundaries.' (O'Connell, 2005, p. 225)

13. Even though current practice does not allow, in most cases, determination of the time of occurrence of late-onset illness.

14. The judicial relationship between that person and the genetic information produced by the tests is usually qualified in terms of individual rights.

15. Rhodes, R (1998b). Genetic links, family ties and social bounds: Rights and responsibilities in the face of genetic knowledge. Journal of Medicine and Philosophy, 23(1), 10-30.

16. Odièvre v. France, 42326/98 (2003) ECHR 86 (13 February 2003).

17. Rose v Secretary of State for Health and the HFEA [2002] EWHC 1593.

18. Tarasoff v. Regents of the University of California, 17 Cal. 3d 425, 551 P.2d 334, 131 Cal. Rptr. 14 (1976). See Riccardi (1996).

19. Pate v. Threlkel, 661 So.2d 278 (Fla. 1995).

20. Safer v. Estate of Pack, 677 A.2d 1188 (1996).

21. Genetic Privacy Act, N.J. Stat. Ann. § 17B:30.

22. Schroeder v. Perkel, 87 N.J. 53 at 69-70 (1981).

23. Molloy v. Meier, 679 N.W.2d 711, 719; 2004 Minn. LEXIS 268 (May 20, 2004). See Offit (2004).

24. Garante's Bulletin (Cittadini e Società dell'Informazione 1999, No. 8, 13-15), cited in Article 29 Data Protection Working Party, Working Document on Genetic Data, 17 March 2004, 12178/03/EN, WP 91 (the Working Party was set up under Article 29 of Directive 95/46/EC as an independent European advisory body on data protection and privacy). In translation from the Italian: 'As a matter of principle, it must be observed that knowledge, before conception or during pregnancy, of the probabilistic risk of the onset of pathologies, including genetic ones, in the person one intends to conceive or in the unborn child can certainly contribute to improving the conditions of psycho-physical well-being of the expectant mother, within the framework of a full protection of health as a fundamental right of the individual under Article 32 of the Constitution. Access to the health data of the applicant's father appears justified by the need to protect her psycho-physical well-being, and that interest may, in the circumstances at issue, entail a reasonable sacrifice of the data subject's right to privacy.'

25. It has even been found that most people at risk for Huntington's disease choose not to be tested (Binedell, 1998).

26. 12178/03/EN, WP 91.

27. Icelandic Supreme Court, No. 151/2003, 27 November 2003.

28. Dolgin (2000); the passage is quoted in full in the text above.
Chapter XXXI
Predictive Genetic Testing, Uncertainty, and Informed Consent
Eduardo A. Rueda
Universidad Javeriana, Colombia
Abstract

This chapter focuses on showing legitimate ways for coping with uncertainties within the informed consent process of predictive genetic testing. It begins by indicating how uncertainty should be theoretically understood. Then it describes three dimensions of uncertainty with regard to both the role of genes in pathogenesis and the benefit to patients of undergoing predictive genetic testing. Subsequently, the ways by which institutions tame these uncertainties are explained. Since viewing genes as exceptional informational entities plays an important role in taming uncertainties, the chapter explains why this conception should be abandoned. It then discusses how the institutional taming of uncertainty becomes a source of paternalism. What is stressed is that in order to avoid paternalism and ensure transparency within the informed consent process, open-to-uncertainty mechanisms should be implemented before the public and the individual. How patients should deal with the potential implications of testing for their relatives is also considered.
Introduction

As is well known, predictive genetic testing has been considered an important tool for the prediction of the future health status of an individual. It includes presymptomatic and predisposition tests, which determine the risk of developing a particular disorder by identifying a single gene or several genes presumably related to it, as well as pharmacogenetic tests, which determine the predisposition of individuals to react differentially to drugs (European Commission, 2004). Since it is expected that our knowledge about the role of combined genes in pathogenesis and drug reaction will grow in the immediate future, clinical institutions and physicians have confidence in the increasing role of this technology within the health care setting. Many declarations and writings have addressed this promising future, and several specialized institutions (genetic centers) have already arisen to activate this new technology. In the same way, lay people have adopted "genetic language" to explain their potential diseases as well as to redefine what they understand by "medical care."

This chapter analyzes the process of activation of this new technology from an institutional-constructivist point of view, considering the consequences for understanding and implementing the informed consent process. In this sense, the chapter attempts to remain in the happy middle ground between social analysis of science, which conceives of ethical and political problems as issues that should be analyzed only in a descriptive way, and ethical and political approaches to science and technology, which leave aside the problem of the institutional complexities that may indeed impede an appropriate fulfillment of normative demands. What the chapter pursues is a clarification of how, in the field of clinical genetics, institutions frame uncertainties in a way that allows them to reduce those uncertainties and manage them. Because there are strong reasons to consider this framing judgment a way of bypassing the autonomous deliberative process, this chapter stresses that in order to avoid this undesirable effect and ensure transparency, it is necessary to focus on procedures for "coping with untamable uncertainties and complexities" (Van der Slijs, 2006, 68) before the public and the individual. This paternalistic bypassing undermines due respect for autonomy, since it implies that a person will not decide on the basis of unbiased prospects of action; confronted with biased information, he/she will decide in a predictably induced manner. It is broadly accepted that the rhetorical framing of information
undermines the free formation of a person's will. Certain ways of communication can induce a person to adopt causal beliefs that clearly favor the values or interests of the person or institution that communicates. As Stokes (1998) has correctly pointed out, these ways of communication should be considered pathologies of deliberation insofar as they induce certain preferences in people by providing them with value-laden information. In order to prevent this undesirable situation, it is mandatory that people deal with a deliberatively processed picture of prospects. As I will show, only through this method can people arrive at an impartial view of the situation at stake. The implementation of postnormal ways of mapping, assessing and disclosing scientific information is therefore required.
Defining Uncertainty

Although uncertainty has been systematically understood as a situation in which knowledge about a topic can be described as inexact, unreliable or almost absent (Funtowicz and Ravetz, 1990), it is more useful, as Walker et al. have pointed out, to understand uncertainty as a multi-dimensional concept that in general terms refers to the "deviation from the unachievable ideal of completely deterministic knowledge of the relevant system" (Walker et al., 2003, 5). Uncertainty therefore represents a cognitive situation not necessarily related to a lack of knowledge (Walker et al., 2003). In fact, an increase of knowledge "can either decrease or increase uncertainty" (Walker et al., 2003, 8). Instead of conceiving of uncertainty as a cognitive situation that emerges only when, given a particular set of inputs, there is incomplete knowledge about future scenarios, as Hansson (1996) and Vlek (1987) proposed in their classical contributions, Walker et al. have helped to widen the spectrum of what uncertainty implies. From their point of view, unavoidable theoretical uncertainties about the basic presumptions that
support the investigation of a complex situation and uncertainties about the nature of the uncertainty itself should be added to the uncertainty about future scenarios that Hansson and Vlek had described. Hence, the comprehensive dimensions of uncertainty that Walker et al. have proposed are:

a. The location of uncertainty, which has to do with the theoretical basis that might ground potential conclusions with regard to a system's behavior. Usually this dimension involves important doubts with regard to the way in which researchers should take particular elements into account when designing a model of a system's behavior.

b. The nature of uncertainty, which has to do with the "ontogeny" of uncertainty itself. According to Walker et al. (2003), uncertainty can arise because of either the imperfection of available knowledge or the inherent variability of the target system. Identifying the nature of uncertainty is very important in order to know how specific uncertainties can be addressed. Thus, if uncertainty emerges from epistemic causes, it will be a reason for reinforcing research programs. If, on the contrary, uncertainty is caused by the inherent variability of the system, it should be accepted that an epistemic reduction of uncertainty is not feasible.

c. The level of uncertainty, which refers to the point at which we can locate our knowledge along the "spectrum between deterministic knowledge and total ignorance" (Walker et al., 2003, 8). The point nearest to the ideal of completely deterministic knowledge along the spectrum is statistical uncertainty. More distant from that point are the scenario uncertainty and recognized ignorance points. Whereas statistical uncertainty has to do with any uncertainty that can be described adequately in quantitative terms, scenario uncertainty refers to plausible future events that, without any knowledge about probabilities, might happen. Finally, recognized ignorance implies that scientists know almost nothing about the system's behavior that would allow them to know what might happen in the future (Walker et al., 2003).
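Read as a classification scheme, these dimensions can be attached to any concrete uncertainty an assessor identifies. A minimal illustrative sketch (the Python names and the two example entries are ours, not Walker et al.'s):

    from dataclasses import dataclass
    from enum import Enum

    class Nature(Enum):
        EPISTEMIC = "imperfection of available knowledge"    # may shrink with research
        VARIABILITY = "inherent variability of the system"   # not epistemically reducible

    class Level(Enum):
        STATISTICAL = 1           # quantifiable probabilities
        SCENARIO = 2              # possible outcomes, no probabilities
        RECOGNIZED_IGNORANCE = 3  # we know that we do not know

    @dataclass
    class Uncertainty:
        location: str   # where in the model or theory the doubt sits
        nature: Nature
        level: Level

    # Example entries (illustrative):
    penetrance_estimate = Uncertainty("penetrance of a given mutation",
                                      Nature.EPISTEMIC, Level.STATISTICAL)
    patient_reaction = Uncertainty("a patient's reaction to a positive result",
                                   Nature.VARIABILITY, Level.SCENARIO)

Tagging uncertainties this way matters practically because, as noted above, only the epistemic ones warrant reinforced research programs; variability uncertainties have to be coped with rather than reduced.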
Uncertainty in the Field of Clinical Genetics

In the field of clinical genetics it is possible to identify the three dimensions of uncertainty mentioned above. This identification allows us to obtain a comprehensive picture of the uncertainties that emerge with regard to both the role of genes in biological development and pathogenesis and the benefit to patients of undergoing predictive genetic testing.
The Theoretical and Nature Dimensions of Uncertainty

As long as genes can be conceived of as "controllers" or "interactors", theoretical uncertainties emerge with regard to the basic conception that should guide our understanding of their role in biological development and pathogenesis. Stotz et al. (2004) have recently shown that whereas molecular biologists endorse a conception of gene function given in terms of "determining phenotypical outcome" or "coding for the primary structure of a protein", developmental biologists endorse a different conception, namely, that of genes "releasing and biasing the expression of latent morphogenetic capacities" or "providing a developmental resource, on a par with epigenetic and environmental resources, for the construction of the organism" (p. 667). While molecular biologists prefer to conceive of genes as "controllers", developmental biologists prefer to conceive of them as "interactors". It is evident that different research programs have emerged from these two
different ways of understanding gene function. Moss (2001) has called the research program of molecular biologists "Gene-P", whereas he identifies as "Gene-D" the research program of developmental biologists: "Gene P is the expression of a kind of instrumental preformationism […]. When one speaks of a gene in the sense of Gene-P one simply speaks as if it causes the phenotype [...]. A Gene-D is a specific developmental resource, defined by its specific molecular sequence and thereby functional template capacity and yet is indeterminate with respect to ultimate phenotypic outcomes" (pp. 87-88). To conceive of genes either as controllers or as interactors is, however, not a matter that results from scientific inquiry but a matter of research commitment: genes are "targets of research rather than something with which research is acquainted" (Stotz, 2004, 649; Rheinberger, 1997). As such, each of these conceptions should be considered in principle a matter of epistemological choice rather than a matter of ontological evidence. On the other hand, researchers explain uncertainties with regard to the role of genes in biological development and pathogenesis either as an epistemic problem or as an unavoidable feature of the inherent variability of biological systems. Whereas "Gene-P" researchers tend to consider these uncertainties as epistemic uncertainties, "Gene-D" researchers usually regard them as "variability uncertainties". This means that whereas the first category of researchers considers that "final evidence for the involvement of genes in human diseases must come from extensive epidemiological studies" (Peltonen and McKusick, 2001, 1224), the second group thinks that attempts to "assess the possible consequences of the unruly complexity arising from the genes' intersecting and interacting dynamics [with non-genetic factors] will frequently be doomed to failure" (Nunes, 2004, 3). The intensification of research seems to push the transformation of many of the statistical uncertainties about the role of genes in pathogenesis into scenario uncertainties (Colhoun et al.,
2003). Since statistical conclusions about this role frequently cannot be replicated from one study to another, knowledge about probable scenarios turns into knowledge about scenarios that are merely possible. The complexity of biological systems seems to be progressively untamable.1
The Level Dimension of Uncertainty: Statistical and Scenario Uncertainties

Medical and psychosocial uncertainties emerge with regard to the advantage of undergoing testing, the purpose of which is to identify genes that are presumably causally involved in specific pathogenesis. Whereas many of the medical uncertainties have frequently assumed the form of statistical uncertainties, most of the psychosocial ones have usually assumed the form of scenario uncertainties.
Medical Uncertainties

Medical uncertainties emerge with regard to the reliability of risk status assessment in the pre-test scenario, the validity and reliability of potential positive results, and the effectiveness of available treatments and preventive strategies.

• Reliability of pre-test risk assessment. Estimation of the risk status of potential candidates for predictive testing is crucial in order to ration institutional resources appropriately (Prior, 2001). As long as there are different methods for weighing different risk factors, conclusions about the pre-test risk status of a potential candidate for testing will differ depending on the method that the experts decide to use. In his research, Prior (2001) discovered "that the proportion of women allocated to a high risk (of breast cancer) category varied markedly - from 0.27 using one method, against 0.53 for a second method. A third method allocated only 0.14 of the high risk category" (p. 583). Since the lack of convergence between methods usually stems from the different relative weights that institutions and experts give to different factors, there is great uncertainty with regard to the model that should be implemented for assessing risk status in the pre-test scenario.

• Validity and reliability of potential positive results. Several uncertainties have emerged with regard to the validity and reliability of positive results. Evans et al. (2001a) have highlighted the uncertain meaning of a positive test that identifies a gene presumably associated with a specific disease: "A positive test […] always contains a substantial component of uncertainty, not only about whether a specific condition will develop, but also about when it may appear and how severe it will be" (p. 1053). Moreover, associations between genes and diseases frequently cannot be replicated from one study to another. For example, "the initial positive associations between the G460 polymorphism of alpha-adducin and hypertension have not been replicated by other studies" (Colhoun et al., 2003, 865). In the same way, studies developed to prove associations between genetic polymorphisms and diseases such as schizophrenia, autism, bipolar disease, multiple sclerosis and some forms of cancer have often revealed weak or conflicting results (Wilkie, 2001). Even genes such as BRCA1 and BRCA2, which have been considered useful predictors for breast and ovarian cancer, have shown very different levels of statistical meaningfulness among the available studies. Whereas the proportion of carriers who will develop breast cancer ranges from 36 to 85% across the different studies, the proportion who will develop ovarian cancer ranges from 10 to 44% (Antoniou et al., 2001); a simple numerical illustration of what such ranges mean for an individual carrier follows this list. The picture becomes more complicated in pharmacogenetic testing, because potential susceptibility to side-effects from exposure to a particular drug could be associated with special susceptibility to certain forms of toxicological cancer (Netzer and Biller-Andorno, 2004). As Wilkie (2001) has correctly pointed out, "the 'oligogenic model', which causally links a gene with a disease, looks increasingly unlikely" (p. 624). Indeed, most "oligogenic" predictive genetic tests show low clinical validity (predictive value).2 Exceptions to this rule are Huntington's disease, Multiple Endocrine Neoplasia Type 2, Phenylketonuria, Familial Adenomatous Polyposis and Hereditary Hemochromatosis. Predictive testing for complex diseases such as hypertension, diabetes, cancer and cardiovascular disease, among many others, falls into the category of low clinical validity tests. This circumstance clearly raises many uncertainties relating to their medical utility (Burke et al., 2001).

• Effectiveness of treatments and preventive strategies. Medical uncertainties have emerged with regard to the utility of testing when effective treatments are not available or when available treatments or preventive measures do not represent a real advantage for patients. Clearly, the degree of clinical utility of predictive tests depends not only on their validity or statistical reliability but also on the availability of either effective treatments or preventive means. Although early identification of gene carriers permits the use of effective surgery in cases such as Multiple Endocrine Neoplasia Type 2 and Familial Adenomatous Polyposis, a lack of both effective treatments and preventive measures is the rule. As is well known, the unavailability of effective prevention for Alzheimer's and Huntington's diseases makes testing clinically useless as well as potentially harmful. In other cases, the utility of testing becomes uncertain because treatments are not clearly effective or safe. The effectiveness of breast cancer chemoprevention with tamoxifen is uncertain because it increases endometrial cancer risk as well as the incidence of vascular diseases (Lerman et al., 2000). Preventive phlebotomy in patients "genetically susceptible" to Hemochromatosis may also result in unnecessary side-effects, because a significant proportion of gene carriers will never develop the disease. Nor is prevention necessarily improved by testing. For example, detection of BRCA1 and/or BRCA2 carriers does not imply better surveillance in patients older than 40, since regular mammography is already recommended for all women from the age of 40. Early detection of these genes would make sense only if recipients of testing had already considered mastectomy and/or oophorectomy as a preventive, but also risky, option. Predictive testing for hypertension or hypercholesterolemia might make no sense because universal prevention, screening and treatment are the rule (Evans et al., 2001a).
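The numerical illustration promised above can be given in a few lines. The following sketch is ours and purely illustrative: the BRCA penetrance range is the one cited above, while the prevalence, sensitivity and specificity figures for the hypothetical susceptibility genotype are invented to show what "low clinical validity" means in practice.

    # Bayes' rule: P(disease | positive test).
    def positive_predictive_value(prevalence, sensitivity, specificity):
        true_positives = prevalence * sensitivity
        false_positives = (1.0 - prevalence) * (1.0 - specificity)
        return true_positives / (true_positives + false_positives)

    # (1) Penetrance: the same positive BRCA1/2 result translates into a
    # lifetime breast cancer risk anywhere between 36% and 85%, depending
    # on which study one relies on.
    for penetrance in (0.36, 0.85):
        print(f"carrier lifetime risk under this study: {penetrance:.0%}")

    # (2) Clinical validity: a hypothetical susceptibility genotype for a
    # common complex disease (5% prevalence), carried by 30% of future cases
    # and by 15% of everyone else, has a positive predictive value below 10%.
    ppv = positive_predictive_value(prevalence=0.05, sensitivity=0.30,
                                    specificity=0.85)
    print(f"P(disease | positive genotype) = {ppv:.1%}")  # roughly 9.5%

On such numbers, more than nine out of ten positive results occur in people who will never develop the disease; this is what the low predictive value of "oligogenic" tests amounts to for an individual patient.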
Psychosocial Uncertainties

Psychosocial uncertainties emerge with regard to the psychological, behavioral and social effects that testing may cause in the patient and his/her relatives.

• Potential negative psychological effects after receiving a positive result. A substantial body of evidence shows that negative psychological effects may arise in adult patients after receiving a positive result (Broadstock et al., 2000). Michie et al. (2001 and 2005) have shown that a significant proportion (43%) of patients experience depression and/or anxiety once they have been informed about a positive result. Children also exhibit a tendency to be more anxious and/or depressed when they receive a positive result (Michie et al., 2001). Furthermore, these symptoms do not seem to diminish when clinicians improve explanations about the result's clinical significance in both the pre-test and post-test scenarios, i.e., by comparing genetic risk status with other risk situations (Lerman et al., 1995), because what seems to be more important in causing those symptoms is the patient's psychological functioning: namely, self-esteem, optimism and his/her framework of beliefs about genes, inheritance and genetic testing (Michie et al., 2005). Asking patients to anticipate negative effects has not been successful either, because patients often tend to underestimate their emotional reactions (Lessick et al., 2001). This set of circumstances makes it hard to know whether a patient will suffer anxiety and/or depression after receiving a positive result. Here again, uncertainty is the rule.

• Counterproductive potential impacts. Evidence suggests that information about genetic susceptibility for particular forms of cancer (breast and colorectal cancer) does not necessarily increase a patient's motivation to undertake surveillance programs or prophylactic treatments. On the contrary, it may decrease motivation (Evans and Marteau, 2001b). As long as large numbers of patients understand genetic susceptibility as a non-modifiable condition, information about genetic susceptibility may cause counterproductive results. Sometimes a sense of fatalism, based on the belief that genetic predispositions are immutable, reduces motivation to change unhealthy behavior (such as smoking or consuming saturated fats). Lessick et al. (2001) have shown that it is not rare that, under the umbrella of positive results, patients become apathetic about undergoing regular screening programs as well. Nonetheless, these undesirable behaviors are unpredictable.

• Consequences for relatives. The familial nature of genetic factors implies that a patient's positive result might be important for relatives, who might on that ground be able to make autonomous decisions about how to deal with their own medical risks. As a consequence, patients are charged with the responsibility of deciding how and when to communicate testing results to their relatives. As Forrest et al. (2003) have shown, telling family members about genetic risk "was generally seen as a family responsibility" (p. 317). Such a responsibility may cause emotional distress to patients, who usually know neither how to inform their relatives nor what psychosocial effects the information might have on them. The emotional distress that this responsibility may cause patients is, here again, unpredictable.

• Negative social effects. Both an individual and his or her relatives can be discriminated against on the basis of their genetic profile. Although labor and insurance discrimination are the most frequent ways in which people are affected (National Partnership, 2004), it is well known that discrimination based on genetic profile also bears on legal and institutional decisions about parental rights, penalties and parole status (Dolgin, 2001). As predictive genetic tests become cheaper, the incentives to use this information to discriminate against individuals will increase (Advisory Committee on Health Research, 2002). Discrimination may be stronger in developing countries because of the lack of legislation prohibiting the use of predictive genetic tests by insurance companies, employers or potential employers. In countries with discriminatory practices against women, genetic information can also make women unmarriageable and stigmatized (Davis, 2001). Once more, discrimination represents a category of non-predictable potential negative effects of predictive genetic testing.
TAMING UNCERTAINTIES IN CLINICAL GENETICS

Taming strategies are focused on framing uncertainties in a way that lets institutions reduce and manage them.3 This process includes using familiar ways of talking about uncertainties, hiding the value loadings and particular assumptions that support modeling the role of genes in development, and disregarding certain sources of uncertainty. The taming process, as Smits (2004) has shown, aims either at exorcising uncertainties or at adapting them to institutional habits. According to Smits, when institutions have to deal with a phenomenon that fits into two cognitive categories habitually considered mutually exclusive (e.g., knowledge vs. ignorance, or facts vs. values), they cope with it by using defensive strategies. A phenomenon that simultaneously fits into two mutually exclusive categories is described by Smits as a cultural monster, because it causes both cognitive discomfort and fascination. As long as uncertainty fits into the categories of both knowledge and ignorance, it constitutes a cultural monster that threatens "a symbolical order in which science is seen as the producer of authoritative objective knowledge" (Van der Sluijs, 2006, 74). In order to tame this monster, institutions mobilize their cognitive forces in a strategic way: whereas exorcising strategies seek to expel uncertainty from institutions, adapting strategies attempt "to fit the uncertainty monster back into the categories" (Van der Sluijs, 2006, 74). These exorcising and adapting strategies can also be recognized in the field of clinical genetics, where they pursue two ends: reinforcing the belief in the power of science to solve and control uncertainties, and keeping some uncertainties hidden or reducing them to manageable cognitive formats. Certainly, scientific institutions tend to reduce uncertainties to a narrow spectrum in which many of them are omitted. As Wynne (2001) has stated, these institutions "reduce intellectual representations of issues they have to deal with to forms which appear to reflect only control and orderly management" (p. 1). In the field of clinical genetics, scientific institutions frequently omit a significant part of the uncertainties involved, and reduce medical uncertainties to manageable risks. Three general exorcising and adapting strategies are implemented by institutions for dealing with uncertainties: framing uncertainties as medical risk, treating genes as exceptional informational entities, and purifying methods of risk assessment.
Framing Uncertainties as Medical Risk

Talking about "medical risk" allows institutions to manage uncertainty as a problem of giving a patient information about known probabilities of suffering from a particular disease; that is to say, as a problem of clinical risk communication. By talking about "medical risk", institutions expel some dimensions of the uncertainty that patients should recognize in order to make decisions about the activation of predictive genetic testing technology. "Risk talk" entails a specific message about what should be regarded as relevant and who should be in charge of the situation (Douglas, 1986). In this way, the whole spectrum of uncertainties is reduced to a picture in which the probabilities of both suffering and preventing a particular disease occupy the main place (Julian-Reynier et al., 2003); at the same time, the problem of disclosing them is put in the hands of new experts: genetic counselors. Communication of uncertainties is then transformed into the problem of giving a "more accurate understanding" of the medical risks involved (Lloyd et al., 2001, 2413).4
Two translations frame uncertainties as medical risk. Firstly, by transforming epidemiological information, which refers to the probability of occurrence of a particular disease within a population, into information about individual susceptibility, experts are allowed to interpret a potential positive result as a direct indicator of a certain amount of individual medical risk (Prior, 2001). Secondly, by transforming individual risk into pathology, uncertainties become a clinically manageable condition (Gifford, 1986). This means that institutions and patients must deal with risk as if it were a clinical condition demanding some kind of medical care. By giving medical risk information, institutions provide patients with tacit instructions about the importance of considering possible medical solutions (Sachs, 1999). Since giving risk information implies this kind of meta-message, uncertainties with regard to the effectiveness of treatments and preventive strategies can easily be transformed into the medical risks involved in the available solutions; i.e., into the risks involved in prophylactic surgery.5 As is well known, a clinical relationship, which involves not just the classical physician-patient relationship but the relationship between patients and the whole spectrum of institutional agents, is regulated by the normative idea of the good of the patient. As a consequence, the patient's expectations within the clinical scenario favor receiving "information" about a new technology as implicit advice to use it (Thomasma & Pellegrino, 1981). In this way, institutional agents (counselors and physicians) provide patients with tacit recommendations on how to perceive a new medical technology as something good duly subjected to institutional control (Sachs, 1999).
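The first of these translations can be made concrete with a simple sketch. The figures below are illustrative assumptions, not clinical estimates; the point is only that the "individual risk" quoted after a positive result depends on a tacit choice of reference class that the translation from epidemiology to individual susceptibility hides:

```python
# Hypothetical illustration of the "epidemiology -> individual susceptibility"
# translation. All figures are assumed for the sake of the example.

# Lifetime disease risk among carriers of a susceptibility variant, as
# estimated under two different study designs:
penetrance_estimates = {
    "high-risk families (ascertainment-biased sample)": 0.80,
    "population-based cohort": 0.40,
}

for reference_class, penetrance in penetrance_estimates.items():
    # The same positive test result is reported to the patient as a single
    # "individual risk"; which figure is quoted depends on a choice of
    # reference class that usually stays implicit in the consultation.
    print(f"Positive result read against {reference_class}: "
          f"lifetime risk quoted as {penetrance:.0%}")
```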
Treating Genes as Exceptional Informational Entities

The theoretical and nature dimensions of uncertainty are hidden by treating genes as exceptional informational molecules. This treatment tacitly implies that understanding their role in development and pathogenesis is a matter of refining an already available field of knowledge. According to this "central dogma", genes are packets of instructions for biological development (protein synthesis), whereas non-genetic factors are just a source of materials and conditions necessary for the expression of those instructions. As Oyama (2000) has stated, this conception implies that the "gene plays mind to environment's matter" (p. 340). This way of thinking promotes an asymmetrical conception of the role of genes in biological development and pathogenesis, in which genetic sequences are considered controllers of the biological process, whereas non-genetic factors are regarded as either facilitators or hindrances of genetic information (Griffiths, 2005). Conceiving of genes as programs induces not only the idea that phenotypic traits are preformed in the genome, but also the perception that there is a "one-to-one relationship between genotype and phenotype" (Van der Weele, 1995, 87). It is obvious that under this theoretical image of the genome, potential positive results tend to be considered very meaningful predictors of health. Juengst (1995) has confirmed that patients often consider predictive genetic testing to have more predictive power than other kinds of health risk assessment, because the link of genetics to disease in the public mind is defined by examples in which genes are shown as "future diaries" or "crystal balls"; e.g., Huntington's disease. Such images encourage both genetic determinism and reductionistic clinical accounts of diseases: "the more genetic information is treated as special, the more special treatment will be necessary" (Murray, 1997, 71). The idea of "a gene for X" encourages reductionistic accounts of causation in biology. This way of talking tends to obscure the fact that there is no gene for X-ishness (a trait) unless one specifies which environment and population are involved. As Lewens (2002) has correctly pointed out, there is simply no answer to the question of whether a given gene is a gene for X-ishness. Nonetheless, talk of genetic risk factors can suggest that a gene raises the chances of having a trait in any circumstance, which is clearly wrong. By describing the role of genes in development and pathogenesis as that of fixed programs, scientists favor genetic determinism.6 Using this picture, scientists can also be confident about the epistemic nature of uncertainty; namely, the view that uncertainties will disappear if further research is done.
Purifying Methods of Risk Assessment

Despite the fact that different statistical methods addressing the same issue often fail to converge in their outcomes, there is a strong belief among scientists in the field of clinical genetics that medical uncertainties can be reduced by further research. For example, Zimmern et al. (2001) have suggested that the dissection of the environmental and genetic causes of diseases will be successful if governments, research groups, universities and investors decide to work together. Khoury et al. (2004) share Zimmern et al.'s confidence in reducing uncertainties by reinforcing research programs when they state that "it is imperative to develop and agree on common epidemiological methods and approaches that can be used to generate and test hypothesis on genetic and environmental influences and gene-environment interactions, and that this will allow pooling and synthesis of results across different populations and groups" (p. 940). Methods such as "Mendelian randomization", "neural networks", "combinatorial partitioning" and "multifactor dimensional reduction", which are focused on reducing type I errors (false positive associations between a gene and a disease) and type II errors (false negative associations), have recently been developed. Until now, however, the lack of convergence between them has been the general rule (Khoury et al., 2004). The limitations of each of these methods have been highlighted as well (Smith & Ebrahim, 2003). By optimizing available methods along these lines, confidence in the power of science to reduce medical uncertainties can be maintained. The purification of epidemiological methods is not the only way in which institutions reduce medical uncertainties. Methods for determining the pre-test risk status of potential candidates for predictive genetic testing are also subjected to this strategy. Prior (2001) has noted that when two different models for assessing risk (namely, the Gail model and the Claus model) were applied to the same case, the estimated breast cancer risk varied hugely: from 0.19 using the first to 0.39 using the second. Since these different methods for defining risk status are not assessable in terms of objective scientific reasons, subjective evaluations by experts become the main source both for selecting one of them and for interpreting its outcomes. By using expressions such as "imperfect" or "not sufficiently flexible", experts provide a "qualitative" way by which it is possible both to assess alternative methods and to alter the categorization stemming from risk mathematics. By introducing non-scientifically based evaluations into the process of both selecting a model for determining pre-test risk status and interpreting its outcomes, medical uncertainties with regard to who could obtain medical benefit from undergoing predictive genetic testing can be reduced. As Prior (2001) has stated, "this means that there is considerable room for the application of expert judgment in arriving at any risk assessment. Indeed, geneticists are keen to indicate how essential access to expert advice actually is" (p. 586). In that sense, the decision by which a person is allocated to a, say, high-risk category seems, as Prior (2001) has pointed out, arbitrary: "risk categories have wheels" (p. 579).
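Prior's observation can be restated schematically. The two functions below are invented stand-ins, not the actual Gail or Claus models; only the divergence they produce for the same case is the point:

```python
# Schematic illustration of model-dependent risk figures. These are
# hypothetical toy models, NOT the Gail or Claus models themselves.

def model_a(age, affected_first_degree_relatives):
    # Invented model that weights family history lightly.
    return min(1.0, 0.05 + 0.01 * affected_first_degree_relatives
                    + 0.002 * (age - 40))

def model_b(age, affected_first_degree_relatives):
    # Invented model that weights family history heavily.
    return min(1.0, 0.05 + 0.15 * affected_first_degree_relatives
                    + 0.002 * (age - 40))

# The same patient receives two quite different "objective" risk figures,
# so choosing between models is itself an act of expert judgment:
patient = dict(age=45, affected_first_degree_relatives=2)
print(f"Model A: {model_a(**patient):.2f}")  # 0.08
print(f"Model B: {model_b(**patient):.2f}")  # 0.36
```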
REVISING THE ROLE OF GENES

In order to discuss why conceiving of genes as exceptional informational molecules is problematic, it is necessary to show, firstly, the evidence according to which genes appear not as preexistent intentional entities but as products of the dynamics of the developmental system, and secondly, the reasons why the developmental system as a whole, rather than genes, is better regarded as the intentional structure in biology.
Genes as Products

Recently, K. Stotz (2004) has systematically explained the mechanisms by which information encoding a protein is created by the co-specification of different phenotypic agents "rather than inherited" (Gilbert and Sarkar, 2000, 6). These mechanisms include the selective use of the information contained in DNA sequences, the creation of new templates for protein synthesis, and the strong regulation of DNA, mRNA and protein synthesis by a network of transcriptional proteins, non-coding RNAs and the environment.
Selective Use of DNA Sequences

Genetic modules can indeed be "re-assorted to form new genes in reaction to new types of regulation" (Stotz, 2004, 5). These new types of regulation include the activity of many molecules (promoters, enhancers, non-coding introns, coding exons and transregulatory modules). Under their influence, different mRNA molecules can be transcribed from the same gene (Mottus et al., 1997) or from two different genes (cotranscription) (Finta et al., 2002).
Creation of New Templates

Even new templates for protein synthesis (mRNA) can be created without a pre-existent DNA sequence. The mechanisms that explain these phenomena include trans-splicing, mRNA editing and translational recoding (Stotz, 2004). Trans-splicing is the way in which different mRNA segments are spliced together to create new mature mRNAs and hence new proteins (Finta et al., 2002). Protein trans-splicing is the way in which one protein can be synthesized by the translation of different mRNA segments that are processed separately (Handa et al., 1996). RNA editing consists of the modification of a sequence transcribed from DNA by the insertion, deletion or substitution of some (even some hundreds) of its nucleotides (Samuel, 2003). Translational recoding is the mechanism by which the ribosome yields different proteins by reading an overlapping codon in different ways, suspending reading of an overlapping codon, or redefining codon meaning (Stotz, 2004).
Regulatory Network

Important regulatory mechanisms are responsible for the activation or repression of genes, the attenuation of transcription and translation of mRNA, the inhibition of translation at the ribosomal level, the modulation of DNA replication, and the emergence of stable epigenetic effects (Stotz, 2004). Many of these effects depend directly on the activity and number of transcriptional proteins and the presence of non-coding RNAs. Portions of mRNA form special non-coding RNA receptors for particular environmental molecules (riboswitches) that are involved in regulatory activities such as the control of transcription. These non-coding RNAs also participate in assembling the molecular complex (the spliceosome) that is necessary for splicing nuclear genes (Mansfield et al., 2002) and tRNA segments, among other functions. This set of factors reacts to environmental influences such as hormones, temperature changes, nutritional resources (Van der Weele, 1995) and psychosocial inputs (Meaney, 2001), which "can have stable epigenetic effects at the genetic level" (Stotz, 2004, 12). Such epigenetic effects include variations in the distribution of chromatin along DNA sequences, methylation or demethylation of genes, and alterations in the architecture of nuclear compartmentalization (Francastel et al., 2000).
Intentionality: From the Genome to the Developmental Matrix

Intentional information, that is to say, the information for biological development, has classically been located in the genome. The genome seems to be the only intentional biological structure because its particular sequences specify, by semantic means, particular products that are necessary "to cause the development of organisms able to survive in a given environment" (Maynard Smith, 2000, 193). By contrast, other biological structures such as enzymes do not exhibit any semantic role; in accordance with this classical picture, their role in biological development is just a chemical one. However, the property of intentionality can also be recognized in non-genetic causes of development. Sterelny et al. (1996) have explained how the notion of "replicator", that is to say, the notion of "informational and intentional biological structures", should be extended from the genome to a more complex system in which epigenetic factors are included. Indeed, epigenetic factors co-specify protein products and new transcriptional templates, among other functions. As long as traits and cycles of development emerge, as we saw above, from the interaction between genetic and non-genetic factors within an already available developmental system, the property of intentionality should be extended from the genome to the developmental matrix in which they interact (Griffiths, 2001). As Godfrey-Smith (2000) has highlighted, considering genes as the only intentional biological structures can be described as the product of an observational bias. According to this bias, since the final products of a genetic sequence (proteins) are constantly kept in mind, only the proximal products of non-genetic factors are considered; i.e., the chemical catalysis of a reaction. This observational imbalance impedes the identification of the intentional roles of non-genetic factors. From a higher point of view, the property of intentionality should be predicated of the developmental matrix as a whole. Usually, considering genes as the only intentional biological entities is grounded in a mistaken connection between the semantic properties of genetic sequences and their role in producing an adapted organism. It is evident that the semantic properties of the genome do not necessarily imply this role: the CCC codon will code for proline "even when it is part of a section of junk DNA with no selection history or when it has been inserted by an incompetent biotechnologist who intended it to represent leucine" (Griffiths, 2001, 403). Semantic properties of the genome imply the intentional role of producing an adapted organism only when they act as meaningful signals within the "extended replicator"; namely, the developmental matrix as a whole. In this sense, isolated replicators such as genes are not carriers of intentional information but carriers of causal information. Since their role in development is a function of the system dynamics in which they act, it should be evident that the property of intentionality should be predicated not of isolated replicators such as genes but of the complex system in which they are embedded (Griffiths, 2005).
AVOIDING PATERNALISM

Paternalism and Framing Effects

Paternalism has been regarded as "the interference with a person's actions or knowledge, against that person's will, for the purpose of promoting that person's good. Such interference is thought to be inconsistent with autonomy because the intervention involves a judgment that the person is not able to decide by himself or herself how to pursue his or her own good" (Trout, 2005, 408-409). According to Feinberg (1986), "voluntariness-vitiating factors make choices substantially non-voluntary" (p. 12). In that sense, such factors must be considered paternalistic. Prevention of "voluntariness-vitiating factors" within the process of making choices can be ensured by providing the agent with a background of complete, non-framed (unbiased) information (Trout, 2005). Framing a problem can be thought of as a form of paternalism, since it bypasses the autonomous deliberative process (Hurley, 2004). Although the effects of framing a problem are often not intentionally manipulated, because they occur "without anyone's being aware of the impact of the frame on the ultimate decision" (Tversky and Kahneman, 1984, 346), framing processes violate a basic principle of autonomous choice; namely, the principle of invariant understanding of information (Tversky and Kahneman, 1984). According to this principle, an agent's choice between prospects should not change across alternative but equivalent formulations of those prospects. As long as decisions are predictably influenced by the way the prospects are framed, they cannot be understood as the product of autonomous choice between unbiased prospects. Thus, by means of rhetorical devices, autonomous deliberation and choice are bypassed.
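Tversky and Kahneman's well-known disease-outbreak demonstration shows how invariance fails in practice. The minimal restatement below, using their illustrative numbers, verifies that the "gain" and "loss" framings describe extensionally identical prospects, even though subjects reliably choose differently between them:

```python
# Minimal restatement of Tversky and Kahneman's framing demonstration.
# Both framings below describe the same pair of prospects; only the
# wording ("saved" vs. "die") changes.
from fractions import Fraction

population = 600

# Framing 1 (gains): 200 people saved for sure, vs. a 1/3 chance of
# saving everyone (and a 2/3 chance of saving no one).
saved_certain = 200
expected_saved_gamble = Fraction(1, 3) * population   # exactly 200

# Framing 2 (losses): 400 people die for sure, vs. a 2/3 chance that
# everyone dies (and a 1/3 chance that no one dies).
died_certain = 400
expected_deaths_gamble = Fraction(2, 3) * population  # exactly 400

# The two framings are numerically equivalent:
assert saved_certain == population - died_certain
assert expected_saved_gamble == population - expected_deaths_gamble
```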
The Limited Role of Genetic Counseling

Many authors have stated that genetic counseling allows the complexities and uncertainties involved in predictive genetic testing to be discussed with patients in a non-directive way. According to them, the "counselor provides information about genetic risk and explains choices regarding testing or management, but does not provide recommendations about the appropriate course of action" (Burke et al., 2001, 234). This non-directive conception of counseling is thus aimed at helping the patient decide on a course of action in accordance with his or her personal values. Therefore, the counselor must inform patients in a neutral way about the accuracy and reliability of tests, and about the potential medical benefits, psychological consequences and social implications of undergoing testing. This goal is, however, very difficult to fulfill. There are at least four ways in which genetic counseling bypasses autonomous deliberation. Firstly, genetic counselors and patients seem to interpret probabilities differently: "Genetic counselors have been found to interpret a 15% to 50% risk figure as 'high' or 'very high' whereas patients tend to interpret this as 'moderate'" (Wertz et al., 1986). This difference may affect the patient's image of the situation at stake, especially when counselors prefer words to numbers in communicating risk. Secondly, there is some evidence on the ways in which counselors may steer decisions: "direct advice, opinions about issues relevant to the decision and selective reinforcement of statements made by the patient" (Michie et al., 2005, 594). Thirdly, counselees have shown "a stronger psychosocial agenda compared to counselors" (Pieterse et al., 2005, 1677), while at the same time they may be reluctant to verbalize psychosocial issues during a genetic consultation. The consequence is that patients may decide without sufficient awareness of the psychosocial issues involved. Fourthly, counselors do not talk about the framework of beliefs (cognitive representations) that patients bring to the consultation, which determines how they perceive risk and act on the information received (Marteau & Weinman, 2006). During a genetics consultation, the beliefs through which patients filter and process information are not elicited and subjected to deliberation. These beliefs include socially shaped presumptions about genes, genetic risk and health institutions.
Promoting Autonomy by Postnormal Means

Since, in the field of clinical genetics, institutional defensive strategies frame uncertainties by the rhetorical means described above, it is necessary, in order to avoid bypassing paternalism, that all dimensions of uncertainty can be subjected to autonomous deliberation within the informed consent process for predictive genetic tests. Indeed, mapping and disclosing uncertainties in an open way is decisive for ensuring the patient's autonomy. As Parascandola et al. (2002) have explained, "patients most want to introduce their own extra-medical values when medical factors alone do not seem to be decisive" (p. 246), and this is precisely the circumstance in the field of clinical genetics. Hence, "a widening in focus from reducing uncertainties to coping with untamable uncertainties and complexities" is necessary (Van der Sluijs, 2006, 68). To cope with them appropriately, it is not enough to implement what Trout (2005) has called the "opposite approach" (p. 419). This approach consists of providing patients with opposing reasons for a judgment made; say, for allocating a potential candidate for predictive genetic testing to the high-risk category for breast cancer. "Providing patients and subjects with both sides of the story in the hopes of avoiding the gaps in understanding that framing effects may produce" (Faden and Beauchamp, 1986, 321) often fails because, as Trout (2005) has shown, to be effective it demands a set of conditions that are difficult to fulfill, such as motivation on the part of the experts, awareness of the distorting influence that the framing of information causes, and an "undistracting environment to carry it out" (p. 419). On the one hand, implementing the opposite approach implies that experts need to be effectively educated and motivated, a condition that does not seem easy to satisfy. On the other hand, the standardization of procedures makes the institutional appropriation of complex strategies for mapping and communicating uncertainties difficult. However, there is yet another reason to consider the opposite approach unsuitable for coping with untamable uncertainties and complexities. Since the strategy of opposing reasons consists of generating pros and cons for an already framed problem, it cannot be implemented when intellectual representations of a problem are not available; i.e., when some dimensions of uncertainty are hidden. It is evident that unavailable representations cannot be subjected to any rational scrutiny. Therefore, in order to ensure transparency and avoid bypassing paternalism, it is mandatory to implement what Funtowicz and Ravetz (2003) have called a postnormal science approach. According to them, this approach should be understood as:
•	An open-to-uncertainty procedure by which "facts" about uncertainties and underlying "values" are conceived of as an integrated domain of things that should be discussed in a transparent way.
•	A strategy by which uncertainties are understood not in a "true" way but in a "quality" way. "Quality" has to do with dialogue, involving a plurality of perspectives and mutual learning within extended peer communities, for mapping, assessing and deciding how uncertainties should be communicated.
•	A deliberative mechanism by which epistemic demands, which have to do with the quality of knowledge, as well as normative ones, which have to do with respect for autonomy, can be appropriately fulfilled.
Extended peer communities, which include citizens' juries, focus groups and consensus conferences, among other deliberative mechanisms, are the means by which the postnormal science approach should be implemented in practice. These methods satisfy both scientific and normative expectations. From a cognitive point of view, the deliberative process provides two ways to deactivate framing effects. On the one hand, deliberation permits framing values to be highlighted and discussed; on the other hand, deliberation implies that potential recipients of predictive genetic testing can be exposed to diverse images of the problem; namely, of the role of genes in pathogenesis and of the advantage of undergoing predictive genetic testing. These methods let them identify and prioritize uncertainties, define the robustness of the scientific images at stake, justify the values by which disclosure of uncertainties should be guided, decide the contexts and representations that should guide the disclosure of uncertainty information in concrete scenarios, and develop educational plans for the general population (Janssen et al., 2005). In this sense, the postnormal science approach can really ensure both transparency and reasoned outcomes.7 From a normative point of view, transparent deliberative procedures constitute the way in which the public should be engaged in the process of mapping, assessing and deciding how uncertainties should be communicated. In the face of multiple uncertainties, patients' decisions depend on the availability of different perspectives, which allow them to be involved in a learning process about how to cope with those uncertainties. In these cases, respect for autonomy necessarily implies the provision of deliberative procedures through which potential recipients of the new technology can realize what is at stake (O'Neill, 2001). The extended peer-community method therefore represents the legitimate way to involve them in the process of defining the terms of the technology's activation. Since, through this means, both alternative prospects for mapping, assessing and communicating uncertainties and alternative public educational programs will be critically discussed, final agreements will be duly immunized against unjustified values and prejudices.
Two Stages of Informed Consent

In order to secure "genuine, legitimate informed consent" in the face of new genetic technologies, O'Neill (2001) has suggested "thinking of informed consent as having two quite different stages" (p. 701). According to her point of view, seeking consent should imply seeking both public consent to systems and individual consent to particular acts. Whereas the first implies public legitimation of the institutional ways of activating those technologies, the second implies individual decisions within the set of institutional ways of activating genetic technologies that have already been legitimated. In the face of predictive genetic testing, the postnormal science approach provides the appropriate means by which it is possible to generate public consent to the institutional ways of mapping, assessing and disclosing uncertainties, and to the public educational programs on these issues. Only once this background has been provided can individual consent be genuine; that is to say, duly protected against bypassing paternalism.
Respect for Relatives' Autonomy

Since undergoing predictive genetic testing might have medical and psychosocial implications for a patient's relatives, the patient should ask each relative, before undergoing the test, whether he or she would want to be informed about potential results. This procedure avoids giving tacit meta-messages, which might be conveyed if the patient asks relatives only after having undergone testing. At the same time, it ensures that relatives can make autonomous decisions with regard to knowing or not knowing potential results. Despite the guarantee of confidentiality safeguards in the clinical setting, patients usually consider it a matter of responsibility to decide whether, and if so, how to disclose this kind of information to their relatives (Hallowell et al., 2003). The procedure proposed above offers an alternative to both the pragmatic and the prudent models of coping with that responsibility. According to Forrest et al. (2003), patients assume either a pragmatic model or a prudent one in order to decide when and how to inform their relatives about testing results. Given that the pragmatic model implies full communication of results, patients who fit within this model usually think that "the society is currently on the brink of a technological breakthrough in the treatment or prevention of diseases" (Hallowell et al., 2003, 10). Their appraisal is therefore influenced by what Hallowell et al. (2003) have called "discourses of hope" (p. 76); namely, the view that scientific progress has enabled them to tell relatives about results. On the other hand, given that the prudent model implies looking for the appropriate opportunity in order to cause relatives less anxiety or distress, prudent patients frequently prefer either to wait for a good moment to tell relatives about results or to delegate the disclosure responsibility to others (Hallowell et al., 2003). As long as both of these models involve substantive assumptions about what is good for relatives, they must be regarded as paternalistic ways of solving the "disclosure problem". Asking relatives before testing avoids these sorts of paternalistic assumptions. It should be clear that relatives can only answer the question of whether or not they wish to be informed about results in a genuine way if they are provided with an appropriate picture of the several uncertainties involved in the activation of predictive genetic testing. Therefore, public educational programs about these issues are mandatory. Educational programs reviewed by extended peer groups must provide people with the background necessary to decide in a genuinely autonomous way.
CONCLUSION

In order to map the different dimensions that uncertainty assumes in the field of clinical genetics, I have used the typology of uncertainty that Walker et al. (2003) have recently proposed. Since it has permitted the identification and assessment of uncertainties related both to the role of genes in pathogenesis and to the advantage of undergoing genetic testing, this typology has also been very useful for understanding how institutional devices for taming uncertainties function. Taming strategies are focused on framing uncertainties as if they were under scientific control. These framing effects are generated by rhetorical means, which involve reducing the dimensions of the problem to manageable, biased formulations. As long as they vitiate the cognitive conditions under which decisions can be made in a transparent way, they should be considered paternalistic ways of bypassing autonomous deliberative processes. The postnormal approach constitutes a very suitable way to overcome framing effects and generate public legitimation of the procedures by which institutions activate predictive genetic testing. Instead of calling on clinicians to "think harder" or "to give some thought to how they present uncertainty" (Parascandola et al., 2002, 254), the postnormal approach makes a case for supporting broad deliberation in coping with epistemic and normative problems. Unfortunately, activation of the postnormal approach does not seem easy to carry out. As Lorenzoni et al. (2007) have stated, it is difficult to implement because of "the reticence of current structures in the facilitating process intended to address these issues in constructive ways" (p. 16). Certainly, complex emergent situations are challenging society to rethink both agency (O'Neill, 2001) and knowledge. In the face of these new complexities, "truth" should be replaced, as Funtowicz and Ravetz (2003) have claimed, by "quality". "Quality" implies that knowledge should be framed by extended peer communities rather than generated by normal scientific bodies. Instead of seeking to reduce or eliminate uncertainties by scientific means, quality-based processes are committed to transparency in the social assessment of scientific discourse. Finally, asking relatives whether they would want to be informed about potential results before a patient undergoes testing remains the appropriate way to respect their autonomy within this process.
REFERENCES

Advisory Committee on Health Research (2002). Genomics and world health (Report). Canada: World Health Organization.

Antoniou, A., Gayther, S., Stratton, J., Ponder, B., & Easton, D. (2000). Risk models for familial ovarian and breast cancer. Genetic Epidemiology, 18, 173-190.

Broadstock, M., Michie, S., & Marteau, T. (2000). The psychological consequences of predictive genetic testing: A systematic review. European Journal of Human Genetics, 8, 731-738.

Burke, W., Pinsky, L., & Press, N. (2001). Categorizing genetic tests to identify their ethical, legal, and social implications. American Journal of Medical Genetics, 106, 233-240.

Colhoun, H., & McKeigue, P. (2003). Problems of reporting genetic associations with complex outcomes. The Lancet, 361, 865-872.

Davis, D. (2001). Genetic dilemmas: Reproductive technology, parental choices, and children's futures. London: Routledge Publishers.

Dolgin, J. (2001). Ideologies of discrimination: Personhood and the genetic group. Studies in History and Philosophy of Biological and Biomedical Sciences, 32(4), 705-721.

Douglas, M. (1986). Risk acceptability according to the social sciences. New York: Russell Sage Foundation.

European Commission (2004). Ethical, legal and social aspects of genetic testing: Research, development and clinical applications. Luxembourg: Office for Official Publications of the European Communities.

Evans, J., & Marteau, T. (2001b). Genetic testing: Studies point to variable prognostic abilities and question if testing results in behavioral change. Genomics and Genetics Weekly, June 1, 11-12.

Evans, J., Skrzynia, C., & Burke, W. (2001a). The complexities of predictive genetic testing. British Medical Journal, 322, 1052-1056.

Faden, R., & Beauchamp, T. (1986). A history and theory of informed consent. New York: Oxford University Press.

Feinberg, J. (1986). Harm to self. New York: Oxford University Press.

Finta, C., Warner, S.C., & Zaphiropoulos, P. (2002). Intergenic mRNAs: Minor gene products or tools of diversity? Histology and Histopathology, 17(2), 677-682.

Forrest, K., Simpson, S.A., Wilson, B.J., van Teijlingen, E.R., McKee, L., Haites, N., & Matthews, E. (2003). To tell or not to tell: Barriers and facilitators in family communication about genetic risk. Clinical Genetics, 64, 317-326.

Francastel, C., Schübeler, D., Martin, D., & Groudine, M. (2000). Nuclear compartmentalization and gene activity. Nature Reviews Molecular Cell Biology, 1, 137.

Funtowicz, S.O., & Ravetz, J. (2003). Post-normal science. International encyclopaedia of ecological economics, 1-10. Retrieved October 24, 2006, from http://www.ecoeco.org/pdf/pstnormsc.pdf

Funtowicz, S.O., & Ravetz, J.R. (1990). Uncertainty and quality in science for policy. Dordrecht: Kluwer Academic Publishers.

Gifford, S.M. (1986). The meaning of lumps: A case study of the ambiguities of risk. In C. Janes, R. Stall & S.M. Gifford (Eds.), Anthropology and epidemiology: Interdisciplinary approaches to the study of health and disease (pp. 213-246). Dordrecht: D. Reidel.

Gilbert, S., & Sarkar, S. (2000). Embracing complexity: Organicism for the 21st century. Developmental Dynamics, 219, 1-9.

Godfrey-Smith, P. (2000). Information, arbitrariness, and selection: Comments on Maynard Smith. Philosophy of Science, 67, 202-207.

Gostin, L. (1995). Genetic privacy. Journal of Law, Medicine and Ethics, 23, 320-330.

Griffiths, P. (2001). Genetic information: A metaphor in search of a theory. Philosophy of Science, 68, 394-412.

Griffiths, P. (2005). The fearless vampire conservator: Philip Kitcher, genetic determinism and the informational gene. In E.M. Neumann-Held & C. Rehmann-Sutter (Eds.), Genes in development: Rereading the molecular paradigm (pp. 175-197). Durham, NC: Duke University Press.

Gutmann, A., & Thompson, D. (1997). Deliberating about bioethics. Hastings Center Report, 27(3), 38-41.

Hallowell, N., Foster, C., Eeles, R., Ardern-Jones, A., Murday, V., & Watson, M. (2003). Balancing autonomy and responsibility: The ethics of generating and disclosing genetic information. Journal of Medical Ethics, 29, 74-79.

Handa, H., Bonnard, G., & Grienenberger, J. (1996). The rapeseed mitochondrial gene encoding a homologue of the bacterial protein Ccl1 is divided into two independently transcribed reading frames. Molecular & General Genetics, 252(3), 292-302.

Hansson, S.O. (1996). Decision making under great uncertainty. Philosophy of the Social Sciences, 26(3), 369-386.

Hurley, S. (2004). Imitation, media violence, and freedom of speech. Philosophical Studies, 117(1/2), 165-218.

Janssen, P.H.M., Petersen, A.C., Van der Sluijs, J.P., Risbey, J., & Ravetz, J.R. (2005). A guidance for assessing and communicating uncertainties. Water Science and Technology, 52(6), 125-131.

Juengst, E. (1995). The ethics of prediction: Genetic risk and the physician-patient relationship. Genome Science and Technology, 1(1), 21-36.

Julian-Reynier, C., Welkenhuysen, M., Hagoel, L., Decruyenaere, M., & Hopwood, P. (on behalf of the CRISCOM Working Group) (2003). Risk communication strategies: State of the art and effectiveness in the context of cancer genetic services. European Journal of Human Genetics, 11, 725-736.

Khoury, M., Millikan, R., Little, J., & Gwinn, M. (2004). The emergence of epidemiology in the genomics age. International Journal of Epidemiology, 33, 936-944.

Lerman, C., Hughes, C., Croyle, R.T., Main, D., Durham, C., & Snyder, C. (2000). Prophylactic surgery decisions and surveillance practices one year following BRCA1/2 testing. Preventive Medicine, 31(1), 75-80.

Lerman, C., Lustbader, E., Rimer, B., Daly, M., Miller, S., Sands, C., & Balshem, A. (1995). Effects of individualized breast cancer risk counseling: A randomized trial. Journal of the National Cancer Institute, 87, 286-292.

Lessick, M., Wickman, R., Chapman, D., Phillips, M., & McCaffrey, S. (2001). Advances in genetic testing for cancer risk. Medsurg Nursing, 10(3), 123-127.

Lewens, T. (2002). Developmental aid: On ontogeny and ethics. Studies in History and Philosophy of Biological and Biomedical Sciences, 33, 195-217.

Lloyd, F., Reyna, V., & Whalen, P. (2001). Accuracy and ambiguity in counseling patients about genetic risk. Archives of Internal Medicine, 161, 2411-2414.

Lorenzoni, I., Jones, M., & Turnpenny, J. (2007). Climate change, human genetics, and post-normality in the UK. Futures, 39(1), 65-82.

Mansfield, S., Clark, R., Puttaraju, M., & Mitchell, G. (2002). Spliceosome-mediated RNA trans-splicing (SMaRT): A technique to alter and regulate gene expression. Blood Cells, Molecules and Diseases, 28(3), 338-338.

Marteau, T., & Weinman, J. (2006). Self-regulation and the behavioral response to DNA risk information: A theoretical analysis and framework for research and practice. Social Science and Medicine, 62, 1360-1368.

Maynard Smith, J. (2000). The concept of information in biology. Philosophy of Science, 67, 177-194.

Meaney, M. (2001). Maternal care, gene expression, and the transmission of individual differences in stress reactivity across generations. Annual Review of Neuroscience, 24, 1161-1192.

Michie, S., Bobrow, M., & Marteau, T. (2001). Predictive genetic testing in children and adults: A study of emotional impact. Journal of Medical Genetics, 38(8), 519-526.

Michie, S., Lester, K., Pinto, J., & Marteau, T. (2005). Communicating risk information in genetic counseling: An observational study. Health Education and Behavior, 32(5), 589-598.

Moss, L. (2001). Deconstructing the gene and reconstructing molecular developmental systems. In S. Oyama, P. Griffiths & R.D. Gray (Eds.), Cycles of contingency: Developmental systems and evolution (pp. 85-97). Cambridge, MA: MIT Press.

Mottus, R., Whitehead, I., O'Grady, M., Sobel, R., Burr, R.H.L., Spiegelman, G., & Grigliatti, T. (1997). Unique gene organization: Alternative splicing in Drosophila produces two structurally unrelated proteins. Gene, 198(1-2), 229-236.

Murray, T. (1997). Genetic exceptionalism and "future diaries": Is genetic information different from other medical information? In M. Rothstein (Ed.), Genetic secrets: Protecting privacy and confidentiality (pp. 60-73). New Haven: Yale University Press.

National Partnership for Women and Families on behalf of the Coalition for Genetic Fairness (2004). Faces of genetic discrimination: How genetic discrimination affects real people. Retrieved February 28, 2005, from http://www.nationalpartnership.org/site/DocServer/FacesofGeneticDiscrimination.pdf?docID=971

Netzer, C., & Biller-Andorno, N. (2004). Pharmacogenetic testing, informed consent and the problem of secondary information. Bioethics, 18(4), 344-360.

Nunes, J. (2004, Spring). The uncertain and the unruly: Complexity and singularity in biomedicine and public health. Paper presented at The Annual New England Workshop on Science and Social Change (NewSSC), Woods Hole, MA.

O'Neill, O. (2001). Informed consent and genetic information. Studies in History and Philosophy of Biological and Biomedical Sciences, 32(4), 689-704.

Oyama, S. (2000). Causal democracy and causal contributions in developmental systems theory. Philosophy of Science, 67, 332-347.

Parascandola, M., Hawkins, J., & Danis, M. (2002). Patient autonomy and the challenge of clinical uncertainty. Kennedy Institute of Ethics Journal, 12(3), 245-264.

Peltonen, L., & McKusick, V.A. (2001). Genomics and medicine: Dissecting human disease in the postgenomic era. Science, 291, 1224-1229.

Pieterse, A.H., Van Dulmen, A.M., Ausems, M., Beemer, F.A., & Bensing, J.M. (2005). Communication in cancer genetic counseling: Does it reflect counselees' previsit needs and preferences? British Journal of Cancer, 92, 1671-1678.

Prior, L. (2001). Rationing through risk assessment in clinical genetics: All categories have wheels. Sociology of Health and Illness, 23(5), 570-593.

Rheinberger, H.J. (1997). Toward a history of epistemic things: Synthesizing proteins in the test tube. Stanford: Stanford University Press.

Sachs, L. (1999). Knowledge of no return: Getting and giving information about genetic risk. Acta Oncologica, 38(6), 735-740.

Samuel, C. (2003). RNA editing minireview series. Journal of Biological Chemistry, 278(3), 1389-1390.

Smith, G.D., & Ebrahim, S. (2003). Mendelian randomization: Can genetic epidemiology contribute to understanding environmental determinants of disease? International Journal of Epidemiology, 32, 1-22.

Smits, M. (2004). Taming monsters: The cultural domestication of new technology. Unpublished doctoral dissertation, University of Eindhoven, Eindhoven.

Sterelny, K., Smith, K., & Dickison, M. (1996). The extended replicator. Biology and Philosophy, 11(3), 377-403.

Stokes, S. (1998). Pathologies of deliberation. In J. Elster (Ed.), Deliberative democracy (pp. 123-139). Cambridge, UK: Cambridge University Press.

Stotz, K. (2004, November). With genes like that who needs an environment? Postgenomics' argument for the 'ontogeny of information'. Paper presented at the Symposium Advances in Genomics and Their Conceptual Implications for Development and Evolution, Austin, TX.

Stotz, K., Griffiths, P., & Knight, R. (2004). How biologists conceptualize genes: An empirical study. Studies in History and Philosophy of Biological and Biomedical Sciences, 35, 647-673.

Thomasma, D., & Pellegrino, E. (1981). Philosophy of medicine as the source for medical ethics. Theoretical Medicine and Bioethics, 2(1), 5-11.

Trout, J. (2005). Paternalism and cognitive bias. Law and Philosophy, 24, 393-434.

Tversky, A., & Kahneman, D. (1984). Choices, values and frames. American Psychologist, 39, 341-350.

Van der Sluijs, J. (2006). Uncertainty, assumptions, and value commitments in the knowledge base of complex environmental problems. In A. Guimaraes, S. Guedes & S. Tognetti (Eds.), Interfaces between science and society (pp. 67-84). Sheffield, UK: Greenleaf Publishing.

Van der Weele, C. (1995). Images of development: Environmental causes in ontogeny. Albany: State University of New York Press.

Vlek, C. (1987). Risk assessment, risk perception and decision making about courses of action involving genetic risks. Birth Defects Original Articles Series, 23(2), 171-207.

Walker, W., Harremoës, P., Rotmans, J., Van der Sluijs, J.P., Van Asselt, M., Janssen, P., & Krayer Von Krauss, M. (2003). Defining uncertainty: A conceptual basis for uncertainty management in model-based decision support. Integrated Assessment, 4(1), 5-17.

Wertz, D.C., Sorenson, J.R., & Heeren, T.C. (1986). Clients' interpretation of risks provided in genetic counseling. American Journal of Human Genetics, 39, 253-264.

Wilkie, A. (2001). Genetic prediction: What are the limits? Studies in History and Philosophy of Biological and Biomedical Sciences, 32(4), 619-633.

Wynne, B. (1995). Public understanding of science. In S. Jasanoff, G. Markle, J. Petersen & T. Pinch (Eds.), Handbook of science and technology studies (pp. 361-380). Thousand Oaks, CA: Sage Publications.

Wynne, B. (2001, April). Managing scientific uncertainty in public policy. Paper presented at Global Conference: Crisis and Opportunity, Cambridge, MA.

Zimmern, R., Emery, J., & Richards, T. (2001). Putting genetics in perspective. British Medical Journal, 322, 1005-1006.
KEY TERMS

Developmental Matrix: The developmental matrix is the dynamic biological system in which different causal factors, such as genes, RNAs, promoters, enhancers and environmental influences, interact to generate an adapted organism.

Framing Uncertainties: Framing is the process by which uncertainties are reduced and hidden in a way that allows scientific institutions to manage them. By means of rhetorical devices, framing implies the reduction of intellectual representations of uncertainties, which provides people with tacit instructions about how they should act with regard to new technologies such as predictive genetic testing.

Informed Consent: Informed consent is a "special kind of autonomous action: an autonomous authorization by a patient or subject" (Faden & Beauchamp, 1986, 274). Although informed consent also has to do with the legal or institutional procedures through which effective authorization is obtained, it should not be reduced to mere formal procedures. In fact, the moral acceptability of procedures must depend on the extent "to which they serve to maximize the likelihood that the conditions of an autonomous authorization will be satisfied" (Faden & Beauchamp, 1986, 294).

Paternalism: Paternalism is the interference with the autonomy of an agent in such a way that "he or she is not able to decide for himself or herself how to pursue his or her own good" (Trout, 2005, 409). Paternalism implies a lack of respect for autonomy because it interferes with the voluntary making of choices.

Postnormal Approach: The postnormal approach is "a new conception of the management of complex science-related issues" (Funtowicz & Ravetz, 2003, 1). It focuses on coping with uncertainties in a deliberative and open way that permits the satisfaction of both epistemic and policy demands. By implementing extended peer communities, the postnormal approach ensures transparent ways of mapping, assessing and deciding how uncertainties should be communicated.

Predictive Genetic Testing: Tests that have been developed to predict the future health status of an individual on the basis of his or her genetic profile. They include tests for finding single genes presumably associated with an increased risk of developing particular diseases, as well as tests for determining the predisposition of individuals to react differentially to drugs.

Uncertainty: Any deviation from the "unachievable ideal of completely deterministic knowledge of the relevant system" (Walker et al., 2003). In order to help policymakers identify, assess and report uncertainty, it has been defined through a typology that includes three dimensions of uncertainty: namely, the location, nature and level of the uncertainty.
ENDNOTES

1	Although some new statistical tools have recently been developed for coping with biological complexity, it is evident that "increasing the complexity of statistical models decreases the precision of their products" (Van der Sluijs, 2006, 73). Indeed, epidemiologists themselves have recognized that these models thus "far have limited applications in epidemiological studies" (Khoury et al., 2004, 942).
2	Although linkage analysis, used to find statistical associations between polymorphic variations of a gene and a disease, has not been able to prove relevant and consistent correlations in most of the diseases examined, advocates of the potential utility of genetic profiling have argued that greater predictive power could be obtained by combining the susceptibilities conferred by individual genes. However, genetic epistasis, which consists of interactions between two or more loci that may be additive, multiplicative or without effect, and genetic pleiotropism, which refers to the fact that single genes usually exhibit multiple functions, have increased the statistical uncertainty (false positive and false negative errors) about the potential effects of a combined genotype (Wilkie, 2001; Khoury et al., 2004).
3	"Framing is the process whereby a problem is presented to an audience, preparing them to see a certain range of possible options, solutions, evidential bearing, and so on. The audience's intellectual habits and explanatory expectations allow carefully framed narrative descriptions to yield defective inductions" (Trout, 2005, 404).
4	What this kind of transformation pursues is, as Wynne (1995) has pointed out, "to measure, explain, and find remedies for apparent shortfalls of correct understanding and use, as if this were free of framing commitments" (p. 362).
5	For example, prophylactic radical mastectomy "transforms the earlier risks into a new state of illness, involving pain, scars that have to heal, medical complications, and years of adapting to something alien to the body" (Sachs, 1999, 740).
6	By definition, a fixed program is a set of instructions that should not vary among different biological systems. As a consequence, genetic information is understood as "unchanging and unchangeable" (Gostin, 1995, 324).
7	The irreplaceable role of deliberation in bioethics has also been pointed out by Amy Gutmann and Dennis Thompson (1997): "A well-constituted bioethics forum provides an opportunity for advancing both individual and collective understanding. Through the give-and-take of argument, participants can learn from each other, come to recognize their individual and collective misapprehensions, and develop new views and policies that can more successfully withstand critical scrutiny […] When citizens deliberate, they can expand their knowledge, including their self-understanding as well as their collective understanding of what will best serve their fellow citizens" (pp. 40-41).
Chapter XXXII
Privacy, Contingency, Identity, and the Group

Soraj Hongladarom
Chulalongkorn University, Thailand
ABSTRACT

The chapter argues that there is a way to justify privacy without relying on the metaphysical assumption of an independently existing self or person, which is normally taken to underlie personal identity. The topic of privacy is an important one in technoethics because advances in science and technology today have resulted in threats to privacy. I propose, furthermore, that privacy is a contingent matter, and that this conception is more helpful in understanding the complex issues involved in deliberating thoughtfully about privacy in its many dimensions. It is the very contingency of privacy that makes it malleable enough to serve our purposes. Basically, the argument is that it is possible for there to be a society whose individuals do not have any privacy at all but are still autonomous moral agents. This argument has some close affinities with the Buddhist perspective, though in this chapter I do not intend to presuppose it. I then discuss the issue of group privacy. This is a rather neglected issue in the literature, but group privacy has become more important now that contemporary genomics and bioinformatics have the power to manipulate large amounts of population data, which could lead to discrimination. The proposed conception of privacy is more suitable for justifying group privacy than one that presupposes the inherently existing individual.
INTRODUCTION

Privacy has become a primary concern in many circles nowadays. The increasingly pervasive use of electronic and information technologies has resulted in more sophisticated tools that are used
for surveillance and data mining, which threaten the privacy rights of citizens. Furthermore, privacy has become a concern not only in the West, but also in Asia, where there has been significant economic growth in recent decades. This concern has led many scholars to ponder how the concept of
privacy and its implementation could be justified, especially in the context of the East, where privacy is generally perceived as belonging to the modern West, with no exact Asian counterpart, a situation that has prompted many papers on how privacy could be justified in Asian contexts (e.g., Ess, 2005; Lü, 2005; Kitiyadisai, 2005; Rananand, 2007; Nakada and Tamura, 2005; Hongladarom, 2007). What I would like to accomplish in this chapter is related to those attempts; however, the chapter is not intended as another contribution on how privacy is to be justified, or even criticized, from the Asian perspective. It is instead an attempt to map out the conceptual terrain of privacy without relying too heavily on the literature of the Asian traditions, which has in fact been my concern elsewhere (Hongladarom, 2007). That is to say, I intend what follows in the chapter to be generally applicable in most cultural contexts. This should not be taken as an argument for the supremacy of one culture over others; rather, my concern is to find a common ground that should be acceptable to all cultures, without privileging one over another. The overall aim of this chapter is, then, to present a philosophical analysis and justification of privacy that differs from what is available in most of the literature on the topic. The topic is of direct relevance to technoethics, conceived of as an investigation of the ethical implications of science and technology, because these advances have resulted in actual and potential violations of the privacy of individuals or of groups of them. It is well known that current technologies, such as genetic databanking, smart ID cards, and others, have made it possible to collect, store, and systematize a vast amount of information related to particular individuals. In Thailand, for example, the previous government introduced what are called ‘smart ID cards’ (Thailand introduces national ID with biometric technology, 2007). Basically these are supposed to function as identification cards for each and every Thai citizen; such cards have been around in Thailand for decades. However,
in recent years the government ordered that a new type of card be issued with a microchip capable of storing a very large amount of information. The rationale was that this new type of card would facilitate interaction with public agencies, as important information required for an individual to contact the government would be stored in the microchip, eliminating the need to carry a number of paper documents. However, since the card identifies an individual citizen, it is conceivable that deeper levels of individual information might be stored in the card, enhancing the possibility that the government or the authorities might use the resulting huge database to profile, or perhaps discriminate against, one group or another, thus undermining the privacy of individuals. Much research has in fact been done on the Thai smart ID card and its potential for misuse.1 The idea to be presented here is that there is an area within and surrounding an individual, and indeed a group of individuals, that should be protected, and that the boundary demarcating this area is an imaginative line, much as the longitudes and latitudes are. In the chapter, I show that the idea of privacy is strongly related to the philosophical concept of identity, whether that of an individual or of a group.2 Privacy is connected to identity because it does not seem at first sight to make much sense to say that an individual has privacy while the identity of that individual changes through time. In other words, privacy seems to presuppose a rather strict identity of an individual. Without such a strict identity, it would be hard, or so it seems, to identify whose privacy should be protected. However, I do not believe that privacy in fact relies on such a strict identity of the individual. If an individual is constituted by a set of information that together describes his or her identity vis-à-vis other individuals, then there does not have to be a ‘core set’ of information that uniquely identifies the individual at all times. That is, the individual does not seem
to be constituted by something that works as an ‘essence’, in the familiar Aristotelian sense that without it the individual would cease to be an individual. There is a way to justify privacy even without a strict identity of the individual whose privacy is to be protected. According to the view to be developed, information about an individual, even genetic information, does not on its own succeed in becoming such a core set. Privacy needs to be defended and justified, not through reliance on the metaphysics of the isolated, self-subsisting individual, but through the individual’s relations to her socio-cultural environment and to other individuals. Privacy is needed as a check against the authorities (such as the Thai government issuing smart ID cards), so that the authorities are prevented from misusing their power. This check is crucial for a functional democracy. Since the power of the government is based on the consent of the people, lacking such a means of limiting the government’s power over the personal information of the population would mean that the government has too much power, especially the power to manipulate groups within the population. Such excessive power would be detrimental to democracy. In what follows I will argue that there is a way to justify privacy rights which does not rely on the metaphysics of the inherently existing individual self.
Justifications of Privacy

Among the vast literature on the justification of privacy, perhaps the most prevalent arguments are those holding that privacy is justified because individuals have a right to their autonomy. And among the numerous definitions of privacy, a common thread binding them together seems to be that privacy is something cherished by the individual in question, something that she does not want exposed to the public.3 It could be that she does not want other people to peer into her house, or into data about herself, her
‘personal’ information (Parent, 1983). What justifies this right to privacy is that, as an individual citizen, she is entitled to some form of protection against unwanted intrusion, which is considered a breach of her autonomy. In a hypothetical polity where the state has unlimited power to take any information concerning its citizens as much as it likes, and to run a surveillance scheme, Big Brother style, that exposes every detail of individuals’ lives, it would be correct to say that the individuals do not have any privacy. What is missing is that the individuals have no means to operate away from the seeing eyes of Big Brother. They do not have leeway, so to speak, within which they can function on their own without always being aware that their actions are constantly being watched. So we might call what is missing here ‘personal space’, where individuals would feel free to do as they please, so long, of course, as this does not infringe on the rights and liberties of others. Discussing the potential loss of privacy by employees due to the increased use of surveillance technologies by employers, Miriam Schulman quotes Michael J. Meyer as follows: “Employees are autonomous moral agents. Among other things, that means they have independent moral status defined by some set of rights, not the least of which is the right not to be used by others as a means to increase overall welfare or profits” (Schulman, 2000, p. 157). Meyer then continues: “As thinking actors, human beings are more than cogs in an organization--things to be pushed around so as to maximize profits. They are entitled to respect, which requires some attention to privacy. If a boss were to monitor every conversation or move, most of us would think of such an environment as more like a prison than a humane workplace” (Schulman, 2000, p. 157). The key phrase here is ‘autonomous moral agents’, and in fact we could extrapolate Meyer’s statement to cover privacy for individuals in general. The linchpin of a standard justification for privacy is, then, that
Privacy, Contingency, Identity, and the Group
individuals are autonomous moral agents, which implies that they are entitled to some personal and private space where they feel comfortable and where they do not have to behave as if they are being watched all the time. So the standard justification of privacy is that since individuals are autonomous moral agents, they are entitled to some degree of privacy. This argument hinges, of course, on a conceptual link between the two. How is it possible that someone’s being an autonomous moral agent entitles her to at least some degree of privacy? Presumably the answer is that, as an autonomous moral agent, one should be accorded some degree of personal space, since otherwise one would not get the respect that one deserves in virtue of one’s being a human being. When we consider the hypothetical state where nobody has any privacy, as mentioned above, the standard argument would have it that in such a scenario the individuals are not treated with respect, since the authority (or the employer) has the power to gather all kinds of information pertaining to them. This presupposes that gathering information and constantly monitoring and watching individuals are not instances of respect. Thus we can sum up the standard argument as follows. Since individual humans are autonomous moral agents, which implies that they are capable of making decisions by themselves and that they deserve a degree of respect, their private lives should not be intruded upon, because such intrusion would mean that the intruder does not respect the individuals in virtue of their humanity. However, this argument depends on some other crucial factors. What if the individuals in question willingly give up their privacy and allow the authority to watch their every move? In fact we are already seeing something like this happening, with people putting web cameras in their bedrooms and turning them on all the time for all the world to see. Would we say that those who do this do not have privacy? But is their right to privacy being violated? It seems clear that the mere fact that somebody’s private life is being
exposed to the whole world is not sufficient for her privacy right to be violated. In this case it seems that nobody is violating her right, since she willingly does all this herself. Another factor is that the authority who has the power to intrude on people’s private lives must act in a way that harms those people through the intrusion; otherwise the authority’s action might not be considered a violation of privacy. The idea is this: let us go back to the hypothetical scenario mentioned earlier. Nobody has any privacy; the authority has the all-seeing power to know every small detail of people’s lives; nothing is hidden. Nonetheless, if the authority happens to be a wholly benevolent one and will not use the information in any harmful way, and if, in addition, the people are aware that the authority is watching them but do not mind, since they trust the authority completely, then would we also say that their privacy rights are threatened? According to Meyer, privacy appears to be an inherent property of an autonomous agent, but these scenarios seem to complicate the picture. Privacy may still be an inherent property in the case where people willingly put up webcams in their bedrooms and even their bathrooms, and in the case where the people trust the all-seeing authority completely, but even so the inherent property here is not expressed. Even if the property is there, it lies dormant, so to speak, since the people willingly forego it. However, if this is really the case, then what is the difference between someone’s having the inherent characteristic of privacy that lies dormant and someone’s not having the right to privacy at all? The difference, of course, lies in the fact that in the first case someone could decide at any time to enforce her privacy right, which happens when, for example, somebody shuts down her webcam, whereas in the second case that is not possible. But if this is so, then the justification of privacy is not simply a matter of someone’s being an autonomous moral agent who deserves respect; her relations with those around her
also play a crucial role. If she trusts the all-seeing authority completely, or if she does not think her private life should be kept to herself alone and welcomes the world to see all of her, then the trusting and the willingness to let others enter one’s private domain become important. These are all relational concepts; one trusts another person, and one willingly lets others enter one’s private life. After all, protecting privacy means that one is protecting someone’s private domain from encroachment by others. If one lives alone, like Robinson Crusoe, then there is no need to even start talking about privacy.4 Another point is that it seems one can remain an autonomous moral agent even without having privacy. In the scenarios described above, the one who trusts the authority completely, who lives in an environment where the authority is fully trustworthy, and who willingly foregoes privacy can still be an autonomous moral agent, since all her decisions are made through her free will in her rational capacity. An autonomous moral agent who willingly puts up webcams around her house remains one. But if this is the case, then the standard justification of privacy is in need of qualification. Being an autonomous moral agent alone is not sufficient; one also needs to relate to others and to live in a certain kind of environment (such as one where it is not possible to trust the authority completely) in order for the right to privacy to actually have force. Nevertheless, an objection to this line of argument is that in these scenarios the individuals have their privacy rights all along; but, as we have seen, there does not seem to be much of a difference between having the right to privacy and keeping it dormant (perhaps always so) and not having it at all. This, let me emphasize, is tenable only in the very special case where the authority can be trusted completely and where the individuals are willing to let others view their lives, and this could be extended to include the individuals’ information about themselves, their communications, and so on (Regan, 1995).
Contingency and Privacy

Another factor affecting the standard justification concerns the metaphysical assumption it tends to make regarding who exactly the autonomous moral agent is. Does it have to be a metaphysically self-subsisting individual subject? Here the standard justification argument seems to presuppose that the autonomous moral agent does have to be a self-subsisting metaphysical entity. This is so because, in order to argue for the privacy of somebody, there has to be some entity whose privacy is to be justified and protected. Furthermore, the entity in question would presumably need to be a self-subsisting one, because if not, the entity would continually be in flux and it would be difficult to pinpoint exactly whose privacy is to be justified. It seems to go without saying that justifying privacy presupposes the existence of the one whose privacy is to be justified. After all, defending privacy naturally presupposes that the privacy has to be that of an individual. Justifying privacy would mean that one is attempting to draw a line demarcating a boundary that belongs exclusively to a person, such that it would be wrong for the authority or anybody else to enter that boundary without the person’s permission. What is more, the person here is taken to be a metaphysically self-subsisting person. What this means is that the person or the individual has to be something that exists objectively; there is something that inheres in the person such that it defines who that person is, and nobody else, without having to enter into any relation with anything outside. So it would naturally appear that the standard argument for privacy presupposes the existence of a self-subsisting person. This standard justificatory picture is much in accordance with common sense. After all, when one is justifying or defending privacy, one naturally presupposes that there has to be a person whose privacy is to be justified and that the person has to be metaphysically objective. Otherwise it would be difficult to find a conceptual link between the metaphysically
objective person here and her status as an autonomous moral agent. Being an autonomous moral agent would seem to presuppose that there is something deep down inside functioning as the holder of the qualities of being autonomous, being moral, and being an agent. Defending a conception of privacy closely related to that of personhood, Jeffrey Reiman writes: “Privacy is a social ritual by means of which an individual’s moral title to his existence is conferred” (Reiman, 1976, p. 39). An individual is recognized as one who deserves to be treated morally, i.e., as one who is morally entitled to existence, when his or her privacy is respected. Respecting someone’s privacy, according to Reiman, is to recognize that he or she exists as a human being who deserves to be treated as an end and never as a means, to use Kant’s terms. Furthermore, Reiman adds that privacy is necessary for the creation of the individual self (Reiman, 1976, p. 39), for without privacy there is no way, according to Reiman, for an individual to be recognized as such, since there would be no way for her to recognize that the body to which she is attached is her own, to which she has exclusive rights (including that of privacy). That recognition is the process by which the sense of self of the individual is created, and it is in this sense that Reiman makes his startling claim. Reiman further states: “[P]rivacy is a condition of the original and continuing creation of ‘selves’ or ‘persons’” (Reiman, 1976, p. 40). That is, so long as someone’s privacy is respected, to that extent her selfhood and personhood are thereby respected and recognized. Privacy, for Reiman, is a sine qua non for the selfhood or personhood of someone; in other words, privacy necessarily belongs to someone in virtue of her being ‘someone’ or ‘a person’ in the first place. If one were to search for strong arguments attempting to link privacy with selfhood or personhood, Reiman’s must be among the first on the list. It would be tempting, then, to test Reiman’s argument in the scenario raised
above, where the individuals do not seem to have any privacy, or do not object at all to their putative privacy rights being taken away. The question then would be: are these individuals in this situation recognizable as selves or persons at all? According to Reiman the answer would have to be no, because in his argument privacy is the sine qua non for the very existence of a self or a person, as we have seen. But this seems counterintuitive. In this hypothetical situation, the individuals are still very much the same kinds of individuals that we know. They go about their business and they are certainly capable of rational thinking and so on. The hypothesis, as we have seen from the start, is that this is a place just like our own, except that the people there do not mind at all the possibility of being in the public’s eye all the time. For Reiman, these people would immediately cease to be persons, but that is clearly too strong. The point is that if the scenario of people who willingly forego their privacy is a plausible one, then one would be hard pressed to come up with a tenable conception of privacy that is metaphysical and non-relational. Attempts to tie privacy to a conception of selfhood or personhood, like the one proposed by Reiman, seem to fail in the case where the selves or persons do not mind their lack of privacy at all.5 So it seems that one needs another way to justify privacy, one that does not presuppose the metaphysical assumption of an enduring or objectively existing individual self. Since privacy is relational, one justifies it more effectively, I believe, if one looks at it as a political or sociological or pragmatic concept. Privacy is justified because it brings about something desirable.6 In fact, when Reiman argues that selves are constituted through privacy, he appears to be on the right track, because that would mean that the self is a creation or a construction and not something ontologically prior to anything else. The point is that, if the self is a construction, something that emerges out of the transactions one has with one’s environment and one’s peers, then the conception
of privacy that depends on an objectively existing self would need to be modified. Considered this way, privacy is a hypothetical line that one draws around oneself to demarcate the boundary one does not wish others to enter without one’s consent. And the line here is very fuzzy and very much dependent on varying contexts. It is a hypothetical line in the same way that the longitudes and latitudes are. One never finds these lines on the ground, but always on a map, as a useful heuristic device to locate one’s position. Let us go back to the no-privacy world mentioned above. That world works because the authority is trusted completely and the individuals there do not mind having no privacy. However, if the situation changes and the authority becomes less trustworthy, and the individuals happen to feel that they need some privacy to themselves, then in this situation the conception of privacy emerges. People there might conceivably devise the concept, as well as the means to realize it in practice. If this can indeed be the case, then the concept of privacy is not one that is metaphysically attached to persons or individuals from the beginning, but is a social and legal conception that emerges in response to certain kinds of situations. What this implies is that privacy is a contingent concept. A well-known example of contingent concepts, and of rules derived from them, is the rule on driving. In Thailand people drive on the left side of the road, as in Britain, Australia, Japan and some other countries. However, the majority of countries in the world drive on the right side. There is no absolute rule specifying which side of the road is the ‘right’ or ‘wrong’ one. People simply devise the rules out of convenience or habit. Accordingly, privacy is a concept that emerges out of certain kinds of situations, and it does not have to be there at the beginning, when there is no need for it.
Privacy and Personal Identity

Moreover, there has been much research analyzing and criticizing the metaphysical idea of the individual subject, much of it based on the Buddhist teaching (see, e.g., Gethin, 1998; The Dalai Lama, 1997; Collins, 1982; Parfit, 1986; Hongladarom, 2007; Varela and Poerksen, 2006).7 A common thread in these arguments is that the idea is a metaphysical one, meaning that it is not there when it is searched for through empirical means. The very idea of the individual subject, according to these works, is not to be found when searched for empirically, because one always finds what are supposed to be instances of the subject, but never the subject herself. What constitutes a human being is the body and the mind, the physical and the mental. One does not have to follow Descartes’ influential view that the two are radically separate, a view currently under attack from many fronts. Nonetheless, one can focus on each of the two in turn for the sake of simplicity. It is well known that most cells in the body change and are replaced by newer cells after a period of time. There may perhaps be some cells in the brain that do not change, but one would not bet on them to be the seat of the individual subject or the person. On the other hand, when one focuses on mental episodes, one also finds them changing very rapidly. One’s personality changes over time; one’s psychological makeup does not remain constant. At the very least, if one were to point out which episode or event constituting one’s psychological makeup is to be the individual subject, one would be hard pressed to find one. This is so because any candidate episode would have to stay frozen and locked up in order to function as the core identity of the subject herself. But an episode is by definition a kind of event, which, if frozen in time, would cease to be an event altogether. However, if the core is to be something physical,
then we go back to the point made earlier about the physical body. The fact that the core identity of the individual subject cannot be found empirically does not imply that there are no individual human beings whose privacy is to be protected. What I am driving at is not that there are no human beings, which would be absurd, but that there is no substantive core that functions as the ‘seat of identity,’ so to speak. Again, this does not mean that humans have no identities, which would also be absurd. Having no substantive core means only that the identity of a particular human being is a relational concept, constructed out of the human being’s interactions with her society and her other contextual environments. The relevance of this argument to the analysis of privacy is that there is a way to justify privacy without relying on a metaphysical concept of the person. One can rely only on what is obtainable through empirical means. This way of providing justification is as strong as it can be because, for one thing, it is based on publicly observable entities and not on metaphysical construction.8 The normative force of the argument related to privacy would then be derived from shared meanings and understandings rather than from abstract rationalization. Since the identity of the person is constructed through her interaction with the world, her privacy would presumably be constructed out of such interaction also. In more concrete terms, this would mean that the social world finds privacy to be important and devises ways and means to enforce it. Since privacy is an imaginative line, as we have just seen, it is constructed out of the shared meanings and understandings that members of the social group have together. A common objection to the idea of basing privacy on a conception that presupposes the person to be empirically constructed is that such a conception of the person would be too weak. A conception of the person that is empirically based would, so the objection goes, not be
appropriate to serve as a foundation for the autonomous moral agent that seems to be required for a viable conception of privacy. To put the point simply: if the person is always changing and is something constructed, then whose privacy is one trying to justify? If the person is continuously changing without any substantial identity, then who will be the autonomous moral agent? And without the agent, where will the concept of privacy come from? But if the autonomous moral agent is not sufficient for privacy, as we have seen, then this argument loses much of its force. In fact the force of the objection is that, without the autonomous moral agent, there would be no privacy, which implies that the autonomous moral agent is necessary for privacy. But if the proposed conception of the person, based on empirical observation rather than metaphysical assumption, is tenable, then, on the assumption that the autonomous moral agent needs to be a metaphysically substantial person, which seems to be accepted by those who favor this line of argument, the autonomous moral agent is not even necessary for privacy either. That the autonomous moral agent is neither sufficient nor necessary for privacy should come as no surprise, for the agent here is commonly taken to be the substantially existing person whose identity is grounded in a metaphysical assumption even though no empirical correlate has been found. And this should by no means be understood to mean that the justification of privacy proposed here assumes that privacy is not a moral concept. The conclusion that the autonomous moral agent is neither sufficient nor necessary does not imply that privacy is not a moral concept, because the individuals in a society can well construct a concept such as privacy as a tool for regulating their lives and defining the relations between themselves and the political authorities without thereby depending on metaphysical assumptions.9 This conception has another advantage over the standard one in that it fits better with the
emerging scenario in which non-humans are becoming more like humans and therefore deserve moral respect.10 As far as I know, no robot or animal has yet been accorded the right to privacy. One might conceivably feel that she can watch and monitor an animal or a robot and mine any information from it (him?) without infringing upon its (his?) privacy. But as animals are certainly sentient beings, and, as Peter Singer suggests (Singer, 1985; 1990; 1993; 1994), they do deserve at least some respect, to extract any information from them without regard to the kind of dignity that renders them worthy of respect would be unethical, and that would be tantamount to violating their privacy. Furthermore, as robots mature and begin to think and be conscious (just like animals), the question of robot privacy presents itself. One way to think about this is, of course, to hold that animals and robots are starting to attain the status of ‘autonomous moral agents’; that is indeed the case if being an autonomous moral agent does not imply that they somehow instantiate the abstract model of rationality that has informed human beings long before. Instead of assuming that the autonomous moral agent is a metaphysically substantive entity, we might consider robots more simply as beings who are starting to become like us humans. That is, they are becoming capable of talking, understanding, planning, desiring, dreaming, and so on, characteristics that have defined humans. In this case they deserve privacy. At any rate, it is simpler just to view robots as becoming more like humans than to assume that they are starting to instantiate the abstract model of rationality and moral agency, an assumption that incurs the added burden of explaining the identity of, and the justification for the existence of, the said model. Equally, it is simpler to view animals as sharing many characteristics that belong to humans, such as the capacity to feel pain and pleasure, the ability to have an inner life, and so on, than to view them as somehow instantiating the abstract model too.
Group Privacy

So how does the proposed conception play out in real life? Here lies another distinct advantage of the proposed conception over the mainstream one. In emphasizing the role of the individual, the mainstream conception appears to neglect the importance of families and social groups, whose privacy also needs to be protected.11 The privacy of a family is violated when the authority or somebody else intrudes upon family life with no justified reason. What happens inside a family seems to be a private matter for the family itself; intrusion is justified only in cases where it is suspected that physical or verbal abuse is going on within the family, in which case the rights of individual family members to bodily integrity trump the family’s right to privacy. Here the proposed conception fares better because it is not tied to justifying privacy through the individual. According to the standard picture, families and other social groups seem to be little more than collections of individuals, and it is individuals who are the atomic units whose rights and privileges should be the main prerogative. Families are but appendages of the individual. But that seems counterintuitive. As philosophers such as Hegel and Charles Taylor have shown, individuals are nothing without their roles and positions within the family or larger social groups (Hegel, 1977; Taylor, 1975; Taylor, 1989). Hegel argued that the individual derives her individuality and ontological being through her relations with other individuals. So the picture is the reverse of the standard one: it is the social group that is primary, and the individuals that are derived from it. This issue, of course, forms a standard debate between liberalism and communitarianism in social and political philosophy (see, e.g., Bell, 1993; Bell, 2000; Taylor, 1989; Sandel, 1998). Without being tied to the individual as the supposed linchpin of a justification of privacy, the proposed conception makes it easier conceptually to deal with the privacy of social
units. Without assuming anything metaphysical that exists beforehand, the proposed conception would justify the privacy of social units through their need to protect their boundaries against possible encroachment by the state or other authorities, and the social groups can justify their privacy by referring to the goals achieved when the privacy right is upheld. For example, there might be a conception of the privacy of an ethnic group such that the group is entitled to keep a certain set of information private to itself. The issue has become more significant recently owing to the increased sophistication in the manipulation of genetic data obtained from populations. This information may be something dear to the group and something its members do not want to share with outsiders. If there is no compelling justification for making this information public (such as when publicizing the information is necessary in an emergency), then the authority has no right to encroach upon and pry into the information. Since the conception of privacy arises out of needs and contexts, there is no metaphysical baggage to unload. An obvious objection to the concept of group privacy concerns the ontological status of groups. What this actually means is that any view concerning groups of individuals, treating them as a single entity (such as when the group holds some sort of privacy right together), has to clarify how the group is to be defined and demarcated such that one group is differentiated from another. However, if the individual herself can be regarded as an abstraction or a construction, then there is no problem in regarding groups in the same way. That individuals are abstractions can be seen from the fact that an individual is composed of various parts, such as bodily parts, cells, memories, emotional traits, and so on. In fact this is the standard Buddhist view of reality: individual selves do not exist independently of any relation, but exist through the relations that the individual maintains with various entities, forming a complex web of relations in which an individual is
a node. Thus, a group of individuals, for example a family or an ethnic group, is held together by some special relation among members of the group that others do not share in. One family is differentiated from another by the simple fact that the individuals are not the same; they do not share the same backgrounds, the same stories, the same personalities, and so on. Likewise, an ethnic group is defined through its shared meanings and traditions, not to mention, in some cases, its shared genetic heritage. In the latter case of groups sharing the same genetic heritage, there is an added dimension related to their privacy, in that their genetic makeup may happen to be of potential benefit for pharmaceutical or other scientific purposes. There is not enough space in this chapter to deal in any detail with this very important topic, which is itself the subject of a vast number of books and articles. What I would like to maintain here is merely that the concept of group privacy may be advantageous in protecting these ethnic groups from potential discrimination or intrusion into their collective lives. In the same way that we need to protect the privacy of an individual, we also need to protect the privacy of a group in cases where the group shares some genetic heritage or some other type of information. A family (or a married couple) often has ‘secrets’ of some kind that its members share only among themselves and not with the outside world. This is a clear case of privacy. Obtaining this information from the group without its consent would undoubtedly constitute an intrusion of privacy. As the privacy of an individual needs to be protected against potential abuse by the authority, so too does the privacy of a group.
Conclusion and Future Directions

Some future directions resulting from this investigation can be clearly drawn. Firstly, the study
of how privacy is conceptualized and justified would need to pay more attention to the Buddhist teaching, especially on how the individual is constructed and how this bears on attempts to justify privacy. Secondly, the notion of group privacy seems to be gaining momentum as a useful tool for investigating how traits and information belonging to groups of individuals should be protected; the usual conception of privacy as belonging to the individual does not seem as effective in providing this protection. My view, in sum, is that there is a way to justify privacy without relying on the metaphysical assumption of an independently existing self or person. In fact I believe that the proposal offered here, that privacy is a contingent matter, is more useful in helping us understand the complex issues involved in deliberating thoughtfully about privacy in its many dimensions. Thus the new conception appears better suited to dealing with problems arising from current technologies for manipulating personal information. For instance, in safeguarding group privacy, the proposed conception does a more effective job of providing a conceptual framework in which a better foundation and justification of the notion of group privacy can be offered. It is the very contingency of privacy that makes it malleable enough to serve our purposes. What is certain, in any case, is that we in the 21st century do not live in a world where privacy no longer matters; hence there is a need to protect privacy against all prying eyes. Thai citizens, to take a specific example, need to find a way to guard against the manipulation of their personal data through the smart ID card policy. What this chapter has accomplished, I hope, is to lay out some foundational issues concerning privacy, so that we know in which direction we should be heading as we draw up conceptual maps that may later translate into rules and regulations. Moreover, the privacy of the group also needs to be recognized: as modern genomics advances and is coupled with information and computational
science, the potential for misuse and injustice increases considerably. Hence, a clear direction for the future is the policy matter of providing better safeguards not only for individual privacy, but for group privacy too.12
References

Bell, D. (1993). Communitarianism and Its Critics. Oxford: Clarendon Press.

Bell, D. (2000). East Meets West: Human Rights and Democracy in East Asia. Princeton: Princeton University Press.

Capurro, R. (2005). Privacy: an intercultural perspective. Ethics and Information Technology, 7, 37-47.

Collins, S. (1982). Selfless Persons. Cambridge: Cambridge University Press.

Ess, C. (2005). ‘‘Lost in translation’’?: Intercultural dialogues on privacy and information ethics (Introduction to special issue on Privacy and Data Privacy Protection in Asia). Ethics and Information Technology, 7, 1-6.

Floridi, L., & Sanders, J. W. (2007). On the morality of artificial agents. Retrieved May 13, 2007 from http://www.roboethics.org/site/modules/mydownloads/download/Floridi_contribution.pdf

Fried, C. (1968). Privacy. The Yale Law Journal, 77(3), 475-493.

Gethin, R. (1998). The Foundations of Buddhism. Oxford: Oxford University Press.

Hegel, G. W. F. (1977). Phenomenology of Spirit (A. V. Miller, Trans.). Oxford: Oxford University Press.

Hongladarom, S. (2007). Analysis and justification of privacy from a Buddhist perspective. In S. Hongladarom and C. Ess (Eds.), Information
Technology Ethics: Cultural Perspectives (pp. 108-122). Hershey, PA: Idea Group Reference.

Kitiyadisai, K. (2005). Privacy rights and protection: foreign values in modern Thai context. Ethics and Information Technology, 7, 17-26.

Lü, Y.-H. (2005). Privacy and data privacy issues in contemporary China. Ethics and Information Technology, 7, 7-15.

Nakada, M., & Tamura, T. (2005). Japanese conceptions of privacy: an intercultural perspective. Ethics and Information Technology, 7, 27-36.

Olinger, H. N., Britz, J. J., & Olivier, M. S. (2007). Western privacy and/or Ubuntu?: Some critical comments on the influences in the forthcoming data privacy bill in South Africa. International Information and Library Review, 39, 31-43.

Parent, W. A. (1983). Privacy, morality and the law. Philosophy & Public Affairs, 12(4), 269-288.

Parfit, D. (1986). Reasons and Persons. Oxford: Oxford University Press.

Patton, J. W. (2000). Protecting privacy in public? Surveillance technologies and the value of public places. Ethics and Information Technology, 2, 181-187.

Rachels, J. (1975). Why privacy is important. Philosophy & Public Affairs, 4(4), 323-333.

Rananand, P. R. (2007). Information privacy in a surveillance state: a perspective from Thailand. In S. Hongladarom and C. Ess (Eds.), Information Technology Ethics: Cultural Perspectives (pp. 124-137). Hershey, PA: Idea Group Reference.

Regan, P. M. (1995). Legislating Privacy: Technology, Social Values, and Public Policy. Chapel Hill, NC: University of North Carolina Press.
Regan, P. M. (2002). Privacy as a common good in a digital world. Information, Communication and Society, 5(3), 382-405.

Reiman, J. H. (1976). Privacy, intimacy and personhood. Philosophy & Public Affairs, 6(1), 26-44.

Robots may one day ask for citizenship. (2007). Retrieved May 13, 2007 from http://media.www.guilfordian.com/media/storage/paper281/news/2007/03/23/World/Robots.May.One.Day.Ask.For.Citizenship-2788091.shtml

Sandel, M. (1998). Liberalism and the Limits of Justice (2nd ed.). Cambridge: Cambridge University Press.

Scanlon, T. (1975). Thomson on privacy. Philosophy & Public Affairs, 4(4), 315-322.

Schulman, M. (2000). Little brother is watching you. In R. M. Baird, R. Ramsower and S. E. Rosenbaum (Eds.), Cyberethics: Social & Moral Issues in the Computer Age (pp. 155-161). Amherst, NY: Prometheus Books.

Singer, P. (Ed.). (1985). In Defence of Animals. New York: Blackwell.

Singer, P. (1990). Animal Liberation (2nd ed.). New York: Avon Books.

Singer, P. (1993). Practical Ethics (2nd ed.). New York: Cambridge University Press.

Singer, P. (1994). Ethics. New York: Oxford University Press.

Taylor, C. (1975). Hegel. Cambridge: Cambridge University Press.

Taylor, C. (1989). Sources of the Self: The Making of the Modern Identity. Cambridge, MA: Harvard University Press.

Thailand introduces national ID with biometric technology. (2007). Retrieved May 9, 2007 from http://www.secureidnews.com/weblog/2005/04/15/thailand-introduces-national-id-with-biometric-technology/
The Dalai Lama. (1996). The Buddha Nature: Death and Eternal Soul in Buddhism. Woodside, CA: Bluestar Communications.

Thomson, J. J. (1975). The right to privacy. Philosophy & Public Affairs, 4(4), 295-314.

Varela, F. J., & Poerksen, B. (2006). “Truth Is What Works”: Francisco J. Varela on cognitive science, Buddhism, the inseparability of subject and object, and the exaggerations of constructivism—a conversation. The Journal of Aesthetic Education, 40(1), 35-53.
Key Terms

Autonomy: The quality of being able to ‘make law for oneself,’ thus implying that one has certain rights and liberties that the state or other authorities cannot take away. In the context of this chapter, autonomy usually refers to the quality of an individual in so far as he or she is a rational being capable of rational judgment. An ‘autonomous moral agent’ is thus a being (not necessarily a human one) who is capable of judging what it is right or wrong to do without having to rely on others to tell them.

Buddhism: A world religion founded by Siddhartha Gautama more than 2,500 years ago in northern India. Gautama achieved the ultimate realization into the nature of reality and thus became the ‘Buddha,’ literally ‘one who is awakened.’ The goal of Buddhism is to achieve Enlightenment, or nirvana, a state of total bliss unspoiled by suffering. Buddhism teaches that the individual self is a construct and is not ultimately there in reality.

Child Pornography: Sexually explicit material depicting children.

Communitarianism: By contrast with liberalism, communitarianism argues that there is something fundamentally wrong in liberalism because liberalism accords the individual primacy in devising a political system. What is wrong in that conception is that liberalism presupposes that the individual can exist freely on her own, as if existing in a vacuum with no essential relation to her communities or surroundings. As communities exist in the real world, the norms and expectations of the particular community in fact inform the decisions of the individual.

Community Standards: Protocols or norms applied by a collective body to evaluate a given issue.

Contingency: What is ‘contingent’ is contrasted with what is ‘necessary’. The latter has to happen as a matter of natural law or some other law of that kind, whereas the former does not. In the chapter, it is argued that one’s identity is a contingent matter because it is liable to change and grow, and the necessary core of one’s identity cannot be found.

Cyber Porn: Sexually explicit material available in the online environment as images and text or in audio-visual formats.

Ethics: The notion of right or wrong that can influence the conduct of people.

Identity: This is another difficult philosophical term. An individual’s identity is whatever all together makes that particular individual the person he or she is. Technology has recently made it possible for one’s identity to be stolen, i.e., for information pertaining to a particular individual to be taken away such that the perpetrator assumes the identity of the individual and engages in business dealings in the latter’s name, at the latter’s expense. This is one reason why privacy has become so important in today’s world.

Liberalism: This is a standard philosophical term whose exact meaning is difficult to pin down. Basically, liberalism is a term in political philosophy referring to a system of political theory wherein the individual is emphasized as an autonomous moral agent and is thus free to enter into political agreements or contracts. According to John Rawls, “[e]ach person is to have an equal right to the most extensive total system of equal basic liberties compatible with a similar system for all” (Rawls, 1971, p. 302).

Pornography: Sexually explicit material.

Privacy: According to the Merriam-Webster online dictionary, privacy is “the quality or state of being apart from company or observation.” One feels that privacy is needed when one feels that there is something about oneself that needs to be kept away from the eyes of others. The chapter develops the notion of ‘group privacy,’ the privacy belonging to a group of individuals, such as an ethnic group, a close community, or the family, rather than to particular individuals.

Pseudo-Photograph: An image, whether made by computer graphics or otherwise, which appears to be a photograph.

Virtuality: The existence of presence on an online platform, which may be defined by the lack of materiality.
Endnotes

1. Rananand also discusses the situation in Thailand as a surveillance state and how this affects the right to privacy (Rananand, 2007), and Kitiyadisai gives details of the policy concerning the smart ID card in Thailand (Kitiyadisai, 2005).
2. A possible corollary to the argument presented here is that I am in favor of emphasizing the right to privacy and its irreducibility to other rights. In fact there have been many debates on whether there is such a thing as the right to privacy. See, for example, Thomson, 1975; Scanlon, 1975; Rachels, 1975; Reiman, 1976. However, there is very little literature on group privacy at the moment.
3. Providing definitions of privacy appears to be a thriving academic industry. In a well-known article, Fried writes: “It is my thesis that privacy is not just one possible means among others to insure some other value, but that it is necessarily related to ends and relations of the most fundamental sort: respect, love, friendship and trust. Privacy is not merely a good technique for furthering these fundamental relations; rather without privacy they are simply inconceivable” (Fried, 1968, p. 477). In roughly the same vein, Parent states: “Privacy is the condition of not having undocumented personal knowledge about one possessed by others. A person’s privacy is diminished exactly to the degree that others possess this kind of knowledge about him” (Parent, 1983, p. 269), where personal information “consists of facts which most persons in a given society choose not to reveal about themselves (except to close friends, family, . . .) or of facts about which a particular individual is acutely sensitive and which he therefore does not choose to reveal about himself, even though most people don’t care if these same facts are widely known about themselves” (Parent, 1983, p. 270).
4. In the same spirit, Priscilla Regan argues that the concept of privacy is relational, adding that the concept is more useful if considered as relational rather than singular (Regan, 1995).
5. Reiman might counter this argument by saying that in the lack-of-privacy scenario the individuals still possess their privacy rights, even though they choose not to exercise them. But then the difference between his conception and mine in this case would be that, according to Reiman, the privacy right is kept inside, unexpressed, whereas on my view there is no privacy in the first place. Assuming that no condition arises that forces people to exercise their dormant privacy rights, there is no difference between his conception and mine at all. The individuals would go about their lives without privacy, and everything would remain the same whether or not they actually have privacy rights hidden inside their own selves. But if a condition requiring people to enforce privacy rights comes to the fore, such as when the authority abuses its power, then on my conception the people can well devise ‘privacy rights’ as a means to counter the authority. This devising does not seem to require that the concept is already there inside their selves.
6. Here Priscilla Regan is absolutely right when she says that the concept of privacy is relational and is necessary for democracy. Democracy is certainly one of the desired goals brought about by recognizing and enforcing the right to privacy. This is one of the best justifications for privacy (Regan, 2002, p. 399).
7. There is an obvious similarity between the Buddhist analysis of the self and the postmodernist one. According to the familiar postmodern stance, the self is ‘deconstructed’ in that it is analyzed and found to be composed of various disparate elements, much as the Buddhists say. However, a difference between Buddhism and postmodernism lies in the motivation. According to Buddhism, the purpose of realizing that the self is a construct is to let go of attachment to it, so that the practitioner realizes her primordial oneness with reality; hence there is clearly a soteriological goal in Buddhism that is lacking in postmodernism.
8. This conception is not a relativist one, because relativism presupposes that objective evaluation of different normative principles is not possible, since the different normative principles define their own source and justification of normativity. However, this undermines the very normative force that the principles are supposed to ground. Basing the justification of privacy on publicly observable entities does not entail that any system of justification is as good as any other, since that would defeat the purpose of having a justification in the first place. However, demanding that there be a metaphysically constructed basis for justification seems too strong. It is too strong because it tends to imply that there must be only one correct system of justification for all contexts. But justifying privacy certainly depends on contexts, for it is possible for there to be a society where privacy is not a concern at all, with the demand for protection of privacy arising as a response to the authority’s gaining too much power and using it in unjust ways.
9. In addition, that the autonomous moral agent is neither sufficient nor necessary for privacy should not be understood to imply that people are not autonomous moral agents in the sense that they are capable of making moral judgments and decisions on their own. Nothing in my argument leads to that absurd conclusion. The individuals in the no-privacy world, as we have seen, are as moral and as autonomous as any in our world; it is just that they do not take privacy seriously and are happy leading their lives totally in public view. To them there is nothing wrong with that. This does not mean that they are not moral agents.
10. Recently there has been an interest in how robots should be accorded moral respect and moral rights in some form. This is known as ‘roboethics.’ For example, Luciano Floridi and John Sanders (2007) argue that a concept of moral agents and patients could be applied to non-humans, such as early-stage robots and animals, even though they are not capable of feeling or free will. Another interesting source is “Robots may one day ask for citizenship” (2007).
11. There are only a few references in the literature on privacy that pay attention to group privacy. Olinger, Britz and Olivier (2007) discuss the African concept of ubuntu, which puts the interests of the group before those of the individuals. Patton (2000) recognizes the value of group privacy and sees that sociality plays a complementary role in the analysis and justification of privacy.
12. Research leading to this chapter was partly funded by a grant from the Thailand Research Fund, grant no. 4980016. I would like to thank Prof. Vichai Boonsaeng for his unfailing support. I also would like to thank the anonymous referees for this chapter, whose comments have led to many substantial improvements.
Chapter XXXIII
The Ethics of Gazing:
The Politics of Online Pornography

Y. Ibrahim
University of Brighton, UK
Abstract

This chapter situates the current debates on pornography in the virtual realm and its ethical and legal implications for users and researchers. It examines the ethics of gaze and politics of looking and how these acts of consumption can pose new moral and ethical dilemmas for societies and communities. Debates on online pornography have to integrate and reconcile dialectical elements and values such as aesthetics, privacy, rights, taboos, private pleasures and community standards while acknowledging the intrinsic features which define the Internet as a liminal space between the private and public and between individual consumption and infringement of societal norms. Beyond the legal and ethical paradigm, the chapter explores the issues of empowerment associated with online pornography as well as the epistemological and ontological problems which can face researchers.
Introduction

How we look at or consume the Internet often draws us into a moral space where the act of gazing can subvert or reiterate offline power arrangements, deviance and social norms. The Internet can bring both private pleasures and communal engagements. The dialectic between private pleasures and public needs raises various dilemmas, especially in the domain of the erotic and the aesthetic. These are relative and abstract terms
which can vary from individual to individual. However, in the public spaces of the Internet, the need for community standards of decency and acceptability often drags many of the debates about the Internet into a legal space, despite its description as a virtual sphere and the libertarian endeavours to keep it free from government and organizational control. The need to apply the laws of physical jurisdictions to the Internet has consumed much of the rhetoric about the Internet since its inception. While the Internet is a global resource, it is often ruled through
the laws of its physical embeddedness, and the global nature of the Internet also means that it is consumed and assessed through the differing cultural practices and norms that prevail all over the world. The Internet as a communication and information platform is then subject to varying codes of conduct by different communities, whether online or offline. The convergence of various technologies on the Internet has transformed it from a discursive space to one that accommodates sounds and images. The emergence of gaming culture and the simulation of reality through the design of gaming technology raise the age-old issues about image and representation, the effects these can have on our cognitive senses, and how these can in turn affect or mediate our ability to reason and engage with interactive technology. These questions become ever more salient with regard to online pornography or sexually explicit material. The distinctive elements of online porn are its use of multimedia, its ubiquity and consumer access to it. Due to the anonymity of the Internet and the difficulties in regulating this transnational and anonymous medium, transgressive forms of entertainment, including pornography, have flourished online. According to Spencer (1999: 242), the Internet is structured at one level around the economics and politics of consumption, at another level around the politics of individuality, and at another around communitarian concerns. Online pornography has been acknowledged as a relatively new form of pornography. Stack, Wasserman and Kern (2004) point out that there were about 900 pornography sites on the web in 1997; just a year later the figure had burgeoned to between 20,000 and 30,000 sites, with revenues reaching US$700 million by the late 1990s. Its growth has been attributed to the 'Triple A-Engine' of accessibility, affordability and anonymity (Cooper and Griffin-Shelley 2002: 11). Fisher and Barak (2001: 312) agree that 'spectacular growth in availability of sexually explicit material on the Internet has created an unprecedented opportunity
for individuals to have anonymous, cost-free, and unfettered access to an essentially unlimited range of sexually explicit texts, moving images and audio materials.' This increased accessibility and convenience, as well as pornographers' exploitation of e-commerce, means that the Internet makes it easier for individuals to come into contact with porn. Some suggest this has enabled the normalization of practices which may otherwise have been stigmatized in traditional markets, leading to a mainstreaming of cyber porn through its visibility and presence (see O'Toole 1998; Cronin and Davenport 2001: 35). A report by the research and policy group Third Way, based in Washington, highlights how this accessibility and presence can present new problems for Internet users, particularly children (cf. Whitehead 2005). According to the report, only 3% of more than 450 million individual porn web sites ask for proof of age. Additionally, the majority of these websites do not carry any warning of adult content, and nearly three-quarters display free teasers of pornographic images on their homepages, so children may accidentally come across a porn site while doing homework or surfing the web. Whitehead (2005: 18) contends that unlike offline pornography, which can be curbed through measures imposed by the community such as zoning laws and curfews, the politics of pornography has been altered through technology, where online pornography can be seen to be 'everywhere and nowhere.' This, Whitehead (2005) argues, has meant a loss of power for parents to control what their children come into contact with. In this sense, the Internet is a cultural artefact that can convey the varying norms of disparate audiences, consumers, citizens, societies, cultures and civilizations, and deciding what is ethical, legal or illegal can sometimes be a daunting exercise. The use of the Internet for various activities ranging from e-commerce to social activism means that the medium assumes various guises and can equally be used for social and political empowerment and agency as well as for crime
and fraudulent behaviour. While the realm of the erotic is often equated with individual pleasure and psyche, the proliferation of pornography on a public platform raises social, moral and legal concerns for states and governments.
The Politics of Looking

This chapter takes the position that there are moral and ethical implications involved in the act of gazing with reference to porn, and that the act of looking and consuming can capture or represent the social norms and power arrangements which prevail in our society. This draws directly from Laura Mulvey's (1975) 'male gaze', which film theorists in the 1970s employed to understand the sexual politics that permeated the screen, and how this celluloid imagination captured sexual politics in off-screen society. While later writers have suggested that both male and female subject positions have access to a controlling gaze (Rodowick 1982; Neale 1986), the objective here is not to critique gender inequalities but to extrapolate the politics of the gaze on the Internet. More specifically, this chapter uses the 'gaze' to decode not only sexual politics but also the legal and ethical dimensions with which societies have to grapple in terms of consuming and producing through the Internet, particularly with regard to pornography. The consumerization of sexuality and its ethical dimensions capture the politics of looking on the Internet. The gaze can also convey power dialectics in cyberspace, where the gaze can include the gaze of criminals on their victims and the gaze of the authorities on criminal activity (Adam 2002: 137). Denzin (cf. Adam 2002: 137) points out that a gaze is not simply a voyeuristic endeavour; rather, it is often regulated, has a trajectory, and evokes emotions and conduct which are differentially reciprocated and eroticized. A gaze may be active or passive, direct or indifferent. It will always be gendered, reflecting a masculine or feminine
perspective and may be the gaze of power or domination. As Kibby and Costello (1999: 352) point out, the structure of the look defines visual pleasure, as the relationship between the subject of the gaze and its object is based in the pleasure of control and consumption. In this sense the directing and direction of the look controls the desire and the fulfilment of that look. Art and nudity are fervently bound in our historical imagination; Walters (1978: 228) observes that since the 19th century the female body has been the signifier of the erotic, as 'the female nude became the central symbol of art'. With the evolution of new forms of media, Shallit (1996) contends that the first rule of any new medium is that it will be used for sex, as in the case of the printing press and photography, highlighting the propensity for sexual expression in the human condition. The Internet, with its global reach, is from Shallit's perspective also predisposed to such activities. Historically the gaze has often referred to an embodied gaze rather than an anonymous 'view from nowhere' (Adam 2002: 134). Alison Adam (2002: 135), in assessing the gaze on the Internet, argues that it is used to terrible effect in the new media platforms. She points out that in child pornography cases the difficulty of finally removing all copies of the images from computer networks means that one may continue to gaze upon the images long after the original perpetrator has been brought to justice. The ability to circulate and archive images on the Internet means that secondary forms of gaze, divorced from and yet associated with a sordid event, can create new conundrums for societies. Child pornography is almost universally illegal, and Adam (2002: 135) contends that the link between paedophilia and child pornography seems incontrovertible. Adam points out that technology is often perceived as a trigger for deviant activities, as with cyberstalking and Internet paedophile rings, where the availability of the technology has been a trigger for some perpetrators. Shirley O'Brien (cf. Catudal 2001:
182), a specialist in child protection and abuse, contends that paedophiles use pictures of children engaged in sexual acts to lower children's defences, so that children come to perceive that the activity is acceptable. While it is not the aim here to raise the issue of technological determinism, this nevertheless situates some of the concerns about the Internet in terms of encouraging deviant behaviour.
The Notion of Place

In analysing the debates about pornography and the Internet, there is a need to discuss some salient characteristics of the online environment. The intrinsic qualities of the Internet raise both methodological and ethical issues for the researcher. Additionally, notions of private and public, the discursive construction of communities on the Web, and the location of the virtual sphere as a place of market exchange also mediate issues of pornography on the Internet. This section considers some of the key features which influence the discussion of Internet pornography. It is necessary at this point to qualify that the terms 'offline' and 'online' are used in this chapter for purely analytical reasons. This does not implicitly assume that the virtual or online world is divorced from the real. It draws directly from Daniel Miller and Don Slater's (2002) argument that while new forms of social activities may flourish on the Internet, the use of the Internet is invariably mediated by the social and physical context of the user. In theorising the Internet, space has regularly been used as a way of understanding it. The concept that people are alive on the Internet is supported by the way it is described as a place and space that allows them to congregate (White 2002: 257). Often the physical space of the world and that of cyberspace are dichotomised as real and virtual, respectively. This dichotomisation often infuses a notion that the happenings in the virtual have different consequences and implications compared to those in the real. Don Slater (2002: 227) points out that virtuality is a condition in which materiality is self-evidently in doubt for both participants and analysts. This issue of non-materiality and the disembodiment of subjects, according to Slater, creates new forms of social order and identity in the online environment. As noted by White (2002: 262), the virtual sphere also justifies the perpetuation of the physical, copying social and cultural conventions from the natural world. In this sense, describing computer and Internet material as 'people' also makes stereotypes seem real. According to White (2002), it reproduces instances of harm by re-inscribing larger cultural prejudices, fears and intolerance. Jacobs (2004), in discussing Foucault's and de Certeau's theorisations of space, highlights how these perspectives can be applied to the Internet with ease. Foucault's notion of space is one in which space is not a delineated entity but one which constantly fragments and dissolves, reforming as other spaces. Thus space is disembodied and transient and is constantly reconfigured through its relationship with other spaces. Foucault's (1986: 25) notion of a heterotopic space is one whose function can shift over time and which can accommodate several sites that are incompatible or 'foreign to one another', juxtaposing them within a single space. Space is then theorised as ephemeral as well as capable of reforming while catering to a diverse range of activities and relationships. Michel de Certeau, on the other hand, theorises space as an entity constituted through the movements and operations of bodies and minds; hence space occurs through the activities that orient it, situate it or temporalize it (cf. Jacobs 2004). The ontological conceptualisation of space in different disciplines is then important for researchers in acknowledging and mapping what may constitute material or empirical data in studying activities on the Internet. The notions of community, materiality or identity will then
be influenced by these ontological assumptions, which will in turn affect the values and conduct we ascribe to practices. Besides the notion of space, the online environment is also often understood through the concept of mediation. Internet material is usually viewed on a screen and is produced from texts, images, animated sequences, and varied forms of programming. According to Jay David Bolter and Richard Grusin (1999), 'our culture wants to both multiply its media and to erase all traces of mediation: ideally it wants to erase its media in the very act of multiplying them.' As we are trained to ignore the screen and see things inside or through it, the spectator gazes at highly constructed images and sees them as natural, limiting the critical interventions that can be performed (White 2002: 258). As new technologies are increasingly embedded into our daily lives, there may be a naturalization in which we do not mediate our actions with regard to online engagements. For William Mitchell (1999: 33), a variety of devices allow both for connections and for the collapse of materiality into representations. Internet technologies such as webcams and personal profiles suggest that users can see into people's lives or even reach through the screen to gain access to individuals' personal domains and private thoughts (White 2002: 259). This act of gazing at the perverse or the unusual may become normalised through interactive technologies. This also raises the constant tension between notions of private and public in the global terrain of the Internet, where the boundaries between private and public become blurred and open to contestation. Cavanagh (cf. Bakardjieva & Feenberg 2000: 215), in applying the distinction between private and public space to virtual groups, observes that concepts such as public and private are 'far from monolithic definitions to guide action'. Instead, all such definitions are locally produced and therefore relative to the individual communal structures within which they are rendered meaningful. This entails observing the forms and rituals
of interaction, which can yield an understanding of how the private and public are constructed. Similarly, Gary Marx (2001: 160) argues that the terms public and private become contingent as they involve multiple meanings over time and across the cultures, contexts, kinds of interaction, technological controls and social categorisations which emerge in the digital terrain. For Capurro and Pingel (2002), analysing the Internet requires a digital ontology which captures the understanding of being in cyberspace. Online existence involves bodily abstraction, which means abstraction from bodily identity and individuality. Existence online also entails abstraction from our situational orientation, which means sharing time and space with others. In this sense, online orientation is both a form of presence and globally oriented. Digital ontology should then acknowledge the ethical dilemmas of Internet research, as there is a sustained tension between online existence and bodily existence (Capurro & Pingel 2002). The interface between the two can create new forms of sociality and community. Slater (2002: 229), in observing Internet Relay Chat (IRC) rooms which discuss sex, contends that such systems allow large numbers of users to log on simultaneously to a shared communicative space. This, he argues, sets the stage for a continuous flow of both sexually explicit imagery and sexually explicit interaction (i.e. flirting, cybersex, shared fantasies and performances), representing the formation of new sociality, community and market-type transactions. One significant element in the development of the Internet as a marketplace has been the availability of sexual material, and these electronic networks continue to feed the pornography boom and facilitate new methods for consumers to interact with sexual content as 'porn' (Spencer 1999). These networks highlight the 'privatising' potential of technology, especially in relation to sexual matters, while illuminating new forms of formal and informal exchange (Spencer 1999; Jacobs 2004: 72). According to John Frow (1997),
the commodity culture of late capitalism makes gift exchange a way to think through informal economies. On the Internet, besides users paying for pornography, different forms of exchange, reciprocation and sharing are taking place. Pornography users transfer data informally alongside their quest for pornographic commodities as they share and distribute sexuality (Jacobs 2004). As noted by Regan Shade (1996), the process of locating new technology within mainstream culture is linked to its popularity and purchase by significant numbers of people and households. Once a market has been established, tested and operationalised in relation to sexual material, it has in the past been restructured to incorporate more mainstream and culturally 'acceptable' material, as well as forms of regulation and classification of what is defined as 'adult' (Spencer 1999: 244). Regulation by state and commercial 'interests', along with that of the 'moral entrepreneurs' (Hall et al. 1978), has combined to create a set of loose alliances, categories and formal markets (Spencer 1999: 245). The digital ontology, besides recognising the tensions between online and bodily existence, also needs to reconcile the global nature of the Internet with the cultural differences that can produce different solutions to the issue of pornography. According to Kuipers (2006: 393), although American culture is dominant on the Internet, it is not a monoculture, as there are differences in national cultures. These are visible both in the entertainment people seek and in the dangers that people perceive and construct in such environments. The comparisons reveal not only the differences but also the mechanisms involved in the social construction of online dangers. In the US, for example, the belief in freedom has been central to American political thought (Fagelson 2002), and this tends to mediate the notion of harm and danger in terms of both legislation and social construction, in both online and offline spaces. As Fagelson points out, these different 'cultural logics' not only lead to a different
perception of dangerous entertainment but also to different solutions. The Internet, from being a rather unregulated enterprise a few years ago, has recently become the focus of multiple ethical concerns and debates, and in some cases these have amounted to moral panic (Cavanagh 1999; Bakardjieva & Feenberg 2000). In the last few years there has undoubtedly been increasing heterogeneity and decentralization on the Internet as a wider variety of producers and consumers participate in the making of globalized markets, and a contemporary notion of pornography should capture such networked sexual agency and politics (Jacobs 2004). Jacobs (2004) observes that private domains such as homes and bedrooms, as well as public sites such as cafes, have become sites for networked activity. There is then a need to negotiate new forms of ethics, where private domains such as homes become spaces for new forms of identity politics that extend beyond the conventional stereotypes of the 'patriarchal' or 'matriarchal' identities often associated with the domestic setting. The Internet in this sense creates a nexus between homes and markets, creating networked nodes where individuals are forced to negotiate complex social and ethical codes as well as sexual identities.
Image and Pornography

Before delving into the ethical and legal dimensions of pornography on the Internet, it is crucial to discuss some of the debates about images, as these have influenced rulings and legislation on pornography in different contexts. In the fields of art and photography, there is an acknowledgment of a societal tendency to emphasize the referential aspect of the image and to describe things depicted by the photograph as if they were real (see Oswell 2006; White 2002; Akdeniz 2002; Gillespie 2005). There is also a continuous documentary tradition which takes
a photograph as neutral and given (White 2002: 258). For Susan Sontag (1977), cultural artefacts such as photographs become conduits to the past as they seem to make people real. With online material, there is a seemingly indexical relationship between an Internet character and the user's body. The Internet may seem like a direct imprint of a user that rests directly on the other side of the screen (White 2002: 258-259). The status of the photograph as a verifiable fact continues to linger with the Internet despite the radical impact of digital technology on photographic practice. This has been problematic, as digital technology can manipulate and distort images, thus further complicating the relationship between reality and representation. David Oswell (2006) describes this as the ethics of virtual observation, where the referentiality of the image in representing a real scene of abuse, and our ethical response to it, are predicated on our perceptions of reality. This underscores the need for new legislation, law enforcement and social analytical frameworks for comprehending and tackling the production, distribution and consumption of images of child sexual abuse (Oswell 1999). In discussing the ethics of the virtual, Oswell observes that there are often epistemological inconsistencies and disjunctures between knowledge, law and sociological perspectives. 'The indecent photograph is not simply constructed as a recording of the "real", it is also interpreted through the subjectivity of the spectator, or more accurately the observer, in that it brings to the fore the referential role of the image and the forms of observation' (Oswell 2006: 246-258). The framing of online child pornographic images as cultural artefacts and objects of concern may be influenced primarily by the forms of observation that discern the relations between the scene of crime, the image and the user in terms of their virtuality, and may have less to do with the actual evidential status of these images. This means that
the ethics of gazing online are mediated by legality and illegality, normality and pathology as well as moral righteousness and condemnation.
Visual Depiction and Its Legal Status

In the context of the US and the UK, there is often an implicit understanding that child pornography is the record of actual child sexual abuse, and this understanding has become widely used in legal discourse and in the public discourses of law enforcement, charities and child protection agencies (Williams 1991: 88). Oswell's (2006: 248) argument is that child pornography is a 'highly mediatized issue in which the scene of abuse circulates only as a figural, de-objectified entity' which cannot be dismissed. The distance of the photograph from the actual event may warrant a more nonchalant attitude, as opposed to its proximity. The contrasting series of distal and proximal relations across user, image, scene of abuse and ethical responses helps frame the texture of debates pertaining to child pornography (cf. Cooper & Law 1995). The whole issue of space and distance is mediated, but this in itself can form the basis of moral condemnation, and undoubtedly the affordances of digital technologies frame many public responses to online child pornography. In the legal realm, the provisions made for digitally produced or altered images reflect the sort of ontological links that societies have crafted over the years between image and reality. The Internet has undeniably created new visibilities. According to Tate (1990: 1), while child pornography has been a problem for decades, until relatively recently it was a hidden crime. In the UK, the principal legislation that addresses indecent images of children is the Protection of Children Act 1978 (PoCA), which differentiates between the different mediums representing the abuse. Photographic images are the subject of this specific legislation focused on child pornography, whereas all other mediums are treated as obscene articles and are subject to general obscenity legislation (Gillespie 2005). The ontological status of visual depiction has legal bearing in the context of both the UK and the US. In the US, the Child Pornography Prevention Act (CPPA) of 1996 deals with the legal ramifications of visual depiction. In 2002, a ruling by the US Court of Appeals for the Ninth Circuit found that the CPPA ruled solely on the image without considering the set of contextual factors catered for in an earlier ruling in 1973, and in view of this the CPPA's emphasis on the image was 'overbroad' and 'unconstitutional' (Oswell 2006). It also reiterated that the proximity or distance of a photograph from the scene of the actual event is an important criterion in the legal understanding of child pornographic images. The court also overruled the argument in the CPPA that virtual child pornography is 'intrinsically related' to the sexual abuse of children, as the link between the two is contingent and indirect. Harm here does not necessarily accrue from the image but is dependent on the potential for subsequent criminal abuse. The ruling showed the court's unease with the assumption that the image takes up the position of the 'modest witness' whose account of the scene is 'unadorned, factual and compelling' (Haraway 1997: 26). Virtual child pornography has no link to crime or sexual abuse that has actually been committed, and in the same vein the virtual image has no necessary link to future cases of abuse. As with child pornography, virtual child pornography cannot be prohibited on the basis of its possible harm to some children or the possibility that some children may be exposed to it. In this sense the CPPA defies the 'principle that speech within the rights of adults to hear may not be silenced completely in an attempt to shield children from it' (Oswell 2006: 251). In April 2002, the US Supreme Court found the Child Pornography Prevention Act (CPPA) unconstitutional. Though it remains illegal to make, show or possess sexually explicit material
of children, the court found that there were not compelling reasons to prohibit the manufacture or exhibition of pictures which merely appear to be of children. As a consequence, two categories of pornography which were prohibited under the act are now permitted in the US: sexually explicit pictures of actual models who appear to be younger than they are, and computer-generated sexually explicit pictures of children (Levy 2002). In contrast, the UK treats indecent pseudo-photographs of children as indecent photographs of children. In the UK, the term pseudo-photograph was introduced by the Criminal Justice and Public Order Act of 1994. This act defines a 'pseudo-photograph' as 'an image, whether made by computer graphics or otherwise, which appears to be a photograph' (cf. Gillespie 2005: 435). With the police finding images on computers which could not be readily verified as those of a child or an adult, the 1994 amendment became the means of addressing this problem (Hansard 1994, cf. Gillespie 2005: 435). The Criminal Justice and Public Order Act of 1994 simply inserts 'pseudo-photographs' alongside 'photographs' (Oswell 2006: 251). The Act defines a child as a 'person under the age of 16' and interprets the pseudo-photograph as 'an image whether made by computer graphics or otherwise howsoever, which appears to be a photograph' (Hansard 1994, cf. Gillespie 2005: 435). In UK legislation, the indecent photograph and the indecent pseudo-photograph are not identical but are treated as identical. Downloading child pornography is a crime regardless of whether the downloaded images are records of actual abuse (Oswell 2006); both are deemed records of crime. In the context of the UK, there is no distinction between the original and the counterfeit: both are defined through the criterion of the 'virtual image, the simulacra' (Oswell 2006: 252). This legal response to pornography then prefigures a link between an image
and a crime, and it is a legal platform adopted by the authorities and child protection campaigners with reference to Internet child pornography. Oswell (2006: 252) contends that such an equivalence creates challenges in a court of law, as reservations can be raised about the evidential status of the image: whether the image is an image of an incident of sexual abuse at all. As Oswell observes, the photograph becomes the measure of the real and its observation; hence the implicit prioritization in UK law of virtual child pornography means that the crime of possessing, making or distributing child pornography (whether virtual or real) is a crime not only against a particular child but against all children, invoking it as a universal crime against childhood. The debate about the image becomes compounded as the image facilitates different forms of consumption and production, as well as market exchange. The issues of representation and the aestheticization of images with regard to the erotic or the traumatic yet again raise ethical dimensions, not just for researchers but also for young users, who are often framed in audience and legal literature as vulnerable and needing a certain degree of protection from such exposure. The distinction between pornography and erotica is often used in the context of debates about adult pornography to delineate between material with a sexual use and material with an aesthetic use (Oswell 2006: 253), but these categories depend on a society's value judgements, and hence the distinction between the two can be subjective and at times arbitrary. Diane Russell (1998: 3) defines sexually explicit material as material that 'combines sex and/or the exposure of genitals with abuse or degradation in a manner that appears to endorse, condone or encourage such behaviour'. McLelland's (2000) research indicates that what might be taboo in one culture may not apply to other cultures, and with a global conduit such as the Internet this means that the boundaries may be blurred. James Check (1985), on the other hand, terms pornography 'sexually explicit material' without further qualifying it. Nevertheless, what constitutes pornography is often contested in societies.
Legal Dimension of Pornography

Jenkins (2001) posits that child pornography can be accessed in various ways in the online environment, where it can be distributed via credit-card access websites, bulletin boards and encrypted emails, as well as through peer-to-peer file sharing. These are representative not only of the new configurations between producers and users but also of new forms of abuse (Oswell 2006: 253). As mentioned earlier, the Internet poses new questions about the reality, regulation, definition and availability of pornography, as it has dramatically increased the accessibility of pornography, and of violent pornographic images in particular (Gossett & Byrne 2002). Consequently, there is a need to explore issues of regulation and the social implications of pornography in societies. Prior to the Internet, the debate about pornography in the US centred on the First Amendment, and the need to regulate pornography was stressed on the grounds that pornography violates community standards. Under the US Constitution it is legal for adults to own and distribute most types of pornography. However, since the 2002 Supreme Court ruling over COPA (Child Online Protection Act), the US government has made serious efforts to monitor and put restrictions on Internet pornography traffic by arguing that juveniles or minors (18 years or younger) are automatically exposed to and harmed by pornographic images (Jacobs 2004). Child pornography sexualizes inequality. This is not a contingent fact about our social relations but a reflection of children's physical, mental and psychological immaturity. For that reason, sexualizing children for adult viewers is necessarily sexualizing inequality (Levy 2002: 322).
The Internet, however, extends the debate further, for it is enmeshed with concerns about the feasibility and legality of regulating personal computer access to the worldwide market of pornography on the Internet (Gossett & Byrne 2002). Gossett and Byrne (2002) argue that debates about pornography are incomplete without locating the Internet as a unique and rapidly expanding medium for disseminating images through the convergence of a range of technologies. The bias of technology can have an effect on the politics of consumption on the Internet. As Taylor and Quayle (2003: 159-163) point out, one of the principal elements of Internet-facilitated child pornography is an exponential growth in the size of the individual collection. Here the imaging and archiving features of Internet technology cannot be overlooked. In the UK, for example, legislation does not distinguish between accessing images for personal use (including downloading) and the creation and distribution of images (Gillespie 2005). Censorship and obscenity laws have often provided the basis for tackling pornography on the Internet, and this has been the case in the US, Australia, the UK and France. Cyber-libertarians such as John Perry Barlow (2000) had prophesied that the conflict between web users and ISPs would be mediated through governments' enforcement of censorship laws. In Australia, the Western Australia Censorship Act (1996) is designed to protect the local and state territory from the influx of pornographic material over the Internet (Jacobs 2004: 71). South Australia's Censorship Act, on the other hand, criminalizes any content that is deemed 'unsuitable for children online' even if the content is intended for adults. This leaves content open to police interpretation, as authorities can evaluate and arrest users who post information which is deemed offensive to children. A similar bill passed in Massachusetts, USA, makes it illegal to upload or download images containing the nudity of a person aged below 18, even when the images are not considered 'pornographic' (Jacobs
2004). Some laws, while enacted for a different rationale, may also conflict with each other. For example, in the US, censorship in the guise of the Communications Decency Act may override acts enacted to protect personal freedom (Capurro & Pingel 2002: 193). In general, concern with Internet content clusters around common themes: children, violence, sex, pornography and extremist political activity (Spencer 1999: 241). However, much of the legal rhetoric regarding illegal content on the Internet has revolved around child pornography, despite the fact that child pornography and paedophilia are not Internet-specific problems. Besides child pornography, law enforcement bodies are also concerned with the existence of commercial websites featuring sexually explicit content (Akdeniz 2002). In the UK, content created and maintained by UK citizens that may be deemed obscene is regulated under the Obscene Publications Act. The danger of pornography to adults is much more disputed, and the arguments for pornography often include freedom of speech and the expression of civil liberties, the right to choose and the right to privacy (Kuipers 2006).
Community Standards and Pornography

Akdeniz (2002) stresses that there is a difference between illegal and harmful content: the former is criminalized by national law, while the latter is merely deemed offensive or disgusting by some sections of society. In tandem with this, Jacobs (2004) queries whether community standards of decency can be transmitted from one place to another. With the Internet being perceived as a global resource, the issue of community standards invites different cultural and legal approaches to solving the issues at hand. While there is often a societal acknowledgement of child pornography as a heinous and universal crime, there is nevertheless a difficulty in
defining what constitutes child pornography, as different jurisdictions can define it differently; equally, the issue of obscenity can be culturally mediated in different environments. Consensus on what constitutes child pornography can emerge within the context of supra-national agencies such as the Council of Europe, which defines child pornography as 'any audiovisual material which uses children in a sexual context' (Oswell 2006: 246). Akdeniz (2002) points out that the legal regulation of this sort of Internet content may differ from country to country, and this is certainly the case within the European Union, where member countries have taken different approaches to sexually explicit material. In terms of Internet content and young users, harm remains a criterion, and this is accepted within the jurisprudence of the European Court of Human Rights (Akdeniz 2002). Harm in societies is again culturally defined. In terms of illegal or harmful content, the UK adopts a multi-layered approach involving both national and international levels. The government also favours a co-regulatory approach in which there is a role to be played by the industry's own self-regulation. Before the Internet, the US already had some anti-pornography legislation which has since been applied to the Internet. In the US, pornographic sites are legally obliged to refuse access to minors (Kuipers 2006). Until recently, further attempts to penalize or regulate Internet pornography failed because they conflicted with the First Amendment. Beyond legal restraints, countries can also encourage the use of technology to filter undesirable content deemed harmful to children. In June 2003, the Children's Internet Protection Act (CIPA) was upheld by the US Supreme Court, forcing libraries and schools to block pornographic sites. The CIPA requires public libraries to filter their computers if they want to retain federal funding, but such software is not completely reliable. Judith Levine (2002) has argued that it is important to promote media literacy and moral
intelligence rather than to deal with the Internet through technology. This means that governments and societies should also invest in public awareness and education campaigns instead of sidestepping controversial sexuality debates, which can polarise the public.
The Feminist Perspective

Beyond the legal perspectives which associate pornography with paedophilia, crime, pathology or victimization, there is also a need to view the phenomenon of Internet pornography from the feminist perspective, where new forms of agency, empowerment and domination can be analysed. According to Jacobs (2004: 68), the Internet is the ultimate play zone for adult entertainment, as it creates new channels for corporate and small-scale pornography, artists and amateurs, to produce, distribute and consume products. Lane (2000: 113) equally concurs that the Internet creates new forms of agency for women, as it provides the ability for a woman to work in the pornography industry without the intervention of a (typically male) magazine editor or video producer, enabling her to be the head of her own Playboy Channel. Traditionally, pornography has been viewed as undermining the agency of women, as women's bodies are the focus of mainstream pornographic images and are displayed as objects of desire for the male gaze (Levy 2002: 321). Women in pornography are objectified as passive entities for the enjoyment of male subjects. The objectification of women through their sexuality then reinforces the representation of women as submissive and even as desiring to be raped. According to Rae Langton and Caroline West (1999), pornography functions to introduce new presuppositions into our common conversations: that women enjoy rape, for instance. This may constitute new ways of reconfiguring the politics of the male gaze. Feminists have criticized pornography not only because it produces inequality and because it
might lead to actual sexual abuse, but also on the grounds that it is the eroticization of inequality. It presents women as naturally assuming a subordinate position in sex. As a result, women have their sexuality shaped by social norms, and when these norms dictate that sexuality is inherently an unequal transaction that is sexy because it is unequal, and that women are naturally suited for the subordinate role in this transaction, then women cannot help but absorb the message. Catharine MacKinnon (1987: 7) points out that the eroticization of inequality 'organizes women's pleasure so as to give us a stake in our own subordination'. It encourages men and women to think of women as naturally inferior (Levy 2002: 322). From this perspective, mainstream pornography is contingently harmful as it depicts women as inferior (Levy 2002: 321). Walter Kendrick (1996) and Lynn Hunt (1997) observe that attempts to restrict pornography have historically been part of a larger objective to limit access to public spaces. Pornography emerged within male elite groups or 'male domains' which thoroughly feared the democratization of culture and protected secret collections from the lower classes and women's communities. The ability to seek out and share sexual practice in response to online commodities and communities creates new public freedoms (Duncan 1996: 136-137). With the Internet there is both the reiteration of the male gaze and a resistance to it through new forms of decentralization and heterogeneity. These dialectics between the reinforcement of and resistance to stereotypes construct the Internet as a more complex space compared to traditional forms of pornography, one where different forms of sexuality and counter-gaze can flourish due to the convergence of various technologies as well as the increasing use and skills of users as producers. These new forms of gaze can be interpreted as resistance, new forms of agency or the negotiation of entrenched practices, as well as
the reiteration of the male gaze and sustenance of stereotypes established by the print industry.
Pornography from Different Perspectives

While feminist perspectives seek to interpret new forms of agency or domination on the Internet, the phenomenon of Internet pornography can be viewed differently by different disciplines informed by varying ontological assumptions. For example, psychological perspectives on the image, unlike those of the legal paradigm, do not emphasise jurisdictional issues or those of taste and decency; instead, they may map the relationship between collections of images and how they interface with the behaviour of the user. Taylor et al. (cf. Oswell 2006: 253), building on Kenneth Lanning's (1992) behavioural analysis of child molesters, draw a distinction between child pornography and erotica. The former is seen as explicitly sexual in terms of the content of the image, while the latter may refer to any image used by an individual for sexual purposes. They argue that 'an objective means of judging the nature of collections independent of legal provisions would help international comparison. The collections may be symptomatic of the pathology of the individual while revealing the broader social relations mediated within online environments. Trading, swapping and selling images construct forms of market and gift relations and corresponding forms of sociality' (Oswell 2006). In the process they can lead to the construction of different typologies in observing pornography from the perspective of social psychology. In such research the personality of the paedophile is revealed not through direct evidence but through the manifestation of motivated actions, which provide evidence of the personality. For Taylor et al. (2001: 99), such collections are not accidents but are a 'result of deliberate choices made by the individual to acquire sexual material'. Thus the
individual agency and the nature of the collection and its attendant interface become the premise and focus of research. In the same vein, social scientists may study the occurrence of communities on the Internet which may legitimate discourses deemed deviant in offline society. As Cass Sunstein (2001: 65) points out, 'on the net you gain access to a community that legitimises your views. If you are operating in the real world, then meeting other paedophiles will require some organization and will be difficult, but online you'll find hundreds and thousands of people who share your views worldwide.' Due to the intrinsic characteristics of the Internet, Alison Adam (2002) agrees that 'the ease of use and relative anonymity afforded by the Internet leads some individuals to join pathological Internet communities.' Additionally, the formation of informal economies such as gift exchange and the act of engaging in virtual communities can also be interpreted and understood through different perspectives. While downloading images of child pornography may be categorized as a criminal activity in certain physical jurisdictions, social scientists studying the Internet may be interested in observing issues of technical skill and media literacy in determining users' level of engagement. Rana Dasgupta's (2002) research on pornography surfing, for example, reveals that the act of surfing seeks to reduce the gap between the desiring body and the dispersed organs of sight, hearing and information processing. Dasgupta contends that the mental and bodily attachment of web users to distributed sexuality is a distinctive characteristic of pornography consumption habits. The market exists due to the libidinal attachment it cultivates in consumers. Such paradigms seek to decipher the different ways in which technologies can fracture or assemble cognitive functions in the online medium. Our lives, particularly our lives as researchers, and correspondingly our research objects and methods, are being informed and thus transformed by digital devices, and particularly by
digital networks (Capurro and Pingel 2002). In acknowledging this, Capurro and Pingel stress that there may be a need to create a digital ontology to comprehend digital networks and connections. Ontology in a Heideggerian sense refers to the human capacity for the construction of the world. For Heidegger, 'our perception of the finite openness of our existence allows us to produce not just new things but new world projects' (Capurro & Pingel 2002: 190). They contend that in the space of online communication, what exactly constitutes the subject domain of research and what exactly constitutes a scientific fact are being radicalized (Capurro & Pingel 2002: 190). Dag Elgesem (cf. White 2002: 262) suggests that research can support undesirable cultural frameworks, and researchers must consider the possibility that their research on specific behaviours may work to legitimate those behaviours. In this sense, Internet research ethics should facilitate critical and sensitive engagements with the more troubling aspects of the Internet, where the online setting abounds with ageist, classist, homophobic, racist and sexist imagery and ideas (White 2002: 262). As an entertainment platform, the Internet can also be saturated with stereotypes, and our online engagements can legitimate the sustenance of such implicit belief systems. Jeffrey Ow (2000), for example, has illustrated how gaming environments can sustain race stereotypes and our conceptions of the 'other'. In this sense, Internet 'users' or pleasure seekers can employ limited conceptions of the 'other' for their own amusement, just as private pleasures can politicise sexualities in the public domain of the Internet. As researchers, we need to examine carefully the new behaviour which surrounds the creation and use of pornography on the Internet, to understand how far it reinforces and even encourages new and more liberal modes of expression, and, on the darker side, how it reinforces and even encourages abusive behaviour and relationships, particularly against women and children (Adam
2002: 141). There is also a need for reflexivity in mapping how technology is linked to desire and the ways in which it has been a trigger for some perpetrators of antisocial and criminal behaviour.
Conclusion

Pornography and media have had a long historical relationship, and the advent of new interactive media technologies presents the possibility of universal access as well as new dilemmas which may invoke local responses to global problems. With the Internet creating new forms of community and visibility through digital images, pornography is susceptible both to market pressures in cyberspace and to new forms of ethical and moral condemnation by offline societies. These standards can vary, are subjective and will involve the cultural and value judgements of the community and its users. There is inevitably an ongoing endeavour to keep the Internet free from governance; nevertheless, pornography, and in particular child pornography, involves crimes which have warranted both global co-ordination and local legislation. In the process, research on online pornography has been limited and constrained, often clouding the value judgement of researchers and forcing them to take a moral stand before embarking on their research agenda. The ways in which a medium is used for exploring sexual identity, the erotic or the aesthetic often link private and intimate pleasures with public standards of communal ethics as well as taste and decency. The notion of the 'gaze' captures the conventionalised and intrinsic power arrangements in society as well as the sexual politics and practices that can occur in new interactive platforms like the Internet. In examining pornography on the Internet, there has been a need to highlight various qualities of the Internet which can pose regulatory challenges for authorities as
well as for researchers. Inevitably, the issues of virtuality and physical jurisdiction are important components in situating legality and acceptable practice with regard to pornography on the Internet. The ontological status of the digital image and its interpretation by the courts also qualify legal and illegal activities on the Internet. Beyond the legal perspective, the notion of the marketplace, issues of empowerment and pleasure, and the exploration of the aesthetic or the erotic by individuals or niche communities create the need to devise a digital ontology to understand the activities of the Internet through different disciplinary as well as epistemological perspectives. The feminist perspective on pornography provides a background against which to situate pornography and the male gaze traditionally, but equally it requires researchers to integrate the heterogeneous and decentralized sexual activities and identities which can occur in new interactive platforms on the Internet.
References

Adam, A. (2002) Cyberstalking and Internet Pornography: Gender and the Gaze. Ethics and Information Technology, 4: 133-142.
Akdeniz, Y. (2002) UK Government and the Control of Internet Content. Computer Law and Security Report.
Bakardjieva, M. and Feenberg, A. (2000) Involving the Virtual Subject. Ethics and Information Technology, 2: 233-240.
Barlow, J. P. (2000) Censorship 2000. Posted on Internet mailing list. Available: http://www.pbs.org/wgbh/pages/frontline/shows/porn/interviews/asher
Barthes, R. (1981) Camera Lucida: Reflections on Photography, Richard Howard (trans.). New York: Hill and Wang.
Bearman, D. and Trant, J. (1998) Authenticity of Digital Resources. Archives and Museum Informatics. Available: www.archimuse.com. Accessed 15/05/2007.
Bolter, J. D. and Grusin, R. (1999) Remediation: Understanding New Media. Cambridge, MA: MIT Press.
Capurro, R. and Pingel, C. (2002) Ethical Issues of Online Communication Research. Ethics and Information Technology, 4: 189-194.
Catudal, J. N. (2001) Censorship, the Internet and Child Pornography Law of 1996: A Critique. In Richard A. Spinello and Herman T. Tavani (eds.) Readings in Cyberethics. Sudbury, MA: Jones and Bartlett Publishers.
Cavanagh, A. (1999) Behaviour in Public: Ethics in Online Ethnography. Available: http://www.socio.demon.co.uk/6/cavanagh.html. Accessed 12/01/07.
Check, J. (1985) The Effects of Violent and Non-Violent Pornography. Ottawa: Department of Justice, Canada.
Cooper, A. & Griffin-Shelley, E. (2002) A Quick Tour of On-Line Sexuality: Part 1. Annals of the American Psychotherapy Association, 5: 11-13.
Cronin, B. & Davenport, E. (2001) E-rogenous Zones: Positing Pornography in the Digital Economy. The Information Society, 17: 33-48.
Dasgupta, R. (2002) Sexworks/Networks: What Do People Get Out of Internet Porn? Available: http://sarai.net. Accessed 17/01/07.
Duncan, N. (1996) Body Space. London: Routledge.
Fagelson, D. (2002) Perfectionist Liberalism, Tolerance and American Law. Res Publica, 8(1): 41-70.
Fisher, W. & Barak, A. (2001) Internet Pornography: A Social Psychological Perspective on Internet Sexuality. Journal of Sex Research, 38: 313-323.
Foucault, M. (1986) Of Other Spaces. Diacritics, 16(1): 22-27.
Frow, J. (1997) Time and Commodity Exchange. Oxford: Clarendon Press.
Gillespie, A. A. (2005) Indecent Images of Children: The Ever-Changing Law. Child Abuse Review, 14: 430-443.
Gossett, J. L. and Byrne, S. (2002) Click Here: A Content Analysis of Internet Rape Sites. Gender & Society, 16(5): 689-709.
Hall, S., Clarke, J., Critcher, C., Jefferson, T. and Roberts, B. (1978) Policing the Crisis: Mugging, the State and Law and Order. London: Macmillan.
Haraway, D. (1997) Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse. London: Routledge.
Heidegger, M. (1976) Sein und Zeit. Tübingen: Max Niemeyer Verlag (English trans. Being and Time, J. Macquarrie and E. Robinson, Oxford, 1987).
Hunt, L. (1997) The Invention of Pornography: Obscenity and the Origins of Modernity, 1500-1800. Cambridge, MA: MIT Zone Books.
Jacobs, K. (2004) Pornography in Small Places and Other Spaces. Cultural Studies, 18(1): 67-83.
Jenkins, P. (2001) Beyond Tolerance: Child Pornography on the Internet. New York: New York University Press.
Kendrick, W. (1996) The Secret Museum: Pornography in Modern Culture. Berkeley, CA: University of California Press.
Kibby, M. and Costello, B. (1999) Displaying the Phallus: Masculinity and Performance of Sexuality on the Internet. Men and Masculinities, 1(4): 352-364.
Kuipers, G. (2006) The Social Construction of Digital Danger: Debating, Defusing and Inflating the Moral Dangers of Online Humour and Pornography in the Netherlands and the US. New Media and Society, 8(3): 379-400.
Lane, F. S. (2000) Obscene Profits: The Entrepreneurs of Pornography in the Cyber Age. New York: Routledge.
Langton, R. and West, C. (1999) Scorekeeping in a Pornographic Language Game. Australasian Journal of Philosophy, 77(3): 303-319.
Levine, J. (2002) Harmful to Minors: The Perils of Protecting Children from Sex. Minneapolis, MN: University of Minnesota Press.
Levy, N. (2002) Virtual Child Pornography: The Eroticization of Inequality. Ethics and Information Technology, 4: 319-323.
MacKinnon, C. (1987) Feminism Unmodified: Discourses on Life and Law. Cambridge, MA: Harvard University Press.
Marx, G. (2001) Murky Conceptual Waters: The Public and the Private. Ethics and Information Technology, 3: 157-169.
McLelland, M. (2001) Out and About on the Japanese Gay Net. In C. Berry, F. Martin & A. Yue (eds.) Mobile Cultures: New Media and Queer Asia. Durham, NC: Duke University Press.
Mitchell, W. J. (1999) Telematics Takes Command: E-topia. Cambridge, MA: MIT Press.
Mulvey, L. (1975) Visual Pleasure and Narrative Cinema. Screen, 16: 6-18.
Neale, S. (1986) Sexual Difference in the Cinema: Issues of Fantasy, Narrative and the Look. Oxford Literary Review, 8: 123-132.
Oswell, D. (1999) The Dark Side of Cyberspace: Internet Content Regulation and Child Protection. Convergence, 5(4): 42-62.
Oswell, D. (2006) When Images Matter: Internet Child Pornography, Forms of Observation and an Ethics of the Virtual. Information, Communication and Society, 9(2): 244-265.
O'Toole, L. (1998) Pornocopia: Porn, Sex, Technology and Desire. London: Serpent's Tail.
Ow, J. (2000) The Revenge of the Yellowfaced Cyborg: The Rape of Digital Geishas and the Colonization of Cyber-Coolies in 3D Realms' Shadow Warrior. In Beth Kolko (ed.) Race in Cyberspace. New York: Routledge, 51-67.
Regan Shade, L. (1996) Is There Free Speech on the Internet? Censorship in the Global Information Infrastructure. In R. Shields (ed.) Cultures of Internet: Virtual Spaces, Real Histories, Living Bodies. London: Sage.
Roberson, S. (ed.) (2001) Defining Travel: Diverse Visions. Jackson, MS: University Press of Mississippi.
Rodowick, D. N. (1982) The Difficulty of Difference. Wide Angle, 5: 4-15.
Russell, D. (1998) Dangerous Relationships: Pornography, Misogyny, and Rape. Thousand Oaks, CA: Sage.
Shallit, J. (1996) Public Networks and Censorship. In P. Ludlow (ed.) High Noon on the Electronic Frontier. Cambridge, MA: MIT Press, 275-289.
Slater, D. (2002) Making Things Real: Ethics and Order on the Internet. Theory, Culture and Society, 19(5/6): 227-245.
Sontag, S. (1977) On Photography. New York: Penguin Books.
Spencer, J. (1999) Crime on the Internet: Its Presentation and Representation. The Howard Journal, 38(3): 241-251.
Stack, S., Wasserman, I. & Kern, R. (2004) Adult Social Bonds and Use of Internet Pornography. Social Science Quarterly, 85(1): 75-89.
Stone, A. R. (1995) The War of Desire and Technology at the Close of the Mechanical Age. Cambridge, MA: MIT Press.
Sunstein, C. (2001) Republic.com. Princeton, NJ: Princeton University Press.
Tate, T. (1990) Child Pornography: An Investigation. London: Methuen.
Taylor, M., Holland, G. & Quayle, E. (2001) Typology of Paedophile Picture Collections. Police Journal, 74(2): 97-107.
Vedder, A. and Wachbroit, R. (2003) Reliability of Information on the Internet: Some Distinctions. Ethics and Information Technology, 5: 211-215.
Walters, M. (1978) The Nude Male: A New Perspective. New York: Paddington.
White, M. (2002) Representations or People? Ethics and Information Technology, 4: 249-266.
Whitehead, B. (2005) Online Porn: How Do We Keep It From Our Kids? Commonweal, 132: 18.
Williams, N. (1991) False Images: Telling the Truth about Pornography. London: Kingsway Publications.
529
Chapter XXXIV
The Ethics of Deception in Cyberspace

Neil C. Rowe
U.S. Naval Postgraduate School, USA
Abstract

We examine the main ethical issues concerning deception in cyberspace. We first discuss the concept of deception and survey ethical theories applicable to cyberspace. We then examine deception for commercial gain, such as spam, phishing, spyware, deceptive commercial software, and dishonest games. We next examine deception used in attacks on computer systems, including identity deception, Trojan horses, denial of service, eavesdropping, record manipulation, and social engineering. We then consider several less well-known types of deception used for defensive purposes, including honeypots, honeytokens, defensive obstructionism, false excuses, deceptive intelligence collection, and strategic deception. In each case we assess the ethical arguments for and against the use of deception. We argue that deception in cyberspace is sometimes unethical and sometimes ethical.
Introduction

Deception is a ubiquitous human phenomenon (Ford, 1996). As the Internet has evolved and diversified, it is not surprising to find increasing deception in cyberspace. The widening range of Internet users in particular, and the development of Internet commerce, have provided many opportunities and incentives. Deception is a technique for persuading or manipulating people. We will define deception as
anything done to cause people to have incorrect knowledge of the state of the world. This includes lying but also misleading. Deception in cyberspace can be used both offensively (to manipulate or attack computer systems, networks, and their users) and defensively (to defend against manipulations or attacks). Most ethical theories proscribe most forms of deception while permitting some kinds (Bok, 1978). Deception smooths social interactions, controls malicious people, and enables doing something for someone's unrecognized benefit (Nyberg, 1994). The harms of deception are the failure to accomplish desired goals, and the often long-term damage to the trust necessary to sustain social relationships, without which much human activity could not be accomplished.

Quinn (2006) provides a useful categorization of ethical theories applicable to computer technology. He identifies subjective and cultural relativism, divine-command theory, Kantian rule-based ethics, social-contract theory, and act and rule utilitarianism. Subjective relativism, cultural relativism, and divine-command theory do not fit well with cyberspace because cyberspace is a social resource that spans diverse cultures with diverse opinions, and it needs cooperation to work properly. Kantian rule-based ethics is difficult to apply in general, though it helps resolve specific issues. Social-contract theory is useful but may not provide specific enough guidance to resolve a particular ethical dilemma. Alternative formulations of cyberethics, such as the "disclosive" approach of Brey (2000), can also be explored, but they are relatively new. That leaves utilitarianism, which attempts to decide ethical questions by assessing the net benefit to society of a particular act or ethical rule. So we will follow a utilitarian approach here, and rule utilitarianism in particular. We shall say a particular policy of deception is ethical if its net of benefits minus costs, in general to a society in the long term, exceeds that of not deceiving; otherwise it is unethical (Artz, 1994). Benefits include achieving the goals of the deceiver and the value of those goals. Costs include the direct damage caused by the deception, as when its goals are malicious; direct costs of the deception being discovered, such as retaliation; and indirect costs of discovery, such as increased distrust between the parties. In cyberspace, for instance, if someone is attacking your computer, a deception that could make them go away could be justified if the cost of a successful attack is hours of work to reinstall the operating system and repair damaged files.
Both benefits and costs must be multiplied by probabilities to obtain expected values when they are uncertain, due to such factors as whether the deception will succeed or whether it will be discovered.
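This expected-value reasoning can be made concrete. The following sketch is our illustration, not a tool from the chapter; the function name and all probabilities and dollar figures are hypothetical, loosely mirroring the example of deceiving an attacker to avoid reinstalling an operating system.

```python
# Illustrative rule-utilitarian tally of a deception policy.  All names
# and numbers here are hypothetical, chosen only to mirror the text's
# example of deceiving an attacker to protect a workstation.

def expected_net_benefit(benefit, p_success, costs):
    """Expected value of deceiving: benefit weighted by its probability,
    minus each cost weighted by the probability it is incurred.

    benefit   -- value of the deceiver's goal if the deception works
    p_success -- probability the deception succeeds
    costs     -- list of (cost, probability) pairs, e.g. retaliation
                 if discovered, or lost trust of legitimate users
    """
    expected_cost = sum(c * p for c, p in costs)
    return benefit * p_success - expected_cost

# Deceiving an attacker spares ~8 hours of reinstallation work ($400).
value = expected_net_benefit(
    benefit=400.0,
    p_success=0.7,
    costs=[(50.0, 0.2),   # deception discovered, attacker retaliates
           (20.0, 0.1)],  # a legitimate user is briefly misled
)
print(f"Expected net benefit: ${value:.2f}")  # positive => deception defensible
```

On these invented numbers the policy nets $268, so a rule utilitarian would permit it; the same arithmetic for a malicious deception, where the direct damage enters as a large and near-certain cost, typically comes out negative.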
Background

Deception can be verbal or nonverbal (Vrij, 2000). Verbal methods include outright lying, equivocation, failing to state key information, false claims, and false excuses. Nonverbal methods include mimicry, decoying, and various nonverbal forms of pretense. People use deception every day without being aware of it, and many areas of human activity, such as police work, law, politics, business negotiation, military actions, and entertainment, could not function without deliberate deception. Much deception as practiced is unjustified, however. Hence there is an extensive literature on detection of deception (Vrij, 2000). Human deceivers try to control the information they reveal, but it is hard to control all the channels of communication, and the truth often "leaks out" through secondary channels. For instance, people who lie tend to fidget, hold their bodies rigidly, and use an unusual tone of voice. Deception can also be detected in verbal utterances from the use of vagueness, exaggeration, a high frequency of negative terms, and especially inconsistency between different assertions. But deception detection is difficult in general, and attempts to build automated "lie detectors" have not been very successful.

Cyberspace is well suited for many forms of deception because of the difficulty of obtaining corroborating information when assertions are made (Fogg, 2003). For instance, a policeman can pretend to be a 14-year-old girl online to entrap pedophiles. At the same time, a "phisher" can pretend to be a bank by implementing a fake Web site to steal personal data from victims. In addition, the connectivity of the Internet enables social interactions over long international distances,
and the speed of the Internet permits damage to be done and criminals to disappear quickly (Spinello, 2003). Cyberspace relationships are usually weaker than real-world relationships because of the limited communications bandwidth, so there is less pressure of social obligation to act in a trustworthy manner (Castelfranchi, 2000). Many problems of deception are thus worse in cyberspace than in the real world, although there are important differences between offensive (attacking) and defensive deception (Rowe and Rothstein, 2004). Software itself can deceive, since its deception methods can be programmed. We will argue that the ethical issues with such software devolve on the programmer, since the software acts as the programmer's agent; thus the same issues arise for deceptive software as for the deceptive people who write it. However, new developments in artificial intelligence are continuing to increase the human-like characteristics of software, and we can now conceive of automated agents with their own distinct ethics (Allen, Wallach, and Smit, 2006). The two main professional societies for information-technology professionals, the ACM and the IEEE Computer Society, have a joint policy on ethics (ACM/IEEE-CS, 1999). But it does not address many key issues. For instance, it proscribes deception in speaking to the public about software, but says nothing about writing deliberately deceptive software or launching malicious cyber-attacks. So we need to develop further ethical principles for cyberspace.
Deception in Electronic Commerce and Commercial Software

The first association most people make with the term "deception in cyberspace" is fraudulent electronic commerce. Cyberspace supports analogs of many common scams. However, new ethical problems occur in electronic commerce as well.
Electronic-Commerce Scams

Commercial email and Web sites are susceptible to a wide variety of deceptions ranging from misleading advertising to outright fraud (Leitch & Warren, 2000). Since the bandwidth of the Internet limits the amount of information one can obtain about a product for sale or a customer in electronic commerce, deception is more of a danger than in traditional commerce. For instance, ineffective medicines or cheap imitation electronic products can be advertised, customers can take products without paying by various ploys, and both buyers and sellers in electronic auctions can be cheated by parties failing to deliver goods or payment. Poorly designed policies of an electronic business can foster deception by clients (Harris & Spence, 2002). The goal of almost all electronic-commerce deceptions is unjustified material gain, and such deception is unethical. Electronic commerce is also unethical when key information for a transaction is deliberately withheld. Most countries regulate electronic commerce much as they regulate traditional commerce, and many deceptions in cyberspace are illegal under those laws. For instance, it is unethical to fool customers in electronic commerce by showing a picture of a related but not identical product to the one a customer wants to buy. The most serious unethical deceptions satisfy the legal definition of fraud, such as selling stock in nonexistent companies.
Email Deceptions: Spam and Phishing

Unwanted commercial email or "spam" often involves deception, since it is advertising and most advertising is deliberately deceptive. Advertising is usually considered acceptable business practice unless it becomes too obviously deceptive. For instance, it is considered acceptable for a vehicle manufacturer to claim that their truck is "tough" without any evidence,
but it is unacceptable to give false statistics showing that it has a longer lifetime than the average truck. However, antispam laws in the United States and other countries broadly prohibit spam because the automated breadth of its delivery makes it a more serious nuisance than traditional forms of advertising.

The nature of cyberspace makes certain kinds of deceptive commercial activity more effective. For instance, links in email and on the World Wide Web can be deliberately misleading, so you think you are going to a different site than you really are. This can be done by making the wording of a text link false or vague, or by making the link an image. Misleading links often occur in spam. Such deceptions are unethical because they abuse the trust necessary for online commerce.

Phishing is an especially serious email threat (Berghel, 2006). This kind of scam is designed to accomplish identity theft by inducing a victim, by email, to visit a Web site where their personal information such as account numbers is collected. Phishing requires mimicry of both the email and the Web site of a legitimate business, usually a bank or other financial service. The victim is encouraged to visit the site through deceptive urgency in the form of alleged emergencies or serious threats, as well as by deceptive links. This works best when the mimicry is very precise, as small details such as the address of the mimicking site can betray the deception. For instance, a phisher might use "bankofamerica.net" or "bank_of_america.com" to imitate the legitimate site "bankofamerica.com". Phishing deceptions must be considered very unethical since they are committed for outright fraud with a large potential payoff.
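The lookalike addresses just mentioned are simple enough that even a crude filter can flag many of them. The sketch below is our illustration, not an anti-phishing product: it assumes a small whitelist of known legitimate domains and flags names that are near-identical to one of them or that reuse a name under another top-level domain. Real filters combine many more signals.

```python
# Minimal lookalike-domain heuristic (illustrative, not production
# anti-phishing).  Flags a domain that is nearly identical to a known
# legitimate one, or that reuses its name under a different TLD.

from difflib import SequenceMatcher

LEGITIMATE = {"bankofamerica.com"}   # hypothetical whitelist

def is_suspicious(domain, threshold=0.85):
    domain = domain.lower()
    if domain in LEGITIMATE:
        return False
    for legit in LEGITIMATE:
        name, _tld = legit.rsplit(".", 1)
        if domain.startswith(name + "."):         # e.g. bankofamerica.net
            return True
        similarity = SequenceMatcher(None, domain, legit).ratio()
        if similarity >= threshold:               # e.g. bank_of_america.com
            return True
    return False

for d in ("bankofamerica.com", "bankofamerica.net", "bank_of_america.com"):
    print(d, "->", "suspicious" if is_suspicious(d) else "ok")
```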
Adware and Spyware

Adware is software that displays advertisements on your computer. Often it is tied to a desirable free software product; the adware is then the price you pay to run the product. This is ethical if the user is aware of the tradeoff, but many adware suppliers
deceptively conceal the purpose of the software in long user agreements so that the user cannot provide informed consent.

Spyware is a more serious and rapidly growing problem (Awad & Fitzgerald, 2005). It is software that secretly reports details of what a user is doing on their computer to remote Web sites. It can provide valuable marketing data about what Web sites a user visits, but it almost always violates privacy and can enable identity theft. Again, some spyware vendors deceptively force the user to sign an authorizing agreement that is long and complex. Deception is more necessary than with adware because spyware does not benefit the victim. Awad and Fitzgerald (2005) cite four major deceptions of spyware: (1) changing settings or key system parameters, (2) surreptitious downloading, (3) bundling with legitimate software, and (4) slowing systems and causing crashes. These are unethical because they violate privacy, hurt the reliability of a system, and violate the principle that the owners of something (here, a computer system) should understand and approve its settings and configuration, much as the owner of a house has the right to approve all activities taking place in it.
Deception by Commercial Software

Though infrequently acknowledged, legitimate and legal commercial software can be deliberately deceptive. Pressures of the marketplace can make software try to look better than it really is (Gotterbarn, 1999). For instance, help facilities may not provide much help, or most of the control buttons may be useless or redundant. Certain practices lend themselves to deceptive exploitation. For instance, many commercial products take advantage of their "captive audience" to advertise other products of the same manufacturer. Often the main purpose of free products is to market non-free products. The user can be harassed repeatedly with "Do you want to upgrade?" requests or advertising images scattered over a page,
or can actually be misled into thinking an additional product must be purchased. Again, it is a matter of degree whether this is unethical.

Commercial software can also deceive to try to discourage users from using other products. Microsoft, for instance, has repeatedly added its own enhancements to the open HTML standard for Web pages in an attempt to lock users into its own fee-based Web products, and has used similar ploys with several other open standards and products. Such techniques can be considered unethical interference with fair competition, and are often described and implemented deceptively.

Commercial software can also provide deceptively concealed unnecessary services. For instance, the Microsoft Windows operating system now contains a large number of features that most users never use and are not aware of, apparently as a reason to encourage users to keep buying updates although its basic features have not changed significantly since Windows 95 in 1995. Such unnecessary code increases the vulnerability of the software to malicious attacks, and can be considered unethical in much the same way as a doctor who performs unnecessary surgery on patients to increase his or her income.
Deception in Electronic Games

Computer games are ubiquitous, and deception is becoming a serious problem with them. Most cheating in games is the acquisition of additional information about the game known to only a few players ("cheat codes") and is not usually deceptive. But now with the growth of distributed multiplayer games, deceptions occur where players manipulate their computer software to unfairly give themselves additional capabilities and knowledge; this is possible because some of the game software must be kept on the player's computer for the game to run sufficiently fast (Hoglund and McGraw, 2007). Deceptions can involve finding loopholes in game rules in abnormal situations, such as by aborting play halfway through an action. Many deceptions are considered unethical by game companies and other gamers, and perpetrators can be blacklisted from games because of it. Deception can also occur in multiplayer games with collusion between players to form alliances or build up credits by staged encounters.
Deception in Cyber-Attacks

We now consider deception in direct attacks on computers and software in cyberspace, where deception is used to control a computer or network for personal or group gain.
Eavesdropping

Since users of computers and networks often share the same hardware, there is a danger of a user being able to read the private information of another if the software is not designed well. An ethical person should refuse to do this. However, malicious people could deliberately eavesdrop on computer systems to learn secrets. They could do this by surreptitiously installing "sniffers", software that displays all traffic sent over a computer network. Sniffers are a legitimate and useful tool for system administrators, so they cannot simply be outlawed. The ethical problems of eavesdropping in cyberspace are similar to those of eavesdropping in general; users should be told if system administrators are doing it. But there are reliable technical solutions for preserving privacy on computer systems, such as passwords and encryption.
Personal-Identity Deception

It is easy to assume a new identity on the Internet, where (unless you connect a video camera) no one can see what you look like or how you are acting. Using new online names is actually encouraged in many online communities where anonymity is beneficial, as in online dating or in support groups for people with stigmatizing problems.
Some users of the Internet go beyond this to develop online personas different from their real ones. Such personas may be harmless and can provide psychological benefits in role-playing, which can be helpful in psychotherapy or just in helping to understand a new point of view. For instance, a teenager can adopt the name "gamemaster" and pretend to be more knowledgeable than they really are to experience more adult interactions. When such deceptions have net benefits, they are usually ethical. However, role playing can grow to interfere with relationships in the real world, as when someone lies repeatedly about their accomplishments in discussion groups to reinforce serious psychological problems, in which case it becomes unethical. Deceptive role playing is key in "social engineering" attacks on computer systems, as discussed below.
Address Spoofing

A frequent feature of malicious attacks on computer systems is deception about the attacker's location. Data packets on computer networks are supposed to identify the Internet address (IP address) of their source. When attackers are engaged in illegal activities, they often give false addresses. While ordinary software cannot fake addresses, some illegal software can. Most authorities consider this unethical in the same way that malicious anonymous letters are unethical: the author of provocative data should assume responsibility for it. Accountability is important in maintaining trust in a society.

Denial of Service

A common attack in cyberspace tries to tie up the resources of a victim computer or site with useless processing, in what is called "denial of service". Its goal is to disable a site of which the attacker disapproves. This usually involves deception in the form of insincere requests for services. For instance, a denial-of-service attack on a Web site can involve sending millions of identical requests for the same Web page within a second from millions of sites. Denial of service exploits timing and surprise. Denial of service has been used as a political tool against organizations that the attacker dislikes, as a form of civil disobedience (Manion & Goodrum, 2000). But it is really a form of vandalism that attacks the usability of the resource, and is thus more invasive than traditional civil disobedience such as boycotts. Thus most security professionals consider it unethical even if used against ethically questionable targets. Denial of service could create serious harm if targeted against a critical resource such as a hospital.

Record Manipulation and Privilege Escalation

Some attacks target data on computers, modifying or deleting it in unauthorized ways. Since computers can record everything that occurs on them, some cyber-attackers delete such "audit" data; since audit data is supposed to be an objective record, tampering with it is definitely unethical. Similarly unethical or illegal are tampering with job rating reports with the goal of improving one's own rating, vandalizing Web pages to discredit the owner, and changing business financial records for personal financial gain. An important kind of record manipulation often done by attackers concerns their rights on a system. Attackers generally try to become authorized as system administrators of a system because that category of users has special privileges in being able to modify the operating system. A number of classic techniques such as buffer overflows can enable such "privilege escalation" to facilitate other deceptions. Again, this is abnormal usage of a computer system with potentially dangerous consequences and should be considered unethical.
Trojan Horses

Many common cyber-attacks involve Trojan horses. These are programs that conceal a malicious intent by clever design. Common Trojan horses are computer viruses (malicious code inserted into programs), worms (malicious self-reproducing processes), and spyware. Viruses and worms are decreasing in importance, while spyware is increasing (Sipior, Ward, & Roselli, 2005). It is often difficult to trace the source of Trojan horses because they attempt to subvert accountability. Trojan horses are generally both unethical and illegal, since they are a form of fraud as to the nature of computer programs. Trojan horses can use steganography to communicate with controllers at other sites. Steganography comprises methods for sending concealed messages in innocent-looking data, such as a message hidden in the least-significant bits of a picture. Steganography is ethically questionable because activities on a computer system should be known to the system and auditable. The most dangerous Trojan horses are rootkits, programs used to replace key components of the operating system of a computer. They invisibly take control of the computer, exploiting it for purposes such as identity theft, sending spam email, or attacking and taking control of other computers. Rootkits can also "hijack" sessions, connecting you to Internet sites other than those you intend. Sets of computers with installed rootkits can form "botnets", armies of computers that attackers can control remotely for purposes such as denial of service. Botnets have grown quickly as a problem in the last few years. Rootkits and botnets are definitely unethical and illegal.
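To make the steganography example concrete, here is a minimal sketch of least-significant-bit hiding. It operates on a plain byte buffer standing in for the pixel bytes of an image; this is our illustration of the general idea, not any particular tool used by attackers, and real steganography adds encryption and spreads the bits to resist detection.

```python
# Minimal least-significant-bit steganography over a byte buffer
# (standing in for the pixel bytes of an image).  Illustrative only.

def embed(carrier: bytearray, message: bytes) -> bytearray:
    """Write each bit of the message into the lowest bit of a carrier byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit     # alter only the lowest bit
    return out

def extract(carrier: bytes, length: int) -> bytes:
    """Read back `length` bytes from the carrier's low bits."""
    bits = [b & 1 for b in carrier[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(length))

cover = bytearray(range(256))              # pretend pixel data
stego = embed(cover, b"meet at 9")
print(extract(stego, 9))                   # b'meet at 9'
```

Because only the lowest bit of each byte changes, the doctored picture is visually indistinguishable from the original, which is exactly why the text calls such traffic hard to audit.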
Hacking

"Hacking" is often cited as a key ethical issue for computers and networks. While the term is also used for software modifications by anyone besides the authors of the software, it usually means malicious attempts by amateur computer aficionados to break into computers for which they are not authorized. Almost always deception is used to circumvent the authorization. Since most hackers begin when young, hacking is often similar to teenage trespassing in the physical world: people do it for the excitement and the thrill of the forbidden.

Malicious hackers have proposed various ethical justifications (Spafford, 1992). One is the "socialistic" idea that computer systems are public resources available to whoever wants them. This made sense in the 1960s and 1970s when computer systems were rare resources, but is not valid today when computers are everywhere and are often filled with sensitive personal data. Another is the claim that hackers are honing valuable skills for the software industry. But the skills necessary for hacking are quite specialized, with little relevance to software design, since software rarely embodies stealth and deceit. Another claim is that hackers offer free testing of computer systems to alert their owners to security flaws. A few small commercial companies do use hacking methods ("red teaming") to test the security of computers and networks. However, such information is often hard to use since it does not suggest clear countermeasures. Beyond that, we rarely see altruistic behavior from hackers, such as reporting flaws to clearinghouses like www.cert.org. On the contrary, malicious hackers often are very competitive and want to conceal their attack tricks as much as possible. Discoveries of attack methods are almost always made by victims through inspection of log records and running intrusion-detection systems, not by hacker confessions. Thus we conclude that malicious hacking is unethical.

Note that many of the justifications advanced by hackers involve self-deception. Much cyber-attacking involves willful disregard of the implications of actions, such as not thinking anything strange about needing to collect your paycheck at a small Pacific island with liberal banking laws
(Spammer X, 2004). Having so little understanding of what one is doing is unethical in itself.
Information Warfare

Coordinated cyber-attacks can be used as tactics and strategies of warfare, in what is called "information warfare" (Bayles, 2001). This raises ethical issues common to all forms of warfare (Nardin, 1998), but also some new ones (Arquilla, 1999). Ethical issues in warfare are covered in part by the Geneva Conventions and other international agreements. One ethical stance on warfare is pacifism, which views all warfare, including information warfare, as unethical (Miller, 1991). Even for those who accept limited warfare, information warfare raises serious ethical problems because it is difficult to limit its collateral damage, a key issue with the Geneva Conventions. When one drops a bomb, the damage is limited to a small physical area; but when one uses a computer virus, the damage can easily spread from attacked computers to civilian computers. Spreading of damage is facilitated by the widespread use of key software, such as the Microsoft Windows operating system, so that if an attack succeeds on one machine it can succeed on many others. The Geneva Conventions also prohibit unprovoked attacks and say that counterattacks must be proportionate to the attack and of a similar type; thus cyber-attacks cannot be counterattacks to conventional attacks. Even legitimate counterattacks can easily become disproportionate in cyberspace because of the difficulty in predicting and monitoring how far an attack will spread (Molander & Siang, 1998), and thus become illegal. Another problem is that enemies in cyberspace try to be inaccessible to one another, so to reach an adversary an attacker must trespass through many intermediate computers, few of whose owners would cooperate if they knew (Himma, 2004). Trespassing is illegal and unethical.
Damage assessment is another key ethical problem with information warfare. Unlike bombs, the damage caused by cyber-attacks may be subtle, and it may be difficult to find and fix. As is well known to programmers, what seems to be a bug in one module may actually be a result of a bug in another, and attackers can exploit this to hide damage. So cyber-attacks may continue to harm victims long after a peace treaty is signed, much like chemical weapons or battlefield mines, and thus may be unethical. Since the source of a cyber-attack is difficult to track, it also may be impossible to attribute an attack to a particular nation, or false evidence may be planted to implicate an innocent nation. Finally, most cyber-attacks can only be used once against an adversary, since once the attack is recognized, forensics experts can usually quickly determine its mechanism and devise countermeasures (Ranum, 2004). This means that cyber-weapons are very cost-ineffective, and thus unethical considering all the more useful things on which a society could spend money.
Social Engineering

On the border between cyberspace deception and conventional human deception are various forms of "social engineering" designed to fool people into revealing key information, such as passwords (Mitnick, 2002). The ethical issues are similar to those of conventional deception. Since many users of cyberspace do not understand the reasons for security procedures, social engineering can be a very effective alternative for accomplishing attacks beyond the purely technical means described above.
Defensive Deceptions in Response to Cyber-Attacks

Offensive deceptions are difficult to justify ethically from a utilitarian standpoint
because their benefit to society is small and their harms can be large. But the balance is different for defensive deceptions. We often see deception in military tactics as the recourse of a weaker force defending itself against a stronger one (Dunnigan and Nofi, 2001). Defenders in cyberspace are often weaker than attackers because today's attackers can exploit a wide range of tools with the element of surprise, even against highly secure computers and networks. So deception might be a good defensive tactic in cyberspace.
Honeypots and Honeynets

The most common defensive deception today is the honeypot. This is a computer designed solely to be an attack victim, with the goal of collecting data about attack methods (The Honeynet Project, 2004). Honeypots can provide the first notice of new kinds of attacks and permit experimentation with countermeasures. Honeypots are often implemented in groups on local-area networks called honeynets. Honeypots do not need to entice attacks, as they get sufficient numbers of attacks just from normal attacker network reconnaissance. Honeypots must deceive to be effective because attackers want to avoid them. Attackers know their activities will be recorded on a honeypot, revealing secrets of attack methods and exposing them to possible legal retaliation. Furthermore, honeypots appear easy to attack, but this is illusory. Honeypots must control the spread of attacks from them to other machines, since honeypots are only permitted on a network if they do not hurt their neighbor machines much. Hence most honeypots limit the number of outgoing network connections in any one time period. Since attackers will avoid a honeypot if they can recognize it, defenders should camouflage honeypots to look like ordinary machines. For example, the Sebek tool of the Honeynet Project (www.honeynet.org) uses ideas from rootkit technology
to modify the listing features of the operating system to conceal itself and its data. Honeypots should also conceal their logging mechanisms, which tend to be more elaborate than on ordinary machines, and they should conceal the mechanisms that limit outgoing attacks. Such deceptions can be considered ethical since all users of honeypots except the administrator are malicious, and the deceptions help reduce attacks both in the short run (by wasting attacker time in fruitless activity) and in the long run (by recording attacks so countermeasures can be found). Note that users of any computer system must implicitly consent to some forms of monitoring for the system to work well. Since attackers want to avoid honeypots, another useful deception for defenders is to make ordinary systems appear to be honeypots: "fake honeypots" (Rowe, Duong, & Custy, 2006). Real honeypots may be detected by smart attackers from subtle clues in the code of their operating system, unusual delays in responding to commands, and the absence of evidence of typical user activity in email and document files (Holz & Raynal, 2005). Many of these clues can be faked, inducing attackers to go away without a fight. Such deceptions appear quite ethical because only the most malicious users will look for them.
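The basic honeypot idea is simple enough to sketch in a few lines. The toy listener below is our illustration, not the Honeynet Project's software: it sits on an otherwise unused port, presents a plausible service banner (the banner string is invented), and records who connects and what they send. As the text explains, a production honeypot must also conceal its logging and throttle outbound traffic, which this sketch does not attempt.

```python
# Toy low-interaction honeypot: accept connections on an unused port,
# show a plausible banner, and log the attacker's address and first
# input.  Illustrative only; real honeypots also hide their logging
# and limit outgoing connections.

import datetime
import socket

def run_honeypot(host="0.0.0.0", port=2222,
                 banner=b"SSH-2.0-OpenSSH_4.3\r\n"):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        conn.sendall(banner)              # look like a real service
        conn.settimeout(5.0)
        try:
            data = conn.recv(1024)        # capture the attacker's opening move
        except socket.timeout:
            data = b""
        with open("honeypot.log", "a") as log:
            log.write(f"{datetime.datetime.now()} {addr[0]} {data!r}\n")
        conn.close()

if __name__ == "__main__":
    run_honeypot()
```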
Defensive Disinformation

Another defensive ploy is to provide attackers with false information to confuse them. "Low-interaction" honeypots do this by pretending to offer services, ports, and sites that they do not actually have (Cohen & Koike, 2003). They do this by mimicking the initial steps by which the real protocols would respond. Then, when an attacker tries to use these fake resources, they are met after a certain point in their interaction with error messages or failures to respond. As long as innocent users are not provided with the addresses of such machines, this should be quite ethical. To be more convincing, honeypot file systems can be populated with real files and data from ordinary systems, as defensive mimicry.
Directory design is more important in this than file design, since attackers are unlikely to examine many files. Convincing directories can be constructed by simulating the average number of directories, the number of files per directory, the depth of the directory tree, the lengths of directory and file names, the sizes of files, the creation and modification dates of directories and files, and so on (Rowe, 2006). To intrigue attackers, passwords can be associated with fake files, or their content can be random characters suggesting encryption. These deceptions should be ethical since only malicious users such as spies should be examining the directories and files of a honeypot.

Disinformation can also be provided to an attacker of an ordinary machine rather than a honeypot. If intrusion-detection software assesses a user as sufficiently suspicious, deceptions can be foisted on them to interfere with their typical attack goals. Easy deceptions that are hard to disprove are false error messages claiming unavailability of a resource that the attacker needs for their attack, such as network access or system-administrator privileges (Rowe, 2007). Such deceptions may be more effective in inducing an attacker to leave than outright resource denial because they can sound like temporary obstacles, encouraging an attacker to keep wasting time. These deceptions may not be ethical if the intrusion-detection assessment of a legitimate user is often incorrect, but the cost of a false positive is only wasted time.
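A decoy directory generator of the kind just described can be sketched briefly. This is our hypothetical illustration of the statistical-mimicry idea, not the system of Rowe (2006); every distribution and parameter below is invented for the example.

```python
# Sketch of decoy-directory generation by statistical mimicry: sample
# plausible names, sizes, and nesting as the text describes.  All
# distribution parameters here are invented for illustration.

import random
import string

def fake_name(mean_len=8):
    n = max(3, int(random.gauss(mean_len, 2)))     # name-length distribution
    return "".join(random.choices(string.ascii_lowercase, k=n))

def fake_tree(depth=3, dirs_per_level=2, files_per_dir=4, path=""):
    entries = []
    for _ in range(files_per_dir):
        size = int(random.lognormvariate(9, 1.5))  # plausible file sizes
        entries.append((path + fake_name() + ".doc", size))
    if depth > 0:
        for _ in range(dirs_per_level):
            sub = path + fake_name() + "/"
            entries.extend(fake_tree(depth - 1, dirs_per_level,
                                     files_per_dir, sub))
    return entries

random.seed(7)                                     # reproducible demo
for name, size in fake_tree()[:5]:
    print(f"{size:>9}  {name}")
```

Matching these statistics to those of real file systems is what makes the decoy convincing; a tree whose names, sizes, or dates look machine-generated is itself a honeypot clue of the kind attackers search for.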
Deceptive Delaying

A classic ploy of people faced with suspicious requests is to delay responding until more information is obtained about the request or requester. This is useful in cyberspace too. For instance, an Internet site under a denial-of-service attack should respond very slowly to requests; this not only deceives attackers into thinking they are slowing the system, but provides more time to implement countermeasures like blocking certain
incoming addresses. Such defensive tactics should be mostly ethical because they only slow activities. However, they could be triggered by legitimate users who accidentally do something suspicious, such as downloading a large number of files together. If the likelihood of such behavior is non-negligible, the ethics of delaying tactics is more questionable.
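A minimal version of this delaying tactic, sometimes called "tarpitting", fits in a few lines. The sketch below is our illustration, with invented thresholds: the more requests a client floods in, the longer each response stalls, so the attacker sees what looks like a buckling server rather than a refusal.

```python
# Sketch of deceptive delaying ("tarpitting"): stall heavy requesters
# instead of refusing them, so an attack appears to be working while
# defenders buy time.  The thresholds are invented for illustration.

import time
from collections import defaultdict

requests_seen = defaultdict(int)   # per-client request counts

def handle_request(client_ip):
    requests_seen[client_ip] += 1
    count = requests_seen[client_ip]
    if count > 100:                          # looks like flooding
        time.sleep(min(30.0, count / 100))   # stall harder as it worsens
    return "response"
```

A real server would age these counters over time so that a legitimate user's burst of downloads does not keep them tarpitted forever, which is exactly the false-positive worry raised above.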
Deceptive Intelligence Gathering on the Attacker

Deception can also gather information about an attacker to help track them or to prepare a legal case against them. One tactic is "honeytokens", digital bait for attackers (Spitzner, 2005):

• Addresses of honeypots can be posted on Web sites, on bulletin boards, and in blogs to increase the malicious traffic to them.
• Passwords to machines can be planted in files on a honeypot to be found by attackers breaking into it.
• Fake credit card numbers can be planted in a file, and any orders against those numbers then checked for email or delivery addresses.
• Phishing Web sites can be automatically accessed and their forms filled out with obviously false data, making it easier to detect phishers when they submit this data to banks and other victims.
• Software with hidden spyware can be posted on a Web site for free download. If an attacker installs it, it can report where they are coming from and what they are doing.
Again, it can be argued that these deceptions are ethical because they are passive acts that require additional illegal actions by the attacker. However, they are partly "entrapment", and beyond the ethical concerns there are legal limitations on how much law enforcement can use them.
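A honeytoken is just bait the defender can recognize later, which makes it easy to sketch. The example below is our illustration of the fake-credit-card tactic from the list above: it mints numbers that pass the standard Luhn checksum (so they look plausible) from an invented prefix, and then watches logs for any reuse. No real card ranges are involved.

```python
# Honeytoken sketch: mint fake-but-plausible credit card numbers (they
# pass the Luhn checksum but belong to no one), plant them as bait,
# and scan logs for any later use.  Illustrative only.

import random

def luhn_check_digit(partial):
    """Check digit that makes `partial` + digit pass the Luhn test."""
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:                # double every second digit...
            d *= 2
            if d > 9:
                d -= 9                # ...folding back to one digit
        total += d
    return (10 - total % 10) % 10

def mint_honeytoken(prefix="4"):
    body = prefix + "".join(random.choices("0123456789", k=14))
    return body + str(luhn_check_digit(body))

BAIT = {mint_honeytoken() for _ in range(3)}   # plant these in a decoy file

def log_line_is_hit(line):
    return any(token in line for token in BAIT)  # a hit means the bait was taken
```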
Strategic Deception

Deception can also be broader in cyberspace. For instance, an organization could pretend that it has better cyberspace defensive capabilities than it really does, to discourage adversaries from attacking it. It could falsely announce installation of a new software technology or a plan to buy highly secure machines. Such strategic deception does raise serious ethical problems, because broad-scale deception affects many people and some are likely to be hurt by it. For instance, contractors may waste time trying to sell software to the organization for the nonexistent capability, or the public may assume the organization is impregnable and underestimate risks to it. It is also difficult to maintain a secret involving many people, so many such deceptions will be revealed early.

Future Trends

Deception will increase in cyberspace in the future as more human activities continue to shift there. For attacks and deceptive electronic commerce in particular, increasing countermeasures mean that successful attacks must increase their deceptiveness to remain effective. At the same time, defensive deception is a new idea with much potential, and it will show significant increases as well. This means that ethical issues of what is acceptable deception in cyberspace will be increasingly important. But as deception increases, awareness of it will increase as well, and ethical issues will be better formulated and understood by people. Following consensus on key ethics issues, we will see increasing numbers of laws and policies relating to cyberspace to enforce these ethical insights.

Conclusion

Many kinds of deception are possible in cyberspace, as enumerated here. Offensive deceptions are generally unethical, and defensive deceptions are generally ethical. Deceptions in electronic commerce may or may not be ethical depending on their nature and the consensus of society.

References

ACM/IEEE-CS Joint Task Force on Software Engineering Ethics and Professional Practices (1999). Software engineering code of ethics and professional practice. Retrieved April 15, 2007, from www.acm.org/service/se/code.htm

Allen, C., Wallach, W., & Smit, I. (2006, July-August). Why machine ethics? IEEE Intelligent Systems, 21(4), 12-17.

Arquilla, J. (1999). Ethics and information warfare. In Khalilzad, Z., White, J., & Marshall, A. (Eds.), Strategic appraisal: The changing role of information in warfare (pp. 379-401). Santa Monica, CA: Rand Corporation.

Artz, J. (1994). Virtue versus utility: Alternative foundations for computer ethics. In Proc. of Conference on Ethics in the Computer Age, Gatlinburg, Tennessee, USA, 16-21.

Awad, N., & Fitzgerald, K. (2005, August). The deceptive behaviors that offend us most about spyware. Communications of the ACM, 48(8), 55-60.

Bayles, W. (2001, Spring). Network attack. Parameters, US Army War College Quarterly, 31, 44-58.

Berghel, H. (2006, April). Phishing mongers and posers. Communications of the ACM, 49(4), 21-25.
Bok, S. (1978). Lying: Moral choice in public and private life. New York: Pantheon.

Brey, P. (2000, December). Disclosive computer ethics. Computers and Society, 10-16.

Castelfranchi, C. (2000, June). Artificial liars: Why computers will (necessarily) deceive us and each other. Ethics and Information Technology, 2(2), 113-119.

Cohen, F., & Koike, D. (2003). Leading attackers through attack graphs with deceptions. Computers and Security, 22(5), 402-411.

Dunnigan, J., & Nofi, A. (2001). Victory and deceit, second edition: Deception and trickery in war. San Jose, CA: Writers Club Press.

Fogg, B. (2003). Persuasive technology: Using computers to change what we think and do. San Francisco, CA: Morgan Kaufmann.

Ford, C. V. (1996). Lies! Lies!! Lies!!! The psychology of deceit. Washington, DC: American Psychiatric Press.

Gotterbarn, D. (1999, November-December). How the new Software Engineering Code of Ethics affects you. IEEE Software, 16(6), 58-64.

Harris, L., & Spence, L. (2002). The ethics of ebanking. Journal of Electronic Commerce Research, 3(2), 59-66.

Himma, K. (2004). The ethics of tracing hacker attacks through the machines of innocent persons. International Journal of Information Ethics, 2(11).

Hoglund, G., & McGraw, G. (2007). Exploiting online games: Cheating massively distributed systems. Reading, MA: Addison-Wesley.

Holz, T., & Raynal, F. (2005, June). Detecting honeypots and other suspicious environments. In Proc. 6th SMC Information Assurance Workshop, West Point, NY, 29-36.

The Honeynet Project (2004). Know your enemy, 2nd ed. Boston: Addison-Wesley.

Leitch, S., & Warren, M. (2000). Ethics and electronic commerce. In Selected papers from the second Australian Institute conference on computer ethics, Canberra, Australia (pp. 56-59).
Manion, M., & Goodrum, A. (2000, June). Terrorism or civil disobedience: Toward a hacktivist ethic. Computers and Society (ACM SIGCAS), 30(2), 14-19.

Miller, R. (1991). Interpretations of conflict: Ethics, pacifism, and the just-war tradition. Chicago, IL: University of Chicago Press.

Mitnick, K. (2002). The art of deception. New York: Cyber Age Books.

Molander, R., & Siang, S. (1998, Fall). The legitimization of strategic information warfare: Ethical considerations. AAAS Professional Ethics Report, 11(4). Retrieved November 23, 2005, from www.aaas.org/spp/sfrl/sfrl.htm

Nardin, T. (Ed.) (1998). The ethics of war and peace. Princeton, NJ: Princeton University Press.

Quinn, M. (2006). Ethics for the information age. Boston: Pearson Addison-Wesley.

Ranum, M. (2004). The myth of homeland security. Indianapolis, IN: Wiley.

Rowe, N. (2007, May). Finding logically consistent resource-deception plans for defense in cyberspace. In Proc. 3rd International Symposium on Security in Networks and Distributed Systems, Niagara Falls, Ontario, Canada. New York: IEEE Press.

Rowe, N. (2006, January). Measuring the effectiveness of honeypot counter-counterdeception. In Proc. Hawaii International Conference on Systems Sciences, Poipu, HI.
Rowe, N., Duong, B., & Custy, E. (2006, June). Fake honeypots: A defensive tactic for cyberspace. In Proc. 7th IEEE Workshop on Information Assurance, West Point, New York. New York: IEEE Press.

Rowe, N., & Rothstein, H. (2004, July). Two taxonomies of deception for attacks on information systems. Journal of Information Warfare, 3(2), 27-39.

Sipior, J., Ward, B., & Roselli, G. (2005, August). A United States perspective on the ethical and legal issues of spyware. In Proc. of ICEC, Xi'an, China.

"Spammer X" (2004). Inside the spam cartel. Rockland, MA: Syngress.

Spafford, E. (1992). Are computer hacker break-ins ethical? Journal of Systems and Software, 17, 41-47.

Spinello, R. (2003). CyberEthics: Morality and law in cyberspace. Sudbury, MA: Jones and Bartlett.

Spitzner, L. (2005). Honeytokens: The other honeypot. Retrieved May 30, 2005, from www.securityfocus.com/infocus/1713

Vrij, A. (2000). Detecting lies and deceit: The psychology of lying and the implications for professional practice. Chichester, UK: Wiley.
Key Terms

Botnet: A network of computers with rootkits that are secretly controlled by a cyber-attacker.

Disinformation: Lies or propaganda.

Hacker: An amateur attacker in cyberspace.

Honeypot: A computer system intended solely to be a victim of cyber-attacks so as to collect valuable intelligence about attack methods.

Identity Deception: Pretending to be someone you are not.

Information Warfare: Warfare attacking computers and networks, usually by software exploits.

Phishing: A deception involving email as bait to get victims to go to a Web site where their personal information can be stolen.

Rootkit: Software that secretly permits a cyber-attacker to control a computer remotely.

Social Engineering: Techniques to manipulate people to obtain information from them that they would not give you voluntarily.

Spoofing: Pretending to operate from a different Internet address than you really are at.

Spyware: Software with a Trojan horse that secretly reports user activities over the Internet to a remote site.

Trojan Horse: Software that conceals a malicious intent.
Chapter XXXV
Cyber Identity Theft

Lynne D. Roberts
Curtin University of Technology, Australia
Abstract

Information and communication technologies (ICTs) provide substantial benefits to governments, organizations and individuals through providing low cost, instantaneous, global communication capabilities. However, an unintended consequence of these new technologies is their use for criminal purposes. The technology can be used as the mechanism for organizing and committing criminal activity and as a means of protecting criminals against detection and punishment. Cyber identity theft is an internationally recognized problem resulting from the introduction of new information technologies. This chapter provides an overview of cyber identity theft and related fraud, describing the impact of cyber identity theft on governments, organizations, law enforcement agencies and individuals. Methods currently being used, or proposed, to combat cyber identity fraud are outlined and their potential impact examined. The tension between technological solutions for reducing cyber identity theft and concerns for privacy and civil liberties is explored.
Introduction

Information and communication technologies (ICTs) provide substantial benefits to governments, organizations and individuals through providing low cost, instantaneous, global communication capabilities. However, an unintended consequence of these new technologies is their use for criminal purposes. The technology can be used as the mechanism for organizing and committing criminal activity and as a means of protecting criminals against detection and
punishment through providing anonymity. This chapter provides, from a criminological perspective, an overview of cyber identity theft, a variation of the pre-existing crime of identity theft which utilizes ICTs to develop new methods of obtaining personal information for fraudulent purposes. Cyber identity theft provides just one example of an unintended consequence of the introduction of ICTs. To contextualize the discussion of cyber identity theft, the chapter begins by outlining the nature of identity and the concept of identity
tokens. The theft of these identity tokens and related fraudulent activity in offline settings is described, with distinctions made between identity crime, identity theft and identity fraud. Building on this overview of identity theft, the ways in which ICTs facilitate identity theft and related fraud, and the methods used to conduct cyber identity theft, are explored. Estimates of the prevalence of cyber identity theft are provided. This is followed by a review of the impact of cyber identity theft and the methods currently adopted to combat cyber identity theft and fraud. The chapter concludes by highlighting the tension between using technological solutions to reduce cyber identity theft and the negative consequences this may have for ICT users.
Background

Identity is integral to the concept of the self as a unique individual within society. Finch (2007) distinguishes between personal, social and legal identity. Personal identity refers to an individual's internalised sense of self as a unique individual with a past, present and future. Social identity refers to others' perceptions of the self within the social realm. Legal identity consists of the accumulation of documentary identifiers (e.g. birth certificates, passports, credit reports) that serve to legally differentiate the individual from all others. It is these legal identifiers that can be stolen in what is commonly referred to as identity theft. There are three elements that comprise identity: biometric identity, biographical identity and attributed identity. Biometric identity consists of the unique physiological attributes of the individual, such as fingerprints and DNA profile. Attributed identity is provided at birth when the baby is named by the parents. Biographical identity refers to the accumulation of documentation that builds up about an individual over their lifetime. Attributed identity is easier to assume than
biometric or biographical identity, requiring the individual to obtain identity tokens such as birth certificates (Cabinet Office, 2002). The value of an identity token is determined by the cost and effort required to acquire the token. For example, a passport is a stronger identity token than an email address (Marshall & Tompsett, 2005). Identity tokens from central number systems are widely used as primary national identifiers in some western countries. These include Social Security Numbers (SSNs) in the US and Tax File Numbers (TFNs) in Australia. The widespread reliance of both government and non-government organizations on these identity tokens for identification and authentication has made them key targets for identity theft (Haygood & Hensley, 2006; Slosarik, 2002), to the extent that in the US SSNs have been described as the 'common denominator' for identity theft (Haygood & Hensley, 2006). Protection of SSNs was rated as one of the top issues facing Social Security Administration management in the US in 2006. SSNs have become valuable illegal commodities, and control systems have been implemented for the issuing of numbers and replacement cards (Inspector General, 2005).
Identity Theft, Identity Fraud and Identity Crime

Identity theft involves the theft of legal identity, manifest in identity tokens such as SSNs, documents, or knowledge of factual information that identifies an individual. That is, it is the theft of a pre-existing identity for use in presenting oneself as somebody else (Finch, 2003; Australasian Centre for Policing Research, 2006). Identity theft is a crime in some countries. The Identity Theft and Assumption Deterrence Act of 1998 (available at http://www.ftc.gov/os/statutes/itada/itadact.htm) made it an offence under US federal law where an individual "knowingly transfers or uses, without lawful authority,
a means of identification of another person with the intent to commit, or to aid or abet, any unlawful activity that constitutes a violation of Federal law, or that constitutes a felony under any applicable State or local law". While in at least some countries, including Australia and the United Kingdom, identity theft itself is not a criminal offence, the use of stolen identities in fraudulent activities is (Jefferson, 2004). The predominant use of stolen identities is to gain financial benefit. Stolen identity tokens can be used online or offline. E-commerce transactions are attractive to users of stolen identities because the anonymity between purchaser and provider provides a lower likelihood of detection than face-to-face interactions (Davis, 2003).

'Identity crime' is the general term that refers to the use of false or stolen identities for criminal purposes. Identity fraud is more specific, relating to the use of false or stolen identities in gaining money, goods, services or other benefits (Australasian Centre for Policing Research, 2006). While identity theft refers specifically to stolen identities, identity crime and identity fraud encompass the use of both stolen and fictitious identities (Lacey & Cuganesan, 2004). The terms identity theft, identity crime and identity fraud are not used consistently across countries. In the US, the term 'identity theft' is predominantly used and tends to cover all types of identity crime. Indeed, it has been described as a "catch all phrase to describe a frame with many different faces" (van der Meulen, 2006:5) covering both financial and criminal identity theft (assuming a stolen identity to prevent prosecution). In the UK and Australia the terms 'identity fraud' and 'identity crime' are used interchangeably (Australasian Centre for Policing Research, 2006). The focus of this chapter is on identity theft and the fraudulent and other criminal uses of these stolen identities. It excludes the use of fictitious identities.
The temporary adoption of a stolen identity for the purposes of criminal activities is described by Finch (2003, p. 90) as 'partial identity theft' and can be distinguished from 'total identity theft', which involves an individual permanently adopting a stolen identity for use in all areas of their life. In this chapter the term identity theft is used to refer to the temporary adoption of a stolen identity for the purposes of criminal activities.
Cyber Identity Theft

Identity theft in the offline world typically involves the misappropriation of identity tokens through non-technical means such as mail theft, credit card theft and 'dumpster diving' (seeking information by going through commercial and residential rubbish). Cyber identity theft represents the online misappropriation of identity tokens. As in offline settings, personal identity online is signified by identity tokens. Common online identity tokens include email addresses, web-pages and the combination of username and password used to access systems. In addition to these online identity tokens, other identity tokens and information are accessible online and can be readily harvested. For example, resumes posted on the Internet typically contain identifying information such as name, contact details (address, telephone number and email) and, in the US, SSNs. The combination of these identifiers is sufficient to obtain a credit card (Sweeney, 2006). Information and communication technologies increase the ease and reduce the costs (time, financial and location) of data acquisition. Search engines provide motivated individuals with the opportunity to gather a wide range of information. Even more powerful are sites that provide an aggregation of databases for searching (see, for example, http://www.192.com/, which provides access to data from sources including electoral rolls, census records and births, marriages and deaths). Documents such as birth certificates can also be ordered online.
The ease of obtaining identity tokens and identifying information online expands the range of potential victims for intending fraudsters (Marshall & Tompsett, 2005; Finch, 2007).

A number of technical factors contribute to the ease of identity theft online. Internet standard protocols have their basis in ease of use rather than the verification of identity and security. For example, the SMTP protocol used in email does not check the validity of sending addresses. Malicious software (‘malware’) may be installed on computers without the user’s knowledge and can enable the capture of log-ins and passwords (Marshall & Tompsett, 2005). Thus, identity online frequently relies on ‘chains of trust’ while more secure methods of data transfer (e.g. encryption, digital certificates) are seldom utilised (Marshall & Tompsett, 2005).

ICTs change the scale on which identity theft can be perpetrated. The hacking of databases and the ease of distribution of emails for phishing scams mean that identifying information for multiple individuals can be obtained in the time it might take to steal identity tokens for one individual in offline settings (Lynch, 2005). For example, hacking was used to obtain account information held by CardSystems Solutions for 40 million credit card customers (Haygood & Hensley, 2006), something unlikely ever to be achieved by identity thieves operating without ICTs.

Personal factors also contribute to cyber identity theft. New Internet users in their naivety may be susceptible to phishing and other scams where they take at face value the ‘trustworthiness’ of emails and web pages (Marshall & Tompsett, 2005). Further, the “ethos of trust” (Finch, 2007, p. 38) online means individuals may be less wary of providing personal information in online environments than in face-to-face settings. The opportunity to conduct fraudulent business transactions online in anonymity may disinhibit behaviour through providing the individual with both spatial and symbolic distancing from their fraudulent acts (Finch, 2003; 2007).
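To make concrete the technical point above that base SMTP does not verify sender addresses, the following minimal Python sketch shows that a client may simply assert any envelope sender and “From” header; the host names and addresses are hypothetical placeholders, and later add-ons such as SPF and DKIM (not part of the original protocol) are what attempt to close this gap.

```python
# Illustrative sketch only: base SMTP accepts an asserted sender without
# verification. All host names and addresses below are hypothetical.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "[email protected]"   # forged header: SMTP does not check it
msg["To"] = "[email protected]"
msg["Subject"] = "Please verify your account"
msg.set_content("Click the link below to confirm your details...")

with smtplib.SMTP("mail.example.org") as smtp:  # hypothetical relay
    # The envelope sender is likewise merely asserted; within SMTP itself
    # the receiving server has no way to confirm that the client is
    # entitled to use this address.
    smtp.send_message(msg, from_addr="[email protected]",
                      to_addrs=["[email protected]"])
```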
Despite the technological and personal factors conducive to cyber identity theft, at present offline identity theft remains the most commonly utilised form. Indeed, ‘traditional’ identity theft has been estimated to occur at a rate six times higher than that of cyber identity theft (Better Business Bureau, 2005, cited in Mercuri, 2006), although its share is thought to be declining with increased use of ICTs (Johnson, 2004, cited in van der Meulen, 2006). Cyber identity theft may be partially replacing both ‘traditional’ identity theft and fraud based on fictitious identities. For example, Hoskin (2006) outlined how the nature of fraud in Australian government welfare payments has changed with changing technology: the electronic transfer of welfare payments provides a vehicle for perpetrating identity fraud, and stolen identities are replacing the creation of false identities, allowing fast, repeated and difficult-to-trace fraudulent activities.
Methods

Methods of cyber identity theft typically combine affordances of new ICTs with social engineering. They include hacking, phishing, pharming, traffic redirectors, advance-fee frauds, fake taxation forms, keyloggers and password stealers (Paget, 2007). Phishing involves the use of emails and websites to ‘fish’ for personal information. The objective is to obtain information such as credit card numbers, bank account information and passwords to use for fraudulent purposes. The phishing email frequently directs recipients to a website in order to fool them into entering confidential information. The use of emails and websites is combined with ‘scare tactics’ (e.g. threats of closing an account) to encourage compliance (Lynch, 2005). Social engineering techniques used in phishing include spoofing reputable companies, providing plausible reasons (e.g. credit card expired), providing assurances of security and indicating that a fast response is required.
Phishing tactics may include exploiting security holes in different browsers, using fake address bars, utilizing different reply and sent addresses, and faking security indicators (Drake, Oliver & Koontz, 2004). Lynch (2005) traced the history of phishing to early attempts to obtain passwords for free Internet access and later fraudulent auctions on eBay. Early attacks originated in the US but are now also undertaken in Europe and Asia. While early phishing attempts were amateurish, they now involve increasing sophistication, such as the use of spyware and the installation of keyloggers on victims’ computers to record keystrokes (including passwords and logins). Pharming, a variation on phishing, subverts the mapping between a domain name and its legitimate server—for example, by tampering with name resolution—so that users are redirected to a fraudulent site. As phishing is low-cost, only a low response rate is required to make it a profitable enterprise. Phishing is no longer the sole province of the technologically skilled: DIY phishing kits are available for purchase online, and information obtained from phishing scams may also be sold or traded online (Lynch, 2005). The practice of phishing is widespread and growing. In April 2007, 55,643 unique phishing URLs were detected. These websites were online for an average of 3.8 days. A quarter of all phishing websites were hosted on US servers, and the most targeted industry sector was financial services (92.5%). A total of 172 brands were targeted, in addition to other websites such as social network portals and gambling sites (APWG, 2007).
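One of the tactics described above—a message whose visible link text names a trusted site while the underlying link points elsewhere—can be illustrated with a minimal heuristic of the kind anti-phishing filters employ. This is a sketch only: real filters combine many signals, and the HTML snippet and domains here are hypothetical.

```python
# Minimal sketch of a link-mismatch heuristic: flag anchors whose visible
# text names one host while the href points to another. The HTML and
# domains below are hypothetical illustrations.
import re
from urllib.parse import urlparse

html = '<a href="http://203.0.113.7/login">https://www.mybank.example</a>'

for href, text in re.findall(r'<a\s+href="([^"]+)"\s*>([^<]+)</a>', html):
    shown = urlparse(text).netloc if "://" in text else ""
    actual = urlparse(href).netloc
    if shown and shown != actual:
        print(f"Suspicious link: displays {shown!r} but points to {actual!r}")
```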
Prevalence

Identity theft and related fraud are a growing concern across countries. Attempts have been made to measure the extent of identity theft and related fraud through population surveys and aggregations of complaints. In the US a series of population surveys have measured the extent of identity theft.
The first survey in 2003 (Synovate, 2003) reported that 4.6% of respondents were victims of identity theft in the past year and 12.7% in the last 5 years. In the latest iteration of this survey (Javelin Strategy & Research, 2007), it was estimated that 8.4 million Americans were the victims of identity fraud in 2006, with the total cost of this fraud estimated at $49.3 billion. In 2004 the National Crime Victimization Survey also included questions about identity theft for the first time. Results indicate that three percent of households (3.6 million households) in the US had at least one household member who was a victim of identity theft in the previous six months (Baum, 2006). In 2006, the US Federal Trade Commission received 670,000 consumer fraud and identity theft complaints with losses exceeding $1.1 billion; 36% of these involved identity theft complaints. The major categories of how victims’ information was misused were credit card fraud (25%), phone or utilities fraud (16%), bank fraud (16%), employment-related fraud (14%), government documents/benefits fraud (10%) and loan fraud (5%). Some individuals were victims of multiple types of fraud resulting from one identity theft incident (Federal Trade Commission, 2007).

Research in the UK has also attempted to measure the extent of identity-related fraud. A major report on identity fraud in the United Kingdom, Identity fraud: A study (Cabinet Office, 2002), highlighted concerns about identity fraud in relation to both the government and private sectors. The estimated cost of identity fraud was £1.3 billion, accounting for approximately one tenth of all fraud in the United Kingdom. A recent update to the report estimated identity fraud in 2006 at £1.72 billion (see http://www.identitycards.gov.uk/downloads/FINAL-estimate-for-annual-cost-of-fraud_table_v1-2.pdf).

Estimates of the extent and cost of identity theft have been criticized as being based on surveys with methodological weaknesses, over-inflated assumptions and extrapolations (Berg, 2006; Mercuri, 2006).
Further, there are difficulties in estimating the proportion of identity theft and related fraud that is attributable to cyber identity theft. Not all studies ask how the stolen identity information was obtained. Where the question is asked, population surveys at first glance suggest less than one in eight instances of identity theft are cyber-initiated: Javelin Strategy & Research (2005), cited in Berg (2006), reported that 11.6% of identity theft victims had identity information stolen online, through computer spyware (5.2%), during online transactions (2.5%), viruses or hacking (2.2%) and phishing (1.7%). However, a large proportion of victims do not know how their identifying information was stolen. Half of the identity theft victims in the Synovate (2003) population survey of identity theft in the US reported they did not know how their information was obtained by the identity thief, and only a third of victims in the 2007 survey (Javelin Strategy & Research, 2007) knew the perpetrator’s identity.
Impact

Cyber identity theft and related fraud impacts on governments, organizations and individuals. A societal cost of cyber identity fraud, and phishing in particular, is reduced consumer trust and confidence in using the Internet to conduct business (Lynch, 2005). For example, among Australians with Internet access who do not engage in online purchasing and transactions, more than a quarter refrain because of security concerns (ABS, 2005). This lack of trust also has the potential to reduce government and other organizations’ use of the Internet for communication purposes; organizations increasingly need to adopt offline methods for customer communication (Lynch, 2005). Organizations (defrauded creditors) are often regarded as the primary victims of identity theft, as they incur the financial cost of the fraudulent use of stolen identities (LoPucki, 2001). Estimates of the financial costs to business vary widely.
The average loss to business for each cyber identity theft victim reported by Synovate (2003) was $4,800, while the Identity Theft Resource Centre estimated the cost at $40,000 (Identity Theft Resource Centre, 2003; 2005). Defrauded creditors may use whatever legal and practical means are deemed cost-effective to recover moneys lost, with remaining costs passed on to other customers (LoPucki, 2001). Where the organization has been the site of the identity theft, the impact may include harm to its reputation and credibility, direct financial costs, and the indirect financial costs required to repair the damage caused (Paget, 2007).

The individual whose identity is stolen, while often not bearing the direct financial cost of fraudulent activities conducted using the stolen identity, may be affected in other ways. This can include an impaired credit rating and the administrative and time burden to correct it, damage to reputation and the integrity of identity, criminal investigation, lost productivity and psychological impacts including stress, emotional trauma and relationship breakdown. There are also financial costs resulting from lost wages, medical expenses and expenses incurred in restoring the integrity of identity (Identity Theft Resource Centre, 2003; 2004; Jefferson, 2004; LoPucki, 2001). Estimates of the indirect financial costs borne by the individual in addressing the identity theft and related fraudulent activities vary widely. Victimization surveys of individuals contacting the Identity Theft Resource Centre for help estimated that an average of 330 hours was spent addressing problems arising from the identity theft (Identity Theft Resource Centre, 2004). Population survey estimates suggest an average of 30 hours per victim (Synovate, 2003), with $587 in out-of-pocket expenses for existing-account fraud and $617 where a new account was established in the victim’s name (Javelin Strategy & Research, 2007).
Differences between sources in estimates of the financial and time costs of dealing with identity theft may be due to the different samples used. The Identity Theft Resource Centre surveyed only confirmed identity theft victims who worked with the Centre to resolve their issues; these may represent the more serious cases of identity theft. Results from the Synovate (2003) population survey indicate that only a quarter (26%) of identity theft victims report the crime to police and more than a third do not report the theft to anyone.

The cost to the individual is partially dependent upon the time from the theft to discovery. The Synovate (2003) population survey found that misuse of personal information was limited to one day in approximately a quarter of cases; however, some cases lasted 6 months or more. Time to discovery was important, with the cost of fraud increasing over time. Prompt reporting of suspected data breaches to customers may reduce the time from theft to discovery. Fifteen percent of identity theft victims in the Identity Theft Resource Centre surveys found out about the theft through businesses (Identity Theft Resource Centre, 2003; 2004). Victims currently have no or limited recourse against financial institutions and businesses for the costs they incur in addressing the effects of identity theft. However, there are moves to increase the liability of businesses and financial institutions: ChoicePoint Inc was fined $10 million and ordered to pay $5 million in compensation to customers over the theft of data for an estimated 163,000 customers that resulted in 800 victims of identity theft-related crime (Krause, 2006).

The impact on the victim is longer term and most marked for those who are unable to easily resolve problems associated with the identity theft. Sharp, Shreve-Neiger, Fremouw, Kane and Hutton (2004) conducted focus groups with 37 identity theft victims to examine the psychological, emotional and physical impact of identity theft.
Anxiety and anger were identified as the most commonly experienced emotions, and sleep disruption and nervousness were the most commonly experienced somatic complaints immediately following victimization. Two weeks after learning of the theft, approximately one in five victims experienced irritability and anger, one in six fear and anxiety, and one in six frustration. Six months after learning of the theft, approximately a quarter were experiencing distress and desperation, and a quarter irritability and anger. Victims who were unable to resolve problems associated with the identity theft had higher rates of clinical somatization (67% vs 15%), depression (50% vs 15%) and anxiety (67% vs 38%) than victims who had been able to resolve the problems. The majority of victims (68%) reported that the most helpful coping strategy was to take action to resolve the issues and prevent further victimization.

Victims of identity theft may experience secondary victimization through their treatment by law enforcement agencies and other bodies, compounding the original criminal victimization. Police often recognize the financial institution, rather than the individual, as the victim, resulting in a denial of services normally available to victims (Jefferson, 2004; van der Meulen, 2006). In some instances, victims may be treated as ‘suspects’ for crimes the identity thief has committed using the victim’s identity (Baum, 2006; Jefferson, 2004; Kreuter, 2003; Identity Theft Resource Centre, 2004). Secondary victimization may also result from impaired credit ratings attributable to actions of the identity thief. Impacts of an impaired credit rating reported by victims include denial of credit, increased insurance and credit card interest rates, cancellation of credit cards, denial of services (phone, utilities) and continued contact by collection agencies (Baum, 2006; Identity Theft Resource Centre, 2004; Synovate, 2003). The time spent resolving these problems varies widely, with approximately one in five cases taking at least a month to resolve (Baum, 2006).
Government Initiatives

Approaches to addressing identity theft include attempts to reduce the purchase and sale of identifying information; increasing requirements on companies to protect identifying information; prohibiting the use of SSNs for identification purposes; using a greater number of characteristics in identity-matching processes; improving the integrity of identifying documents; improving identity theft victim assistance; increasing the liability of creditors and credit reporting agencies; and creating databases of victims of identity theft (LoPucki, 2001). Many countries have developed, or are working towards developing, national strategies for addressing identity theft. An overview of measures taken in the US, UK and Australia to combat identity theft and related fraud is provided below.
United States

The US is working towards developing nationally consistent approaches to addressing identity theft. The National Strategy to Combat Identity Theft (Office of Community Oriented Policing Services, 2006) consists of seven components: partnerships and collaboration to establish state-level coordinating centres; reporting procedures based on the geographic jurisdiction of the victim and a uniform definition of identity theft; developing policies for victim assistance; a national public awareness campaign; compilation of identity theft legislation; national public education on information protection; and training of police, public prosecutors and others involved in assisting victims. More recently, the President’s Identity Theft Task Force (2007) has produced a strategic plan for combating identity theft that focuses on reducing access to sensitive customer data, assisting victims, and aggressive prosecution and punishment of identity thieves.
Key recommendations included reducing ‘unnecessary use’ of SSNs; national standards for data protection and notification of breaches; the conduct of identity theft awareness campaigns; and the creation of the National Identity Theft Law Enforcement Center to enable coordination of information and efforts.

These strategies build on a raft of prior legislation. All states in the US have laws proscribing identity theft as a crime. While the shared intention of the laws across states is to prohibit identity theft for personal gain, variations between states mean that it may be classified as a misdemeanour or felony (Haygood & Hensley, 2006). At the federal level, the Identity Theft and Assumption Deterrence Act of 1998 made it an offence where an individual “knowingly transfers or uses, without lawful authority, a means of identification of another person with the intent to commit, or to aid or abet, any unlawful activity that constitutes a violation of Federal law, or that constitutes a felony under any applicable State or local law”. This was amended by the Identity Theft Penalty Enhancement Act in 2004 to provide for additional years of imprisonment for identity theft linked to serious crimes (Haygood & Hensley, 2006, p. 76). The Fair and Accurate Credit Transactions Act, 2003 was introduced to help prevent identity theft; it made provision for individuals to obtain a free copy of their credit report annually, introduced a National Fraud Alert System to alert potential creditors of possible identity theft, and required the truncation of account numbers on credit card receipts (Holtfreter & Holtfreter, 2006).

In the US, state and federal public disclosure laws are now imposing obligations on businesses to notify customers when personal information is breached (Haygood & Hensley, 2006; Krause, 2006; Schwartz & Janger, 2007). Reported data breaches in the US are recorded by the Privacy Rights Clearinghouse. In 2006, 327 breaches were recorded, relating to a minimum of 48,000 pieces of potentially compromised information. Just over one in five breaches were attributed to hackers external to the organization.
In a further 28 cases, personal identifying information was inadvertently posted to a publicly viewable website (Privacy Rights Clearinghouse, 2007). Restrictions have also been placed on the use of SSNs, and controls placed on the disposal of records containing personal information (Haygood & Hensley, 2006).
United Kingdom

While the US is restricting the use of a national identification system (SSNs), the United Kingdom is working towards the implementation of one. The Identity Cards Act 2006, enacted to address identity crime, provided for the establishment of the National Identity Register and the provision of identity cards. Identifying information and biometrics are to be collected from all UK residents over the age of 16. This information will be available to government departments and (with consent) private sector organizations. The London School of Economics and Political Science Identity Project has maintained ongoing analysis of the introduction of the identity cards scheme. They have major reservations about the scheme, arguing that “the proposals are too complex, technically unsafe, overly prescriptive and lack a foundation of public trust and confidence” (London School of Economics and Political Science, 2005, p. 5; italics in original). They note that the scheme is unprecedented in size, under-costed, and based on unreliable technology. In their 2005 report they concluded that the “scheme should be regarded as a potential danger to the public interest and to the legal rights of individuals” (London School of Economics and Political Science, 2005, p. 5). More recently they have focussed on the increased surveillance associated with the scheme, raising concerns about “furthering the creation of a surveillance society” (The London School of Economics and Political Science Identity Project, 2007, p. 1).
Australia

Australia is also considering the introduction of an identity card scheme in the form of a health and services ‘smartcard’, which would replace existing Medicare and benefit cards (Jackson & Ligertwood, 2006; KPMG, 2006). KPMG predicts that the introduction of the health and social services access card will move the focus from detection and investigation of fraud to prevention and deterrence, arguing that the card will make it more difficult to perpetrate identity and concession fraud (KPMG, 2006).

Blindell (2006) reviewed the legal status and rights of identity theft victims in Australia. In most states of Australia, identity theft victims are not classified as victims of crime and are consequently not eligible for existing victims’ rights and services. The report highlighted the need for the development of a specific identity theft offence and the explicit inclusion of identity theft victims within statutory definitions of victims. These statutory rights should include the recovery of costs relating to reporting of the theft, preventative action to limit further use, and restoration of financial reputation; the right to access victim assistance services; the right to victim certificates and to have victim affidavits recognized; and the right to present victim impact statements to sentencing courts.
International

Investigation and prosecution of identity theft and related fraud is difficult when the crimes transcend national boundaries, requiring evidence to be obtained in more than one country. In these circumstances, investigation and prosecution of cross-jurisdictional cyber-crimes relies on cooperation between jurisdictions. International cooperation is enabled through the ratifying of conventions such as the Council of Europe Cyber-crime Convention and the United Nations Convention Against Transnational Organized Crime.
These conventions support mutual legal assistance and provide powers such as the ability to obtain evidence across borders, intercept electronic data, search computer systems and seize stored data (Broadhurst, 2006). Regional groups such as the G8 Senior Experts Group on Transnational Organised Crime and treaties between countries also help expedite cyber-crime investigation and prosecution. Where mutual legal assistance treaties are not in place, countries may request assistance through letters rogatory or through informal co-operation (Brenner & Schwerha, 2004). There is some concern that international conventions alone are not enough. While supporting the Council of Europe Cyber-crime Convention, Davis (2003) argued for a “truly global treaty” (p. 223) that creates uniform laws to prevent identity theft which do not rely on industry self-regulation, and for a formal international body to coordinate efforts and to enable prosecution of major identity theft crimes.
Organizational Initiatives

Organizations can function as the site of identity misuse and have a responsibility both to ensure controls are in place to detect stolen identities pre-transaction and to alert legitimate customers to data breaches and suspicious activities post-transaction. Based on interviews with 70 Australian public and private sector organizations, Lacey and Cuganesan (2004) identified that the most common forms of pre-transaction controls in use were visual inspection of identity documents and tokens (undertaken by 81% of organizations) and matching of information against internal sources (36%). Factors inhibiting effective organizational response to identity theft included a lack of resources, industry pressures, inadequate law enforcement response, and limited knowledge of privacy legislation. Gerard, Hillison and Pacini (2004) caution that organizations that do not adopt measures to address the risk of identity theft face increased liability for failure to protect personal information.
In addition to organization-specific steps, organizations are grouping together to collectively address identity theft. Examples of such collectivities are the Anti-Phishing Working Group (APWG) and the Financial Services Technology Consortium (FSTC) in the US. The APWG maintains an archive of reported phishing attacks and has a range of working groups addressing areas such as best practices, education, future threat models and forensics. The APWG is part of the ‘Phisherman Project’ team (Tally et al., 2006) that is developing a phishing data collection, validation, dissemination, and archival system. The FSTC sponsors identity theft-related research, including counter-phishing and the capability of biometric systems to resist spoofing.
Law Enforcement Initiatives

Law enforcement initiatives to counter cyber identity theft may occur at the local, national or international level. At the local level, law enforcement officials may not have sufficient resources or expertise to investigate cyber identity theft (Lynch, 2005). At a national or regional level, partnerships between police and non-government and security organizations may provide the resources required (Broadhurst, 2006). For example, the Australasian Centre for Policing Research has developed the Australasian Identity Crime Policing Strategy 2006–2008. This strategy focuses on coordinating the approach across jurisdictions and public and private sector partners; expanding community partnership and education; resourcing and training law enforcement; utilizing technological solutions to ensure secure transactions and valid identity registration and authentication; and providing assistance to victims of identity theft (Australasian Centre for Policing Research, 2005). Cyber identity theft crosses national boundaries, raising jurisdictional issues and highlighting the need to develop trans-national policing capability (Broadhurst, 2006).
Individual Initiatives

Individuals can adopt a range of measures to help protect personal information online. The first step is education about safe practices online; this information is readily available on the web (see, for example, ‘Consumer Advice: How to Avoid Phishing Scams’ at http://www.antiphishing.org/consumer_recs.html and the Australian Government ID Theft Kit at http://www.ag.gov.au/). Technical solutions can be employed, such as browser extensions (e.g. AntiPhish, Verisign, PwdHash) that generate warnings when sensitive information is typed into untrusted websites (Kirda & Kruegel, 2006). Other options include taking out identity theft insurance policies (Slosarik, 2002) and checking credit reports annually (O’Neill & Xiao, 2005). Despite the availability of these protective measures, survey results suggest that individuals are not adequately protecting their identity online. The majority of Internet users rarely check credit reports (O’Neill & Xiao, 2005), and less than half use encryption and anonymisation techniques, set their computers to reject cookies, read privacy policies or clear their caches after browsing (Milne, Rohm, & Bahl, 2004).

Research suggests that the perception of having ‘protected’ oneself against identity theft may lead an individual to engage in riskier behaviour online. Bolton, Cohen and Bloom (2006) examined the potential effect of two cyber identity theft risk management remedies (identity theft insurance and a computer and Internet security system) on perceptions of personal risk of identity theft and behavioural intentions. Exposure to either remedy resulted in lower estimates of personal risk of identity theft and an increased intention to engage in riskier website shopping.
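The design idea behind password-hashing extensions such as PwdHash can be sketched briefly: the password actually submitted to a site is derived from the user’s master password and the site’s domain, so a phishing site at a different domain receives a derivative that is useless at the legitimate site. The sketch below illustrates the concept only—it is not the extension’s actual algorithm, and the domains are hypothetical.

```python
# Conceptual sketch of domain-specific password derivation (the idea
# behind tools such as PwdHash); not the extension's actual code.
import hashlib
import hmac

def site_password(master: str, domain: str) -> str:
    """Derive a per-domain password from a master secret."""
    mac = hmac.new(master.encode(), domain.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]  # truncated for use as a password

# The same master password yields unrelated values at the real site
# and at a look-alike phishing domain (both domains hypothetical).
print(site_password("correct horse", "mybank.example"))
print(site_password("correct horse", "mybank-security.example"))
```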
Privacy and Civil Rights Concerns

Concerns have been expressed that solutions proposed to address cyber identity theft and related fraud erode the civil liberties and privacy of all Internet users. The extended policing powers provided under conventions such as the Council of Europe Cyber-crime Convention and the United Nations Convention Against Transnational Organized Crime have been criticized. Huey and Rosenberg (2004) argued that compelling Internet Service Providers (ISPs) to provide intercept technology, storage and searching of data traffic, and release of information to law enforcement authorities (enabled under the Council of Europe Cyber-crime Convention) presents a threat to individual privacy while also imposing large demands on ISPs. European civil liberties groups also have concerns that the proposed expansion of police powers will interfere with freedom of expression (Davis, 2003). Other non-governmental organizations and professionals have concerns that the Council of Europe Cyber-crime Convention will effectively ‘kill the Internet’ due to privacy concerns and penalties imposed for non-compliance (Davis, 2003).

With any new initiative there is a danger of ‘function creep’, where the initiative is expanded to cover areas not originally intended. Function creep in the use of identity cards may further erode the privacy and civil liberties of citizens (see XXXX, this volume, for a discussion of Thailand’s new smart-card and concerns arising from its use by government). There is always the need to balance the human and legal rights of citizens against the potential benefits of ‘solutions’ (Jackson & Ligertwood, 2006). Ideally, attempts should be made to identify and evaluate the potential effects of each new technological ‘solution’ prior to implementation, so that a considered judgement can be made of the overall costs and benefits to governments, organizations and individuals.
There is a danger in the current ‘politics of fear’ (Altheide, 2003) climate, especially relating to the fear of terrorism, that ‘solutions’ will be implemented without this level of scrutiny. As Altheide (2003) noted:

Similar to propaganda, messages about fear are repetitious, stereotypical of outside “threats” and especially suspect and “evil others.” These messages also resonate moral panics, with the implication that action must be taken to not only defeat a specific enemy, but to also save civilization. Since so much is at stake, it follows that drastic measures must be taken, that compromises with individual liberty and even perspectives about “rights”, the limits of law, and ethics must be “qualified” and held in abeyance in view of the threat. (p. 38)

The explicit linking of identity theft to terrorist activities (see, for example, Pistole, 2003) provides the climate in which there is a danger of new technological ‘solutions’ being implemented without sufficient independent scrutiny.
Future Trends

Future attempts to address cyber identity theft are likely to build on existing measures of developing national and international co-ordination and co-operation. Advances in the development of ICTs may see the increased use of technological ‘solutions’ to cyber identity theft. Increasingly, biometric systems are being advanced as the future for identification systems to reduce identity fraud. Biometric authentication systems verify identity through the matching of a person to a stored biometric. Biometric systems include automatic face recognition, iris recognition, hand geometry, voice recognition, fingerprint imaging, signature scanning and keystroke scanning. The qualities of the ‘ideal’ biometric are that it is universal, unique, permanent and collectable.
Fingerprint imaging/scanning is the most accurate of current biometrics but has the lowest level of public acceptance due to its association with criminality. In contrast, facial scanning is the least accurate but has the highest public acceptance. Iris scanning performs well in terms of both public acceptability and performance (Scott, 2005). Biometric systems provide a lower risk of counterfeiting than other document-based systems, and checking processes are less susceptible to human error than manual processes. They are expensive systems to initiate, requiring both the cost of collecting the biometric and the cost of the reading (checking) equipment. Error rates of 10% to 15% have been reported across biometric systems, indicating that these systems are still very much in the developmental stage (Cabinet Office, 2002). Unacceptable levels of accuracy and reliability currently inhibit the widespread use of biometric systems (US Department of Treasury, 2005).

Enrolment fraud continues to be a major risk with biometric systems. While biometric systems have utility in authenticating identity following enrolment, they offer little protection against the use of stolen identities in enrolment. A stolen identity may still be used to open a new account where no previous relationship with the financial institution exists (US Department of Treasury, 2005). Other risks associated with the use of biometrics include system vulnerability, system circumvention, and verification fraud. Risk management techniques that need to be employed when utilising biometrics include strong network security, internal security measures, and two-factor authentication (Barton, Byciuk, Harris, Schumack & Webster, 2005). Issues surrounding the use of biometrics include the lack of international standards, privacy concerns and lack of trust. There is a need for international data protection laws and guidelines to govern the use, collection and storage of biometric data (Scott, 2005; US Department of Treasury, 2005).
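The accuracy trade-off underlying the error rates cited above can be made concrete with a small sketch of threshold-based verification: raising the match threshold reduces false accepts (impostors admitted) at the price of more false rejects (genuine users refused). The match scores and thresholds below are hypothetical illustrations, not data from any cited study.

```python
# Hypothetical match scores illustrating the false-accept / false-reject
# trade-off in threshold-based biometric verification.
genuine = [0.91, 0.76, 0.88, 0.69, 0.83]    # same person, repeat captures
impostor = [0.42, 0.67, 0.31, 0.58, 0.49]   # different people

for threshold in (0.55, 0.65, 0.75):
    frr = sum(s < threshold for s in genuine) / len(genuine)    # false rejects
    far = sum(s >= threshold for s in impostor) / len(impostor) # false accepts
    print(f"threshold {threshold:.2f}: FRR {frr:.0%}, FAR {far:.0%}")
```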
Conclusion

This chapter has provided a criminological perspective on cyber identity theft. ICTs are increasingly used in the perpetration of identity theft and related fraudulent criminal activities. ICTs greatly increase the scope of identity theft and, if left unchecked, may threaten the future of electronic commerce and communications. There is a move towards coordination and cooperation in addressing cyber identity theft across jurisdictions, both within and between nations. Future trends in addressing identity theft are likely to include a continued focus on international cooperation and the increasing use of technological solutions, including biometric authentication systems. The introduction of new technological solutions to cyber identity theft needs to be balanced against privacy and civil liberties concerns.
References

Altheide, D. L. (2003). Notes towards a politics of fear. Journal for Crime, Conflict and the Media, 1(1), 37-54.

Anti-Phishing Working Group. (2007). Phishing Activity Trends Report for the Month of April, 2007. Retrieved May 27, 2007 from http://www.antiphishing.org/reports/apwg_report_april_2007.pdf

Australasian Centre for Policing Research. (2005). The Australasian Identity Crime Policing Strategy 2006–2008. Retrieved May 27, 2007 from http://www.acpr.gov.au/pdf/ID%20Crime%20Strat%2006-08.pdf

Australasian Centre for Policing Research. (2006). Standardisation of definitions of identity crime terms: A step towards consistency. Australasian Centre for Policing Research Report Series No 145.3. Canberra: Commonwealth of Australia.

Australian Bureau of Statistics. (2005). Household Use of Information Technology 2004–05, Cat. no. 8146.0. Canberra: Australian Bureau of Statistics.
Barton, B., Byciuk, S., Harris, C., Schumack, D., & Webster, K. (2005). The emerging cyber risks of biometrics. Risk Management, 52(10), 26-30.

Baum, K. (2006, April). Identity theft, 2004: First estimates from the National Crime Victimization Survey. Bureau of Justice Statistics Bulletin. Retrieved May 27, 2007 from www.ojp.gov/bjs/pub/pdf/it04.pdf

Berg, S. E. (2006). Recommendations for a comprehensive identity theft victimization survey framework and information technology prevention strategies. Master of Science in Information Technology thesis, Rochester Institute of Technology. Retrieved May 27, 2007 from https://ritdml.rit.edu/dspace/handle/1850/1647

Blindell, J. (2006). Review of the legal status and rights of victims of identity theft in Australia. Australasian Centre for Policing Research Report Series No 145.2. Canberra: Commonwealth of Australia.

Bolton, L. E., Cohen, J. B., & Bloom, P. N. (2006). Does marketing products as remedies create “get out of jail free cards”? Journal of Consumer Research, 33(1), 71-81.

Brenner, S. W., & Schwerha, J. J. (2004). Introduction—Cybercrime: A note on international issues. Information Systems Frontiers, 6(2), 111-114.

Broadhurst, R. (2006). Developments in the global law enforcement of cyber-crime. Policing: An International Journal of Police Strategies and Management, 29, 408-433.

Cabinet Office. (2002). Identity fraud: A study. Retrieved May 27, 2007 from http://www.identitycards.gov.uk/downloads/id_fraud-report.pdf
Davis, E. S. (2003). A world wide problem on the World Wide Web: International responses to transnational identity theft via the Internet. Journal of Law and Policy, 12, 201-227.

Drake, C. E., Oliver, J. J., & Koontz, E. J. (2004). Anatomy of a phishing email. Proceedings of the First Conference on Email and Anti-Spam. Retrieved May 10, 2007 from www.ceas.cc/papers-2004/114.pdf

Federal Trade Commission. (2007). Consumer fraud and identity theft complaint data January–December 2006. Retrieved May 27, 2007 from http://www.consumer.gov/sentinel/pubs/Top10Fraud2006.pdf

Finch, E. (2003). What a tangled web we weave: Identity theft and the Internet. In Y. Jewkes (Ed.), Dot.cons: Crime, deviance and identity on the Internet (pp. 86-104). Cullompton: Willan.

Finch, E. (2007). The problem of stolen identity and the Internet. In Y. Jewkes (Ed.), Crime on-line (pp. 29-43). Cullompton: Willan.

Gerard, G. J., Hillison, W., & Pacini, C. (2004). Identity theft: The US legal environment and organisations’ related responsibilities. Journal of Financial Crime, 12(1), 33-43.

Haygood, R., & Hensley, R. (2006). Preventing identity theft: New legal obligations for businesses. Employment Relations Today, 33(3), 71-83.

Holtfreter, R. E., & Holtfreter, K. (2006). Gauging the effectiveness of US identity theft legislation. Journal of Financial Crime, 13, 56-64.

Hoskin, P. (2006). Emerging fraud risks and countermeasures in government welfare programs. Paper presented at The Australian & New Zealand Society of Criminology 19th Annual Conference, Sydney, Australia.
Huey, L., & Rosenberg, R. S. (2004). Watching the web: Thoughts on expanding police surveillance opportunities under the Cyber-Crime Convention. Canadian Journal of Criminology and Criminal Justice, 46(5), 597-606.

Identity Theft Resource Centre. (2003). Identity theft: The aftermath 2003. Retrieved March 2, 2007 from http://www.idtheftcenter.org/idaftermath.pdf

Identity Theft Resource Centre. (2005). Identity theft: The aftermath 2004. Retrieved March 2, 2007 from http://www.idtheftcenter.org/idaftermath2004.pdf

Inspector General. (2005). Top issues facing social security administration management: Fiscal year 2006. Publication L. No. 106-531. Retrieved May 27, 2007 from http://www.ssa.gov/oig/ADOBEPDF/2006TopMgmtChallenges.pdf

Jackson, M., & Ligertwood, J. (2006). Identity management: Is an identity card the solution for Australia? Prometheus, 24(4), 379-387.

Javelin Strategy & Research. (2007). 2007 identity fraud survey report—Consumer version: How consumers can protect themselves. Retrieved May 27, 2007 from www.javelinstrategy.com

Jefferson, J. (2004). Police and identity theft victims—Preventing further victimisation. Australasian Centre for Policing Research, No 7. Retrieved March 2, 2007 from http://www.acpr.gov.au/publications2.asp?Report_ID=154

Kirda, E., & Kruegel, C. (2006). Protecting users against phishing attacks. The Computer Journal, 49, 554-559.

KPMG. (2006). Commonwealth of Australia health and social services smart card initiative. Volume 1: Business case public extract. Retrieved February 1, 2007 from http://www.humanservices.gov.au/modules/resources/access_card/kpmg_access_card_business_case.pdf
Krause, J. (2006). Stolen lives: Victims of identity theft start looking for damages from companies that held their personal financial information. ABA Journal, 92, 36-41, 64.

Kreuter, E. A. (2003). The impact of identity theft through cyberspace. Forensic Examiner, 12(5-6), 30-35.

Lacey, D., & Cuganesan, S. (2004). The role of organizations in identity theft response: The organization–individual victim dichotomy. The Journal of Consumer Affairs, 38(2), 244-261.

London School of Economics and Political Science Identity Project. (2005). The identity project: An assessment of the UK Identity Cards Bill and its implications. Retrieved May 27, 2007 from http://identityproject.lse.ac.uk/identityreport.pdf

London School of Economics and Political Science Identity Project. (2007, April 24). Submission to the House of Commons Home Affairs Committee inquiry into “A surveillance society?” Retrieved May 27, 2007 from http://identityproject.lse.ac.uk/LSE_HAC_Submission.pdf

LoPucki, L. M. (2001). Human identification theory and the identity theft problem. Texas Law Review, 80, 89-135.

Lynch, J. (2005). Identity theft in cyberspace: Crime control methods and their effectiveness in combating phishing attacks. Berkeley Technology Law Journal, 20, 259-300.

Marshall, A. M., & Tompsett, B. C. (2005). Identity theft in an online world. Computer Law & Security Report, 21, 128-137.

Mercuri, R. T. (2006). Scoping identity theft. Communications of the ACM, 49(5), 17-21.

Milne, G. R., Rohm, A. J., & Bahl, S. (2004). Consumers’ protection of online privacy and identity. The Journal of Consumer Affairs, 38(2), 217-232.

Office of Community Oriented Policing Services. (2006). A national strategy to combat identity theft. US Department of Justice. Retrieved May 27, 2007 from http://www.cops.usdoj.gov/mime/open.pdf?Item=1732

O’Neill, B., & Xiao, J. J. (2005). Consumer practices to reduce identity theft risk: An exploratory study. Journal of Family and Consumer Sciences, 97, 33-38.
Paget, F. (2007). Identity theft. McAfee Avert Labs technical white paper No 1. Retrieved May 27, 2007 from http://www.mcafee.com/us/local_content/white_papers/wp_id_theft_en.pdf

Pistole, J. (2003). Fraudulent identification documents and the implications for Homeland Security. Statement for the record, John S. Pistole, Federal Bureau of Investigation Assistant Director, Counterterrorism Division, before the House Select Committee on Homeland Security. Retrieved August 19, 2007 from http://www.globalsecurity.org/security/library/congress/2003_h/031001pistole.doc

President’s Identity Theft Task Force. (2007). Combating identity theft: A strategic plan. Retrieved May 28, 2007 from http://www.identitytheft.gov/reports/StrategicPlan.pdf

Privacy Rights Clearinghouse. (2007). Chronology of data breaches 2006: Analysis. Retrieved May 27, 2007 from http://www.privacyrights.org/ar/DataBreaches2006-Analysis.htm

Schwartz, P. M., & Janger, E. J. (2007). Notification of data security breaches. Michigan Law Review, 105, 913-984.

Scott, M. (2005). An assessment of biometric identities as a standard for e-government services. International Journal of Services and Standards, 1(3), 271-286.

Sharp, T., Shreve-Neiger, A., Fremouw, W., Kane, J., & Hutton, S. (2004). Exploring the psychological and somatic impact of identity theft. Journal of Forensic Sciences, 49(1), 131-136.
Slosarik, K. (2002). Identity theft: An overview of the problem. The Justice Professional, 15(4), 329-343.

Sweeney, L. (2006). Protecting job seekers from identity theft. IEEE Internet Computing, 10(2), 74-78.

Synovate. (2003). Federal Trade Commission—Identity theft survey report. Retrieved May 20, 2007 from http://www.ftc.gov/os/2003/09/synovatereport.pdf

Tally, G., Sames, D., Chen, T., Colleran, C., Jevans, D., Omiliak, K., & Rasmussen, R. (2006). The Phisherman Project: Creating a comprehensive data collection to combat phishing attacks. Journal of Digital Forensic Practice, 1(2), 115-129.

United States Department of Treasury. (2005). The use of technology to combat identity theft. Report on the study conducted pursuant to Section 157 of the Fair and Accurate Credit Transactions Act of 2003. Retrieved May 20, 2007 from http://www.ustreas.gov/offices/domestic-finance/financial-institution/cip/biometrics_study.pdf

van der Meulen, N. (2006). The challenge of countering identity theft: Recent developments in the United States, the United Kingdom, and the European Union. Report commissioned by the National Infrastructure Cyber Crime program (NICC). Retrieved May 20, 2007 from http://www.tilburguniversity.nl/intervict/publications/NicolevanderMeulen.pdf
Key Terms

Biometrics: Biometrics are unique physiological or behavioural attributes of the individual that may be utilized for verification and authentication of identity. Examples of biometrics include fingerprints, voice, retina, facial structure, DNA profile, and heat radiation.

Cyber Identity Theft: Cyber identity theft refers to the online misappropriation of identity tokens using information and communication technologies.

Identity Crime: Identity crime refers to the use of false and/or stolen identities for criminal purposes. It encompasses both identity theft and identity fraud.

Identity Fraud: Identity fraud is the use of false and/or stolen identities to obtain money, goods, services or other benefits.

Identity Theft: Identity theft involves the theft of legal identity, manifest in identity tokens such as SSNs, documents or knowledge of factual information that identifies an individual.

Keyloggers: Malicious software (‘malware’) installed on victims’ computers to record keystrokes (including passwords and logins).

Phishing: Phishing refers to the use of emails and websites to ‘fish’ for personal information such as credit card numbers, bank account information and passwords to use for fraudulent purposes.
Chapter XXXVI
Walking the Information Overload Tightrope

A. Pablo Iannone
Central Connecticut State University, USA
Abstract

This chapter asks: What is information overload? At what levels of existence does it occur? Are there any features common to information overload at all these levels? What are information overload’s types? What are information overload’s current and future trends? What problems do they pose? How can they be addressed in both effective and morally justified ways? It argues that there is anarchy concerning the meaning of information overload; that information overload’s precise characterization is best left open at this stage in the inquiry; that information overload occurs at the biological, psychological, and social levels; that it is relational; and that there are at least two overall types of information overload—quantitative and semantic—involving various kinds and current and likely future trends which pose problems requiring specific ways of dealing with them. The essay closes by outlining how to identify effective and morally justified ways of dealing with information overload.
Information and Information Overload

T. S. Eliot once asked “Where is the wisdom lost in knowledge and where is the knowledge lost in information?” (Eliot, 1952, p. 96). This essay is written in the spirit of Eliot’s question. I will address a particular way in which knowledge, wisdom, and much more can be lost in information, namely, information overload. I will ask:
What is information overload? At what levels of existence does it occur? Are there any features common to information overload at all these levels? What are information overload’s types? What are information overload’s current and future trends? What problems do they pose? How can they be addressed in both effective and morally justified ways? I will argue for seven main theses: First, there is anarchy concerning the meaning of information overload.
Second, the precise characterization of information overload is best left open at this stage in the inquiry. Third, there is a wide range of levels—biological, psychological, social—at which information overload occurs. Fourth, information overload at each level is relational, involving an input and a capacity to process it and respond with an output under specifiable circumstances. Fifth, there are at least two overall types of information overload, quantitative and semantic. Sixth, current and likely future information overload trends pose problems requiring different ways of dealing with them depending on the nature and circumstances of the overload. Seventh, fully evaluating these trends, and policies and decisions concerning them, must partly be carried out in the real world, by those affected, through meaningful dialogue and interaction with various information overload situations and options for dealing with them, along the lines of what can be characterized as discursivo-interactive ethics. For this purpose, I will outline an approach (which I discussed in detail elsewhere) for engaging in such dialogue and interactions.

The term “information” derives from the Middle English “informe(n)” and this from the Latin “informare”—to form, in particular, the mind. “To form the mind,” however, has more than one meaning. One is education—the development of a person’s general knowledge, judgment and character, i.e., the development of wisdom, as in educating the young so that they can live a good life. To those who conceive of education in this manner, however, wisdom is not merely information. T. S. Eliot’s above remark suggests it, and Alfred North Whitehead made it plain when he quipped “Culture is activity of thought, and receptiveness to beauty and humane feeling. Scraps of information have nothing to do with it. A merely well-informed man is the most useless bore on God’s earth.” (Whitehead, 1929, p. 1).
Besides meaning education, “to form the mind” also means instruction, or the communication of rules of reasoning or action—e.g., the rules of inference of a logical system, the grammar of a language, the rules of etiquette of a culture, the motor vehicle regulations of a state. A third meaning is training, or the development of particular practical skills—say, how to use a new word processing program, how to drive a motor vehicle, how to swim. Instruction and training, however, are often considered identical to education. This confuses things and exacerbates conflicts with those who conceive of education as aimed at developing wisdom. Though information has little prestige among the latter, it has much prestige among those who identify education with instruction or training.

Further, the term “information” has acquired a wide range of additional meanings—from sensory and other types of input (as in a sweet taste), through data (which need not be structured, e.g., noise), to patterns (which are structured, as in music, and may have causal properties, as in nucleotides in DNA). In fact, today, the term “information” is often used merely as a buzzword in phrases like “Information Age” and “Information Society.” This meaning overload of the term “information” is but one of the factors contributing to the unmanageability of what goes under the heading of “information overload.” There are then good reasons to conclude our first thesis: There is anarchy concerning the meaning of information overload.

In attempting to curb this semantic anarchy, one could venture a more focused definition of information overload, say, as an excessive encumbrance of knowledge. Alternatively, one could define information overload in terms of information theory or the mathematical theory of communication, as an excessive encumbrance of messages imparted (Shannon & Weaver, 1949, passim; Hartley, 1928, pp. 535-563). Yet, such attempts might miss significant aspects of information overload by definitional fiat. This provides grounds for our second thesis: The precise characterization of information overload is best left open at this stage in the inquiry.
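As a concrete point of reference for the information-theoretic option just mentioned—offered here as standard background, not as part of Iannone’s own formulation—Shannon’s measure quantifies the average information of a source with symbol probabilities $p_i$, while Hartley’s earlier measure gives the information conveyed by $n$ selections from an alphabet of $s$ symbols:

$$H(X) = -\sum_{i} p_i \log_2 p_i \quad \text{(Shannon)}, \qquad H = n \log s \quad \text{(Hartley)}$$

On such a view, an “excessive encumbrance of messages imparted” would be an input whose measure, in bits, exceeds what the receiving system can process in the time available.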
Accordingly, I will address cases which have some claim to exemplify information overload, briefly indicate the conception or conceptions of information they involve, try to distinguish different information overload levels, kinds and trends, indicate the problems they pose, and outline ways of addressing them. This approach will help highlight all concerns involved and develop reasons supporting a non-question-begging conception of information overload that is sensitive to all those concerns. I will therefore proceed merely using a tentative understanding of information overload as the excessive flow or amount of input or information that can but need not be knowledge and can lead to detrimental biological, psychological, or social effects.
Levels of Information Overload

It has been widely thought in modern times that more knowledge brings scientific, technological, economic and, generally, social progress, and helps individuals not only attain freedom from all kinds of want, but also achieve personal fulfillment. Disagreeing voices have mostly echoed views from past times frozen in such sayings as “ignorance is bliss” and “knowledge is the root of all evil.” Among scholars, the dominant view was poignantly formulated by the economist J. M. Clark: “knowledge is the only instrument of production that is not subject to diminishing returns” (Clark, 1923, p. 120). Yet, skeptical questions have been increasingly raised in science since the early twentieth century. Some are associated with information overload as sensory input overload. Let us examine it next.
Sensory-Input Overload
In the early 1900s, Georg Simmel discussed the attitude of reserve that city people use to shield themselves from metropolitan indiscriminate suggestibility or, as he said, “the intensification of nervous stimulation” (Simmel, 1950, pp. 409-410). This stimulation, he argued, causes many city dwellers to develop a perceptual habit of hardly noticing individuals when moving through a crowd, and its associated characteristics of reserve and even slight aversion (Simmel, 1950, pp. 415-417). Whatever the response, the kind of information overload causing it is at least partly psychological—sensory input overload.

Further, the sensory input overload Simmel describes is highly conceptualized in various ways. One is as annoyance, its associated response being aversion. Another is as an unwanted or uninteresting distraction, which causes filtering out the input by responding with indifference (though not necessarily with aversion). And the conceptualization can be habitual wherever a trait—for example, reserve—develops, hence recurrently curtailing the input, if not by filtering it out, at least by queuing it—delaying its review and the resulting eventual response. The sources of such overload are also often highly conceptualized. Examples include new scientific findings—say, the therapeutic uses of stem cells, or the growing discovery of life-supporting planets—new technological developments—say, the Internet, or fuel cells—and new or radically altered market options—some, however unreliable, real, like the web’s worldwide advertising market of fertilizers and metal scrap, others fictional, like the available real estate market in Second Life. In any case, these examples evidence that Simmel’s subject is not just psychological but also social information overload.

In the 1970s, Peter H. Lindsay and Donald A. Norman noted that in these types of cases, the “discrepancy between the amount of information held in the sensory system and the amount that can be used for later stages of analysis…implies some sort of limit on the capacity of later stages, a capacity that is not shared by the sensory stages themselves” (Lindsay & Norman, 1972, p. 329).
more information in the sensory store than can be extracted, a limit on how much of this information can be exploited by the cognitive mechanisms" (Dretske, 1981, p. 142). Sensory input overload, then, varies with the circumstances, the individuals involved, and the perceptual tasks they face. No doubt, some individuals can detect and classify sensory inputs—for example, hear sounds and distinguish colors—more accurately than others. Nonetheless, in perceptual multitasking—say, perceiving and rating size, brightness, and hue presented jointly instead of separately—the total capacity to perceive and rate the inputs increases, even though the accuracy in perceiving and rating each individual input decreases (Eriksen, 1962). Hence, sensory input overload (e.g., of size, brightness, and hue) is part of a relation which also involves the capacity to process the input (conceptualize it) and respond with an output (rate it) under given circumstances. Overload occurs when the input crosses a threshold beyond which the information-processing and response capacity is undermined. What about non-sensory input overload?
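Before turning to that question, a toy model may make the threshold relation just described concrete. It is only a sketch: the exponential decay of per-input accuracy, and the decay rate, are illustrative assumptions of mine, not Eriksen's data.

import math

def per_input_accuracy(n, k=0.35):
    # Toy assumption: accuracy on each of n concurrent inputs decays as e^(-k*n).
    return math.exp(-k * n)

def total_throughput(n, k=0.35):
    # Correctly rated inputs per trial: rises with n, peaks near n = 1/k, then falls.
    return n * per_input_accuracy(n, k)

for n in range(1, 8):
    print(n, round(per_input_accuracy(n), 2), round(total_throughput(n), 2))

On these assumptions, per-input accuracy falls throughout, yet total throughput rises up to about three concurrent inputs and declines thereafter—a simple instance of input crossing an overload threshold.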
Non-Sensory Input Overload

Non-Sensory Input Overload in Biology

Non-sensory input overload occurs at the cellular level when increasing the rate of input of electrical impulses to a neuron eventually leads to transmission breakdown (Miller, 1960). It also occurs at the whole-organ level: the reactions of a cat's optical cortex stimulated by light flashes of constant duration but variable flickering rates increased up to twenty-five flashes per second but, after that threshold was reached, decreased in some—eventually sharp—proportion to the increase in flash frequency. Additional forms of biological non-sensory input overload can (and do) occur at the ecosystem level, in ecosystem
overload (Miller, 1960; Granit & Phillips, 1956; Boulding, 1978; McMichael, 1995). Some conditions, however, occur at the physiological level but are partly prompted by sensory input overload. Examples are fainting upon learning of extremely bad news and some rare forms of amnesia such as psychogenic amnesia and psychogenic fugue (Associated Press, 2007, p. F4; Douglas & Marmar, 1998; Michelson & Ray, 1996).1 These bridge the gap between physiology and psychology. Let us next turn to non-sensory input overload in computer systems.
Non-Sensory Input Overload, Computers, and Information Theory

The notion of information overload is also used in computer theory to indicate that so much information has entered an information-processing system that the system cannot easily, if at all, process it. This characterization suits information theory, which can apply both to information systems processing non-sensory input information and to those processing sensory input information. Now, a computer is an information-processing system that, theoretically, could have an indefinitely expandable storage tape, i.e., constitute a Turing machine. This is an imagined physical actualization of a formal logic that has an unlimited number of proofs and theorems.2 Yet, actual computers' hardware or software limitations often constrain their information-processing capacities.3 This kind of information overload prompts additional questions: Where is the information overload threshold for an actual computing device? Can it be altered? How? Given this essay's ethical focus, however, I will not address these questions here and will instead turn to the social and personal effects of information overload.
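Before doing so, a minimal sketch may help fix the contrast between the idealized Turing machine and an actual, memory-bounded device. The machine below and its capacity limit are hypothetical illustrations, not a model of any particular computer.

class BoundedTapeMachine:
    # A tape that grows on demand, up to an optional hard capacity limit.
    def __init__(self, capacity=None):
        self.tape = {}            # sparse tape: position -> symbol
        self.capacity = capacity  # None models the idealized, unlimited tape

    def write(self, position, symbol):
        if (self.capacity is not None and position not in self.tape
                and len(self.tape) >= self.capacity):
            # The actual computer's analogue of information overload:
            # input exceeds what the hardware can store.
            raise MemoryError("tape capacity %d exceeded" % self.capacity)
        self.tape[position] = symbol

ideal = BoundedTapeMachine()             # idealized Turing machine: never overloads
actual = BoundedTapeMachine(capacity=4)  # bounded memory: overloads

for i in range(10):
    ideal.write(i, "1")                  # always succeeds
    try:
        actual.write(i, "1")
    except MemoryError as error:
        print("cell", i, "->", error)    # threshold crossed at i = 4

On this sketch, the overload threshold just is the capacity parameter; altering it means altering the hardware or the allocation limit—one way of framing the questions raised above.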
INFORMATION THEORY, INFORMATION OVERLOAD, AND SOCIAL LIFE

At the societal macro-level (e.g., the economy, the political system as a whole, and, in general, social institutions), information overload is non-sensory, while at the societal micro-level of interpersonal relations, information overload often is sensory. Further, both these kinds of social information overload contrast with the non-sensory input overload of organs or neurons (at a non-sensory physiological micro-level), and with non-sensory input overload at the biological macro-level of ecosystems (Boulding, 1978; Iannone, 1994, Chapter 7, especially pp. 96-97 and 105-108; McMichael, 1995). A third level of information overload is the psychological.
Scope of Social Information Overload

Though social information overload was already studied in the early 1900s, the phrase "information overload" was first applied to groups around the mid-twentieth century (Miller, 1960). In 1963, Karl Deutsch discussed information overload as a disease of cities which limits freedom, as well as efficient communication and transport (Deutsch, 1963). However, not until 1970 was information overload comprehensively discussed in a way that interconnected its various levels—biological, psychological, social—and explored it in a wide range of contemporary personal, interpersonal, and collective developments (Toffler, 1970).
Information Overload and Issue Overload

Social information overload often significantly results from the circumstances, most notably from public resistance to or demand for any of the above or other information items. This often contributes to causing issues, that is, sharp differences of opinions or conflicts of demands, together with what people do to uphold their opinions or satisfy their demands. This conflictive nature of issues, together with their complexity and sheer multiplicity—let alone the speed at which they develop—poses various social problems over and above those posed by any one issue in isolation. Most notable among these is issue overload, i.e., a situation in which issues are so many, so complex, or so intractable that they exceed, or nearly exceed, what ordinary individuals can understand and ordinary societies can handle through the courts, legislation, or executive or other institutional channels as traditionally set up (Iannone, 1994). At the social level characteristic of issue overload, information overload occurs in a feedback loop structure whereby, often, issue overload partly results from information overload and, in turn, almost invariably contributes to increasing information overload. Elsewhere, I have dealt with issues and issue overload, as well as with the feedback loop structures they involve and ways of addressing them (Iannone, 1999). I will return to this matter later. Let us here simply note that, just as with psychological and physiological information overload, social information overload is relational, which confirms our fourth thesis: information overload at all levels—biological, psychological, and social—is relational, involving an input and a capacity to process it and respond with an output under specifiable circumstances. It also shows some current and likely future trends, to which I turn next.

SUB-KINDS AND CONSEQUENCES OF INFORMATION OVERLOAD: CURRENT AND FUTURE TRENDS

Orrin E. Klapp argued that information overload yields noise, banality, and their frequent companions: alienation, despair, disenchantment, anomie, feelings of illegitimacy, and absurdity, all of which lead to boredom. He also argued that overload is
not only a matter of quantity of information, but also of degradation of information (Klapp, 1986, pp. 1-4, 80, 82, 89, 117, 120-121, 125-129). Let us examine these claims.
Noise

A group of friends is at a dinner party. They are waiting for another friend, Paul, who is a bit late and is supposed to bring his girlfriend Julie to the party. The group's conversation has grown quite loud. One of them, Peter, asks, almost shouting to be heard, "What car did Paul say he would bring?" All of the others excitedly and truthfully answer him at the same time: Mary says "His newest car." Jim replies "The car he loves most." Marcela responds "The car by which he plans to propose to Julie." Tom states "That of his cars with the most buttons on the dashboard." Angela says "His green car." Since all of them reply loudly at the same time, Peter hears only noise. This is a kind of information overload—noise—which amounts to a matter of quantity—a high degree of loudness—and interference—the replies given together competed with one another so as to prevent each from being heard by Peter. He then asks them to reply to his question without shouting and they do so, but still all at the same time, so that Peter still cannot understand any of the answers. This is still noise, a kind of information overload; but it is merely a matter of interference. Klapp states that "even when measured in decibels, noise is subjective" (Klapp, 1986, p. 84). In Peter's case, it is subjective in the sense that it depends on the subject's capability to disentangle interference. Suppose further that Peter asks his friends not to speak at the same time and Mary repeats her answer softly—"His newest car"—and the others follow suit. These answers may all be true (as
stated in our description of the case), yet useless to Peter if he does not know any of Paul's cars. This does not make the answers of Peter's friends false, nor does it make them noise. It makes them uninformative relative to Peter. That is, not merely structural or semantic, but also pragmatic factors come into play.
Jargon

Klapp correctly claims that "noise can occur anywhere that information is communicated, such as radio static, computer garbage, headlight glare, specks that spoil a photographic negative, typographical errors, distortion of rumors, genetic noise, interference between brain hemispheres, and so on" (Klapp, 1986, p. 84). Further, loudness also applies to signals other than sound signals—from enormous or bright ads to flashy dressing styles. And interference occurs in the relentless superposition Klapp aptly calls "clamor" and, together with loudness, accurately categorizes as "forms of intrusiveness" (Klapp, 1986, p. 85). I must, however, part company with Klapp concerning jargon because, though repulsive and a source of decoding difficulty, the kind of information overload it produces is semantic—i.e., it concerns the meaning of the jargon used—not, like loudness and interference, quantitative. Now, loudness, interference, and jargon are part and parcel of current globalization processes. Here, markets and other interactive environments produce global associations with features of the city Simmel described—a global city of sorts, with its characteristic multiple, transient, distant interactions; not at all a global village of recurrent and close interactions. Let us examine these developments.
Boredom and Contemporary Worldwide Developments

Boredom sometimes—though not always—results from no or little information. It also results
from excessive redundancy, say, repeatedly conveyed clichés and impersonal messages from the media. Clearly, not all redundancy is bad. Some is reassuring—evidencing that friends continue to be friends—helps social continuity—as in rituals of solidarity—aids communication—by ensuring that messages have indeed been understood—confirms individual and social identity—as when returning to a significant place in our past—and promotes active engagement—as with a well-known tune or joke. Yet, redundancy's benefits have a threshold. Some people become dependent on internet chatrooms and cell-phones. To prevent or overcome such excesses, cell phones and answering machines screen out unwanted calls (indeed, a US law prohibits telemarketing calls to numbers their owners have listed federally), and email services block spam, because a law of diminishing returns is at work, not only economically but also psychologically. Even vacations become boring—because repetitive or too restful—for persons now in the habit of being constantly exposed to constant and multifaceted bits of information (Shelton, 2007, pp. F1-F3).4 Some, however, insist that things are better than many suggest (Postrel, 2003). For in the contemporary hyper-informative environment, perceptual and multitasking abilities are highly developed, furthering greater aesthetic appreciation. They argue that contemporary culture's aesthetic imperative is a vital component of a healthy, forward-looking society. One might reply that, in the process, though appreciation improves, say, concerning the cubist paintings of the Spanish painter Picasso and the atonal music of the Austrian composer Anton Webern, it correlatively declines, say, concerning the paintings of Rembrandt and the music of Haydn. Yet, it is still an open question worth pursuing whether this claim is true and, if so, to what extent the new aesthetic sensitivity of hyper-informative society undermines the aesthetic sensitivity which ties this society to its past.
Here speed is crucial. As Paul Virilio states, "The twin phenomena of immediacy and of instantaneity are presently one of the most pressing problems…Real time now prevails above both real space and the geosphere" (Virilio, 1995, Article A030; Virilio, 2002).
Personal and Social Information Overload as a Result of Increasing Speed in Life

For Virilio, increasingly faster social life leads to personal disorientation concerning reality. To exist is to exist here and now, he argues, and cyberspace's globalized, instantaneous information undermines the spatial and temporal anchors of the way we experience our existence, that of others, and our relations to those people and things other than ourselves. This disorientation is evident in how web-surfers form virtual relationships with other web-surfers in chat-rooms, or through email dialogues where the information exchanged can often hardly be checked. Such disorienting virtual relationships have led to tragic decisions by teenagers who became victims of sexual predators on the web. Yet, not all is bad about the web: expatriates (or, for that matter, many who travel far away to study or work) and those of their loved ones who stay behind can now be closer in touch and have longer, more substantial conversations than before thanks to email, and can even see each other thanks to Skype and webcams. Unreliable postal services, the inevitable time lags between letters, and the all-too-brief and weak phone calls that preceded satellite communications—which tended to produce scattered and rushed exchanges of news rather than sustained conversations—are now avoidable or things of the past. One may reply that cyberspace only gives a false sense of communication and closeness. Consider the members of a United States suburban family with teenage children, both of whose par-
ents work. Their week is scheduled into discrete periods—transit time, work time or school time, shopping time, family time, and personal time. This schedule scatters them in physical space and, though cell-phone family plans and computer messaging help them communicate, they do so mostly on the go. This eclipses more traditional forms of communication in such shared places as the dinner table or the living room. Yet, in the US (and elsewhere) in the 1950s, most mothers stayed at home taking care of their children and most fathers went to work out of the house for much of the day. Hence, family time and communication between fathers and their children were infrequent in many families. Afterwards, the increase in women working outside the home, together with TV viewing, undermined communication among all family members even more. Hasn't cyberspace, then, only intensified already present communication breakdowns? One reply is that the infrequency of family time and family members' communication in previous decades did not eclipse but emphasized physical space, while communication through cyberspace is abstract—exchanging voice or text messages, pictures, and videos without the interlocutors' bodily presence. That this involves a significant disvalue is made plain by the fact that even people comfortable with cyber-dating eventually want to meet and hang out with each other: mere cyber-dating and mere electronic messages are not physical enough.
Economic Information Overload as a Result of Increasing Speed in Life

As for the economic effects of new information technologies, they are not all bad. In developing economies, where traditional land-line phones are few, expensive, and defective, cell-phones have been a godsend for the poor and have become the main form of communication in their countries' economies. According to the United Nations International Telecommunications Union, between
2000 and the end of 2005, cell-phone subscriptions in developing countries surged fivefold, jumping to 1.4 billion. The beneficial results for individuals range from improving access to clients—and sales—for an embroidery businessman in Vietnam, to improving fishermen's access to fish prices at various ports in Chennai, India, so that they can call ahead from their fishing boats and bring their catch to the port where they can get the best price. At the societal level, benefits are substantial too. Market research in China, India, and the Philippines carried out by the firm McKinsey & Co. and, separately, by Professor Leonard Waverman of the London Business School suggests that a 10% increase in wireless penetration can lead to 0.5%—or about US$12 billion—of growth for an economy the size of China's (Associated Press, 2007, p. C3). Yet, the false sense of reality conveyed through internet exchanges has also led to imprudent decisions on the part of consumers or business people lured by the misleading felt-reality of cyberspace. Further, most notably in financial markets, funds can be moved through the internet from one country to another at the press of a button, leaving investors and entire markets high and dry. Some restrictions have been instituted in some countries; but they are limited so as not to discourage investment, in which case the cure would be worse than the disease. Others, however, welcome new technology's speed as progress, pointing out that there used to be a significant time-gap between the time a Hollywood movie started showing in the US and the time it did so elsewhere. Today, by contrast, when a Hollywood movie opens in the US it often simultaneously opens in movie theaters over much of the planet. Some, however, perceive this as a threat to local economies and cultures, which now must accelerate to keep up with foreign competition. Mixed responses are elicited in other markets. In the late 1990s, airline passengers welcomed
their newly acquired ability to buy tickets online, directly from the airlines or from online vendors, faster and at better prices. But travel agents objected, especially when various airlines decided to renege on the old business relationship they had had with travel agents. Also, critics point to the effects of the greater speed of operation introduced by automated equipment in production (say, in motor vehicle production lines) and internet use in personnel management (say, in speeding up and supervising, through managerial eavesdropping, the rate of production of airline ticket-sale employees). They argue that this increases workers' likelihood of being physically harmed by the breakdown of hard-to-operate equipment, and mentally harmed by having to concentrate more and more on simpler, repetitive tasks at the ever faster pace of the new equipment they must operate. They add that these consequences often involve exploitation, at least when the increase in speed and the correlative change in the nature of the workers' work decreases instead of increasing the workers' bargaining power, pay, status, and/or self-esteem (Iannone, 1987, pp. 78-80 and 101-118).
Political Information Overload as a Result of Increasing Speed in Life

The virtualization of government into an opinion-democracy patterned on viewer-counts and opinion polls undermines representative democracy. For the immediate, off-the-cuff internet recording of opinions or ballots undermines representative government in two ways. First, careful deliberation and judgment are undermined by the greater speed typical of internet exchanges, if not of the internet processes envisioned or implemented to attain closure or make decisions. Second, these exchanges all too often tend to encourage tunnel-vision: short-term thinking focused on narrow topics or lists of positions used as a litmus test for recording opinions or ballots. Both ways conflict
with representative government’s mandate to balance short, intermediate, and long-term concerns on behalf of the public. Further, through this virtualization and its simplifying effects, a kind of superficial globalization develops (Virilio, P., 1995, Articles: A030; Virilio, P., 2002). Though enhanced by cyberspace, however, the situation just mentioned began to develop before cyberspace entered political life. Government by referendum was already present and arguably undermining the long range component (say, concerning US foreign policy or US science policy) of representative democracy in the sense previously characterized. Yet, magnification by cyberspace often prompts the development of issues, and issue overload leading to legal and political gridlock in the U.S. legal system and at the global level (Breyer, 1993; Iannone, 1994; Iannone, 1999).5
Information Overload and Information Technology

Even granting that not all new information technologies' effects are bad, the sobering fact remains that, though these technologies, used individually, in the short run, and with moderation, can be harmless, they can have undesirable, maybe sometimes disastrous, collective consequences. For one, the digital information explosion is running out of room. According to the technology research firm IDC, the world generated 161 exabytes (161 quintillion bytes) of digital information in 2006 alone. This is the equivalent of 12 stacks of books each reaching from the Earth to the Sun, i.e., 3 million times the information in all books ever written. It would take 2 billion of 2007's highest-capacity iPods to hold just that digital information. In effect, the supply of data outstrips the data storage currently available (Associated Press, 2007, pp. C1 and C3).
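The iPod figure can be roughly checked, assuming the 80 GB model—on my understanding, the highest-capacity iPod on the market when the report appeared in early 2007:

161 × 10^18 bytes ÷ (80 × 10^9 bytes per device) ≈ 2 × 10^9 devices,

i.e., about two billion devices, in line with the figure reported.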
Information Overload and Knowledge

The speed at which information is exploding shortens what Peter J. M. Nicholson calls "the 'half-life' of active information—the information we actually call on to do our jobs and run our lives." This leads to an ever-growing revision of written information which, in the end, both in science and in life, sacrifices knowledge to speed, and authoritativeness to easy access, in the race to keep up with the information explosion (Nicholson, 2007, pp. B6 and B7). Analogously, some express ethical concerns about the consequences—quantitative and also semantic—of the enormous and growing amount of messages and its effects, both personal and social, on the users of information and on the information-processing systems themselves. The development of the printing press and its many unintended and unexpected consequences constitutes a precedent for the contemporary developments of computers and related information-gathering and transmission systems. Johann Gutenberg, a devoted Christian, thought his invention would promote the cause of the Church, whereas, in effect, it undermined its monopoly. Analogously, computers were supposed to help us accumulate information so that, eventually, unsolved problems would be solved and humanity would benefit. Yet, the ever-growing accumulation of information facilitated by computers has caused disorder—a quantitative result—and confusion—a semantic result. This, together with the preceding discussion, supports our fifth thesis: there are at least two main types of information overload, quantitative and semantic, each of which involves various kinds and sub-kinds of information overload. Within what threshold, then, are new digital technologies beneficial? At which point do they lead to information overload? What can and should be done about this overload, either proactively so it does not occur or correctively once it occurs? I will turn to these matters next.
HOW CAN—HOW SHOULD—WE DEAL WITH INFORMATION OVERLOAD'S CURRENT AND LIKELY FUTURE TRENDS?

Technological Approaches to Dealing with Information Overload

A current research trend aims at increasing the effectiveness of human activities while reducing the pressures their growing speed brings upon humans. It seeks mind-expanding machines or cognitive prostheses—devices that improve human cognition, much as eyeglasses improve vision, by accommodating to how humans behave. Say, an airplane-cockpit display that shows crucial information so that a standard pilot can understand what the aircraft is doing in a fraction of a second, instead of the usual few seconds, accommodates to how much humans use both their central and their peripheral vision and to how their memory processes information—both topics in human cognitive studies. There is, however, controversy about whether these mind-expanding devices are a blessing or a curse, bringing about even greater pressures by increasing the speed at which humans can process information and function without being overloaded (Bower, 2003, p. 136). What about other—less artificial—ways of dealing with information overload?
Psychological Response Mechanisms to Information Overload

Information-processing responses other than those Simmel described—indifference, reserve, slight aversion—were identified in 1960 by James G. Miller. He listed: "(a) omission—temporary nonprocessing of information; (b) error—processing incorrect information; (c) queuing—delaying the response during a period of high overlap of input information in the expectation that it may be possible to catch up during a lull; (d) filter-
ing—neglecting to process certain categories of information while processing others; (e) cutting categories of discrimination—responding in a general way to the input, but with less precision than would be done at lower rates, that is, instead of reporting 'I see yellow,' saying 'I see a light color' or 'I see a color'; (f) employing multiple channels—processing information through two or more parallel channels at the same time; decentralization is a special case of this; (g) escape from the task" (Miller, 1960, passim; Lanzetta & Roby, 1956, pp. 307-314). These ways of dealing with information overload exemplify an additional current trend in understanding and helping deal with information overload. They, together with the human-user-friendly research trend previously mentioned, provide good reason for our sixth thesis: current and likely future information overload trends pose problems requiring different ways of dealing with them depending on the nature and circumstances of the overload.
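Three of Miller's mechanisms—queuing, omission, and filtering—can be made concrete in a minimal sketch. The per-tick capacity, the priority labels, and the message stream below are illustrative assumptions, not Miller's experimental setup.

from collections import deque

CAPACITY = 2  # messages processable per tick (assumed for illustration)

def respond(stream, mechanism):
    queue, processed, dropped = deque(), [], []
    for arrivals in stream:
        if mechanism == "filtering":
            # Neglect whole categories: process only high-priority messages.
            arrivals = [m for m in arrivals if m.startswith("hi:")]
        queue.extend(arrivals)
        for _ in range(min(CAPACITY, len(queue))):
            processed.append(queue.popleft())
        if mechanism == "omission":
            # Temporary non-processing: whatever is still waiting is lost.
            dropped.extend(queue)
            queue.clear()
        # Under "queuing", the backlog simply waits for a lull in arrivals.
    return processed, list(queue), dropped

stream = [["hi:a", "lo:b", "lo:c"], ["hi:d", "lo:e"], []]  # a lull at the last tick
for mechanism in ("queuing", "omission", "filtering"):
    print(mechanism, respond(stream, mechanism))

Run on this stream, "queuing" eventually processes everything by catching up during the lull, "omission" loses whatever was still waiting at each tick, and "filtering" processes only the high-priority category.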
Social Decision Procedures for Dealing with Information Overload

Information overload has been likened to traffic congestion in investigating the logic and processes of social change (Braybrooke, 1974; Braybrooke, Brown, & Schotch, 1995; Braybrooke, 1998) and in studying social decision procedures (Barry, 1976, pp. 88-93; Iannone, 1994, essays 4 and 5 through 8). Let us compare a few of these procedures with Miller's mechanisms. Some social decision procedures are strictly non-confrontational. Examples include discussion of merits, negotiation, consensus building, bargaining, mediation, arbitration, and simply coping while waiting for a favorable opportunity to act. This latter procedure involves Miller's queuing—delaying the response during a period of high overlap of input information in the expectation that it may be possible to catch up during a lull—and omission—temporary non-processing
of information (e.g., when the media do not for a while pick up on available newsworthy information). Arbitration and mediation, meanwhile, are forms of Miller's escape from the task, because the social decision procedure is passed on to some other agent. Other social decision procedures are still non-confrontational but involve pressure tactics such as publicity campaigns, letter-writing, demonstrations, and public appeals to higher authorities (including the international community), e.g., in trying to curb those TV-ad excesses which amount to what we discussed under the category of noise. They can, and often do, occur concomitantly; hence they are analogous to, or forms of, Miller's employing multiple channels—processing information through two or more parallel channels at the same time, of which decentralization is a special case. Still other social decision procedures are confrontational but non-violent. They include civil disobedience (e.g., to oppose internet-related privacy violations), outflanking or bypassing, political insulation, and various forms of manipulation—perhaps involving pressure from third parties—when social issues are involved and a balance needs to be found between competing and intractable political forces. Among these, outflanking or bypassing is analogous to, or overlaps with, Miller's cutting categories of discrimination or, in more drastic situations, Miller's filtering. At the societal level, and when it does not involve manipulative intentions—e.g., rumors passed on as true information which counteracts destabilizing forces—it is also analogous to, or overlaps with, error—processing incorrect information, which may enable the system to return to normal processing afterwards. Finally, there are social decision procedures which are confrontational and involve various degrees of violence. Examples of these are news blackouts, economic embargoes, freezing foreign investments, strikes or, in general, undermining or preventing the delivery of goods or services
to social sectors or entire societies, blockades, threats of force, and, in crisis situations, even combat. All of these may be effective and morally permissible when the stability of sound societies is threatened, e.g., by hacking attempts by foreign governments or their proxies aimed at breaking into the restricted websites of other countries' defense-related facilities or at overloading them in order to make them crash and hence inoperative. Some of the said procedures (e.g., news blackouts, economic embargoes, freezing foreign investments, strikes, and blockades), when used as ways of avoiding facts calling for more drastic decisions, are analogous to or overlap with Miller's omission. Since they can occur concomitantly, they are also analogous to or overlap with Miller's employing multiple channels. Combat, which in effect changes the information by causing a crisis and system breakdown, is a procedure analogous to or overlapping with what Miller calls escape from the task. These analogies hold between the psychological and social levels, suggesting that they can apply across various—at least sensory—levels of information overload. For example, queuing can have interpersonal applications—say, when job demands temporarily increase unmanageably, or when a couple faces too many daily problems and delays discussion to avoid a fight—as well as policy applications—when too many issues confront policy makers and these postpone discussion on some (Miller, 1960, p. 697).6
How Far Can These Procedures Help, and When Are They Morally Justified?

Given the fluid nature of information overload's kinds and trends, their ethical evaluation and the resulting policy and decision injunctions cannot be completed in this chapter, or even in a series of academic studies. This leads to my seventh thesis: evaluating these trends, and the policies and decisions concerning them, must partly be carried
out in the real world, by those affected, through meaningful dialogue and interaction with various information overload situations and options for dealing with them, along the lines of what can be characterized as discursive-interactive ethics. What this chapter can and does offer is an outline of an approach (which I have discussed in detail elsewhere) for engaging in such dialogue and interactions (Iannone, 1994, pp. 1-14, 87-111, 112-150; Iannone, 1999, pp. 174-180).
First Determine the Nature of the Situation

One should first determine the kind of situation faced as a result of information overload by asking: Does the situation arise at the micro-level (say, excessive noise in a metropolis) or the macro-level (say, the supply of data currently generated in cyberspace outstripping the data storage currently available)? Is it conflictual (as when barking dogs' owners conflict with those who want to silence the dogs) or consensual (as when most everyone is happy that more cyberspace information is generated, unaware of its becoming impossible to store, let alone use)? Does it predominantly make room for appeals to reason and meaningful dialogue (as in disagreements about the need to regulate the immediate, off-the-cuff internet recording of opinions or ballots), or is it mainly confrontational (as in hacking for political reasons or simply as a form of vandalism)? Is it destabilizing of individuals (as in the case of the metropolis's noise) or of groups (as in the off-the-cuff internet recording of opinions or ballots)? Does it involve violations of individual rights (as in internet violations of privacy) and, if so, in what ways and to what extent?
Next Determine the Feasible Responses Available and the States of Affairs Each Would Create

Once the situation's features have been reasonably determined, ask: What responses are likely to be
both feasible and effective in dealing with the situation? Which of these responses are equally or more likely to lead to undesirable consequences to groups or to limitations on individual freedoms and well-being?
Then Determine the Reception by Those Likely to Be Affected or Their Trustees

Having identified these responses (say, information overload response mechanisms and other procedures previously discussed), ask: Do those affected or their trustees or representatives, when of sound and cool mind, free from the influence of coercion and manipulation, and well-informed, find them objectionable? If they do not, then, regardless of our personal objections, the said responses are permissible. But if those affected or their trustees find the responses objectionable, then these responses are not automatically permissible.
After That Determine How to Try to Settle Remaining Disagreements

A second-order question arises: How can the differences that divide those who confront the previous situation concerning tolerable responses to it be settled? General guidelines I have also discussed in detail elsewhere (Iannone, 1994, pp. 1-14, 87-111, 112-150; Iannone, 1999, pp. 174-180) are: Whenever reasonable people disagree about alternative policies and decisions, procedures may be used to settle on alternatives within the available time in the following order of priorities: First, only non-confrontational procedures, if feasible and likely to be effective in the foreseeable future, are permissible; Second, among confrontational procedures, only the least confrontational among those feasible and likely to be effective in the
foreseeable future are permissible, but only to the extent they are necessary; Third, if none of the preceding procedures is feasible or likely to be effective in the foreseeable future, then coping and waiting, perhaps accompanied by discussion of merits, consensus building, and some limitedly confrontational procedures (e.g., cutting off funds) likely to be marginally effective in the long run are permissible until circumstances change.
CONCLUSION

An answer to T. S. Eliot's question—"Where is the wisdom we have lost in knowledge? / Where is the knowledge we have lost in information?"—is: in globs of rapidly increasing information overload. What, then, can we do about it? It would be presumptuous and out of touch with the complexities of the situation to claim to have a complete, let alone a single or definitive, answer to this question. Yet, the decision procedures and guidelines described in this essay may help. So can advances in mathematics and computer technology. None of them, however, is a substitute for wisdom—the balanced "activity of thought and receptiveness to beauty and humane feeling" Whitehead considered the main aim of education. Nor is this balance something to be attained once and for all. Instead, it needs to be continuously regained in the ongoing process aimed at individual and social flourishing—our life.
REFERENCES

Associated Press (January 27, 2007). Amnesia victim wandered 25 days. New Haven Register.

Associated Press (January 28, 2007). Cell phones boost developing nations. New Haven Register.

Associated Press (March 6, 2007). Data explosion running out of room. New Haven Register.

Barry, B. (1976). Political Argument. London: Routledge and Kegan Paul.

Boulding, K. (1978). Ecodynamics: A New Theory of Societal Evolution. Beverly Hills, CA: Sage Publications.

Bower, B. (August 30, 2003). Artificial intelligence meets good old-fashioned human thought. Science News, Vol. 164, No. 9 (p. 136).

Bower, B. (February 17, 2007). Net heads. Science News, Vol. 171 (pp. 104-105).

Braybrooke, D. (1974). Traffic Congestion Goes through the Issue Machine. London: Routledge and Kegan Paul.

Braybrooke, D. (1998). Moral Objectives, Rules, and the Forms of Social Change. Toronto: University of Toronto Press.

Braybrooke, D., Brown, B., & Schotch, P. K. (1995). Logic on the Track of Social Change. Oxford and New York: Clarendon Press/Oxford University Press.

Breyer, S. (1993). Breaking the Vicious Circle. Cambridge, MA: Harvard University Press.

Clark, J. M. (1923). Studies in the Economics of Overhead Costs. Chicago: University of Chicago Press.

Deutsch, K. W. (1963). The Nerves of Government: Models of Political Communication and Control. New York: Free Press.

Douglas, J., & Marmar, C. R. (1998). Trauma, Memory, and Dissociation. Washington, DC: American Psychiatric Press.

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.

Eliot, T. S. (1952). Choruses from "The Rock." In Complete Poems and Plays (p. 96). New York: Harcourt, Brace.

Eriksen, C. W. (1962). Behavior and Awareness: A Symposium of Research and Interpretation. Durham, NC: Duke University Press.

Granit, R., & Phillips, C. G. (1956). Excitatory and inhibitory processes acting upon individual Purkinje cells in the cerebellum of cats. The Journal of Physiology, Vol. 133 (pp. 520-547).

Hartley, R. V. L. (1928). Transmission of information. Bell System Technical Journal, Vol. 7 (pp. 535-563).

Iannone, A. P. (1987). Contemporary Moral Controversies in Technology. New York and London: Oxford University Press.

Iannone, A. P. (1994). Philosophy as Diplomacy: Essays in Ethics and Policy Making. Atlantic Highlands, NJ: Humanities Press.

Iannone, A. P. (1999). Philosophical Ecologies: Essays in Philosophy, Ecology, and Human Life. Atlantic Highlands, NJ, and Amherst, NY: Humanity Books and Humanity Press.

Iannone, A. P., & Briggle, A. (2005). Information overload and speed entries. In C. Mitcham (Gen. Ed.), Encyclopedia of Science, Technology, and Ethics. New York: Macmillan.

Klapp, O. E. (1986). Overload and Boredom: Essays on the Quality of Life in the Information Society. New York: Greenwood Press.

Lanzetta, J. T., & Roby, T. B. (1956). Effects of work-group structure and certain task variables on group performance. Journal of Abnormal and Social Psychology, Vol. 53 (pp. 307-314).

Lindsay, P. H., & Norman, D. A. (1972). Human Information Processing. New York: Academic Press.

McMichael, A. J. (1995; © 1993). Planetary Overload: Global Environmental Change and the Health of the Human Species. Cambridge and New York: Cambridge University Press.

Michelson, L., & Ray, W. J. (1996). Handbook of Dissociation: Theoretical, Empirical, and Clinical Perspectives. New York: Plenum Press.

Miller, J. G. (February 1960). Information input overload and psychopathology. American Journal of Psychiatry, Vol. 116 (pp. 695-704).

Nicholson, P. J. M. (March 9, 2007). The intellectual in the infosphere. The Chronicle of Higher Education (pp. B6-B7).

Postrel, V. (2003). The Substance of Style: How the Rise of Aesthetic Value Is Remaking Commerce, Culture, and Consciousness. New York: HarperCollins.

Shannon, C. E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL: The University of Illinois Press.

Shelton, J. (January 21, 2007). E-nailed. New Haven Register (pp. F1-F3).

Simmel, G. (1950). The Sociology of Georg Simmel. Glencoe, IL: The Free Press.

Thiam, T. (July 1999). Neural networks. International Joint Conference on Neural Networks, Vol. 6 (pp. 4428-4431).

Toffler, A. (1970). Future Shock. New York: Random House.

Virilio, P. (August 1995). Speed and information: Cyberspace alarm! CTHEORY.

Virilio, P. (2002). Desert Screen: War at the Speed of Light. London and New York: Continuum.

Whitehead, A. N. (1929). The Aims of Education and Other Essays. New York: Macmillan.

KEY TERMS

Artificial Intelligence: A computer program designed to simulate human brain function and, by extension, the scientific program aimed at designing and building intelligent artifacts.

Artificial Life: A research program which studies the general characteristics and processes of life, including such processes as self-organization, self-reproduction, learning, adaptation, and evolution.

Cognitive Science: An area of research involving anthropology, computer science, linguistics, neuroscience, philosophy, and psychology that studies intelligent activity, whether displayed by living organisms (especially human beings) or machines.

Connectionism: A movement in cognitive science aimed at explaining human intellectual abilities using artificial neural networks (also called "neural networks" or "neural nets").

Information: Data communicated or received.

Information Processing: The acquisition, recording, organization, retrieval, display, and dissemination of information.

Information System: An integrated set of components for collecting, storing, processing, and communicating information.

Information Theory: A mathematical theory which formulates the conditions and parameters affecting the transmission and processing of information.

Modularity: The view that psychological organization involves independent and specialized cognitive systems or modules.

Neural Networks: Also called "neural nets," simplified models of the brain composed of large numbers of units (the analogs of neurons) together with weights that measure the strength of connections between the units.

Social Decision Procedures: Ways of going about trying to resolve differences of opinion or conflicts of demands between individuals or groups.
ENDNOTES

Some of the points discussed in detail in this essay were briefly outlined or simply mentioned in my contributions to Iannone, A. P., & Briggle, A. (2005). Information overload and speed entries. In C. Mitcham (Gen. Ed.), Encyclopedia of Science, Technology, and Ethics. New York: Macmillan.

1. Amnesia is the temporary or permanent loss of part or all of memory. When it results from extreme psychosocial stress, it is dissociative (previously called psychogenic) amnesia. Dissociative amnesia includes localized amnesia, selective amnesia, systematized amnesia, generalized amnesia, and continuous amnesia. The most common type, localized amnesia, is often an outcome of a particular event. It renders the afflicted unable to recall details of a usually traumatic event such as a violent rape. Selective amnesia is similar to localized amnesia except that the memory retained is very selective. Often a person can remember some aspects of the traumatic situation, but not the specific ones which make it traumatic. Systematized amnesia is a loss of memory for a specific category of information. In generalized amnesia, the afflicted forget the details of an entire lifetime. In continuous amnesia, the persons affected cannot recall the details of their lives from a certain point in time up to and including the present. Dissociative (psychogenic) fugue is generalized amnesia plus a flight from family, problem, or location. In highly uncommon cases, the person may create an entirely new life. The role of information overload in dissociation is still quite an open area of research.

2. The main features of computers help in understanding the latter point. Computers are complex hierarchical physical systems—i.e., a division of components such as memory boards, processors, and control systems, ordered in a manner that reflects their complexity—that use a hierarchy of languages, i.e., a division of languages ordered in a manner that reflects their complexity. That is, computers have two main types of components (which are the subjects of the two main branches of computer theory). One is computer hardware, i.e., the computer analogue of an organism's body. It includes memory—where information is stored—a processor—where the arithmetic-logic unit transforms data and the control unit executes programs—and input-output devices. The latter are used, first, for information exchanges between the computer and its environment; second, in the case of feedback and feed-forward, through a closed loop, between the computer and itself; third, in some cases—e.g., floppy disks and magnetic tapes—for additional storage. Memory, processor, and input-output devices communicate with each other through a fast switching system of components. The other main type of computer component is computer software—the computer analogue of an organism's information processing, i.e., in a wide sense of this term, of its mental life. It is written in a hierarchy of programming languages which provide instructions for data operations and transfers, transfers of control from one part of a program to another, and modifications of programs.

3. Connectionism promised to help address these questions between the 1940s and 1960s; then technical limitations, once noticed, made it less appealing until the 1980s, when the discovery of ways of overcoming these limitations made it appealing again. Alternatives to it involve modularity—the view that psychological organization involves independent and specialized cognitive systems or modules—and are not without critics (Fodor, J. A. (1983). Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press; Simon, H. (1996). The Sciences of the Artificial, 3rd edition. Cambridge, MA: MIT Press; Uttal, W. R. (2003). The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain. Cambridge, MA: MIT Press).

4. When our youngest daughter lost her cell-phone while a freshman at NYU, she reported having cell-phone withdrawal symptoms. Yet, a few days later, not having found her phone, she reported that it felt good not to be bombarded by so many continuous phone-calls and text-messages and that she was not going to rush to get a new phone, instead continuing with her techno-detox program (her words).

5. For example, the process leading to California's Proposition 13 made room for greater participatory democracy in California and heralded similar changes in the rest of the country by encouraging greater government by referendum in the 1970s—a decade before personal computers opened the door to the computerization of politics in the US and elsewhere.

6. One of Miller's related hypotheses concerns the cost of information transmission. He states (in terms of information theory) that "the cost per bit of information flow at very high rates is probably much greater than at low rates, rising precipitously at the confusion period as the system begins to break down" and suggests that future empirical studies might evaluate this hypothesis.
Chapter XXXVII
Cyber-Victimization

Lynne D. Roberts
Curtin University of Technology, Australia
ABSTRACT

Information and communication technologies (ICTs), while providing a range of benefits to individuals, organisations and governments, also provide new opportunities for criminal activities to emerge. This chapter provides an overview of criminal victimization online. The focus is on the impact of cyber-crimes on victims and the associated legal, technical, educational and professional responses to cyber-victimization. The focus on cyber-victimization is situated within the broader context of responses to victims of crime in off-line settings. The form of cyber-crimes will continue to change as new ICTs and applications emerge. Continued research into the prevalence, types and impacts of cyber-victimization is required in order to inform victim service provision and effectively address the needs of current and future cyber-victims.
INTRODUCTION

The use of information and communication technologies (ICTs) is ubiquitous in western societies. Networked computers enable instantaneous global communication and transfer of digital information. The introduction of third-generation (3G) mobile telephones provides the potential for wide-spread mobile Internet access. Networked ICTs provide social, educational, information, personal and financial opportunities. However, these same technologies also create new opportunities for criminal activity. Networked computers provide the media for new types (or variations
on old types) of criminal activity to emerge. The introduction of each new information and communication technology (ICT) potentially expands the range of criminal opportunities and potential victims. While these ‘cyber-crimes’ have received considerable media and some academic attention, the focus has largely been on the crime rather than offenders or victims (Wall, 2005). Limited research has specifically examined cyber-crime from the perspective of the victim. This chapter provides an overview of the current state of knowledge on cyber-victimization. It begins with a brief overview of cyber-crime, delineating cyber-crimes ‘against property’ and
cyber-crimes ‘against the person’. Six types of cyber-crimes are examined in terms of their defining features and prevalence. The body of the chapter examines the impact of personal and property cyber-crimes on victims with a focus on organizations, adults and children. Current legal and law enforcement, technical, educational and professional responses to cyber-victims are outlined. The examination of responses to cyber-victimization is situated within the broader context of responses to victims of crime occurring outside the virtual arena. The chapter ends with an acknowledgement that the continued development of ICTs will result in the emergence of new types of cyber-crimes and associated cyber-victimization. Continued research into the impacts of cyber-victimization is required in order to effectively address the needs of current and future cyber-victims.
BACKGROUND

ICTs may be used as a means of communication and organization to support criminal activities. ICTs provide criminals with a "global reach" (Savona & Mignone, 2004, p. 5) through cheap, fast, secure, anonymous communication with multimedia capacity. Further, ICTs can create new opportunities for criminal activity and provide new ways of conducting existing criminal activities (Savona & Mignone, 2004; Wall, 2005). These criminal activities that are enabled by ICTs are broadly referred to in the media and some academic discourse as cyber-crime. While the use of the term cyber-crime has been criticized as being "fairly meaningless because it tends to be used emotively rather than scientifically … with no specific reference point in law" (Wall, 2005, p. 79), it provides a useful umbrella when considering crimes committed using ICTs. Competing definitions of cyber-crime have emerged. For example, Tavani (2004) argued for limiting the use of the term to only those crimes
"in which the criminal act can be carried out only through the use of cybertechnology and can take place only in the cyberrealm" (p. 183). Tavani delimited cyber-crime from 'cyberrelated crimes', which include 'cyberexacerbated crimes' (such as cyber-stalking, online paedophilia and Internet pornography, where technology can affect the scope of the crime) and 'cyberassisted crimes' (such as using the Internet to lodge fraudulent income tax forms), where the technology is used to assist in the conduct of the crime without affecting its scope. Similarly, Grabosky (2004) differentiated conventional crimes committed with computers from cyber-crimes that involve attacks on computer networks, including theft and espionage. Wall (2005) proposed the use of an 'elimination test' to determine whether a crime would still exist without the Internet. Using this elimination test, Wall concluded that the Internet provides increased opportunities for existing types of crime (without the Internet these crimes would still exist, but would likely be reduced in number), new methods of conducting traditional crimes (without the Internet these new opportunities for offending would disappear) and opportunities for conducting new types of criminal activity. Within definitions of cyber-crime, differing typologies of cyber-crimes have been developed. For example, Gordon and Ford (2006) differentiated between cyber-crimes that are single, discrete events facilitated by crimeware programs and exploiting system vulnerabilities (e.g. phishing, hacking and identity theft) and cyber-crimes that involve repeated contacts or events and that typically do not depend upon crimeware programs (e.g. cyber-stalking, child predation and cyber-extortion). Wall (forthcoming) differentiated between computer integrity crimes (e.g. hacking, denial-of-service attacks and viruses), computer-related crimes that use networked computers to conduct criminal activities (e.g. cyber-piracy) and computer content crimes where illegal materials,
such as child pornography, are held on networked computers. For the purposes of this chapter, a broad definition of cyber-crime that reflects media and public use of the term is utilized. Cyber-crime is defined as "any crime that is facilitated or committed using a computer, network or hardware device" (Gordon & Ford, 2006, p. 14). Using this definition, the computer or device can be the agent, facilitator or target of the crime. While the range of potential cyber-criminal acts is huge, from cyber-terrorism1 to spam, this chapter will focus on current cyber-crimes, using a broad typology from criminology: crimes against property and crimes against the person. Due to space limitations it is not possible to cover all types of cyber-crimes. Instead, within each of these categories of cyber-crime, the focus will be on three types of cyber-crimes.
PROPERTY CYBER-CRIMES

Property cyber-crimes are focused on achieving financial gain. The three types of cyber-crimes against property examined here are spam, cyber identity theft and related fraudulent activity, and cyber-piracy.
Spam

Most Internet users are familiar with the concept of spam (cyber junk mail): the unsolicited bulk distribution of promotional emails. Addresses for spamming can be obtained by legal (opt-in subscriptions) or illegal means, such as the use of 'spider-bots' to harvest addresses from the World Wide Web. Spam can also be the delivery vehicle for malicious software ('malware'), including Trojans, keyloggers, viruses and scams such as advance-fee fraud emails (Holt & Graves, 2007). Cyber-crimes potentially converge where hackers, virus writers and spammers work together (Wall, 2004).
Based on a survey of 849 Internet users, Nucleus Research (2007) estimated that more than 90% of e-mail reaching corporate servers was spam, with spam representing two out of three messages reaching employees' inboxes after spam filtering. The estimated cost per employee of identifying and deleting spam was $712 per annum, based on an average of 21 spam messages received per day. While filters are blocking a higher percentage of spam than previously, an unintended consequence (and indirect cost) is the blocking of legitimate emails. The effect of spam is not restricted to corporate employees. The Australian component of the 2004 International Crime Victimisation Survey (a survey of 7001 Australian households) found that more than half of householders with Internet access were affected by spam and a quarter were exposed to Internet scams (Krone & Johnson, 2006). While at the individual level spam may be regarded merely as a 'nuisance', collectively spam provides "small-impact bulk-victimisations" (Wall, 2004, p. 311) and is a threat to public trust, communications and commercial infrastructure.
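These figures imply a per-message cost that can be roughly reconstructed; the 250 working days per year is an assumption of this back-of-the-envelope sketch, not a figure from the survey:

$712 per annum ÷ (21 messages/day × 250 days) ≈ $0.14 per spam message.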
Cyber Identity Theft

Cyber identity theft is the online misappropriation of personal information for use in presenting oneself as somebody else, typically for financial gain2. The range of methods employed in cyber identity theft includes hacking (unauthorized access to computer systems), phishing (the use of emails and websites to ‘fish’ for personal information), pharming (a variation on phishing that redirects connections between an IP address and the target server), advance-fee frauds, fake taxation forms, keyloggers and password stealers (Paget, 2007). Identity theft and related fraud is a growing concern across countries. Survey research suggests that in the US alone 8.4 million Americans were the victims of identity fraud in 2006, with the
total cost of this fraud estimated at $49.3 billion (Javelin Strategy and Research, 2007). Fraud associated with cyber identity theft is a major source of reported cyber-crime. The US Internet Crime Report (National White Collar Crime Center and the Federal Bureau of Investigation, 2007) provides details of Internet crimes reported to the Internet Crime Complaint Centre. In 2006, 207,492 complaint submissions were received, a 10% decrease from the previous year. Fraudulent losses amounted to $198.44 million with a median dollar loss of $724 per complaint. Almost half (44.9%) of complaints were about auction fraud. Most fraud was initiated through email (73.9%) or websites (36%).
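The gap between the median loss and the implied average loss in these figures illustrates how skewed cyber-fraud losses tend to be. The sketch below treats the reported total as if it were spread across all complaints, which the report does not claim (not every complaint involves a monetary loss), so the result should be read as a rough illustration only.

```python
# Rough illustration of skew in the 2006 IC3 figures: the implied mean
# loss sits well above the reported median, suggesting a long tail of
# large losses. Spreading the total over all complaints is a simplification.

total_loss = 198.44e6   # reported fraudulent losses ($)
complaints = 207_492    # complaint submissions received in 2006
median_loss = 724       # reported median dollar loss per complaint

mean_loss = total_loss / complaints
print(f"implied mean loss ${mean_loss:,.0f} vs median ${median_loss}")
# -> implied mean loss $956 vs median $724
```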
Cyber-Piracy

Cyber-piracy refers to illegal file-sharing of copyrighted material (e.g. music, computer software, video games, films, books) over peer-to-peer computer networks (Hill, 2007). It has been estimated that 35% of personal computer software is pirated (BSA, 2006). Almost 20 billion songs were illegally downloaded in 2005 (IFPI, 2006) and global CD sales fell 23% between 2000 and 2005 (IFPI, 2007).
CYBER-CRIMES AGAINST THE PERSON

Cyber-crimes against the person are criminal activities conducted online that take the form of an assault against the individual (or, in limited cases, a group, organization or culture), their integrity or reputation. The three types of cyber-crimes against the person examined here are cyber-bullying and harassment, cyber-stalking, and the sexual exploitation of children.
Cyber-Bullying and Harassment

Cyber-bullying refers to bullying behaviour conducted online through ICTs such as email,
newsgroups, bulletin boards, instant messaging, web-sites and online games. Cyber-bullying behaviours encompass online postings, conversations or messages that are designed to harass, humiliate and intimidate the receiver. This may include threats, insults and teasing. Multimedia capacity extends the range of bullying material to videos and images (such as photo-shopping a victim’s face onto pornographic images). The potential audience for cyber-bullying activities is far larger than for its offline counterpart (Chisholm, 2006) and the bullying is more concrete than verbal bullying (Campbell, 2005). In a survey of grade seven students, approximately one quarter reported being cyber-bullied, with 15% admitting to cyber-bullying other students. While not currently as prevalent as off-line bullying, a concerning aspect is that two-fifths of students who reported being cyber-bullied did not know who the perpetrator was (Li, 2007). The Youth Internet Safety Survey, a population telephone survey of youth Internet users in the US, reported that 9% of youth have been harassed online (Ybarra, Mitchell, Wolak & Finkelhor, 2006). A third of youth victims of online harassment have been victimized on multiple (3 or more) occasions. Victims are harassed by other adolescents in about half of cases (51%), may know their harassers off-line (45%) and may experience aggressive off-line contact (25%).
Cyber-Stalking

Cyber-stalking refers to stalking activities conducted online using information and communication technologies3. Cyber-stalkers may utilise a range of tools and virtual environments including email, chat rooms, bulletin boards, newsgroups, instant messaging and keylogging trojans. Cyber-stalking activities include threats, harm to reputation (‘cyber-smearing’), damage to data or equipment, computer monitoring and attempts to access confidential information (Bocij, 2003; Bocij & Sutton, 2004).
The Internet provides a wide range of opportunities for individuals to interact with strangers, expanding the potential ‘pool’ of victims for cyber-stalking. The widespread adoption of the Internet provides individuals with unprecedented access to information about other individuals, through both information placed in the public domain by the individual (e.g. a personal web page) and information placed online without the knowledge or consent of the individual and over which the individual has no control. Individuals with even limited technological sophistication can engage in the online surveillance of (potential) victims. Concern has been raised that this may result in increased stalking of ‘strangers’ (Bocij, 2003; Bocij & Sutton, 2004; McGrath & Casey, 2002). The prevalence of cyber-stalking has yet to be established. While surveys of college students have reported that between one in ten and a third of students experience at least one form of online harassment (Finn, 2004; Spitzberg & Hoobler, 2002), community population-based surveys have yet to be conducted, limiting the generalisability of research findings.
Sexual Exploitation of Children

Cyber-crimes involving the sexual exploitation of children include the online distribution of child pornography, the sexual grooming and solicitation of children and children’s unwanted exposure to online pornography. The Internet has increased the market for and visibility of child pornography by increasing the ease of producing and distributing digital material. It has enabled the fast transmission of child pornography across and between countries (Schell, Martin, Hung & Rueda, 2007). Pornographic child images posted online may potentially circulate in perpetuity. Establishing the prevalence of child pornography online is hampered by differing definitions of child pornography and the lack of reliable estimation methods. The CyberTipline was established in
1998 to coordinate the reporting of online child sexual exploitation (including child pornography). It is mandatory for Internet Service Providers to report child pornography and child sexual exploitation to the CyberTipline (CyberTipline, undated a). In 2005, 70,768 reports were made to the CyberTipline, of which 64,250 related to the possession, manufacture and distribution of child pornography (CyberTipline, undated b). In addition to being the subject of pornography, children may be exposed to (child or adult) pornography online. One third of youth surveyed in the 2005 Youth Internet Safety Survey had been exposed to unwanted sexual material online in the past year (Wolak, Mitchell, & Finkelhor, 2006). Increases in unwanted pornography exposure have been attributed to increasing Internet use, technological changes such as digital photography, faster Internet connections and greater computer storage capacities, and aggressive marketing of pornography websites (Mitchell, Wolak, & Finkelhor, 2006). The introduction of 3G mobile phones has been posited to increase the exposure of children to unwanted pornographic images (Reid, 2005). Children may also be directly sexually solicited online. In 2005 the CyberTipline received 2,664 reports of online enticement of children for sexual acts (CyberTipline, undated b). The 2005 Youth Internet Safety Survey reported that approximately one in seven youth (13%) have received sexual solicitations online, down from one in five (19%) in 1999/2000 (Wolak et al., 2006). Deirmenjian (2002) identified two models of paedophilia operating in cyberspace: the ‘trust based seductive model’ and the ‘direct sexual model’. As the names suggest, in the first model a paedophile works to gradually obtain a child’s trust before attempting to ‘seduce’ the child into sexual activity; the second model dispenses with building trust and is openly sexual from the beginning. A stratified random survey of law enforcement agencies in the US conducted in 2001/2002 (Wolak, Finkelhor, & Mitchell, 2004) reported that the juvenile victims of sexual offences where
the offender and victim originally met online were predominantly 13- to 15-year-old girls (75%). Offenders and victims typically met in chat rooms, and in the majority of cases the adult offenders were open about their desire to initiate a sexual relationship, the victim and offender had sex on more than one occasion, and victims formed close attachments to or were ‘in love’ with the perpetrators. These findings are suggestive of Deirmenjian’s ‘trust based seductive model’.
CYBER-VICTIMIZATION

Research in off-line settings has demonstrated that victims vary in their response to crime. Factors influencing responses can broadly be broken down into characteristics of the victim, characteristics of the criminal event and characteristics of the post-victimization experience. Victim characteristics include demographics such as age and gender, but also pre-victimization adjustment and stress. Characteristics of the criminal event include the type and seriousness of the crime, the relationship between the victim and offender, and the victim’s perceptions of the crime and who was to blame. Following the crime, the post-victimization experience includes the level of involvement in the criminal justice system and the degree of social support (Lurigio & Resnick, 1990). Victims of crime may experience an initial shock reaction followed by physical, psychological and financial impacts. Longer-term effects may be experienced, impacting on employment and relationships. An impact may also be experienced by immediate family, friends and relations (United Nations, 1999). In addition to this primary victimization resulting from the criminal event, an individual may experience secondary victimization resulting from the response of institutions (criminal justice system, hospitals, victim services agencies) and individuals. As with crimes conducted in off-line settings, cyber-victims vary in their responses to cyber-crimes.
Many victims of cyber-crimes may not realize their victimization until long after the event. Only a small percentage of cyber-crimes are reported to the police, estimated by Wall (2004) at 120 to 150 reports per 1,000,000 cyber-crimes in the United Kingdom. Victims may be reluctant to report cyber-crimes due to embarrassment, not knowing where or how to report the crime, or the size of the loss (Wall, 2004). Organizations may not report cyber-crimes due to fear of negative publicity and negative commercial impact (Wall, 2005). The victims of cyber-crime include children, adults, organizations and governments. In this section we detail research findings on cyber-victims and their responses to cyber-crimes.
Children and Youth

Children may be the victims of a range of social harms and cyber-crimes online, predominantly ‘against the person’ offences. Children harassed and bullied online vary in their reactions to the behavior. More than a third (38%) of ten- to seventeen-year-olds harassed online report being distressed as a result of the harassment (The Youth Internet Safety Survey; Ybarra et al., 2006). Of concern, those targeted for harassment were more likely to harass others online themselves, had social problems and were victimized in other contexts (Ybarra et al., 2006). Similarly, Li (2007) reported that almost a third of Canadian junior high student victims who had been bullied off-line had also been cyber-bullied. Many victims of cyber-bullying experience sadness, anger, anxiety, and fear (Beran & Li, 2005). Being the perpetrator or victim of online harassment may be an indicator of problems in the child’s offline life. Male youths who reported being harassed were three times more likely than other youth to exhibit major depressive-like symptoms (Ybarra, 2004). Those who engaged in online harassment had multiple risk factors in their offline lives, including poor family relationships,
substance use and delinquency (Ybarra & Mitchell, 2004b). Those who report being an aggressor as well as a target of Internet harassment face significant psychosocial challenges (Ybarra & Mitchell, 2004a). The results from the Youth Internet Safety Surveys conducted in 1999/2000 and 2005 indicate that youth are now less likely to interact with strangers online (34% in 2005, down from 40% in 1999/2000) and to form close online relationships (11%, down from 16%) than previously. This is also reflected in changes in the relationship between perpetrators and victims. In 2005 approximately one in eight solicitations were from offline friends and acquaintances (up from 3% in 1999/2000). A significant minority of offences (41% of solicitations, 29% of exposures, 31% of harassment) occurred while the victim was using the Internet with offline peers. The majority (91%) of youth exposed to unwanted sexual material or solicited online do not report being distressed by the incident (Wolak et al., 2006). The exposure of youth to pornographic images and sexual solicitations online needs to be viewed within the context of youth behavior online. Chat rooms are frequently used by teenagers in developing their sexual identity, and the use of sexualised nicknames, utterances and themes is common (Subrahmanyam & Smahel, 2006). Rather than a common site for the sexual exploitation of minors, it has been proposed that the Internet may provide a ‘safer’ environment for exploring sexuality than off-line settings (Subrahmanyam, Greenfield & Tynes, 2004).
Adults

Adults may be the victims of crimes against the person online, including cyber-stalking, harassment and bullying. For example, there have been reports of the victimization of teachers by students where malicious videos are posted on sites such as YouTube (Independent Television News Limited,
2007) or teachers are rated by students on sites such as RateMyTeachers.com. Previous research has suggested that cyber-crimes against the person may be perpetrated by offenders who seek out ‘victims of opportunity’ by targeting inexperienced Internet users (Bocij, 2003; Bocij & Sutton, 2004; Casey, 2004). More experienced Internet users may be less likely to be victimized (Spitzberg, 2006) and may experience harassment as less distressing (Bocij, 2003; Bocij & Sutton, 2004). The ongoing threat and experience of vulnerability created by cyber-stalking may result in the victim experiencing a range of effects including psychological distress4. Adults may also be the victims of property crimes. Identity theft and associated fraudulent activity can result in financial loss, impaired credit ratings, damage to reputation and the integrity of identity, criminal investigation, lost productivity and psychological impacts including stress, emotional trauma and relationship breakdown (Identity Theft Resource Centre, 2003, 2005; Jefferson, 2004; LoPucki, 2001). Victims who are unable to easily resolve problems associated with identity theft are more likely than other victims to experience longer-term problems including clinical somatization, depression and anxiety (Sharp, Shreve-Neiger, Fremouw, Kane & Hutton, 2004). Cyber-crimes can result in secondary victimization. Victims of cyber identity theft may be denied services normally available to victims because in many jurisdictions police do not recognize the individual as the primary victim5 (Jefferson, 2004; van der Meulen, 2006). Victims may also be treated as ‘suspects’ for crimes committed with the stolen identities (Baum, 2006; Jefferson, 2004; Kreuter, 2003; Identity Theft Resource Centre, 2005). Impaired credit ratings may contribute to secondary victimization through denial of credit, increased insurance and credit card interest rates, cancellation of credit cards, denial of services and continued contact by collection agencies
(Baum, 2006; Identity Theft Resource Centre, 2005; Synovate, 2003).
Organizations

Organizations may be the victims of both personal and property cyber-crimes. An analysis of 171 closed cases investigated by the Computer Investigation and Technology Unit of the New York Police Department revealed that victims of cyber-stalking included educational institutions (8%), private corporations (5%) and public sector agencies (1%; D’Ovidio & Doyle, 2003). Cyber-crime represents a threat to e-commerce organizations in terms of undermining consumer confidence and trust in online shopping (Smith, 2004). Where an organization has been the site of identity theft, the impact may include harm to the reputation and credibility of the organization, the payment of direct financial costs and the indirect financial costs required to repair the damage caused (Paget, 2007). Organizations may also be subject to cyber-extortion, where protection payments are demanded in order to restore services crippled by ‘denial-of-service’ attacks (Lepofsky, 2006). Copyright holders and the music and software industries are frequently portrayed as the victims of cyber-piracy, subject to losses in revenue. However, the notion of copyright holders as crime ‘victims’ is contested. There is the potential for copyright holders to benefit from piracy through increasing the diffusion of a product, providing a form of ‘product sampling’, increasing market share and even making the product an industry standard (Hill, 2007).
RESPONSES TO CYBER-VICTIMIZATION

An examination of responses to cyber-victims needs to be situated within the context of responses to victims of off-line crimes. All governments have
an ethical responsibility to address the needs of victims of crime. Despite this, victims are commonly referred to as the ‘forgotten’ figures in the criminal justice system. Beginning in the 1970s, the crime victims’ rights movement has fought for a number of reforms to address victims’ concerns (Beloof, 2005; Cassell, 2005). The first two waves of crime victims’ rights in the US sought and achieved statutory rights and constitutional rights for victims. The third wave of rights currently being sought comprises the rights of standing, remedy and review, in order to enforce rights in criminal trial and appellate courts (Beloof, 2005). Crime victims’ rights internationally have been supported by United Nations declarations and conventions. The UN Declaration of Basic Principles of Justice for Victims of Crime and Abuse of Power (adopted by General Assembly resolution 40/34 of 29 November 1985) provided basic principles for victims including access to justice and fair treatment, restitution, compensation and assistance. The accompanying Handbook on Justice for Victims: On the Use and Application of the United Nations Declaration of Basic Principles of Justice for Victims of Crime and Abuse of Power (United Nations, 1999) provides guidelines for implementing victim service programs and victim-sensitive policies, procedures and protocols. The clusters of services that should be provided by victim assistance programs are crisis intervention, counselling, advocacy, support, training of allied professionals, public education and prevention. Highlighted in the handbook are the need for policy makers and service providers to understand the impact of victimization and the need for the provision of both crisis and long-term services to assist and support victims in dealing with emotional trauma and criminal justice processes. More recently, the Draft UN Convention on Justice and Support for Victims of Crime and Abuse of Power (United Nations, 2006) includes provisions for access to justice and fair treatment for victims; the protection of victims; information
on how to access support services, legal advice, compensation and criminal justice processes; material, medical, psychological and social assistance to victims, including immediate, medium-term and long-term assistance; the enhancement of systems of restorative justice; legislation to allow victims to seek restitution and reparation from offenders; and the provision of compensation. Most western countries now offer a range of services aimed at assisting crime victims to cope with the stresses of the victimization experience. These include victim compensation programs, crisis intervention, and individual and group counseling. However, only a minority of victims access victims’ services, due either to their lack of knowledge of the availability of services or to the perception that they will not be helpful. The majority of crime victims rely on family and friends to support them through their post-victimization experience. Recent research suggests that victims who do access victim services experience no greater improvement in psychological functioning than victims who do not access services (Sims, Yost & Abbott, 2005, 2006). While making important strides forward in obtaining victims’ rights, victim-oriented reforms have been criticized as satisfying only some of victims’ needs at the expense of other emotional and justice needs. Procedural justice is of prime importance to victims (Sebba, 1996). Over the last two decades the reform emphasis has shifted from the provision of government services to victims to seeking greater involvement of victims in the criminal justice process (Edwards, 2004). While involvement in the criminal justice system may exacerbate psychological trauma experienced during the victimization and potentially jeopardize the victim’s (perceived) safety, it also has the potential to provide mental health benefits and ultimately greater safety and protection (Herman, 2003). Cyber-crimes are relatively new crimes and responses to cyber-victimization are still developing. Responses need to be consistent with victims’ rights established for offline settings.
Current responses to victims of cyber-crimes are outlined below.
Laws and Criminal Investigation

In order to meet the justice needs of cyber-victims, criminal investigation and prosecution of offenders is required. The prosecution of cyber-crime offenders is dependent on the existence of laws covering cyber-crimes and the willingness of police to investigate cyber-crimes. Some cyber-crime offences are already covered under existing laws. For some other cyber-crimes, the introduction or amendment of legislation to cover online instances of existing criminal behaviours has been rapid in western countries (see, for example, http://www.ncsl.org/programs/lis/cip/stalk99.htm for a listing of states in the US that have implemented computer harassment and/or cyber-stalking laws), but is far from universal. The key issue hampering legal responses to cyber-crimes is that while cyber-crime is global, most laws are restricted to nations or states. For example, the Australian Spam Act 2003 (amended 15 April 2005) prohibits the sending of unsolicited commercial electronic messages with an Australian link, but can do nothing to stop spam messages originating outside of Australia. The investigation and prosecution of cyber-crimes is reliant on cooperation between jurisdictions where crimes transcend national boundaries. Steps have been taken to promote cross-jurisdictional cooperation. The United Nations has adopted a key role in developing and enacting plans of action to combat cyber-crimes through the Commission on Crime Prevention and Criminal Justice and the Office of Drug Control and Crime Prevention (Pocar, 2004). International cooperation is enabled through the ratifying of conventions such as the Council of Europe Cyber-crime Convention and the United Nations Convention Against Transnational Organized Crime. These conventions support mutual legal assistance and provide powers to extradite evidence, intercept
electronic data, search computer systems and seize stored data (Broadhurst, 2006). Two major components of international legislation are the implementation of internationally agreed norms within partner jurisdictions and procedures for cooperation between jurisdictions. However, international cooperation is currently hindered by the lack of consistent definitions of cyber-crimes and inconsistencies in the sanctions imposed across jurisdictions, highlighting the need to harmonize legislation (Pocar, 2004). General principles for drafting cyber-crime laws include the need for consistency in legislation between cyber and off-line behaviours; the need to ensure that specific technologies are not referenced, so that laws do not become quickly outdated; and the need for laws that enable the prosecution of offenders across jurisdictions (Downing, 2005). A variety of initiatives have been developed in order to address specific cyber-crimes. There has been a strong focus on the sexual exploitation of children. The Innocent Images National Initiative in the US (http://www.fbi.gov/innocent.htm) is a multi-agency (governments, FBI, police) undercover online investigative operation that investigates child pornography and child sexual exploitation. In the decade to 2006, the Innocent Images National Initiative opened 17,691 cases resulting in 5,840 convictions or pre-trial diversions. Criminal investigation may also be used as a preventive measure. Police may use ‘sting’ operations where police officers pose as juveniles on the Internet in order to attract paedophiles. An estimated quarter of all arrests for Internet sex crimes against juveniles in the US involved these types of sting operations (compared to 29% where the offender met an actual victim online). Offenders arrested as a result of a sting operation were more likely to have sought targets in sexually oriented chat rooms and engaged in fewer interactions over a shorter period of time than offenders who met a victim online. These ‘sting’
offenders also varied from other offenders in terms of age, socio-economic status and previous deviant and violent behaviours (Mitchell, Wolak & Finkelhor, 2005). While sting operations have been heralded as stopping offenders before they find a victim, the differences in both style of operation and personal characteristics between ‘sting’ offenders and other arrestees suggest stings may not be successful in targeting the most serious offenders. As currently operated, they target the ‘direct sexual model’ of paedophile offending rather than the ‘trust based seductive model’ (Deirmenjian, 2002) that has the grooming of the potential victim as its basis. Legislation brought in to protect the privacy of children online includes the US Children’s Online Privacy Protection Act (COPPA). COPPA applies to the online collection of personal information from children under 13 and stipulates what a web site operator must include in a privacy policy, when and how to seek verifiable consent from a parent, and what responsibilities an operator has to protect children’s privacy and safety online. Cyber-crimes have been heavily targeted in some industry sectors where large sums of money are at stake. For example, the IFPI and recording industry bodies have been actively targeting illegal file sharing globally, with legal actions brought against more than 10,000 individuals in 18 countries in 2006 (IFPI, 2007). Legislative and investigative approaches are unlikely to be sufficient to enable the investigation and prosecution of all cyber-crimes. Many cyber-crimes (e.g. spam) result in small-impact, bulk victimizations that are not a high priority for police or courts. This highlights the need for police to establish a network of relationships in order to effectively police cyber-crimes. The majority of cyber-criminal activities are resolved without police involvement, and working relationships need to be established between police and Internet user groups, online virtual environment managers, network infrastructure providers, corporate security organizations, and non-governmental
and governmental non-police organizations in order to effectively ‘police’ cyberspace (Wall, forthcoming).
Technical

Technical solutions may be utilized to reduce or ‘design out’ some types of cyber-crimes, typically through filtering, blocking and monitoring digital data. Increasingly sophisticated spam filters employed at the server level can block a large percentage of spam (Nucleus Research, 2007). Similarly, web-based filters such as ‘Net Nanny’ are employed to block content unsuitable for children. The music industry has called for Internet Service Providers to block websites that facilitate file sharing of copyrighted music (IFPI, 2007). However, no blocking or filtering system is one hundred percent accurate: each has the potential to let through material that should be blocked or to block material that should not be filtered. For example, recent tests of the efficacy of server-based Internet content filters found that the best of the filters blocked only 76% of the content that should have been blocked (RMIT Test Lab, 2006). Filtering and blocking systems need to be updated continuously to retain any currency.
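Filter accuracy of the kind measured in the RMIT tests involves two distinct error rates: the share of objectionable material actually blocked, and the share of legitimate material wrongly blocked. The sketch below shows one way such rates might be computed over a labelled test set; the naive keyword filter and the test URLs are purely hypothetical illustrations.

```python
# Sketch of evaluating a content filter against a labelled test set:
# block rate (on material that should be blocked) and overblocking rate
# (on legitimate material). Filter and test data are hypothetical.

def evaluate(blocks, test_set):
    """test_set: list of (url, should_block) pairs."""
    bad = [url for url, should in test_set if should]
    ok = [url for url, should in test_set if not should]
    block_rate = sum(blocks(u) for u in bad) / len(bad)
    overblock_rate = sum(blocks(u) for u in ok) / len(ok)
    return block_rate, overblock_rate

# A naive keyword filter both misses content and overblocks:
naive = lambda url: "casino" in url
tests = [("free-casino-games.example", True), ("innocuous-name.example", True),
         ("casino-history-museum.example", False), ("news.example", False)]
block, overblock = evaluate(naive, tests)
print(f"blocked {block:.0%} of bad material, overblocked {overblock:.0%} of good")
# -> blocked 50% of bad material, overblocked 50% of good
```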
Industry Codes of Practice

The use of industry codes of practice and standards has been advocated as a means of controlling cyber-crimes (Mitrakas, 2006). Standards and practices have been, or are being, developed in some jurisdictions. For example, Australia has the Internet Industry Association Spam Code of Practice (http://www.acma.gov.au/WEB/STANDARD//pc=PC_100605), registered by the Australian Communications and Media Authority, and the Internet Industry Association is currently developing the IIA Cybercrime Code of Practice (see http://www.iia.net.au). However, the degree to which the onus of detection and reporting of
cyber-crimes should rest with Internet Service Providers has yet to be established.
Educational

All Internet users need to be aware of how to protect their personal information online and how to interact safely within virtual environments. Materials relating to Internet safety (see, for example, WiredSafety: http://www.wiredsafety.org/) and crime prevention tips (see, for example, the tips provided by the Internet Crime Complaint Centre: http://www.ic3.gov/preventiontips.aspx) are readily available online. Material providing specific advice on Internet safety for children is also available from a number of sites. For example, NetAlert (http://www.netalert.net.au/), the website of Australia’s Internet Safety Advisory Body (a not-for-profit community organization established by the Australian government to provide independent advice and education on managing access to online content), contains a wide range of information on Internet safety directed at parents, teachers, librarians and children. Educational campaigns have also been directed at parents. For example, the IFPI promotes family members checking their children’s Internet use to ensure they are not breaching copyright laws by downloading music, computer games or other digital material illegally (IFPI, 2007). It is not only children who would benefit from education on safe Internet practices. Recent research suggests that the majority of adult users of social networking sites place their security at risk by giving out personal information and downloading unknown files. Four out of ten survey respondents were not concerned about becoming cyber-crime victims while using social networking sites (Russell Research, 2006). There is a need to ensure that professionals working with children and youth are trained to identify and respond to issues associated with Internet use. In particular, an understanding of the nature of youth online relationships is required and
professionals need to be able to provide information on the safe use of the Internet. An Internet harassment component should be added to all anti-bullying programs in schools. Prevention efforts need to be targeted at those youth most at risk, with education about sexual solicitation targeted at preteens and teens. Reporting of all types of online harassment should be promoted, with increased options for reporting, and the reasons for reporting emphasised (Mitchell et al., 2006; Wolak et al., 2004, 2006; Ybarra et al., 2006, 2007). Child sex offenders will continue to adopt new methods of offending as technology evolves. Parents, teachers and caregivers will need to keep abreast of changing technologies in order to protect children. Professional workers (teachers, law enforcement, medical) will also need to continuously update their knowledge in order to educate children and successfully identify and prosecute offenders (McColgan & Giardino, 2005).
Professional

Support services to cyber-crime victims may be provided off-line through existing agencies supporting all types of crime victims. In order for this to be effective, professionals working within these agencies will need to familiarize themselves with the varying impacts of cyber-victimization and the needs of victims. Mental health professionals need to be aware of online sexual predatory behaviour in order to recognize the signs of children and youth who may be victims and to become sensitized to their needs (Chisholm, 2006; Deirmenjian, 2002). One area that has not yet been fully explored is the provision of online counselling for victims of cyber-crimes. Some victims’ assistance organizations are using the Internet to provide counselling by email, online chat with a counsellor, or online support groups (Finn, 2001; US Department of Justice, 1998). Online counselling is still in its infancy and recent research suggests
that not all providers meet ethical standards of service provision (Shaw & Shaw, 2006). However, research suggests that some adults (Young, 2005) and children (King et al., 2006) prefer to engage in online counselling because of its accessibility and the anonymity it can provide. Some support may also be offered by organizations that deal specifically with victims of cyber-crimes, such as Cyberangels (http://www.cyberangels.org/) and WHO@ (Working to Halt Online Abuse: http://www.haltabuse.org/).
FUTURE TRENDS

The form of cyber-crimes will continue to change as new ICTs and applications emerge. The spread of mobile and wireless technologies, while providing users with increased mobility and flexibility, increases security risks through possible intrusion by unauthorized persons, bandwidth leeching, exploitation of network access and increased risk of loss or theft (Urbas & Krone, 2006). As the types of cyber-crime expand, new victims of cyber-crimes are likely to emerge. As with offline crimes, individuals will vary in their responses to cyber-criminal acts. Not all will view themselves as victims or necessarily experience distress as a result. Only some will require victims’ services. In line with the guidelines provided by the United Nations (1999, 2006), services need to be available to support those cyber-victims requiring help through crisis intervention, counselling, advocacy and support. There will be an ongoing need for the training of allied professionals, for public education programs and for prevention strategies to be developed and implemented. Service providers will need to identify the specific needs of the individual rather than take a blanket approach to all cyber-crime victims. While the appropriate responses may to some extent be related to the type of cyber-crime, the potential for victims of the same type of crime to have differing needs must
be recognized. The impacts of cyber-victimization and appropriate responses to cyber-victims require significant further research. In determining service provision requirements there is a need for reliable information on the nature and extent of cyber-crimes. It is critical to obtain reliable estimates of the incidence and prevalence of cyber-crimes. Care needs to be taken in interpreting prevalence estimates from population surveys due to their propensity to measure ‘social harm’ rather than specific criminal offences (Hough, 1990; Clare & Morgan, forthcoming). Gross over-estimates of the risk of ‘stranger danger’ crimes such as cyber-stalking, cyber-bullying and harassment, and the sexual exploitation of children may result in declining public trust and an unnecessary decline in the use of ICTs for social and communication purposes by children and adults (Yar, 2006). Continued research into the prevalence, types and impacts of cyber-victimization is required in order to effectively address the needs of current and future cyber-victims.
REFERENCES

Baum, K. (2006, April). Identity theft, 2004: First estimates from the National Crime Victimization Survey. Bureau of Justice Statistics Bulletin. Retrieved May 27, 2007 from http://www.ojp.gov/bjs/pub/pdf/it04.pdf

Beloof, D. E. (2005). The third wave of victims’ rights: Standing, remedy and review. Brigham Young University Law Review, 2, 255-365.

Beran, T., & Li, Q. (2005). Cyber-harassment: A study of a new method for an old behavior. Journal of Educational Computing Research, 32(3), 265-277.

Bocij, P. (2003). Victims of cyberstalking: An exploratory study of harassment perpetrated via the
Internet. First Monday, 8(10). Retrieved from http://firstmonday.org/issues/issue8_10/bocij/index.html

Bocij, P., & Sutton, M. (2004). Victims of cyberstalking: Piloting a web-based survey method and examining tentative findings. Journal of Society and Information [online], 1(2).

Broadhurst, R. (2006). Developments in the global law enforcement of cyber-crime. Policing: An International Journal of Police Strategies and Management, 29, 408-433.

BSA. (2006). Third annual BSA and IDC global software piracy study. Retrieved May 30, 2007 from http://www.bsaa.com.au/

Campbell, M. A. (2005). Cyber bullying: An old problem in a new guise? Australian Journal of Guidance and Counselling, 15(1), 68-76.

Casey, E. (2004). Digital evidence and computer crime: Forensic science, computers and the Internet (2nd ed.). London: Elsevier Academic Press.

Cassell, P. G. (2005). Recognizing victims in the federal rules of criminal procedure: Proposed amendments in light of the Crime Victims’ Rights Act. Brigham Young University Law Review, 4, 835-925.

Chisholm, J. F. (2006). Cyberspace violence against girls and adolescent females. Annals of the New York Academy of Sciences, 1087, 74-89.

Clare, J., & Morgan, F. (forthcoming). Exploring the “Bright Figure” of crime: Analysing the relationship between assault victimization and victims’ perceptions of criminality.

CyberTipline (undated a). CyberTipline: Annual report totals by incident type. Retrieved June 3, 2007 from http://www.cybertipline.com/en_US/documents/CyberTiplineReportTotals.pdf

CyberTipline (undated b). CyberTipline fact sheet. Retrieved June 3, 2007 from http://www.cybertipline.com/en_US/documents/CyberTiplineFactSheet.pdf
D’Ovidio, R., & Doyle, J. (2003). A study on cyberstalking: Understanding investigative hurdles. FBI Law Enforcement Bulletin, 72(3), 10-17.
Hough, M. (1990). Threats: Findings from the British Crime Survey. International Review of Victimology, 1, 169-180.
Deirmenjian, J. M. (2002). Pedophilia on the Internet. Journal of Forensic Science, 47(5), 1-3.
Identity Theft Resource Centre. (2003). Identity theft: The aftermath 2003. Retrieved March 2, 2007 from http://www.idtheftcenter.org/idaftermath.pdf
Downing, R. W. (2005). Shoring up the weakest link: What lawmakers around the world need to consider in developing comprehensive laws to combat cybercrime. Columbia Journal of Transnational Law, 43, 705-762.

Edwards, I. (2004). An ambiguous participant: The crime victim and criminal justice decision making. British Journal of Criminology, 44(6), 967-982.

Finn, J. (2001). Domestic violence organizations online: Risks, ethical dilemmas, and liability issues. Paper commissioned by Violence Against Women Online Resources. Retrieved May 29, 2007 from http://www.mincava.umn.edu/documents/commissioned/online_liability/online_liability.pdf

Finn, J. (2004). A survey of online harassment at a university campus. Journal of Interpersonal Violence, 19(4), 468-483.

Gordon, S., & Ford, R. (2006). On the definition and classification of cybercrime. Journal in Computer Virology, 2(1), 13-20.

Grabosky, P. (2004). The global dimension of cybercrime. Global Crime, 6(1), 146-157.

Herman, J. L. (2003). The mental health of crime victims: Impact of legal intervention. Journal of Traumatic Stress, 16(2), 159-166.

Hill, C. W. (2007). Digital piracy: Causes, consequences, and strategic responses. Asia Pacific Journal of Management, 24, 9-25.

Holt, T. J., & Graves, D. C. (2007). A qualitative analysis of advance fee fraud e-mail schemes. International Journal of Cyber Criminology, 1(1), 137-154.
Identity Theft Resource Centre. (2005). Identity theft: The aftermath 2004. Retrieved March 2, 2007 from http://www.idtheftcenter.org/idaftermath2004.pdf

IFPI. (2006). The recording industry 2006: Piracy report. Retrieved May 29, 2007 from http://www.ifpi.org/content/library/piracy-report2006.pdf

IFPI. (2007). IFPI:07. Digital music report. Retrieved May 29, 2007 from http://www.ifpi.org/content/library/digital-music-report-2007.pdf

Independent Television News Limited. (2007). Teachers warn of cyber-bully menace. Available: http://www.channel4.com/news/articles/uk/teachers+warn+of+cyberbully+menace/641947

Javelin Strategy and Research. (2007). 2007 identity fraud survey report - consumer version: How consumers can protect themselves. Retrieved May 27, 2007 from http://www.javelinstrategy.com

Jefferson, J. (2004). Police and identity theft victims - preventing further victimisation. Australasian Centre for Policing Research, No. 7. Retrieved May 27, 2007 from http://www.acpr.gov.au/publications2.asp?Report_ID=154

King, R., Bambling, M., Lloyd, C., Gomurra, R., Smith, S., Reid, W., & Wegner, K. (2006). Online counselling: The motives and experiences of young people who choose the Internet instead of face to face or telephone counselling. Counselling and Psychotherapy Research, 6(3), 169-174.

Kreuter, E. A. (2003). The impact of identity theft through cyberspace. Forensic Examiner, 12(5-6), 30-35.
Krone, T. & Johnson, H. (2006). Internet purchasing: Perceptions and experiences of Australian households. Trends & Issues in Crime and Criminal Justice, No. 330. Canberra: Australian Institute of Criminology.
National White Collar Crime Center and the Federal Bureau of Investigation. (2007). Internet Crime Report January 1, 2006 – December 31, 2006. Retrieved May 28, 2007 from http://www.ic3.gov/media/annualreport/2006_IC3Report.pdf
Lepofsky, R. (2006). Cyberextortion by denial-of-service attack. Risk Management, 53(6), 40.
Nucleus Research. (2007). Research note. Spam: The repeat offender. Retrieved May 21, 2007 from http://www.nucleusresearch.com/research/h22.pdf
Li, Q. (2007). New bottle but old wine: A research of cyberbullying in schools. Computers in Human Behavior, 23, 1777-1791.

LoPucki, L. M. (2001). Human identification theory and the identity theft problem. Texas Law Review, 80, 89-135.

Lurigio, A. J., & Resnick, P. (1990). Healing the psychological wounds of criminal victimization: Predicting postcrime distress and recovery. In A. J. Lurigio, W. G. Skogan & R. C. Davis (Eds.), Victims of crime: Problems, policies and programs (pp. 50-67). Newbury Park, CA: Sage.

McColgan, M. D., & Giardino, A. P. (2005). Internet poses multiple risks to children and adolescents. Pediatric Annals, 34(5), 405-414.

McGrath, M. G., & Casey, E. (2002). Forensic psychiatry and the Internet: Practical perspectives on sexual predators and obsessional harassers in cyberspace. Journal of the American Academy of Psychiatry and the Law, 30(1), 81-94.

Mitchell, K., Wolak, J., & Finkelhor, D. (2005). Police posing as juveniles online to catch sex offenders: Is it working? Sexual Abuse: A Journal of Research and Treatment, 17(3), 241-267.

Mitchell, K., Wolak, J., & Finkelhor, D. (2006). Trends in youth reports of sexual solicitations, harassment and unwanted exposure to pornography on the Internet. Journal of Adolescent Health, 40, 116-126.

Mitrakas, A. (2006). Information security and law in Europe: Risks checked? Information and Communications Technology Law, 15(1), 33-53.
Paget, F. (2007). Identity theft. McAfee Avert Labs technical white paper No. 1. Retrieved May 27, 2007 from http://www.mcafee.com/us/local_content/white_papers/wp_id_theft_en.pdf

Pocar, P. (2004). New challenges for international rules against cyber-crime. European Journal on Criminal Policy and Research, 10, 27-37.

Reid, A. S. (2005). The rise of third generation phones: The implications for child protection. Information and Communications Technology Law, 14(2), 89-113.

RMIT Test Lab. (2006). A study on server based Internet filters: Accuracy, broadband performance degradation and some effects on the user experience. Report commissioned by NetAlert Limited. Retrieved June 7, 2007 from http://www.netalert.net.au/03100-A-Study-on-Server-Based-Internet-Filters---26-May-2006.pdf

Roberts, L. D. (forthcoming). Cyber-stalking: Jurisdictional and definitional concerns with computer-mediated interpersonal crimes. In K. Jaishankar (Ed.), International perspectives on crime and justice.

Russell Research. (2006). CA/NCSA social networking study report. Retrieved June 6, 2007 from http://staysafeonline.org/features/SocialNetworkingReport.ppt#0

Savona, E. U., & Mignone, M. (2004). The fox and the hunters: How IC technologies change the crime race. European Journal on Criminal Policy and Research, 10, 3-26.
Schell, B. H., Martin, M. V., Hung, P. C. K., & Rueda, L. (2007). Cyber child pornography: A review paper of the social and legal issues and remedies, and a proposed technological solution. Aggression and Violent Behavior, 12, 45-63.
Subrahmanyam, K., & Smahel, D. (2006). Connecting developmental constructions to the Internet: Identity presentation and sexual exploration in online teen chat rooms. Developmental Psychology, 42(3), 395-406.
Sebba, L. (1996). Third parties, victims and the criminal justice system. Columbus: Ohio State University Press.
Synovate. (2003). Federal Trade Commission identity theft survey report. Retrieved May 20, 2007 from http://www.ftc.gov/os/2003/09/synovatereport.pdf
Sharp, T., Shreve-Neiger, A., Fremouw, W., Kane, J., & Hutton, S. (2004). Exploring the psychological and somatic impact of identity theft. Journal of Forensic Sciences, 49(1), 131-136.

Shaw, H. E., & Shaw, S. F. (2006). Critical ethical issues in online counseling: Assessing current practices with an ethical intent checklist. Journal of Counseling & Development, 84, 41-53.

Sims, B., Yost, B., & Abbott, C. (2005). Use and nonuse of victim services programs: Implications from a statewide survey of crime victims. Criminology and Public Policy, 4(2), 361-384.

Sims, B., Yost, B., & Abbott, C. (2006). The efficacy of victim services programs: Alleviating the psychological suffering of crime victims. Criminal Justice Policy Review, 17, 387-406.

Smith, A. D. (2004). Cybercriminal impacts on online business and consumer confidence. Online Information Review, 28(3), 224-234.

Spitzberg, B. H. (2006). Preliminary development of a model and measure of computer-mediated communication (CMC) competence. Journal of Computer-Mediated Communication, 11, 629-666.

Spitzberg, B. H., & Hoobler, G. (2002). Cyberstalking and the technologies of interpersonal terrorism. New Media & Society, 4(1), 71-92.

Subrahmanyam, K., Greenfield, P. M., & Tynes, B. (2004). Constructing sexuality and identity in an online teen chat room. Journal of Applied Developmental Psychology, 25(6), 651-666.
Tavani, H. T. (2004). Ethics and technology: Ethical issues in an age of information and communication technology. New York: John Wiley and Sons.

United Nations. (2006, November 14). Draft UN convention on justice and support for victims of crime and abuse of power. Retrieved June 6, 2007 from http://www.tilburguniversity.nl/intervict/undeclaration/convention.pdf

United Nations. (1999). Handbook on justice for victims: On the use and application of the Declaration of Basic Principles of Justice for Victims of Crime and Abuse of Power. New York: Centre for International Crime Prevention.

Urbas, G., & Krone, T. (2006). Mobile and wireless technologies: Security and risk factors. Trends & Issues in Crime and Criminal Justice, No. 329. Canberra: Australian Institute of Criminology.

US Department of Justice. (1998). New directions from the field: Victims’ rights and services for the 21st century. Retrieved May 29, 2007 from http://www.ojp.usdoj.gov/ovc/new/directions/

van der Meulen, N. (2006). The challenge of countering identity theft: Recent developments in the United States, the United Kingdom, and the European Union. Report commissioned by the National Infrastructure Cyber Crime program (NICC). Retrieved May 20, 2007 from http://www.tilburguniversity.nl/intervict/publications/NicolevanderMeulen.pdf
Wall, D. S. (2004). Digital realism and the governance of spam as cybercrime. European Journal on Criminal Policy and Research, 10, 309-335.

Wall, D. S. (2005). The Internet as a conduit for criminal activity. In A. Pattavina (Ed.), Information technology and the criminal justice system (pp. 78-94). Thousand Oaks, CA: Sage.

Wall, D. S. (2007). Policing cybercrimes: Situating the public police in networks of security within cyberspace. Police Practice and Research, 8, 183-205.

Weimann, G. (2005). Cyberterrorism: The sum of all fears? Studies in Conflict & Terrorism, 28(2), 129-149.

Wolak, J., Finkelhor, D., & Mitchell, K. (2004). Internet-initiated sex crimes against minors: Implications for prevention based on findings from a national study. Journal of Adolescent Health, 35, 424.e11-424.e20.

Wolak, J., Mitchell, K., & Finkelhor, D. (2006). Online victimization of youth: Five years later. Durham, NH: National Center for Missing and Exploited Children.

Yar, M. (2006). Cybercrime and society. London: Sage Publications.

Ybarra, M. (2004). Linkages between depressive symptomatology and Internet harassment among young regular Internet users. Cyberpsychology & Behavior, 7(2), 247-257.

Ybarra, M., & Mitchell, K. (2004a). Online aggressor/targets, aggressors, and targets: A comparison of associated youth characteristics. Journal of Child Psychology and Psychiatry, 45(7), 1308-1316.

Ybarra, M. L., & Mitchell, K. J. (2004b). Youth engaging in online harassment: Associations with caregiver-child relationships, Internet use, and personal characteristics. Journal of Adolescence, 27, 319-336.
Ybarra, M. L., Mitchell, K. J., Finkelhor, D., & Wolak, J. (2007). Internet prevention messages: Targeting the right online behaviours. Archives of Pediatrics & Adolescent Medicine, 161(2), 138-145.

Ybarra, M. L., Mitchell, K. J., Wolak, J., & Finkelhor, D. (2006). Examining characteristics and associated distress related to Internet harassment: Findings from the Second Youth Internet Safety Survey. Pediatrics, 118(4), e1169-e1177.

Young, K. S. (2005). An empirical examination of client attitudes towards online counselling. Cyberpsychology & Behavior, 8(2), 172-177.
KEY TERMS

Cyber-Crime: Cyber-crime refers to crimes that are conducted using (or facilitated by) computers and networks.

Cyber-Victimization: Cyber-victimization refers to the process of victimizing others through the use of information and communication technologies. Cyber-victims can be governments, organizations or individuals. In this chapter, cyber-victimization is used to refer specifically to the victimization resulting from cyber-criminal behaviour.

Cyber-Bullying: Cyber-bullying refers to bullying behaviour conducted online through information and communication technologies. Cyber-bullying behaviours encompass online postings, conversations or messages that are designed to harass, humiliate and intimidate the receiver. This may include threats, insults and teasing.

Cyber Identity Theft: Cyber identity theft refers to the online misappropriation of identity tokens using information and communication technologies.
Cyber-Stalking: Cyber-stalking refers to stalking activities conducted online using information and communication technologies. Cyber-stalking activities include threats, harm to reputation (‘cyber-smearing’), damage to data or equipment, computer monitoring and attempts to access confidential information.

Keyloggers: Malicious software (‘malware’) installed on victims’ computers to record keystrokes (including passwords and logins).

Phishing: Phishing refers to the use of emails and websites to ‘fish’ for personal information such as credit card numbers, bank account information and passwords to use for fraudulent purposes.

Spam: The unsolicited bulk distribution of promotional emails.
ENDNOTES

1. At the current time cyber-terrorism largely represents a threat rather than a reality, although the potential for significant victimization resulting from cyber-terrorism cannot be overlooked (Weimann, 2005).
2. See Roberts, this volume, for more information on cyber identity theft.
3. See Roberts (forthcoming) for an overview of cyber-stalking.
4. See Spitzberg and Cupach (2007) for an overview of the first, second and third order effects experienced by stalking victims.
5. Organizations (defrauded creditors) are often regarded as the primary victims of identity theft as they incur the financial cost of the fraudulent use of stolen identities (LoPucki, 2001).
Chapter XXXVIII
Spyware
Mathias Klang University of Lund, Sweden & University of Göteborg, Sweden
ABSTRACT

It is well known that technology can be used to effectively monitor the behavior of crowds and individuals, and in many cases this knowledge may be the motivation for people to behave differently than if they were not under surveillance. This internalization of surveillance has been widely discussed in the privacy literature. Recent software developments have created new threats to the integrity of the individual. Today a form of software, commonly known as spyware, poses an increased threat of covert surveillance. Computer users subjected to spyware are often unaware of the surveillance and therefore continue to behave in a natural manner. This chapter argues that the integrity of the computer user is not protected under law and that any rights the user may believe she has are easily circumvented.
INTRODUCTION

This chapter discusses an unusual type of surveillance software, which may be installed on many computers. The strange aspect of this software is that it has often been downloaded and installed by the user, but without her knowledge. The software is mainly designed to collect information about the user of the computer and relay this information back to the software manufacturer. The download, installation, data collection and data transfer all take place within the user’s own computer, but very seldom with the user’s knowledge (Freeman & Urbaczewski 2005, Zhang 2005). This chapter deals
with the issue of a class of monitoring software that gathers information about the computer user and sends the information to another entity without the user’s consent or knowledge, commonly referred to as Spyware (Stafford & Urbaczewski 2004, Urbach & Kibel 2004). It is the intention of this chapter to describe the technology involved and thereafter discuss how this new technology is affecting the online privacy debate. The chapter continues by discussing the basis for the legitimacy of the technology and the current level of technological deterrents available. The chapter concludes with a comparison of two approaches to resolving the current problem: via legislation or via the market.
Integrity is a prerequisite for democracy, and the perceived lack of integrity causes concern among users. This concern was, however, met with regulatory inertia, since the legal position of the software in question could be disputed. This lack of concern for users’ opinions vis-à-vis integrity resulted in the creation of market-based regulatory solutions. These solutions came in the form of integrity-protecting, Spyware-removal software. The earliest uses of the term Spyware to denote a particular form of software that gathers, without the user’s knowledge, information about the user and transmits it back to the manufacturer appeared around 2000 (Zone Alarm 2000). The importance of the discussion of Spyware lies in the control of user data and user control over the personal computer. Despite the fact that Spyware is installed via deceit (Klang 2004), those discussing its effects on user integrity and privacy were aware that causing Spyware to be installed was not an illegal act. The discussion therefore becomes one of the practical definition and implementation of the concept of online privacy. Groups of actors perceived Spyware to be a threat to individual privacy, despite an uncertain legal position that gave Spyware manufacturers the upper hand. The failing ability of regulatory structures to provide protection against the perceived threat of Spyware created the rise of a market-based solution, where software manufacturers created anti-Spyware software to provide users with the wherewithal to prevent Spyware from operating within their computers. The example of Spyware provides an excellent case of the failure of structural regulation, the rise of a perceived threat among actors and the development of a market-based solution to the perceived threat. By studying this example we may find a method where the slowness of structural regulation to react to a perceived user threat provides both an economic opportunity for actors and an example of how online
594
problems can be resolved without the intervention of regulatory structures.
PRIVACY

The discussion of privacy as a philosophical, social and legal value has been lively ever since the publication in 1890 of the influential paper, The Right to Privacy (Warren & Brandeis 1890). Arguably the clearest conclusion from this long debate is that the interpretation of privacy is context dependent. However, despite the breadth of the arguments, most can be categorised either as belonging to the reductionist approach or as viewing privacy as a necessary individual right (Thomson 1975). The reductionist approach understands privacy as being described by its component parts while ignoring the relationships between them. This is the view that privacy is not unique and can be reduced to other interests. The second approach is to see privacy as a fundamental human need or right that need not be derived from other rights. Thomson (1975) argues that privacy is not an individual right but can be motivated and defended by using other rights, which makes the right to privacy per se unnecessary: For if I am right, the right to privacy is 'derivative' in this sense: it is possible to explain in the case of each right in the cluster how come we have it without even once mentioning the right to privacy. Indeed, the wrongness of every violation of the right to privacy can be explained without even once mentioning it. (Thomson, p. 313). Posner (1984) argues for the reductionist approach by using an economic analysis of privacy. He argues that "personal privacy seems to be valued more highly than organizational privacy, a reverse ordering would be more consistent with the economics of the problem." The reductionist arguments have often been attacked by scholars
(Rachels 1975), who claim that a distinctive right to privacy is both desirable and important when attempting to support and protect this interest. The debate on integrity has developed over time (Wong 2005) and has always stood in relation to the level of technology of the day. The seminal Warren and Brandeis (1890) article on privacy was very much a product of the technology of its time. The authors recognized the developments of technology and feared, amongst other things, the continued development of the small portable camera that could be handled by the amateur (Kern 1983). The idea behind the article was to explore whether existing US law afforded a principle of privacy protection, and its conclusions have been actively discussed ever since. The reason for this continuing discussion is that privacy is an ambiguous term: definitions have ranged from the right to be let alone (Warren & Brandeis 1890), through the development of personality (Strömholm 1967), to the right to control information about oneself (Fried 1970). Privacy in this sense includes other notions such as individual dignity and integrity, personal uniqueness and personal autonomy. Westin (1967, p 25) examines privacy from the starting point that "…the constant search in democracies must be for the proper boundary line in each specific situation and for an over-all equilibrium that serves to strengthen democratic institution and processes." The search for an adequate understanding of privacy is further complicated by the different methods used in attempting definitions. Some writers have therefore described the condition of privacy, characterising its features but not offering a definition (Parker 1974), while others have attempted definitions. Westin (1967) conducted anthropological studies of privacy and through these offers a control-based definition, claiming that privacy is the:

…claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others. Viewed in terms of the relation of the individual to social participation, privacy is the voluntary and temporary withdrawal of a person from the general society through physical or psychological means, either in a state of solitude or small-group intimacy, or, when among larger groups, in a condition of anonymity or reserve. (p 7)

While control-based definitions have their advantages and are attractive to the individual or group invoking privacy rights, they have been criticised for their focus upon individual autonomy, since this focus becomes a weakness when attempting to formulate privacy protection policies (Regan 1995). In rejecting the control-based approach as inefficient for policy formulation, Regan (1995) argues that greater recognition should be given to the "broader social importance of privacy". The reasons for this are threefold: firstly, privacy is a common value in that all individuals value some degree of privacy; secondly, privacy is a public value, of value not merely to the individual but to the democratic political system; and thirdly, privacy can be understood as a "collective value" in light of technological developments and market forces, requiring minimum levels of privacy protection. A flaw in the control-based definition is that it is inadequate for instances where personal information is obtained without the individual's knowledge and therefore without informed consent. This is common with methods such as data mining techniques and online profiling (Bygrave 2002). Such methods give the subject no way to control information, and the remaining alternative is therefore regulation of minimum standards of data protection (Bygrave 2002).
FOUCAULT & THE PANOPTICON

In an attempt to reform the prison system of his time, Bentham (1995 [1787]) developed an architectural plan for an ideal prison called the Panopticon. His design was a ring-shaped prison with a watchtower in the centre of the ring. The ring contained the cells, where each prisoner would sit in an individual cell. Each cell had a window on the outer wall to allow light into the cell and a large opening, opposite the window, facing the watchtower. The walls of the cell were extended to prevent the prisoners from communicating with, or even seeing, each other. From the watchtower the guard could see everything in each cell, while the use of lighting and blinds prevented the prisoners from seeing the guards; the prisoners could therefore never know whether they were being watched at any given time and could only act on the assumption that they might be. The Panopticon design is control by architecture. With a minimum of manpower and direct intervention the prisoners are immersed in total visibility, and the inmates' reaction must be one of self-control (Foucault 1980). Through this architecture the gaze of the guards is internalised by the prisoner, making the prisoner become her own guard. Through this self-policing surveillance the effects are constant and pervasive even when no actual surveillance is being carried out. As Foucault (1980) notes, the Panopticon is an architectural device made into regulation: There is no need for arms, physical violence, material constraints. Just a gaze. An inspecting gaze, a gaze which each individual under its weight will end up by interiorising to the point that he is his own overseer, each individual thus exercising this surveillance over, and against himself. A superb formula: power exercised continuously and for what turns out to be a minimal cost. (Foucault 1980, p 155).
Since this presentation of the architectural device, Foucault's interpretation of the Panopticon has been widely used in discussions of privacy and surveillance. The basic premise is that the awareness of potential surveillance affects the way in which the subject under potential surveillance behaves. The Bentham/Foucault Panopticon metaphor has grown in popularity with the increased use of surveillance technology both online and offline. The metaphor is, however, not applicable to Spyware since, as explained below, the user under surveillance is unaware of the surveillance and therefore continues to behave in an open manner. The difference is that the actor has no choice in the matter. Covert surveillance is regularly considered to be a more serious interference with the individual's integrity and is usually allowed only in more extreme situations (Etzioni 1999, Norris & Armstrong 1999).
SPYING TECHNOLOGY

Spyware can be defined as surveillance technology or software which is bundled into another piece of software, enters the 'infected' computer, and uses that computer in an unauthorised manner. Blanke (2006) writes of Spyware: Basically, they are computer programs that are installed (or install themselves) on computers and perform activities that range from the innocuous to the criminal, but almost always negatively affect the basic use and enjoyment of the machine. (pp 1-2). Spyware is most commonly bundled together with freeware which the user downloads and installs (Klang 2003, 2004). The main purpose of the Spyware is to collect information and send it to the information gatherer. Spyware, once installed on the computer, can carry out a wide range of activities, such as: (i) transmit information about the computer user to a third party, (ii) lower the computer's security, allowing an attacker to remotely control the infected machine, (iii) record keystrokes to reveal passwords and other sensitive information, (iv) enable the computer to be hijacked and used in a denial-of-service attack, and (v) search for vulnerabilities which may be exploited by hackers (Shukla & Fui-Hoon Nah 2005, Stafford & Urbaczewski 2005, Thompson 2005). This chapter takes a more limited view of Spyware, focusing on the types that most users find objectionable. The reason for this is that most users would prefer, had they known what was happening, not to have Spyware on their computers. This becomes an interesting ethical discussion since Spyware manufacturers tend to claim that the users have agreed to the Spyware being installed on their computers. Since there is a difference of opinion as to whether the Spyware has been installed with or without the user's consent, the actual installation becomes a critical issue. What is interesting to note is that Spyware can be included with the user's consent but without her knowledge. This is done by including Spyware clauses in the end user licence agreement (EULA) that is displayed when the user begins installing the software and that requires the user to agree to the terms before the software can be installed and used. The importance of contract law and the EULA will be discussed further below. The discussion of Spyware is made more complex by the lack of an agreed-upon definition, a flaw that seems to stem from a lack of adequate consensus from which to reach one. The name alone is not universal: Spyware is sometimes known as scumware, parasite-ware, stealware or theftware, and is occasionally mixed up with computer trojans, viruses and worms. This chapter uses the name Spyware since it is the name that is rapidly becoming the most accepted for the phenomenon. An important additional aspect of Spyware is the difficulty connected with its removal. Spyware rarely appears in the computer's uninstall list, and even when it can be located, removal of parts of the software can sometimes affect the more traditional workings of the computer. A further complaint connected with the removal of Spyware is that, if not totally removed, it has the ability to re-install itself.
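To make the bundling mechanism concrete, the following minimal sketch (in Python, with entirely hypothetical package names and manifest format) illustrates how a freeware installer can carry additional payloads into the user's computer; real installers are, of course, far more elaborate.

# Hypothetical sketch of a bundling installer: the host freeware
# carries extra payloads declared in a manifest and installs them
# alongside itself, with no separate prompt for each payload.

HOST_APP = "free_media_player"             # what the user asked for
BUNDLE_MANIFEST = [                        # what travels with it
    {"name": "adserver_client", "disclosed_in_eula": True},
    {"name": "usage_tracker", "disclosed_in_eula": False},
]

def install(package_name):
    print(f"installing {package_name}")

def run_installer(eula_accepted):
    if not eula_accepted:
        return                             # user declined; nothing installs
    install(HOST_APP)                      # the software the user wanted
    for payload in BUNDLE_MANIFEST:
        # Each payload is installed whether or not the user noticed
        # the clause (if any) buried in the EULA text.
        install(payload["name"])

run_installer(eula_accepted=True)

The point of the sketch is that a single affirmative act covers both the host application and everything bundled with it, which is precisely why the installation step is so critical to the ethical discussion.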
SPYWARE BUSINESS MODEL

It is important for software manufacturers to spread their software and also to obtain financing for their work. There is, however, a culture of not paying for goods and services online. Many, if not most, users have come to expect and demand that information, software and services be available at no cost. The tradition of no-cost software has been compared with tribal gift economies (Barbrook 1998), since there is a tendency in these cultures to help, share and barter with property. The tradition of no-cost software and information has developed into the copyright conflicts taking place today. Entertainment files are being transferred over peer-to-peer networks despite the fact that they are copyrighted, and the entertainment industry is attempting to regain control over its traditional marketplaces by persecuting those who aid copyright infringement via technical means (Bowrey 2005). This situation has led many users to attempt to legitimise their infringing actions and to call for the demise, or radical change, of copyright legislation. In discussing the legitimacy of infringing software copyright, Nissenbaum (1995) argues with both consequentialist and deontological arguments that there are some specific cases where infringement is morally permissible. However, whether or not copyright infringement can be justified, the situation is such that many do not feel that they are doing anything wrong in violating another's copyright, or at least they are not deterred by any such emotion. This desire for free software has led to a loss of revenue and a need for software manufacturers to find alternative incomes. Enter the parasite economy (Cave 2001). To obtain income for their products, popular software can act as a host for other software, carrying it into the computers of users. Popular free software can create channels of revenue by offering itself as a carrier of bundled software. The Spyware (or indeed any other software) which travels with the free software pays a minimal fee per download for the service. The total amount paid therefore depends upon the popularity of the downloaded software. The creators of the downloaded software claim that their actions are both legal and driven by economic necessity. Users demand free software, software manufacturers need funding to create more competitive software, and marketers need to reach potential customers. Since the users obtain free software, software houses obtain a new source of income and the marketers increase their reach, one might argue that there should not be any discussion of the evils of Spyware. This is, however, an oversimplified interpretation of the situation.
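The economics described above can be illustrated with a back-of-envelope calculation; the figures below are invented purely for illustration and are not drawn from any actual bundling agreement.

# Hypothetical bundling revenue: the bundled-software vendor pays the
# host developer a small fee for every download of the free host app.
fee_per_install = 0.05             # USD per download (invented figure)
downloads_per_month = 2_000_000    # popularity of the free host app

monthly_revenue = fee_per_install * downloads_per_month
print(f"host developer earns ${monthly_revenue:,.0f} per month")
# => host developer earns $100,000 per month

Because revenue scales linearly with the number of downloads, the most popular free applications are also the most attractive carriers for bundled software.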
INTEGRITY

Those who are discontented with the position of Spyware often invoke the arguments of the privacy and integrity debate. Lacking the international consensus that underpins contract law, these users have to argue from a rights-based position, which is weaker since they must first prove the existence of the right and then argue that it is the interest more worthy of concern. Online privacy has been discussed for a long time and in many different ways. The most common legal discussion tends to be whether there is, or should be, a right to privacy. If this can be answered in the affirmative, the question then becomes one of degree, i.e., where do the limits of privacy stand? In Europe this question has received considerable help in recent years due to the growth of the European Union, which requires the incorporation of the European human rights convention (Convention for the Protection of Human Rights and Fundamental Freedoms – Rome 1950). Prior to the incorporation of the convention into the national legislation of the member states, the discussion centred on the creation of a right, as opposed to a discussion of positive law. After the incorporation, the main thrust of the legal discussion became a positivistic one about what should be included in a right to privacy. The concept of privacy can only be understood in relation to the ability of that privacy to be invaded. Unfortunately the discussions have for a long time focused on either the voluntary submission of data or the use of cookies. Software such as Spyware has not been discussed, and its appearance and proliferation require urgent action on the part of users, software manufacturers and regulators. In the modern privacy debate an influential work is Foucault's (1979) interpretation of the Panopticon. The internalisation of supervision arises from the awareness of constant supervision, or even the threat of constant supervision, and causes the subject to behave differently. The subject must behave in a manner consistent with the fact that she may be observed at any time. This knowledge has the effect of changing the behaviour of the subject in a manner that is incompatible with the concept of human freedom. Technological advances have brought about a change in the concept of privacy, and many would claim that the new technology represents a Panopticon of sorts. While there may be elements of truth in this type of discussion, this is not the case with Spyware, because the user is unaware that she may be watched, which causes her to behave in a natural and uninhibited manner. This means that tools of supervision installed in the computer through bundled software are more serious than the Panopticon metaphor suggests. In the Panopticon the subject is aware that she may be watched and can choose to behave accordingly; the crucial difference in the case of Spyware is that the user has no knowledge that her actions may be observed.
Leaving the Panopticon metaphor behind makes us better able to understand the need for a broader privacy debate. This new technology represents a new challenge to the level of privacy we can expect. The amount of privacy we can reasonably expect is "…a function of the practices and laws of our society and underlying normative principles" (McArthur 2001, p 127). Unfortunately the open public debate on the integrity-depriving aspects of Spyware has not yet developed far enough, which deprives the law of a worthy basis of discussion and leaves the underlying principles undeveloped in the face of this new challenge. In this vacuum, courts may be tempted to fall back upon a familiar pattern of discussion centred on contract law, and this leaves the user in a weaker position.
SPYWARE MAKERS' POSITION

Despite the fundamentals of contract law and despite the legality of the business models, there is a level of discontent among those afflicted by the technology. These users are not pacified by the legality of the scheme. They do not agree with the implementation of technology in this clandestine manner for purposes they experience as an invasion of their integrity. In this we can see a connection to the arguments of Habermas (1989) on the relationship between technology and power. This relationship exists in a state of constant evolution, and the important issue to be discussed is one of legitimacy. The problem of legitimacy arises when the technology is driven forward in such a way as to exclude a number of users from the socio-legal discourse. Since it can be claimed that one of the many roles of law is to legitimise actions and create a level of understanding between citizens and those in power, it is important that those who are affected by the technology, and the infrastructure it creates, have the opportunity to partake in an open discussion.
When the law is used in such a manner as to silence debate by legitimising actions which are unwelcome to the users, one can claim that the law has been used as a pacifier and has alienated the users from partaking in the debate. While attempting to remove the nefarious software may be a complex affair, there are software programs which may be useful. Ad-aware, created by Lavasoft, and Spybot, created by PepiMK Software, can both be downloaded free of charge and used to find and remove the unwanted software. These programs have, however, given rise to an interesting dilemma. They are not altogether open about their methods of defining what Spyware is, and as such hold a large amount of political power in their ability to blacklist programs. Comet Systems Inc claim that they have been unfairly targeted by Lavasoft and that their business has suffered because of it (Miles 2002). The potential of anti-Spyware companies to damage legitimate business interests is a serious threat. Marketing companies claim that they have a right to market their products, and software companies need revenues from marketers to be able to provide free software; the whole process is legitimised by contract law. The question therefore may be to what extent the anti-Spyware companies are, or should be, liable for their activities. It is therefore important for the creators of anti-Spyware programs to be open about their methodology and their choice of the programs that their software removes. The most popular anti-Spyware program is reputedly covert and silent about its business practices, which makes any discussion of openness difficult. For the technically adept the whole problem of Spyware is an issue of lesser importance, but the majority of technology users are happily unaware of how their technology works or how to correct it if it fails to work. This is the group that needs anti-Spyware software, and it is usually unaware of the choices made by the programmers as to which software to define as Spyware and which not to include in this category.
Anti-Spyware software, once established, creates for itself the role of gatekeeper, since it has the ability to choose which software is to remain on the user's computer and which is not. For software developers, therefore, the anti-Spyware software becomes another barrier that must be respected. Some software developers have attempted to open discussions on the powerful position attained by anti-Spyware companies in deciding which advertising is allowed and which is not (Klang 2003). Another interesting reaction can be seen in the attempts of software companies to fight back against the anti-Spyware programs. The RadLight video-playing software, once installed, attempted to remove or disable Lavasoft's Ad-aware program. This action was legitimised in the EULA: …You are not allowed to use any third party program (e.g. Ad-aware) to uninstall application bundled with RadLight. Such programs will be removed. If you want to uninstall them, you may do so via Add/Remove in Windows' Control Panel. The EULA text has since been amended with a text describing which types of third-party software are bundled into the program and the fact that it will create a GUID for the computer. It no longer claims the right to remove software installed on the computer. It does, however, openly explain that the software will be used for marketing purposes, in a text few of its customers will ever read.
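The gatekeeper role attributed to anti-Spyware tools can be reduced to a very simple mechanism, sketched below with hypothetical names and list contents. What counts as Spyware is ultimately whatever the vendor's blacklist encodes, which is why an opaque blacklist carries so much political power.

# Sketch of a blacklist-driven anti-Spyware scan: removal decisions
# follow the vendor's (possibly undisclosed) definition of Spyware.

VENDOR_BLACKLIST = {"usage_tracker", "adserver_client"}   # the vendor's call

installed_programs = [
    "free_media_player", "usage_tracker", "adserver_client", "spreadsheet",
]

def scan_and_remove(programs, blacklist):
    kept, removed = [], []
    for name in programs:
        (removed if name in blacklist else kept).append(name)
    return kept, removed

kept, removed = scan_and_remove(installed_programs, VENDOR_BLACKLIST)
print("removed:", removed)   # => removed: ['usage_tracker', 'adserver_client']
# A wrongly listed program (cf. the Comet Systems complaint) is removed
# just as surely as a genuinely malicious one.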
Legal Position

The current EU Directive on Privacy and Electronic Communications 2002/58/EC does not define what Spyware is, or even its scope. While there may be advantages to such an approach, the result is a degree of uncertainty about what types of programs constitute Spyware. The only guide the Directive offers is recital 24, which provides that:
…so-called Spyware, web bugs, hidden identifiers and other similar devices can enter the user's terminal without their knowledge in order to gain access to information, to store hidden information or to trace the activities of the user and may seriously intrude upon the privacy of these users. (p 9)

This lack of definition causes the users and the regulatory structures to rely heavily upon fundamental regulatory principles. One such principle is that of contract law, implemented in this case through the end user license agreement (EULA). By contrast, certain jurisdictions have defined Spyware; one such example is the State of Utah, whose definition has been criticised as over-inclusive, in other words as including harmless or beneficial software within its scope. Since those who claim that the software is legal all tend to focus upon the EULA as a 'silver bullet' in resolving the conflict, it is on a re-examination of contracts that the first stage of the discussion must be based. This section therefore briefly explores the regulatory structure of the EULA. Contract theory (Furmston 1996) is taught to most law students in the form of a simplified mythical situation where two people meet. In the meeting, A makes B an offer. In our scenario A offers B the latest widget in technology for the price of $10. B, after careful thought, agrees to the offer and the contract is formed. Formation means that legally enforceable obligations have been created. The point of formation is usually symbolically celebrated by some ceremony, e.g., handshakes, nods or applying names to paper. This ceremonial aspect is important in producing evidence that a contract actually has been formed, but binding obligations can be formed without it. The whole basis of liberal contract theory is the meeting and agreement of the wills of competent individuals.
If B wishes to buy some software from her local computer store, the contract is completed when she hands over her money and receives the box, containing the disk, containing the software. However, an interesting thing occurs when B gets home, tears open the shrinkwrap, opens the box and begins to install the software. Out of the box spill many documents written in unfriendly, complex language. These amendments and additions to the contract are known as shrinkwrap contracts (Rowland & McDonald 2000). B does not need these to install the software, so she proceeds to insert the disk into her computer. On the screen she receives many options, all of which she must decide whether or not to agree to. One such text, which usually appears at the beginning of the installation, demands that she agree to a larger text to be able to continue. This is known as a clickwrap (Arne & Bosse 2003); it is an evolution of the shrinkwrap and has the power to regulate the contract that B has already entered into. The clickwrap is usually seen as more binding than the shrinkwrap since it requires positive actions from the user. The situation is the same if B has chosen to download free software from the Internet. Suppose B has downloaded the software Kazaa to be able to share music files with all her friends. The text that precedes the installation of the program is binding. The fact that she does not read it, or that she reads it but cannot understand it, does not alter the fact that it is binding. So if Kazaa has written that they intend to collect data and send it to the company for marketing purposes, B cannot do anything about that – except not install the program.
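The mechanics of clickwrap formation can be caricatured in a few lines of code; the sketch below is hypothetical throughout. It shows the asymmetry described above: the contract may run to thousands of words, yet consent is captured as a single boolean.

# Minimal sketch of clickwrap contract formation.

DATA_CLAUSE = "We collect usage data and transmit it for marketing. "
EULA_TEXT = "Lorem ipsum dolor sit amet. " * 2000 + DATA_CLAUSE

def present_clickwrap(eula):
    # In practice: a dialog displaying `eula` next to an "I Agree"
    # button. However long the text, consent is one boolean.
    print(f"EULA presented: {len(eula):,} characters")
    user_clicked_agree = True     # the near-universal outcome
    return user_clicked_agree

contract_formed = present_clickwrap(EULA_TEXT)
# The data-collection clause binds B whether or not she scrolled down
# to it; her only alternative was not to install the software at all.
assert contract_formed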
Naturally this theory of contract is a simplification used in education to teach students the basics of law. It is not intended to be a solution or a total description of reality. Nor are the challenges created by technology the only situations where flaws in the simplified model of contract theory become evident. In any large business process there may always be difficulties in discerning when a contract was entered into and what its content was. To address this, the myth has been amended with the theories of the shrinkwrap and clickwrap licences described briefly above. The courts have understood the necessity of these licences and have reinforced their legality and power over the users. At first glance the courts' acceptance of these licences may seem unduly harsh. The writer of the contract is at an advantage, having the time to create a contract which best suits their needs. The advantage is enhanced by the fact that most consumers are not legally trained and, should they read the contract, may not see the disadvantages it places them in. However, this is true of many contractual situations. Few of us bother to read the contractual terms when renting a video film or a car. A great deal of trust is placed in the fairness of the overall system (Klang 2001); additionally, we see many other people renting cars and video films without problem and therefore assume the same will be true for us. The courts' acceptance of the standard licence is based upon the needs of commerce; their acceptance of the shrinkwrap/clickwrap contract is based upon the knowledge that most consumers are not going to read the contractual terms that underpin every train or airplane ticket. In most cases the contracts are not unduly harsh, since they have developed over time to suit the contractual situation, and the courts' acceptance of them is based upon commercial necessity. It is, however, important to note that this is not the same as saying that the shrink/clickwrap licence is enforceable in all situations. There are contract situations where the contracts are not enforced by the courts. In Scandinavian contract law the courts have the power to amend contracts that are unduly harsh on one of the contracting parties. This situation typically occurs when the drafter of the contract has used techniques, such as language and layout, to obfuscate the terms of the contract. Under common law the question may be one of misrepresentation, since the Spyware is most commonly bundled into another software product and it is not the intention of the user to download the Spyware. Under U.K. law, in the case of Spurling v Bradshaw1, Lord Denning stated that the use of sweeping exclusion clauses required something startling to draw the contracting parties' attention to them. Denning suggested printing in red ink with a red hand pointing to it, or something similarly striking which could not be missed. Arguments such as these show that the existence of a contract is not enough to legitimise any and all content. The red hand argument can be extended for use against the bundling of Spyware, especially since the Spyware interferes with the peaceful enjoyment of the users' property vis-à-vis the browser and the personal computer. Today we tend to follow what is often referred to as the liberal contract theory and see contract law as an instrument for enforcing promises (Gordley 1991). This view is tempered by the fact that the contract is seen as an agreement where the wills of the contracting parties are in accord. If we are to view contract law as an enforcement mechanism then the law tends to be weighted in favour of the EULA since this is, at first glance, the contract. However, it is important to remember that the contract should represent an agreement, and as such the question of what the parties knew they were agreeing to is vital to the actions of the courts in attempting to decide upon these issues.
Spyware in Court

The courts have already been made aware of Spyware; however, the issues that have been raised have not been concerned with the privacy aspects of the software and are therefore not helpful in understanding where legal reasoning in this field should be developing. It is nevertheless interesting to note that Spyware-related case law is moving ahead in relation to trademark and copyright infringement. While this is helpful for companies hoping to maintain control over their online assets, the connection of Spyware to trademark and copyright tends to relegate privacy concerns to a lesser place. The software of the Gator Corporation caused pop-up advertising to appear on the screen in front of the desired page. This prevented the legitimate page from being viewed in the manner intended, since the page was marred by advertising messages. This prompted 16 online news-publishing organizations to file a lawsuit2 against Gator claiming trademark and copyright infringement, and unjust enrichment by freeloading on the reputation of the established sites. The court granted a preliminary injunction in July 2002 preventing Gator from causing pop-up advertising on the plaintiffs' websites. In February 2003 the case was settled out of court, but unfortunately for the development of jurisprudence in this area the settlement is covered by confidentiality. The US courts have, however, not been consistent. In June 2003 the court (Tedeschi 2003) granted WhenU's motion to dismiss charges of trademark infringement, unfair competition and copyright infringement. With this, the company U-Haul could not prevent WhenU.com from delivering competitors' ads to visitors to U-Haul's site.
The Regulatory Approach

The right to privacy is a fundamental right protected in international conventions3, European Union directives4 and national legislation. Despite these structures, intended to provide regulation aimed at protecting privacy, these types of software present a threat to computer user integrity. In addition, the software user has seldom had the opportunity to provide informed consent to the software, since it has not been introduced into the computer in an open manner. The computer user is often unaware of the surveillance and therefore continues to behave in an open, uninhibited manner. Despite legal measures, the position of Spyware is not clear, and there are grounds for claiming that the software is legal. Obtaining unauthorized access to a computer or computer system is commonly seen as ethically and legally wrong.5 The key term here, however, is unauthorized access. While owners of computers infected with Spyware may claim that they never authorized the installation, and that it therefore counts as unauthorized access, Spyware producers argue that the users have indeed authorized the installation and use of Spyware. They support their arguments by pointing to the licensing agreement (end-user license agreement) that the users have agreed to by installing the software. The ubiquitous use of the EULA and the clickwrap methodology that requires the user to accept the entire agreement with a single click have become accepted practice. This should not, however, be understood to mean that every license and every condition in the license is unquestionably binding. The American legal reaction to the problem of Spyware has been to develop the "Spyware Control and Privacy Protection Act of 2001" (hereafter The Spyware Act). The Spyware Act requires that manufacturers notify consumers when a product includes this capability, what types of information could be collected, and how to disable it. More importantly, The Spyware Act makes it illegal for the programs to transmit user information back to the manufacturers unless the user enables this function and has given the collector access to the information. There are exceptions for validating authorized software users, providing technical support, and legal monitoring of computer activity by employers. However, The Spyware Act has been attacked as insufficiently consumer friendly since, despite its good intentions, it does not go far enough in controlling the actions of Spyware producers. The Spyware Act follows the ideas set out in the European Data Protection Directive (DPD) in that it divides personal data into two categories: sensitive and non-sensitive. Sensitive data concerns personal data surrounding the data subject's finances, medical history, sexual orientation, lifestyle, political affiliation and race. This data cannot be collected or used without the data subject's consent. The non-sensitive data, however, is everything else, together with all information that can be inferred from it. This includes any and all actions that the software can record from the web and the conclusions which can be drawn from this data. This non-sensitive data can be collected, processed and sold without the data subject's consent. While at first glance this seems a reasonable starting point, there is one major drawback. By collecting or recording much, or all, of the information a user obtains via the Internet, several inferences can be made about the user which pertain to her sensitive data, and the division between sensitive and non-sensitive data is therefore no longer useful. The European legal reaction has been to develop the DPD, which has been enacted in all member states and can be used to criminalise the actions of Spyware, since the DPD requires that the consent of the user be obtained prior to the collection of personal data. While these legislative tools are effective in principle, they have been unable to deal with Internet-based privacy invasions, due in part to the fact that the techniques required to monitor and enforce are beyond the power of single states, or groups of states. The member states of the European Union have a strong level of privacy legislation that enables a level of control over companies dealing with personal data in their businesses. Unfortunately those dealing in personal data collection through Spyware are notoriously difficult to locate and tend to shy away from establishing themselves in states with strong privacy-enhancing legislation. In addition, Directive 2002/58/EC on Privacy and Electronic Communications (DPEC) lacks a clear definition of Spyware.
Furthermore, the user's right to be informed in accordance with the DPD has been watered down in relation to online information about the purposes of cookies or similar devices:

Member States shall prohibit the use of electronic communications networks to store information or to gain access to information stored in the terminal equipment of a subscriber or user without the prior, explicit consent of the subscriber or user concerned. This shall not prevent any technical storage or access for the sole purpose of carrying out or facilitating the transmission of a communication over an electronic communications network. (EU Official Journal 13 June 2002, C. 140E/121)

The weakness of the legislative approach is that it must struggle to balance the needs and wants of the different groups in society. Beyond this balancing act lies the difficulty of enforcing legislation that is limited to nation states in an environment that no longer considers these boundaries relevant. It is not that the courts lack the competence or jurisdiction to rule in a case, but rather that there is a question as to who is behind the software and which corporate entities or private citizens are to be held responsible for it.
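The inference problem identified above, namely that records which are individually 'non-sensitive' can reveal sensitive attributes, can be made concrete with a toy example; the log entries and inference rules below are entirely invented.

# Toy illustration: sensitive attributes inferred from individually
# "non-sensitive" browsing records.

browsing_log = [
    "pharmacy.example/insulin-prices",
    "forum.example/living-with-diabetes",
    "news.example/sports",
]

INFERENCE_RULES = {
    "medical history: diabetes": ("insulin", "diabetes"),
}

def infer_sensitive(log, rules):
    inferred = set()
    for label, keywords in rules.items():
        if any(k in entry for entry in log for k in keywords):
            inferred.add(label)
    return inferred

print(infer_sensitive(browsing_log, INFERENCE_RULES))
# => {'medical history: diabetes'} -- sensitive information obtained
#    without ever collecting a "sensitive" field directly.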
DISCUSSION

The ethical viewpoint of the Spyware maker can be summarised by Friedman's (1993) controversial view that the sole duty of the corporation is to maximise profits and benefit the shareholders. This deontological approach is, however, opposed by another rule-based imperative: not to spy upon or cause harm to others. One way to resolve this conflict of ethical rules is to apply the Kantian humanity principle of not treating people as a means to an end but rather as ends in themselves. If we apply this Kantian principle we arrive at the conclusion that Spyware constitutes a breach of ethical conduct. However, if we attempt a utilitarian analysis of the situation the ethics of the actors becomes less clear. The inclusion of Spyware in free software could be viewed as a necessary evil, where the creation and supply of free software creates more happiness or utility than the evil generated. This argument is reinforced by the fact that there is software which can be used to protect the individual from the harms of Spyware. For this argument to be effective, however, the person downloading Spyware must be aware both of the consequences of her actions and of the availability of counter-measures. Since it is difficult to state clearly that Spyware is inherently immoral, the position of the other actors who provide the infrastructure through which the software is spread (for example those who bundle Spyware with other software) is even more difficult to ascertain. Despite these difficulties, it is possible to state that Spyware is often an unwelcome addition to the computer, and from the growing popularity of anti-Spyware software it is possible to surmise that many computer users believe Spyware to be wrong. One interesting question which the Spyware problem opens up is which method is best for combating these types of issues. The problem of Spyware is relatively new and relatively unknown outside technical or privacy forums. While many of these forums may agree that the problem is growing, it remains difficult to see which solutions should be applied. The use of anti-Spyware software is at best a market solution that requires of users knowledge of the problem and, at the least, a working knowledge of where to find the solutions and how to install and use them. The level of information required by the market is therefore reasonably high, especially considering that most Internet users have never even heard of Spyware and, even if they have, may not appreciate the importance of defending their privacy. If
the users are aware of the problem, and want to resolve it, a new question arises: which software is best for their problems? This stage is crucial, since users can download inadequate anti-Spyware software or, in the worst-case scenario, even more Spyware. Attempting a regulatory approach takes time and a great deal of concerted effort. Habermas (1989) argues that societies are the base for a multitude of pluralistic opinions, but only a few ever come to the forefront of public debate. To ensure that the debate is maintained in an open environment and that rules are created in a transparent manner, it is important that the basis for the rules is discussed by those who are affected by them. It is also important to note in this context that all rules should be held under constant debate (McCormick 1997). Rules should not exist in a closed space but must exist in the open, under public scrutiny, to avoid the creation of a representative elite whose interpretations of society's needs, or an illusion of the public good, control what is developed as a social balance. The user is therefore left with problems on all sides and must attempt a pragmatic approach to the problem. Not using information technology is not a viable option; what is important is that users make an effort to keep up to date with the state of privacy, both technical and legislative. The user must be prepared to use technical means to protect her own data while participating in a public discourse on the importance of better data legislation. As has already been shown, the nation state is not capable of meeting all the ills of the Internet and protecting its citizens from them, but it is important that the nation state creates an environment where the individual has the ability to find information, make informed decisions concerning her privacy and, if so desired, implement technical countermeasures to achieve the level of privacy required.
CONCLUSION

Spyware is yet another example of how privacy has become the price that computer users pay for their use of the information technology infrastructure. This time the threat comes from software bundled into free software; the problem is that the price the user pays is not one that is discussed or declared openly. The user is therefore not able to enter into an agreement on equal grounds or attempt to negotiate a better bargain. Contract law is in this scenario pushed to its limits and used as a legitimising factor for unethical business practices. There are alternatives for the user. She can naturally choose not to use free software, but this choice requires knowledge of the integrity-threatening software within the free software. The choice not to use free software also has economic effects and may create barriers to active participation in the information technology infrastructure. There are also possibilities for the user to attempt to eradicate the unwanted software installed on the computer. These types of solutions require an awareness of the problem and a certain level of knowledge of how to find, download, install and run the necessary software. An important issue with these market-based solutions is that their fairness and objectivity have been marred by accusations of injustice and unfair treatment. On the legislative side we see, once again, an example of legislation struggling to enforce local ideas against a global (or a-national) infrastructure. This is becoming a familiar pattern when national or regional legislation attempts to deal with Internet-based technology. There is no solution to this aspect of the problem other than international legal consensus, which is very hard to achieve, implement and enforce. The disruptive effects of technology can subtly alter a social balance that was created over time under earlier technological conditions. Technological advance therefore demands a renewed discussion of its effects upon the users in society and of the gradual effects of technology on society. This is especially true since a market approach to resolving the issue requires that more information be made available to those who are affected by the problem. Without this information they will be unable to take a stand on whether they wish to protect themselves, and if so, in which manner. An additional reason for the need for more public debate among those concerned is that they are themselves responsible for achieving a re-balancing of socio-technical regulation. Without information and debate, the process of establishing a balance between the effects of technology and the needs of society will cease to be forceful, and any meaningful effects such a debate can create will be lost. The focus of regulation should be to maintain the core democratic value of integrity. Within a participatory democracy, integrity protection is important to ensure individual participation. Without such assurances a supervising gaze develops, and such a gaze becomes a mechanism of control internalised in the behaviour of the individual, preventing her from participating openly and freely in the democratic process. It is therefore important to ensure that the disruptive effects of technology in relation to integrity do not negate the positive potential of participation through technology. The process of participating in a democracy requires privacy. The simplest example of the importance of privacy is the secrecy of the ballot box. The voting system is created in such a manner as to ensure that even those closest to each other cannot be certain of how the others vote. This integrity is there to ensure democracy. While many might argue that they would not change the way they vote even if they had to do so in public, the importance of integrity to democracy is, through this example, easily grasped. It is therefore strange that when threats to online integrity appear they are not treated with equal importance. The need for integrity is fundamental for participatory democracy to function even in the online domain, and the regulator should therefore be more concerned with protecting this core democratic value than with conservatively overprotecting the sanctity of contract.
REFERENCES

Arne, P. H. & Bosse, M. M. (2003). Overview of Clickwrap Agreements. Practising Law Institute, January–March.

Barbrook, R. (1998). The Hi-Tech Gift Economy. First Monday, Vol. 3, Number 12, December. URL: http://www.firstmonday.dk/issues/issue3_12/barbrook/

Bentham, J. (1995 [1787]). Panopticon or Inspection-House. In Bozovik, M. (ed), The Panopticon Writings. London: Verso Books.

Blanke, J. M. (2006). Robust Notice and Informed Consent, The Keys to Successful Spyware Legislation. Columbia Science and Technology Law Review, Vol. 7, pp. 1-33.

Bowrey, K. (2005). Law & Internet Cultures. Cambridge: Cambridge University Press.

Bygrave, L. (2002). Data Protection Law: Approaching its Rationale, Logic and Limits. The Hague: Kluwer Law International.

Cave, D. (2001). The Parasite Economy. August 02, Salon.com. URL: http://archive.salon.com/tech/feature/2001/08/02/parasite_capital/

Etzioni, A. (1999). The Limits of Privacy. New York: Basic Books.

Foucault, M. (1979). Discipline & Punish: The Birth of the Prison. New York: Vintage Books.

Foucault, M. (1980). Power/Knowledge: Selected Interviews and Other Writings 1972-1977. Gordon, C. (ed & trans). New York: Pantheon.

Freeman, L. A. & Urbaczewski, A. (2005). Why Do People Hate Spyware? Communications of the ACM, Vol. 48, No. 8, August.

Friedman, M. (1993 [1970]). The Social Responsibility of Business is to Increase Its Profits. New York Times Magazine, (13 September 1970). Reprinted in Beauchamp, T. & Bowie, N. (eds), Ethical Theory and Business. Englewood Cliffs, NJ: Prentice-Hall.

Furmston, M. P. (1996). Cheshire, Fifoot & Furmston's Law of Contract. London: Butterworths.

Gordley, J. (1991). The Philosophical Origins of Modern Contract Doctrine. Oxford: Clarendon Press.

Habermas, J. (1989). The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Burger, T. (trans). Cambridge, MA: MIT Press.

Kern, S. (1983). The Culture of Time and Space 1880-1918. Cambridge, MA: Harvard University Press.

Klang, M. (2001). Who do you Trust? Beyond Encryption, Secure e-Business. Decision Support Systems, Vol. 31, pp. 293-301.

Klang, M. (2003). Spyware: Paying for Software with our Privacy. International Review of Law, Computers and Technology, Vol. 17, Number 3, November, pp. 313-322.

Klang, M. (2004). Spyware – the Ethics of Covert Software. Ethics and Information Technology, Issue 3, September, pp. 193-202.

McArthur, R. L. (2001). Reasonable Expectations of Privacy. Ethics and Information Technology, 3, pp. 123-128.

McCormick, J. (1997). Habermas's Discourse Theory of Law: Bridging Anglo-American and Continental Legal Traditions. The Modern Law Review, Vol. 60, pp. 734-743.

Miles, S. (2002). Ad-Aware Maker LavaSoft Frustrates Internet Advertisers. The Wall Street Journal Online. URL: http://online.wsj.com/article/0,,SB1035830...231.djm,00.html

Nissenbaum, H. (1995). Should I Copy My Neighbor's Software? In Johnson, D. & Nissenbaum, H. (eds), Computers, Ethics, and Social Responsibility. New Jersey: Prentice-Hall.

Norris, C. & Armstrong, G. (1999). The Maximum Surveillance Society: The Rise of CCTV. Oxford: Berg.

Parker, R. (1974). A Definition of Privacy. Rutgers Law Review, Vol. 27, pp. 275-296.

Posner, R. A. (1984). An Economic Theory of Privacy. In Schoeman, F. (ed), Philosophical Dimensions of Privacy: An Anthology. Cambridge: Cambridge University Press.

Rachels, J. (1975). Why Privacy is Important. Philosophy and Public Affairs, Vol. 4, pp. 323-333.

Regan, P. (1995). Legislating Privacy: Technology, Social Values, and Public Policy. Chapel Hill: The University of North Carolina Press.

Rowland, D. & McDonald, E. (2000). Information Technology Law, 2nd edition. London: Cavendish Publishing.

Shukla, S. & Fui-Hoon Nah, F. (2005). Web Browsing and Spyware Intrusion. Communications of the ACM, Vol. 48, No. 8, pp. 85-90.

Stafford, T. F. & Urbaczewski, A. (2004). Spyware: The Ghost in the Machine. Communications of the Association for Information Systems, Vol. 14, pp. 291-306.

Strömholm, S. (1967). Right of Privacy and Rights of the Personality. Stockholm: Norstedts.

Tedeschi, B. (2003). Pop-up Ads Provoke a Turf Battle over Web Rights. International Herald Tribune, Tuesday 8 July, p. 15.

Thomson, J. J. (1975). The Right to Privacy. Philosophy & Public Affairs, Vol. 4, p. 295.

Thompson, R. (2005). Why Spyware Poses Multiple Threats to Security. Communications of the ACM, Vol. 48, No. 8, pp. 41-43.

Urbach, R. R. & Kibel, G. A. (2004). Adware/Spyware: An Update Regarding Pending Litigation and Legislation. Intellectual Property and Technology Law Journal, 16(7), pp. 12-17.

Warren, S. & Brandeis, L. D. (1890). The Right to Privacy. Harvard Law Review, Vol. 4, pp. 193-220.

Westin, A. (1967). Privacy and Freedom. London: Bodley Head.

Wong, R. (2005). Privacy: Charting its Developments & Prospects. In Klang, M. & Murray, A. (eds), Human Rights in the Digital Age. London: Cavendish Publishing.

Zhang, X. (2005). What do Consumers Really Know About Spyware? Communications of the ACM, Vol. 48, No. 8, pp. 44-48.
Zone Alarm (2000). Essential Security for DSL and Cable Modem Users Now Available with New Internet Security Utility - ZoneAlarm 2.0. Press Release, 26 January.

KEY TERMS

Anti-Spyware Software: This refers to software that has the ability to choose which software is to remain on the user's computer and which is not.

EULA: This refers to the end user license agreement, a contractual instrument through which the use of software is regulated.

Integrity: This refers to the use of an internal set of principles used for guiding actions.

Panopticon: This refers to an architectural plan developed by Bentham for an ideal prison: a ring-shaped prison with a watchtower in the centre of the ring.

Privacy: The ability of an individual or group to not disclose information about themselves.

Spyware: A particular form of software that gathers, without the user's knowledge, information about the user and transmits it back to the manufacturer.

ENDNOTES

1 Spurling (J) Ltd v Bradshaw (1956), CA. Also in Thornton v Shoe Lane Parking [1971] 2 QB 163.
2 Washington Post, Newsweek Interactive Co., LLC., et al. v The Gator Corporation, Civil Action 02-909-A, U.S. District Court (EDVa).
3 See for example Article 12 of the Universal Declaration of Human Rights, adopted and proclaimed by General Assembly resolution 217 A (III) of 10 December 1948, or Article 8 of the Convention for the Protection of Human Rights and Fundamental Freedoms.
4 Most importantly Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Also, Directive 2002/58/EC on Privacy and Electronic Communications (DPEC).
5 See for example the United States Computer Fraud and Abuse Act or the United Kingdom Computer Misuse Act.
Chapter XXXIX
In Vitro Fertilization and the Embryonic Revolution D. Gareth Jones University of Otago, New Zealand Maja I. Whitaker University of Otago, New Zealand
ABSTRACT

The advent of in vitro fertilization (IVF) marked a watershed in the scientific understanding of the human embryo. This, in turn, led to a renaissance of human embryology, accompanied by the ability to manipulate the human embryo in the laboratory. This ability has resulted in yet further developments: refinements of IVF itself, preimplantation genetic diagnosis, the derivation and extraction of embryonic stem cells, and even various forms of cloning. There are immense social and scientific pressures to utilize the artificial reproductive technologies in ways that have little or no connection with overcoming infertility. As the original clinical goals of IVF have undergone transformation, ethical concerns have escalated, so much so that some of these procedures are condemned as illustrations of 'playing God', while any babies born via them are labelled 'designer babies'. Both terms reflect the fear and repugnance felt by some at the interference of the artificial reproductive technologies with the earliest stages of human life. It is at these points that bioethical analyses have an important contribution to make.
INTRODUCTION

Since its introduction in 1978 (Steptoe and Edwards, 1978), in vitro fertilization (IVF) has proved revolutionary. The most evident face of that revolution is the three million individuals born using IVF. This in turn has ushered in a plethora of related assisted reproductive technologies (ARTs) that represent a new genre of medical interventions in the reproductive process and even beyond. However, none of these revolutionary vistas could have eventuated were it not for a series of technological breakthroughs that lie, not in the clinic, but in the laboratory. These
breakthroughs revolve around the human embryo, which can now be maintained in vitro and therefore manipulated in a laboratory environment. The ability to isolate the embryo in this manner has opened up a new era for biomedical science, and in its wake a new era in bioethics. This is because the embryo has shifted from being an object of theoretical debate to occupying a central role in sociopolitical and ethical debate, a highly contested and deeply fraught realm. The magnitude of this transition is difficult to exaggerate. Moral concerns about the status of the human embryo have been expressed for many years, chiefly by reference to induced abortion and the status of the much older fetus. While the abortion debate was seen for some time as providing an adequate model for analysing the preimplantation embryo, this is clearly not the case. The 3-4 month old fetus within a woman's body is far removed from the 3-6 day old embryo in a laboratory. The first is well on its way to becoming a new individual; the second has no such prospects until it is implanted in a woman's uterus. The first has many of the marks of individuality; the second has few if any such marks until implantation and further biological development take place. The first is beyond the reach of experimental manipulation; the second is generally the object of analysis and study, and potentially manipulation. The world of the ARTs precipitates discussion of scientific, ethical, philosophical, theological, social and policy issues. Each of these has a part to play in ongoing debate on the ARTs, in that their interrelationship renders inadequate any one approach on its own. This is a challenge for those to whom such an interdisciplinary enterprise is foreign. In this chapter we shall assess what this move from moral philosophy to public policy entails. In doing so we shall have to determine how the concerns of the public about 'playing God' and producing 'designer babies' can be balanced against the thrust of scientific advance and clinical
expectations. The different worlds represented by the two are on a collision course, and the task of ethical analysis is to find ways of coping with this collision. However, the novelty of this task for bioethicists is itself a challenge.
Embryonic Development

Embryonic development begins when an egg is successfully fertilized by a single sperm, a process that takes between 26 and 30 hours to complete. The resultant single cell, the zygote, is totipotent; that is, it has the potential to give rise eventually to a complete new individual. On the second day of development, this single cell undergoes cleavage, during which it divides with little intervening growth to produce two, then four, then eight smaller, identical cells. These are the blastomeres, which at the eight-cell stage are only loosely associated with one another, each retaining its totipotency. By the 32-cell stage, they have become increasingly adherent and closely packed, and have almost certainly lost this equal developmental potential. By day five the embryo consists of well over 100 cells and is termed a blastocyst. The outer cells of the blastocyst differentiate to form a surface layer, the trophectoderm, which becomes the trophoblast, which in turn eventually gives rise to the placenta. By contrast, the inner cells of the blastocyst constitute the inner cell mass (ICM) and are still undifferentiated (unspecialized), retaining the potential to form every type of tissue involved in the construction of the fetus (the cells are pluripotent). Some of these cells will later form the embryo proper and subsequently the fetus. Around day seven the blastocyst embeds in the uterine wall, marking the beginning of implantation, which is usually completed by day 14. Hence, the term preimplantation embryo refers to the embryo up to 14 days’ gestation. At 15 to 16 days the primitive streak, a transitory developmental structure, becomes evident in
the midline of the embryo. After this stage the presence of a single primitive streak signifies the existence of a single individual (two primitive streaks are necessary for twins), and the cells of the embryo are now committed to forming specialized tissues and organs (the cells are now multipotent). The primitive streak instigates the appearance of the neural plate, from which the first rudiment of the nervous system arises a week or so later. The appearance of the primitive streak has been widely regarded as marking a point of transition, with some arguing that no coherent entity exists prior to this, and hence that it is misleading to refer to any earlier stage as a human individual (e.g. Shannon and Walter, 2003). This designation only becomes possible with the recognition of the embryo’s definitive orientation after 14 days, together with the initial appearance of a nervous system at around 21 days. While these points have a scientific basis, they send out powerful ethical and regulatory messages, so that in societies where research on human embryos is permitted, the 14-day upper limit on research currently goes unchallenged. This limit assumes that the preceding developmental stages have limited scientific interest. However, the advent of embryonic stem cell research has directed the spotlight onto a much earlier stage of development, at five to six days. It remains to be seen whether this will have repercussions for ethical thinking and social policy. This scientific description only begins to hint at the different ways in which developing embryos are regarded ethically and theologically. The points just outlined are regarded as irrelevant by those for whom fertilization is seen as the starting point of a new individual with full moral status, demanding full protection as a human being (e.g. Hui, 2002). Such a position pays only limited attention to scientific descriptions. Nevertheless,
scientific perspectives cannot be ignored, since decisions are routinely taken in IVF clinics on the basis of scientific criteria. Whenever IVF is undertaken, human embryos are assessed at each clinical step, quite apart from the need for ongoing research to improve the efficiency of the procedures themselves. It is essential, therefore, to have guidelines outlining the analyses that can and cannot be undertaken on in vitro embryos, and the scientific rationale behind these guidelines.
Artificial Reproductive Technologies

IVF is the best known and most commonly used procedure within the whole gamut of the ARTs. With this technique, eggs and sperm are mixed together in vitro, that is, in suitable chemical media in the laboratory. Before a woman’s eggs can be obtained for IVF her ovaries are generally stimulated with hormones. Then, while the patient is sedated, eggs are aspirated from the ovaries under ultrasound guidance. Embryos formed in this procedure are transferred into the uterus, usually at around the eight-cell stage, allowing them to develop naturally. Overall, the success rate of IVF is around 25 live births per 100 cycles started. However, this varies according to the mother’s age. For women under 35 the percentage of live births per cycle started is around 30-35%, dropping to 20% for women aged 38-40, 10% for ages 41-42, and down to a dismally low 4% for women over the age of 42 (Centers for Disease Control and Prevention, 2006). IVF is constantly being refined, as demonstrated by the development of various techniques used with IVF to improve the success of the procedure. These include intracytoplasmic sperm injection (ICSI), which in some cases increases the chance that an embryo will be formed by directly injecting a single sperm into an egg. This is now a much utilized procedure,
accounting for around 50% of all IVF procedures. Other procedures sometimes employed include the use of (immature) sperm from the epididymis or testis rather than (mature) sperm obtained by ejaculation. All these procedures are aimed at increasing the success rate of IVF. The hormonal regimen prescribed for women undergoing IVF causes them to superovulate and produce a number of eggs (up to 15) in a single cycle. Doing this reduces the likelihood that a woman will have to undergo the uncomfortable and potentially dangerous process of hormonal stimulation again. Currently, embryos and sperm can be frozen and thawed easily without any significant damage; eggs cannot yet be treated as readily in this manner, although this is changing. There are additional reasons why it is desirable to produce this large number of embryos: the transfer and implantation of the initially chosen embryos may be unsuccessful, and not all the embryos will be suitable for transfer, so having a pool of embryos from which to choose is beneficial. Usually a successful pregnancy occurs before all the embryos have been implanted, leaving embryos surplus to the couple’s needs. In most countries these frozen embryos can be stored for a limited maximum period, with ten years frequently being the period selected. Other than being stored for a limited time, embryos can be destroyed (by allowing them to thaw), donated to other couples, or used in research. Once again, what can or cannot be done varies between countries. The number of frozen embryos in storage tends to be large: over 400,000 in the US (Weiss, 2003), and over 120,000 in the UK (Templeton, 2003). Preimplantation genetic diagnosis (PGD) is an extension of IVF and is used as a possible means of circumventing the birth of children with disorders such as Duchenne’s muscular dystrophy, haemophilia, haemoglobin diseases, cystic fibrosis, Huntington’s disease and achondroplasia. PGD analysis is also employed to check for any
abnormalities in the number of chromosomes (aneuploidy), a common cause of miscarriage, especially with older mothers. Either the polar body of the egg, or one or more cells from an embryo between days three and five of development, is tested. The aim is to determine whether that embryo carries the harmful gene under investigation, or has an abnormal chromosomal arrangement. If it does, it is discarded. The testing is continued until an unaffected embryo is found; this embryo is placed in the uterus for continued development. There is as yet no evidence to suggest that the cell biopsy adversely affects the health of the resulting child, although this has been questioned by some, and longitudinal studies to examine any long-term effects have been called for (ESHRE PGD Consortium Steering Committee, 2002; President’s Council on Bioethics, 2004). One of the features of PGD is that it enables the sex of embryos to be readily determined. When dealing with sex-linked genetic conditions, this enables embryos of the appropriate sex to be selected so that the resulting child will not suffer from the condition in question. However, sex selection can also be used for social control of the next generation by providing parents with a child of the preferred sex. Since this latter form of sex selection has nothing to do with any medical condition, it demonstrates the paper-thin boundary between medical and social drivers. A further use to which PGD may be put is to avoid the implantation of an embryo with a late-onset genetic disorder, such as Huntington’s disease, or one with a predisposition to develop conditions like diabetes, high blood pressure, breast cancer, or even, hypothetically, Alzheimer’s disease (Klipstein, 2005). In none of these cases would the individual suffer from the disease for many years, and perhaps never. Consequently, embryos are being selected against because of the possibility, rather than the probability, that they will develop into individuals who may suffer from a particular condition. This is moving a considerable distance from any conventional form of therapy.
Even more challenging is the birth of so-called ‘saviour siblings’, where one baby is brought into the world in an attempt to save the life of an already existing sibling (with a condition like Fanconi’s anaemia). Human leukocyte antigen (HLA) tissue typing is used in addition to PGD to select embryos that are both free of the particular familial disorder and a tissue match for the older sibling. This double requirement means that more embryos are discarded than in straightforward PGD, including embryos known to be biologically normal (those free of the disorder but not a tissue match). At birth, the umbilical cord blood of this newborn child will probably be used to treat the older, ill sibling. This approach appears to be using one child for the benefit of another, with clear overtones of exploitation (although this is not an inevitable consequence of even such an extreme procedure).
Beyond the Artificial Reproductive Technologies: Stem Cells and Cloning

This takes us into realms beyond what might be termed standard ways of producing embryos (although of course the procedures referred to in the previous section were far from standard 30 years ago). Inevitably, this also takes us more obviously into research territory. Although some of the alternatives, such as embryo splitting, can occur during natural fertilization, it is the more recent attempts to produce embryos in the laboratory that have elicited both excitement clinically, and apprehension ethically and socially. The area of embryo creation that elicits the greatest public interest and concern is ‘cloning’. In science this broad term covers a variety of procedures, not all of which are concerned with embryo creation, but our attention will be turned to somatic cell nuclear transfer (SCNT). When the nucleus of a donor cell is removed and introduced into an enucleated egg, the donor
nucleus is reprogrammed. The egg is then stimulated chemically or electrically, enabling the egg with its donor nucleus to develop into a blastocyst that is genetically identical to the donor. Strictly speaking, however, clones produced by this technique are not identical to their progenitors, because they also contain mitochondrial genes from the cytoplasm of the host egg. This blastocyst can then be used to produce embryonic stem cells (ESCs) (therapeutic or research cloning) or, if inserted into a woman’s uterus, a new human individual (reproductive cloning). While the latter procedure is the subject of endless debate, it is of little interest to scientists and tends to be a distraction from serious bioethical debate. At present, any tissues produced by therapeutic cloning would be used mainly in research on cell and tissue differentiation and growth. The longer-term goal of human therapeutic cloning is to coax ESCs from cloned embryos to form specific cell types, which could then be used to produce newly generated tissues to replace damaged or defective tissues in patients. The transplanted tissues would be expected to be genetically identical to the patient’s own tissues, eliminating the risk of tissue rejection by the immune system, and hence avoiding the need for immunosuppressants (theoretically, anyway). However, such person-specific therapies using SCNT embryos will likely not be developed for a considerable number of years. Another kind of artificially produced embryo is a hybrid or chimeric embryo. Hybrids are formed by mixing genetic material from two different sources or species, whether by normal fertilization, cell nuclear transfer or otherwise. A ‘chimeric’ embryo is one that contains a mixture of cells from two or more different sources or species. Forming a hybrid embryo via cell nuclear transfer involves transferring the nucleus from one cell into an enucleated egg of another species in a process similar to SCNT. For example, transferring a human nucleus into a rabbit egg
would produce a human-rabbit cytoplasmic hybrid that could be used as a source of human-like ESCs or as a model of human diseases. The use of non-human eggs would bypass the need for the large number of donated human eggs that would be required to develop SCNT technology. Recent legislative debate in the UK has permitted the creation of ‘cybrid embryos’ formed by placing human DNA in an empty animal egg. Other techniques to be approved include the creation of human embryos with animal genes inserted in their DNA, human embryos containing animal cells, and genetically engineered animals with human genes (Editorial, 2007). Chimeric embryos could be used to examine the proliferation and differentiation of human stem cells and to test the pluripotency of ESCs (Goldstein et al., 2002; Muotri et al., 2005). As we have moved beyond the ARTs, we have moved imperceptibly into research territory. With this move it is important to realize that nothing in the realm of embryo research would be feasible were it not for IVF and the research that lies behind it and has continued to support it. In other words, the ability to isolate and maintain embryos in vitro is the raison d’être both of the ARTs and of embryo research. Embryo research is motivated by a number of possibilities, including understanding why serious developmental anomalies occur, and thus devising ways of treating and even preventing them. It has the potential to increase knowledge about the causes of infertility and miscarriage, promoting advances in the treatment of infertility and the development of more effective contraception. Beyond screening out embryos with harmful genes, as in PGD, there is the possibility of ‘treating’ these embryos by replacing those genes with harmless ones. However, the safety and efficacy of such an approach are questionable, precluding the implantation of genetically modified embryos in the near future. Before these goals can be achieved, knowledge of the most fundamental
developmental mechanisms must improve so that it becomes possible to unravel the mechanisms by which cells develop into different cell and tissue types. The background to all such work in the human is provided by research on experimental animals, principally mouse embryos. One research avenue that has attracted huge attention of late uses human ESC lines, derived from cells of the ICM of the blastocyst. These can be grown in culture in the laboratory and used for research purposes (Reubinoff et al., 2000; Lanzendorf et al., 2001). Human ESCs are relatively undifferentiated and are able to give rise to cell types from all three of the body’s basic germ layers; they are pluripotent. Combined with the ability of human ESCs to be cultured for long periods of time, this pluripotency is the main reason why many researchers believe such cells are crucial to our understanding of human development and disease. It is not yet clear what signals influence ESCs to differentiate into specialized cell types. When these can be identified, researchers may be able to direct ESCs to develop into the cell types necessary for therapeutic applications, such as heart, liver, pancreatic and even nerve cells. Most of the therapeutic potential of embryo research is centred on the use of ESCs through the burgeoning discipline of regenerative medicine. Because human ESCs are pluripotent, they may be able to be used to replace cells lost through injury or disease. Potentially, ESCs could one day be used to treat conditions such as juvenile-onset diabetes, heart attack, autoimmune diseases, and neurological conditions such as stroke, Parkinson’s disease, Alzheimer’s disease, multiple sclerosis and spinal cord injury. However, such therapies will not be developed from human ESCs for a number of years, perhaps many years. An appropriate degree of caution is therefore necessary in discussing the potential benefits of this research.
Coming to Terms with the Embryo

Much of the ethical debate on embryo research focuses on the moral status that should be accorded to the embryo, although there are other relevant ethical considerations (for example, the informed consent of the parents who have created the embryo). At one end of the spectrum are those who hold to the moral principle that the use of any embryo for research purposes, including stem cell research, is unethical and unacceptable. Their reasoning is usually that an embryo is entitled to full human status from fertilization onwards, and that it is a ‘moral person’, capable of being harmed and benefited just like children and adults. Destroying embryos is thus morally equivalent to killing a person. Embryos must be accorded the same rights and protections accorded to adult humans. This includes proscribing their use in embryo research, in the same way most people proscribe and abhor the killing of a child purely to benefit another. At the other end of the spectrum are those who believe that the early embryo has no moral status because it is not a human being and does not possess moral personhood. Some regard the embryo at its blastocyst stage as a mere collection of cells, lacking any of the rights of adult humans. Within such a view, embryos are not capable of being harmed or benefited. Their use is ethically defensible, so long as other issues such as consent are adequately addressed. According to this position, it might actually be unethical not to use surplus IVF embryos for research that aims to alleviate human suffering. Many other people adopt stances at a variety of points between these two extremes, considering that embryos have rights and are owed protections, due to their potential to become moral persons that can be harmed and benefited, or to their shared genetic heritage. Nevertheless, these rights and protections exist to a lesser degree
than do those of full moral persons. Within this view, embryos’ rights and protections are to be weighed against the potential benefits of using embryos for research. Consequently, the ethical justification of research projects will depend on the quality of the scientific questions being asked and the potential benefits of the research. The first position, that of according full moral status to the embryo, leads to opposition to embryo research. The second position, according the early embryo no moral status, allows embryo destruction. The third position, an umbrella position somewhere between the other two, allows embryo research with a variety of restrictions. An important restriction stems from the appearance of the primitive streak at around 14 days after fertilization. This may be morally relevant in a number of ways. It represents a significant juncture between an ill-defined human entity and a coherent individual (see discussion in Embryonic development). It also marks the beginnings of a nervous system and thereby is relevant to questions of when the developing individual acquires the capacity to feel pain. The possibility of twinning prior to the appearance of the primitive streak has long been regarded in some quarters as an argument against the individuation of the preimplantation embryo (e.g. Eberl, 2000). Discussions of moral status do not usually take account of the location of the embryo (blastocyst). Since a blastocyst has to be located in a woman’s uterus to have the possibility of developing into an individual, it has to be asked whether a blastocyst in the laboratory has the same moral status as a blastocyst in a woman’s uterus (Towns and Jones, 2004a). The contentious nature of the moral status of the human embryo is such that the chances of gaining consensus within a pluralist society are negligible. Perhaps they are actually nil. And so, rather than argue from first principles about whether the early embryo is a moral person, an alternative approach is to look at relevant policies in place within that society, and thereby assess
the value the society actually places on embryos at different stages of development (Jones and Towns, 2006). This is a pragmatic approach based on how a society functions in practice in the reproductive realm. Hence, if a society has in place legislation that allows the production of surplus embryos in IVF programs, and also allows PGD plus the availability of a full range of artificial contraceptives (some of which prevent the implantation of embryos), it is implicitly approving the destruction of embryos under certain circumstances. The implication is that early embryos do not have the same rights and protections as children or adult humans. Existing legal provisions do not appear to support the view that early embryos are moral persons that can never be used for research of potential benefit to others. While the provisions cannot automatically be interpreted as approving research on embryos, any arguments against research cannot be based on the inviolability of embryos. Consequently, a policy outlawing the destruction of human embryos for research purposes would be at odds with the legal position that allows such destruction in related situations, provided that the anticipated research benefits are greater than the perceived harm done by destroying embryos.
Source of Embryos for Research

If human embryos are to be used for research along any of the above lines (including the derivation of human ESCs), they must be obtained from some source, and it is here that ethical issues come to the fore. As outlined in the previous section, for some commentators embryos should never be used for research purposes, or indeed for any purposes that will not benefit the embryo on which the research is being conducted. This is a minority position, and for most societies some research on human embryos is allowable, although this in turn raises three major possibilities.
Hence, one is confronted by four possibilities in all (Towns & Jones, 2004b). These vary from position A, the prohibition of all embryo research, to position D, which allows the creation of human embryos specifically for research via both fertilization and SCNT. In addition, there are two intermediate positions. Of these, position B confines the use of ESCs to those already in existence, extracted prior to some specified date, thereby prohibiting both the extraction of new ESCs and the utilisation of any lines derived in the future. Position C allows for the use and ongoing isolation of ESCs from surplus IVF embryos. Position A is compatible with the stance that human life commences at fertilization, allowing nothing to be done to the embryo that is not in its best interests. Such a stance would also be expected to disapprove of IVF, on the grounds that its development and further refinement have necessitated research on embryos. Further, IVF programs that incorporate the production of surplus embryos would also be unacceptable, since these programs inevitably result in numerous embryos that have to be discarded. The emphasis of this position is entirely on the harm done to embryos, ignoring the good that might accrue to others in the human community through embryo research, including the therapeutic potential of ESCs. This is where position B may have a role to play, in that it allows some research on human embryos but also sets out to protect human embryos. This is achieved by allowing research only on stem cell lines already in existence. The embryos from which these lines were extracted have already been destroyed, and since that is an unalterable fact it may be reasonable to utilize those stem cells in scientific research. In forbidding the destruction of further embryos, this position gives the impression of placating both sides of the argument. Research can continue in a limited way, and some good might emerge from it. Hence, it is not deaf to the plight of people with severe degenerative conditions who
could, possibly, benefit from scientific advances. At the same time, those advocating protection of human embryos can feel that their case has been supported, by preventing the destruction of any more embryos for research (and possibly therapeutic) purposes. Position B represents an uneasy compromise, made possible only by accepting the use of ethically tainted, unethically derived material. It also proves problematic in societies that permit IVF programs producing surplus embryos, most of which will be discarded. In such societies, restrictive embryo research guidelines fail to protect the large numbers of embryos that are routinely destroyed by IVF procedures; they simply prevent research on embryos destined to be destroyed. For those who emphasize the importance of personhood from fertilization onwards, position A is the more consistent of the two positions. However, this position suffers from neglect of any interests beyond those of the very early embryo. Should the whole of our focus be on the very earliest stages of embryonic development to the exclusion of all other stages of life? Position C aims to provide some protection of the human embryo, since any research, including ESC research, is limited to surplus embryos from IVF programs. This allows both the utilisation and extraction of new ESCs, and eliminates arbitrary time limits on extraction. There is an ethical consistency in this case. Of course, this position accepts the destruction of embryos, but the destruction is of in vitro blastocysts that have no future as human individuals. Although produced in IVF programs in order to give rise to new individuals, these early embryos are no longer required for that purpose. Position C fulfils a broad range of imperatives, seeking to improve the health status of numerous individuals suffering from common debilitating conditions, as well as treating early embryos with the care and respect due to human tissue.
The move to position D is a major one in ethical terms. The creation of embryos for research purposes, either by fertilization or SCNT, and possibly involving hybrid embryos, appears to be a move into a new moral sphere. It represents a dramatic shift in moral perspective since embryos are being created not with the prospect of giving rise to new human lives, but solely for exploring biological and medical possibilities. They are being used as a means to another end—that of generating new knowledge, and possibly helping others in the future. This does not automatically constitute an argument against this position, but it does highlight the dimensions of the move. The issue it raises is whether there is any substantial ethical difference between the status of many of the embryos in IVF programs, created with the prospect of increasing the likelihood of a successful pregnancy (most will never become future individuals), and those created with no prospect of becoming future individuals. The differences between positions C and D may be less than sometimes thought, although this will in no way alleviate the concerns of those who see this as a move down a slippery moral slope.
Selecting Embryos and Designing Babies

The initial impetus behind the development of IVF was to help infertile couples achieve a pregnancy and live birth, and this remained the guiding principle for the first few years. This goal has driven modifications to IVF procedures, a notable example being ICSI, and the dramatic improvements in live birth rates amply demonstrate the success of such innovations. Today, however, the potential uses of IVF, and its associated ARTs, are being driven as much by social expectations as by clinical medical imperatives. Examples are the utilization of IVF by older women (in their 40s), post-menopausal women (in their 50s or even 60s), and same-sex couples.
An additional level of sophistication is introduced by the possibility of selecting which embryos will be implanted in the woman’s uterus. This is the driving force behind PGD. As with IVF, this technique was originally developed with a clear therapeutic rationale, to screen out embryos with deleterious genetic defects, but the technique can equally be extended to choose a child of the desired sex or one with certain socially determined genetic characteristics. The selection of one embryo over another is a serious ethical issue that is vigorously condemned by those for whom full human personhood commences at fertilization. After all, if all humans, including very early embryos, have identical status that must be upheld under all circumstances, there will be no place for deciding that one embryo is to be chosen for further development whereas another is to be discarded. No reasons can be strong enough to justify selection of this nature. If equality of embryonic status is the only relevant moral consideration, the fact that one embryo carries the gene for cystic fibrosis whereas another does not is irrelevant. One is as important as the other. Alternatively, the other major perspectives on embryonic status discussed previously enable account to be taken of the wishes and circumstances of parents with cystic fibrosis in their family. Selection of one embryo over another then becomes ethically feasible. The process of choosing one embryo over another in PGD raises the spectre of eugenics: selection of the fit and rejection of the unfit. On these grounds some view PGD as discriminating against people with disabilities, and as promoting the view that the birth of such people should be prevented (e.g. King, 1999). Possible eugenic overtones are accentuated when discussions centre on selecting against homosexuality, obesity or hyperactivity, or for intelligence, beauty or athletic ability. It is unfortunate that far too much attention is focused on ephemeral and unrealistic goals, such as these,
that are neither currently scientifically possible nor legally permissible. PGD, like many other procedures, could be used eugenically, but it is not inherently eugenic in nature. Another diversion in bioethical debate on PGD is the claim that the choices made in PGD have as their goal the production of ‘designer babies’. However, an individual without the gene for cystic fibrosis hardly represents design in any meaningful sense. As with many of the other reproductive technologies, PGD needs to be demythologized: its practical implications are far less fantastical or threatening than imagined. Talk about ‘designer babies’ is unhelpful because it promises far more than it is ever capable of delivering. The science is primitive, and even if this were not the case, designing human beings would involve undertaking some form of PGD together with a considerable amount of complex manipulation of embryos; the manipulation, moreover, may not work. It is regrettable that the focus of so many discussions appears to be on choosing genes for fair hair, blue eyes, intelligence, physique, and good looks. The ephemeral nature of these longings points to their superficiality, to say nothing of their ignorance of the scientific precision, clinical complexity and expensive resources that would be required to achieve them. What is required is a rigorous assessment of the merits of what can and cannot be accomplished by biomedical science. Our starting point should be the good of the patient, with a commitment to improve the quality of the patient’s life and, if feasible, to replace illness with health. This is a positive hope, but it is also a realistic one. The intervention may not work; hopes may be dashed. But the attempt is to be encouraged as long as our expectations are guided by realistic clinical and scientific goals.
‘Playing God’

An equally popular cliché is that of ‘playing God’, an accusation frequently levelled at scientists as they propose an increasing range of technological interventions into human life at its earliest stages. The term implies that scientists are going too far, that they are intruding into realms best left to God or ‘nature’, depending upon one’s beliefs. The concept has negative connotations and is selectively employed: at the beginning of human life (and possibly at the end), but not in most other areas of modern medicine. This selectivity suggests that the concept is being used to stigmatize procedures of which people disapprove, rather than serving as a serious tool of ethical analysis. If an area is deemed untouchable, by definition any intrusion into it is unacceptable; it is ‘playing God’. The label reflects hostility towards the procedures in question rather than presenting a clear rationale as to how they transgress divine boundaries. Each component of the ARTs has to be assessed on its merits. ‘Playing God’, if we continue to use this language, should be assessed with a view to determining the positive as well as the negative implications of procedures and the manner in which they are employed. By asking ethical questions, such as ‘In what way will this procedure benefit individuals?’ and ‘Do the advantages outweigh the disadvantages?’, we will be in a position to decide whether unknown and initially daunting procedures may be ethically acceptable. Rejecting a procedure by labelling it ‘playing God’ fails to advance ethical thinking, and is of no assistance to those who have to make the ultimate decisions. A whole genre of ethical responses is based on clichés like this, and their nebulous nature is such that opposition to many of the ARTs will continue unabated for many years to come. One result is dissonance between scientific developments in the ARTs and at least some of the ethical debate. This can best be seen by reference to the speed and magnitude
of scientific developments since the 1980s. At that time, IVF was a relatively simple procedure, and few of its associated developments were even on the horizon, including ICSI, which has revolutionized its efficacy. By 1989 more than 400,000 children had been born using IVF, a figure that is today closer to 3 million (Horsey, 2006). Also in 1989 PGD appeared on the scene (Handyside et al., 1989). Later, in 1998, ESCs made their first appearance with their derivation from human embryos (Thomson et al., 1998). Around the same time cloning burst into the limelight with the 1997 announcement of the birth of Dolly the sheep (Wilmut et al., 1997), accompanied by an avalanche of dire warnings predicting the end of humanity as we know it. Therapeutic or research cloning followed in its wake, with hybrids becoming an object of scientific attention in the last year or so. From an ethical standpoint it is pertinent to ask whether these developments have been matched by any increasing sophistication in ethical analysis. This is an important consideration because the point has been repeatedly made in the literature that one or other scientific procedure should not proceed until sufficient time has elapsed to allow our ethical thinking to catch up: a moratorium should be imposed before approving a certain scientific or clinical direction in order to allow time for the ethical issues to be resolved. An apt illustration is provided by proposals to conduct research on human embryos. However, what emerges is that those who opposed such research (and with it IVF procedures) in the 1980s generally oppose this research 20 years later. While there are exceptions, in that some groups have changed their position over this period, those who opposed this research on the grounds of ‘playing God’ and/or in some way entering forbidden territory remain implacably opposed, and there is no reason to expect that they will change over the next 20 years. The reality is that scientific developments in the ARTs will continue in opposition to the positions adopted by the ‘playing God’ cadre of ethicists. The only ethicists in a
position to influence scientific developments are those whose writings are informed by contemporary scientific concepts, and who are in active dialogue with scientific researchers. The negative connotations associated with ‘playing God’ scenarios fail to rise above the level of political slogans. While such ethical stances will be heard in the public arena and will exert influence in that arena, they will provide little input into ethical debate on how the ARTs should or should not be applied in the clinic, and whether they should or should not be supported by society.
The In Vitro Embryo

Regardless of which positions are adopted on embryonic status or embryo research, there is an intimate relationship between ethical and scientific perspectives, a relationship that is all too often ignored in bioethical discussions in reproductive biology. What can be done scientifically will continue to determine the ethical questions with which society has to grapple. Although the principles to be used in answering these questions may be unclear and equivocal, answers, even if temporary ones, have to be provided. The pace of scientific and clinical developments has left ethical thought struggling to keep up. The novel technologies associated with IVF have raised startlingly new issues for society to consider. The ability to isolate and maintain an embryo in vitro introduces a host of potential clinical interventions and research possibilities. In addition, it has defined a new category of being for consideration, the in vitro embryo. The preimplantation in utero embryo garners little attention in the case of natural fertilization, as it is generally undetectable until implantation occurs and pregnancy is established. The in vitro embryo, however, is a tangible entity, capable of being manipulated for reproductive ends as originally intended, but also for a variety of other
research possibilities. There is no way of avoiding the challenges and controversies engendered by the in vitro embryo, since it will continue to loom large in future scientific prospects and in the public imagination. The possibilities provided by embryonic stem cells, cloning, and related technologies have forced societies to ask what can be done with the human embryo. However, the answer to this question has proved both elusive and divisive, as opposing concerns are weighed within a shifting framework of pluralist thought. Yet it is a question that must be grappled with if we seek to remedy human suffering as much as possible while adhering to consistent ethical principles. As one looks ahead, it is difficult to see any way around ongoing divergence of opinion and even conflict. As long as fundamentally different views are held on the moral status of the embryo, there will continue to be fundamentally different ethical stances on which scientific directions are or are not allowable. There will continue to be those who argue that technology is running ahead of our moral ability to deal with technological developments. Others will contend that technological solutions to clinical and social problems are urgently needed and that this need overrides conventional ethical boundaries. In practice, clinical and scientific developments will continue apace and will be implemented, with the status of the embryo remaining in ethical limbo. The pragmatic nature of the clinical and scientific drivers will emerge as far more significant than the more theoretical discussions regarding the precise moral value to be ascribed to the human embryo. If this is a correct assessment of future decision-making, it points to the importance of striving for ethical consistency in public policy, so that human embryos are treated in comparable ways in IVF, PGD, artificial contraception, and embryo research (including the use of embryonic stem cells). Underlying all such procedures are health imperatives that provide the context for all
manipulations of embryos, the goal being to improve the health of human beings. In this context it would be ironic if human embryos were to be treated as of no moral significance, but it would be equally counterproductive to view them as of greater moral significance than future children and adults. They are part of a moral community, and that community should determine their role in the continued unfolding of the artificial reproductive revolution.
References

Centers for Disease Control and Prevention (2006). 2004 Assisted Reproductive Technology Success Rates: National Summary & Fertility Clinic Reports. Atlanta: US Department of Health and Human Services. Retrieved May 2, 2007, from http://www.cdc.gov/art/ART2004/index.htm

Eberl, J.T. (2000). The beginning of personhood: A Thomistic biological analysis. Bioethics, 14, 134-157.

Editorial (2007). An unwieldy hybrid. Nature, 447(7143), 353-354.

ESHRE PGD Consortium Steering Committee (2002). ESHRE Preimplantation Genetic Diagnosis Consortium: Data collection III (May 2001). Human Reproduction, 17(1), 233-246.

Goldstein, R.S., Drukker, M., Reubinoff, B.E., & Benvenisty, N. (2002). Integration and differentiation of human embryonic stem cells transplanted to the chick embryo. Developmental Dynamics, 225(1), 80-86.

Handyside, A.H., Pattinson, J.K., Penketh, R.J., Delhanty, J.D., Winston, R.M., & Tuddenham, E.G. (1989). Biopsy of human preimplantation embryos and sexing by DNA amplification. Lancet, 1(8634), 347-349.

Horsey, K. (2006, June 26). Three million IVF babies born worldwide. BioNews. Retrieved May
25, 2007, from http://www.bionews.org.uk/new.lasso?storyid=3086

Hui, E.C. (2002). At the Beginning of Life: Dilemmas in Theological Ethics. Downers Grove, IL: InterVarsity Press.

Jones, D.G., & Towns, C.R. (2006). Navigating the quagmire: The regulation of human embryonic stem cell research. Human Reproduction, 21(5), 1113-1116.

King, D.S. (1999). Preimplantation genetic diagnosis and the ‘new’ eugenics. Journal of Medical Ethics, 25, 176-182.

Klipstein, S. (2005). Preimplantation genetic diagnosis: Technological promise and ethical perils. Fertility and Sterility, 83, 1347-1353.

Lanzendorf, S.E., Boyd, C.A., Wright, D.L., Muasher, S., Oehninger, S., & Hodgen, G.D. (2001). Use of human gametes obtained from anonymous donors for the production of human embryonic stem cell lines. Fertility and Sterility, 76, 132-137.

Muotri, A.R., Nakashima, K., Toni, N., Sandler, V.M., & Gage, F.H. (2005). Development of functional human embryonic stem cell-derived neurons in mouse brain. Proceedings of the National Academy of Sciences of the United States of America, 102(51), 18644-18648.

The President’s Council on Bioethics (2004). Reproduction and Responsibility: The Regulation of Reproductive Technologies. Washington, DC: President’s Council on Bioethics.

Reubinoff, B.E., Pera, M.F., Fong, C.Y., Trounson, A., & Bongso, A. (2000). Embryonic stem cell lines from human blastocysts: Somatic differentiation in vitro. Nature Biotechnology, 18, 399-404.

Shannon, T.A., & Walter, J.J. (2003). The New Genetic Medicine. Lanham, MD: Rowman & Littlefield Publishers, Inc.
Steptoe, P.C., & Edwards, R.G. (1978). Birth after the reimplantation of a human embryo. Lancet, 2(8085), 366.

Templeton, S.-K. (2003, September 21). Spare embryos ‘should be donated to infertile couples’. The Sunday Herald. Retrieved May 25, 2007, from http://findarticles.com/p/articles/mi_qn4156/is_20030921/ai_n12584815

Thomson, J.A., Itskovitz-Eldor, J., Shapiro, S.S., Waknitz, M.A., Swiergiel, J.J., Marshall, V.S., & Jones, J.M. (1998). Embryonic stem cell lines derived from human blastocysts. Science, 282, 1145-1147.

Towns, C.R., & Jones, D.G. (2004a). Stem cells, embryos, and the environment: A context for both science and ethics. Journal of Medical Ethics, 30(4), 410-413.

Towns, C.R., & Jones, D.G. (2004b). Stem cells: Public policy and ethics. New Zealand Bioethics Journal, 5, 22-28.

Weiss, R. (2003, May 8). 400,000 human embryos frozen in U.S. The Washington Post. Retrieved May 25, 2007, from http://www.washingtonpost.com/ac2/wp-dyn?pagename=article&contentId=A27495-2003May7

Wilmut, I., Schnieke, A.E., McWhir, J., Kind, A.J., & Campbell, K.H.S. (1997). Viable offspring derived from fetal and adult mammalian cells. Nature, 385, 810-813.
Key Terms

Cloning (Somatic Cell Nuclear Transfer): Asexual reproduction, in which the nucleus (and
chromosomes) of an ovum (egg) is replaced with the nucleus of a somatic (body) cell of an adult. This causes the ovum to develop as if it had been fertilized, without the involvement of sperm.

Embryonic Stem Cells: Stem cells derived from the inner cell mass of early embryos (blastocysts).

In Vitro Fertilization (IVF): The process of fertilizing a (human) egg with a (human) sperm in vitro, in the laboratory and therefore outside the body of the woman; embryo transfer may follow, and the term ‘IVF’ is used to cover both the fertilization and the embryo transfer.

Preimplantation Embryo: A name given to the entire product of the fertilized egg up to the end of the implantation stage (fourteen days).

Preimplantation Genetic Diagnosis (PGD): A procedure devised to test early human embryos for serious inherited genetic conditions, with the subsequent transfer of only unaffected embryos to a woman’s uterus.

Primitive Streak: A thickening of the ectoderm which appears in the human embryo at fourteen to fifteen days’ gestation; often considered to represent the transition from a non-organized to an organized state during embryonic development; its appearance marks the end of the period during which research can be undertaken on embryos.

Surplus Embryos: Embryos created as part of fertility treatment that are left over once the treatment has finished; they are capable of development but were not implanted because more embryos were created than were ultimately required.
Chapter XL
Inter-Organizational Conflicts in Virtual Alliances

Joyce Yi-Hui Lee, University of Bath, UK
Niki Panteli, University of Bath, UK
Abstract

In this chapter we argue that even though conflict has been explored at an intra-organizational level, its effect and role at an inter-organizational level have remained unexplored. Yet, with the increasing number of virtual inter-organizational alliances, attention needs to be given to this issue. This chapter, hence, discusses the factors that contribute to different types of conflict at an inter-organizational level, which we term business strategic conflict and cultural conflict. More significantly, two conceptual frameworks describing the tendencies of business strategic conflict and cultural conflict are presented for understanding the course of global virtual alliances. The frameworks are designed to be a foundation for future empirical research. The ethical implications of doing such research are discussed.
Introduction

Building relationships with customers, suppliers and even competitors is encouraged by today’s dramatically changing business landscape. Firms partner with suppliers abroad and purchase products from overseas vendors for reasons of cost, quality and resource exchange (Reardon & Hasty, 1996), and because higher profits can be achieved (Chan, 1992).
Information technology (IT) plays a central role because spatial and temporal barriers can be overcome (Boudreau, Loch, Robey, & Straud, 1998). Proponents argue that virtual alliances offer an effective integration of expertise from independent organizations and reduce the time and cost of travel (Bal & Gundry, 1999; DeSanctis & Monge, 1998; Prasad & Akhilesh, 2002). However, virtual alliances are criticised for the potential challenges they create, such as a
lack of physical interaction and social contact, which is fundamental to business collaboration (Defillippi, 2002). Electronic communication occurs in a very different context from face-to-face (FTF) conversation. Clark and Brennan (1991) argued that there are six structural features for grounding FTF conversations: co-presence, visibility, audibility, co-temporality, simultaneity and sequentiality. Co-presence allows people to see what the others are doing in the same surroundings. Visibility means that people are able to see the others even when they are not working in the same place. Audibility allows people to hear the others’ voices, so that changes in sound and intonation can be recognised. Co-temporality means that utterances are received immediately as they are produced. Simultaneity allows all members to express and receive messages at the same time. Sequentiality means that people are involved in a continuous conversation and cannot get out of sequence. All members in FTF meetings are linked together without time lags. In contrast to FTF, computer-mediated communication (CMC) does not provide all of the above features. Email, for example, lacks these characteristics, yet it has become the dominant means of communication. However, conflict and dispute tend to be exacerbated when people communicate electronically rather than via FTF meetings (Friedman & Currall, 2003). The coordination of activities does not benefit all firms engaging in these technical networks; which firms survive the business competition depends on how well they can manage and organize business activities efficiently on the virtual platform. A virtual alliance embedded in a combination of CMC and infrequent FTF communication is far from a mere application of information technologies. Several organizational issues regarding communication and interaction through technology in the course of virtuality have drawn researchers’ attention (Boudreau et al., 1998; Maznevski & Chudoba, 2000; Panteli & Fineman, 2005a;
Schultze & Orlikowski, 2001). Communication in organizations is realised on the basis of common perceptions and knowledge. However, when communication takes place in virtual networks spanning different companies in different countries, the challenges become more complicated. When different languages, regimes, customs and related cultural characteristics meet in the collaborative network, misunderstandings and frictions increase. National cultural diversity, hence, has been seen as one of the most difficult hindrances in inter-organizational collaboration (Hofstede, 1984; Swierczek & Onishi, 2002). Conflicts over competitive and cooperative business strategies also challenge organizational alignment in practice. For instance, the two electronics giants, SONY and Samsung, agreed in 2003 to share patents in order to speed up the development of basic technologies (Frauenheim, 2004). To date such collaboration has not become common practice, and the two firms remain competitors in the consumer-electronics industry. Evidently, business strategic conflict adds to the complexity of business collaboration. Conflict has been studied at a number of levels, focusing on the definitions of conflict, the actors in the process and the elements that influence the conflict (e.g. Darling & Fogliasso, 1999; De Dreu & Weingart, 2003; Deutsch, 1973; Jehn, 1997; Jehn & Mannix, 2001; Thomas, 1976). Jehn (1997) identified relationship, task and process conflicts, which form the most common typology of conflict, but inter-organizational conflicts were not addressed. Despite the dramatic growth of global virtual alliances, little is known about conflicts at this level. Thus, this chapter aims to show that additional types of conflict, namely cultural conflict and business strategic conflict, are of relevance at the inter-organizational level. More significantly, the tendencies of these conflicts are illustrated to describe in depth the nature of conflict in inter-organizational collaboration.
The Significance of Virtual Alliances

The main function of a virtual alliance is similar to that of conventional cooperation, since both exist to achieve a common organizational goal and purpose. However, a virtual collaborative network operates across space, time, and organizational borders through webs of communication technologies. Thus, the global virtual alliance works across various countries and cultural boundaries. It becomes an effective integration of expertise that spans time, geographies, nationalities and cultures (Prasad & Akhilesh, 2002). DeSanctis and Monge (1998) viewed it as a common identity through which organizations come together in the minds of members. Moreover, virtual alliances between organizations entail a flexible approach that facilitates firms’ ability to adapt to global competition. A virtual alliance is a network consisting of independent organizations, groups or individuals that connect together to pursue a market opportunity (Panteli & Fineman, 2005a). Michael Dell pointed out the importance and benefits of virtual integration, which enhances the quality of customer service and the responsiveness of problem solving (Magretta, 1998). Indeed, the global virtual alliance offers many benefits for firms striving to handle an ever-changing working environment, but it also presents potential problems and pitfalls (Maznevski & Chudoba, 2000). According to Bal and Gundry’s (1999) investigation of virtual teaming in the automotive supply chain, benefits such as savings in travel time and cost were over-emphasized, whereas concerns about adopting virtual communication tools, for example the loss of social contact and the difficulty of managing information security, were often excluded. Obviously, virtual collaboration is far from a mere application of technology (Bal & Gundry, 1999). By means of IT, information, knowledge and experience are more easily accumulated and shared among members, because the development
of communication technologies has transformed information and knowledge into something separable, divisible and transportable (Cohendet, Kern, Mehmanpazir, & Munier, 1999). It is clear that virtual collaboration plays an important role in the shaping of business organizations. Most studies on the topic have focused primarily on its benefits. However, electronic communication needs to complement rather than totally replace conventional communication, as virtual settings may lack social interaction (Defillippi, 2002). The context of global virtual alliances contains not only multiple electronic communication technologies but also complex business, cultural and other organizational activities.
THE TYPOLOGY OF CONFLICT IN ORGANIZATIONS

Conflict is the core issue in global virtual alliances. Organizational conflict has been defined and classified from various perspectives that are important to understanding human behavior in organizations.
Definition of Conflict

Conflict has been seen as a significant element influencing the effectiveness and performance of organizational collaboration. Conflict is awareness on the part of the parties involved of discrepancies, incompatible wishes, or irreconcilable desires (Boulding, 1963). It has also been viewed as a bargaining process in which the objectives of the disputants must be negotiated (Pinkley, 1990). Simmel (1955) argued that conflict leads to cohesion, as political centralisation is required to deal with the emergency of conflict and bring people together. Coser (1956), however, claimed that conflict can also lead to anomie, and thus that external conflict does not necessarily improve cohesion. The effect of conflict is still being debated; accordingly, the
relevant studies have discussed different types of conflict in diverse ways. In earlier studies, conflict was categorized as 'substantive' and 'affective': 'Conflict rooted in the substance of the task which the group is undertaking, and conflict deriving from the emotional, affective aspects of the group's interpersonal relations' (Guetzkow & Gyr, 1954, p. 369). On the surface, the two types of conflict may produce the same manifestations; a product development schedule, for example, could be delayed by either. The two types arise, however, under different conditions: substantive conflict tends to occur when team members are task-oriented, striving and struggling to achieve specific tasks, whereas affective conflict degrades team communication and work efficiency. Coser (1956) drew a similar distinction for interpersonal conflict between goal-oriented conflict and emotional conflict. In general, conflict in teams concerns both emotional and group task issues, which may affect organizational functions under different conditions.
Implication of Conflict on Organizational Behavior

Conflicts have been classified in many ways. An insightful comparison between productive and destructive conflict was made by Deutsch (1973). Whether conflict turns productive or destructive depends largely on flexibility in organizations. In productive conflict, group members engage in a variety of activities and interactions, negotiating and communicating to reach an acceptable solution in a relaxed organizational climate. In contrast, groups working in destructive conflict have less flexibility because the team task and goal become narrower; groups try to defeat one another rather than interact positively with each other. Thomas (1976) provided a foundational review of conflict between groups in organizations
by suggesting two models of conflict: a process model and a structural model. The process model describes the internal dynamics of a conflict between two or more groups in an organization. The conflict unfolds as a sequence of five episodes: frustration, conceptualization, behavior, interaction and outcome (Thomas, 1976). In the frustration stage, one group becomes aware of another group's interference with its needs, requirements or opinions. In conceptualization, people form different definitions of and feelings about the conflict situation, which may affect group members' attitudes and behavior. In the behavior stage, group members' actions are shaped by their differing perceptions of the conflict. In the interaction stage, conflict among people and groups may be heightened or reduced. When the conflict ceases, the final outcome stage ends in agreement or enmity. Thomas (1976) also presented the structural model of conflict, which is concerned with the variables that shape conflict. He identified four such parameters: 'behavioral predisposition', 'social pressure', 'incentive structure' and 'rules and procedures' (Thomas, 1976, pp. 912-927). Behavioral predisposition covers the motivation, capability and personality of groups and their members. Social pressure covers discrepancies among cultural values, organizational norms and common interests. The incentive structure affects conflict through the parties' or members' relationships and through competitive issues and common problems. Rules and procedures refer to the decision-making process, such as decision rules and negotiation. Summarizing Thomas' research, 'conflict can be defined as an interpersonal dynamic which is shaped by the internal and external environments of the parties involved and this dynamic is manifested in a process which affects group performance either functionally or dysfunctionally' (Appelbaum, Shapiro, & Elbaz, 1998, p. 219).
Further, Appelbaum et al. (1998, p. 214) reviewed studies of organizational conflict and differentiated two strands of conflict theory: the 'cognitive perspective' and the 'interactional perspective'. From the cognitive perspective, researchers have studied organizational conflict by concentrating on how and what people think. Individuals are the fundamental drivers of conflict, so conflict management from the cognitive view centres on people's thoughts; and because each person's thinking is unique, the information and messages they communicate and share tend to follow different paths. From the interactional perspective, by contrast, 'action and behavior are a series of interconnected events' (Appelbaum et al., 1998, p. 214). Researchers in this tradition are concerned with how people respond to conflict as expressed in their behavior rather than with mental processes. Moreover, the influence of conflict on organizational functions has been discussed further. Amason et al. (1995, p. 21) contended that 'effective teams know how to manage conflict so that it makes a positive contribution. Less effective teams avoid conflict altogether or allow it to produce negative consequences that hamper their effectiveness.' This positions conflict as pivotal to team performance. The implication for team effectiveness is that 'cognitive conflict' improves team effectiveness while 'affective conflict' curtails team performance (Amason et al., 1995). The most commonly distinguished types are task conflict and relationship conflict (Jehn, 1992, cited in Jehn & Mannix, 2001). Relationship conflict is the recognition of interpersonal incompatibilities, such as dislike among group members and feelings of frustration and friction. Task conflict is defined as members having different viewpoints on their group task. Relationship conflict has consistently been found to be destructive to
organizational functions, such as team satisfaction, group performance and effectiveness (Amason & Schweiger, 1997), but task conflict may be beneficial (Jehn et al., 1999). In more recent work, Jehn (1997; Jehn & Mannix, 2001) proposed that conflict in work groups must be considered over time and categorised it into three types: relationship, task and process conflict. Process conflict means that group members have different opinions about how a task should be accomplished and proceed. Jehn (1997) found that process conflict interferes with task progress: arguments about who does what, and how much responsibility each member should bear, decrease the productivity of task accomplishment.
Intra- vs. Inter-Organizational Conflict

Conflict can be differentiated at six levels: intra-individual, inter-individual, intra-group, inter-group, intra-organizational and inter-organizational (Suliman & Abdulla, 2005). Intra-individual conflict occurs within a single person when he or she experiences competing goals and roles or when certain needs are impeded. Inter-individual conflict occurs when two people's purposes, opinions and actions disagree. Intra-group conflict refers to friction and discrepancies among members within a group. When conflict involves two or more groups with contradictory goals, issues and actions, it is termed inter-group conflict. Inter-organizational conflict can encompass all of the levels above: when conflict occurs between companies or organizations because of clashes over market share and business strategy, it is defined as inter-organizational conflict. Conflict has been studied at a number of levels, focusing on 'who' the actors in the process are and 'what' factors influence the conflict. Several studies have reviewed the relationship between conflict and organizational
functions, such as group effectiveness and performance (e.g. Darling & Fogliasso, 1999; De Dreu & Weingart, 2003; Jehn, 1997; Jehn & Mannix, 2001). However, this coverage is confined to the intra-organizational level and does not cross national boundaries. The importance of managing inherent or inevitable conflict has been noted, but there has been almost no published research directly exploring conflict at the inter-organizational level (Reid, Pullins, & Buehrer, 2004).
CONFLICT AT THE INTER-ORGANIZATIONAL LEVEL

Conflict is a broad concept which has been discussed theoretically from multiple views and perspectives, but the context of conflict at the inter-organizational level has not been sufficiently studied. Conflict is a pivotal issue not only within a single organization but also between organizations in different countries.
Inter-Organizational Network Relationship Development

Turnball et al. (1996) categorized three types of resources shared in business interaction: financial resources, network position, and companies' skills and technologies. Financial resources are a fundamental reason why companies work with each other, since running a business on a strong financial basis secures more resources for the firm. In terms of network position, a business network consists of companies' relationships and the rights and obligations of working in a specific industry. For example, if a company is good at running a retail store chain, that is its competitive advantage for tapping into a major customer market, and the company will enjoy more opportunities for resource exchange. Moreover, from the point of view of technological resources, a company's technologies and skills
comprise its abilities in product or service design, process and marketing. A company's inter-organizational relationships are built on the need for certain technologies held by its partners. For instance, company A may buy components from B: the components may be A's design, but B produces them. In this case, A's ability in product design and B's ability in manufacturing are both utilized in the collaboration. It is clear that 'companies interact with each other and develop relationships in order to exploit and develop their resources' (Turnball & Wilson, 1989, p. 47) based on different requirements.
Business Strategic Conflict

The exchange and sharing of resources and knowledge is seen as the basis for building inter-organizational relationships that confer competitive advantage in the market. There is an apparent contradiction here: the resources and knowledge being shared are the pillars of a company's competitiveness, yet sharing them with potential competitors puts the company at risk. Conflicts over competitive and cooperative business strategies also challenge organizational alignment; consider S-LCD, the joint venture (Frauenheim, 2004) formed by the two electronics giants, SONY of Japan and Samsung of South Korea. In 2003 the two companies agreed to share patents in order to speed up the development of basic technologies. Although the collaboration aims to steer clear of unnecessary conflict, such as time wasted resolving disputes over patent-rights infringement, such collaboration has not become common practice, and the relationship between the companies remains strictly competitive in the consumer-electronics industry. In terms of organizations' competitiveness, building a cooperative business strategy in a competitive environment is a thorny issue for
these organizations, because former collaborators may become competitors in the near future. In this study, such conflict is termed business strategic conflict. Business strategic conflict in the early stage of collaboration can be high or low, depending on the market positions the firms have targeted and the orientations they have been working toward (see Figure 1). Where firms are competitors in the same industries, cooperative activities will start with high business strategic conflict, which may reduce firms' willingness to share knowledge with their partners. The SONY-Samsung joint project addressed above is a good example: the relationship between the two companies is more competitive than cooperative, and disputes over and negotiations about sharing patent rights took a long time before the collaboration began. Business strategic conflict tends to subside in the middle stage, as a compromise will be reached if all members expect the alliance to be sustained. In this stage, moreover, the profits and outcomes of the collaboration are not yet in evidence, so the firms can work cooperatively. For instance, if the business collaboration is built on new product development, most companies will pay little attention to fragmentary ideas; as soon as the ideas are developed into concrete products, however, tort issues will
Figure 1. Tendency of business strategic conflict (level of conflict plotted over time, across the early, middle and final stages of collaboration)
rise. In addition, IT renders information and knowledge divisible and transportable, so accumulated knowledge and experience are easily shared among members through legal or illegal channels. Business strategic conflict therefore tends to rise again in the final stage. Nevertheless, this scenario can be avoided through well-handled consensus and agreement. Effective value integration in virtual alliances can be achieved on the condition that business strategic conflict is well managed. Researchers have linked conflict management to a company's ability to develop a set of network relationships. Cox (2004) discussed organizational conflict issues in business relationship alignment from the perspective of the commensurability of value captured between buyers and suppliers. Reid et al. (2004) examined interpersonal conflict in business-to-business sales interactions. Reardon and Hasty (1996) viewed international vendor relations through game theory. Wong et al. (1999) proposed that attitudes toward conflict management influence the long-term buyer-supplier relationship: conflict that is dealt with openly encourages cooperation in general and encourages suppliers to contribute more to product quality. Kim and Michell (1999) discussed relationship marketing in Japan, which is deemed to have contributed to the success of Japanese firms. In a closer relationship, buyers and suppliers share information, increase investment in new projects and reduce the direct and indirect costs of products. However, such a relationship is costly and difficult to maintain and manage, as it may reduce a customer's flexibility to change or shift to working with other suppliers (Kim & Michell, 1999). Research on inter-organizational collaboration, customer-supplier relationship management and supply chain management has been recognised as crucial to organizational
competitive advantage. Nevertheless, these studies have tended to focus on relationship alignment rather than on the significant context of conflict between different companies and countries. There is therefore a need to understand the context of conflict in inter-organizational virtual alliances, because it is key to how firms handle business collaboration by tightening or loosening buyer-seller relationships.
Cultural Conflict

While a project or product proceeds through inter-organizational collaboration, open-minded communication is expected; in practice, however, this expectation is far from being met. Communication in organizations rests on shared perception and knowledge. When communication and collaboration take place not within a single firm but across different companies and countries, the issue becomes more difficult. Conflict arises easily in multinational or multicultural environments, since languages, norms, personal styles and other cultural characteristics differ (Gorden, 1991). A study of Chinese-American differences in conflict avoidance (Friedman, Chi, & Liu, 2006) indicated a higher Chinese tendency to avoid conflict, whereas Americans deal with conflict more directly. Chinese people believe that direct conflict will hurt their relationships with other members and groups. The study also showed that Chinese respondents are more sensitive to hierarchy than Americans and tend to avoid conflict with people in higher positions; among Chinese, conflict may thus be heightened by hierarchy. Ohbuchi and Takahashi (1994) reported a similar pattern in their study of cultural styles of conflict management: Japanese participants used an avoiding strategy 48% of the time, Americans 22% of the time. From the perspective of organizational management, if managers want to encourage open discussion of
conflict, they need to shape the organizational climate so that direct disagreement does not damage relationships (Friedman et al., 2006). National culture is defined as a collective mental programming which distinguishes one nation from another (Hofstede, 1984). It has been highlighted as one of the most difficult obstacles in organization management (Hofstede, 1984; Swierczek & Onishi, 2002). In virtual settings, cultural conflict is a vital issue affecting the course of communication and collaboration. For instance, 'silence' usually expresses harmony and respect in Japan, but silence is not accorded the same meaning in western cultures (Gudykunst & Nishida, 1984; Panteli & Sockalingham, 2004; Sano & Yamaguchi, 1999). Non-verbal expression is thus a potential issue, and mixed languages may also cause misunderstandings. Particularly in virtual settings, interaction via technical tools has reshaped traditional face-to-face communication. Email is the main medium of communication (Panteli & Fineman, 2005a; Ziv, 1996). Text-based tools reinforce collaboration and interaction, but they also introduce the difficulties of communicating through language alone. As the complexity of product development rises, clashes and frictions increase. Conflict may beget creative ideas and inventions, but it can also have counterproductive effects. The tendency of cultural conflict (see Figure 2) is to be high in the early stage of collaboration but to become lower and stable over time. In inter-organizational collaboration, firms from different countries are grouped within the virtual network. Virtual teams are mostly composed of people with different languages, customs and communication styles who have worked under different regimes; the same behavior may carry different meanings. In the early stage, members are not familiar with each other and are not used to their partners' working styles, which results in a high level of cultural conflict. Low familiarity causes both relationship conflict and cultural conflict. In terms of relationship
Figure 2. Tendency of cultural conflict (level of conflict plotted over time, across the early, middle and final stages of collaboration)
conflict, team members often operate under politeness norms, so relationship conflict in the beginning phase is relatively low (Jehn & Mannix, 2001). By contrast, the blending of complex cultural characteristics is the main source of cultural conflict, so cultural conflict in the beginning stage of group interaction is high. When communication crosses national boundaries, cultural conflict will arise; in particular, text-based communication such as email can easily be misread and misunderstood (Mann & Stewart, 2000). As team members become more familiar with one another over time, cultural conflict tends to fall. Nevertheless, building such familiarity may take a long time, depending on how team members respond to cultural differences. In short, with globalization and the advancement of IT, inter-organizational collaboration in virtual networks, for purposes such as problem solving and value creation, will become more common, and conflict between organizations in different countries will increase accordingly. There are, however, challenges and ethical issues that need to be considered when pursuing this type of research.

ETHICAL CONSIDERATIONS AND CHALLENGES
Despite the significance of the issues outlined above, it must be recognised that problems may arise with regard to the accessibility of related primary data and of participants. Virtual project teams are often formed on the spot, for a specific period and with tight deadlines; as a result, access to the project and its participants during the project lifecycle may be restricted, constraining the researcher's insight into the project. Formal access needs to be granted to the researcher by management, who would be required to give access to project documentation and email archives. Access to the various project participants and team members is also important to give the researcher a comprehensive understanding of any problems that may have occurred during, and as a result of, virtual interactions. Further, participants' commitment to the research project is needed to gain insight into issues not communicated by email, such as conflicts and confidential matters, since it would be difficult, for instance, for a researcher to record telephone conversations. In the case of virtual organizing, there is another factor that influences data collection and potentially creates access problems: the medium used for communication. When virtual team members communicate by text-based email messages, their communication can be traced through email archives. When other communication media are used, however, such as videoconferencing or web-conferencing systems, such archives may not be available, making it difficult for the researcher to collect important project documentation. In such cases, therefore, we suggest that the researcher be given access to attend and participate in such computer-mediated communication and thereby observe team interactions.
CONCLUSION

The primary insight this chapter offers is that the dynamics of conflict in virtual alliances must be considered in the development of inter-organizational collaboration. Cultural conflict is seen as one of the most difficult obstacles in virtual alliances, and business strategic conflict is also a vital consideration for success in business collaboration. These inter-organizational conflicts illustrate the complexity of organizational conflict inherent in global virtual alliances. More significantly, this study extends previous conflict typologies, such as relationship, task and process conflict (Jehn, 1997; Jehn & Mannix, 2001), to the setting of global virtual alliances. A framework of conflict tendencies, encompassing the inter-organizational conflicts of business strategic and cultural conflict, is thus developed for understanding how conflict unfolds in global virtual alliances. These frameworks are conceptual: although grounded in prior research, they have not yet been tested directly, and we hope the chapter can provide guidance for future research.
REFERENCES

Amason, A. C., Hochwarter, W. A., Thompson, K. R., & Harrison, A. W. (1995). Conflict: An Important Dimension in Successful Management Teams. Organizational Dynamics, 24(2), 20-35.
Amason, A. C., & Schweiger, D. (1997). The Effects of Conflict on Strategic Decision Making Effectiveness and Organizational Performance. International Journal of Conflict Management, 5, 239-253.
Appelbaum, S. H., Shapiro, B., & Elbaz, D. (1998). The Management of Multicultural Group Conflict. Team Performance Management, 4(5), 211-234.
Bal, J., & Gundry, J. (1999). Virtual Teaming in the Automotive Supply Chain. Team Performance Management: An International Journal, 5(6), 174-193.
Boudreau, M., Loch, K. D., Robey, D., & Straud, D. (1998). Going Global: Using Information Technology to Advance the Competitiveness of the Virtual Transnational Organization. Academy of Management Executive, 12(4), 120-128.
Boulding, K. (1963). Conflict and Defense. NY: Harper & Row.
Chan, T. S. (1992). Emerging Trends in Export Channel Strategy: An Investigation of Hong Kong and Singaporean Firms. European Journal of Marketing, 26(3), 18-26.
Clark, H., & Brennan, S. (1991). Grounding in Communication. In L. Resnick, J. Levine & S. Teasley (Eds.), Perspectives on Socially Shared Cognition. Washington: American Psychological Association.
Cohendet, P., Kern, F., Mehmanpazir, B., & Munier, F. (1999). Knowledge Coordination, Competence Creation and Integrated Networks in Globalised Firms. Cambridge Journal of Economics, 23, 225-241.
Coser, L. A. (1956). The Functions of Social Conflict. NY: Free Press.
Cox, A. (2004). Business Relationship Alignment: On the Commensurability of Value Capture and Mutuality in Buyer and Supplier Exchange. Supply Chain Management: An International Journal, 9(5), 410-420.
Darling, J. R., & Fogliasso, C. E. (1999). Conflict Management across Cultural Boundaries: A Case Analysis from a Multinational Bank. European Business Review, 99(6), 383-392.
De Dreu, C. K., & Weingart, L. R. (2003). Task versus Relationship Conflict, Team Performance, and Team Member Satisfaction: A Meta-Analysis. Journal of Applied Psychology, 88(4), 741-749.
Defillippi, R. J. (2002). Organizational Models for Collaboration in the New Economy. Human Resource Planning, 25(4), 7-18.
DeSanctis, G., & Monge, P. (1998). Communication Processes for Virtual Organizations. Journal of Computer-Mediated Communication, 3(4).
Deutsch, M. (1973). The Resolution of Conflict. New Haven: Yale University Press.
Frauenheim, E. (2004). Sony, Samsung Complete LCD Plant. The Economist.
Friedman, R. A., & Currall, S. C. (2003). Conflict Escalation: Dispute Exacerbating Elements of Email Communication. Human Relations, 56(11), 1325-1347.
Friedman, R., Chi, S. C., & Liu, L. A. (2006). An Expectancy Model of Chinese-American Differences in Conflict Avoiding. Journal of International Business Studies, 37, 76-91.
Gorden, J. R. (1991). A Diagnostic Approach to Organizational Behaviour. Boston: Allyn & Bacon.
Gudykunst, W. B., & Nishida, T. (1984). Individual and Cultural Influences on Uncertainty Reduction. Communication Monographs, 51, 26-36.
Guetzkow, R., & Gyr, J. (1954). An Analysis of Conflict in Decision Making Groups. Human Relations, 7, 367-381.
Hofstede, G. (1984). Culture's Consequences: International Differences in Work-related Values. CA: Sage.
Jehn, K. A. (1997). A Qualitative Analysis of Conflict Types and Dimensions in Organizational Groups. Administrative Science Quarterly, 42, 530-557.
Jehn, K. A., & Mannix, E. A. (2001). The Dynamic Nature of Conflict: A Longitudinal Study of Intragroup Conflict and Group Performance. Academy of Management Journal, 44(2), 238-251.
Kim, J. B., & Michell, P. (1999). Relationship Marketing in Japan: The Buyer-Supplier Relationships of Four Automakers. Journal of Business & Industrial Marketing, 14(2), 118-129.
Magretta, J. (1998). The Power of Virtual Integration: An Interview with Dell Computer's Michael Dell. Harvard Business Review, 72-84.
Mann, C., & Stewart, F. (2000). Internet Communication and Qualitative Research: A Handbook for Researching Online. London: Sage.
Maznevski, M. L., & Chudoba, K. M. (2000). Bridging Space over Time: Global Virtual Team Dynamics and Effectiveness. Organization Science, 11(5), 473-492.
Ohbuchi, K. Y., & Takahashi, Y. (1994). Cultural Styles of Conflict Management in Japanese and Americans: Passivity, Covertness and Effectiveness of Strategies. Journal of Applied Social Psychology, 24(15), 1345-1366.
Panteli, N., & Fineman, S. (2005). The Sound of Silence: The Case of Virtual Team Organising. Behaviour & Information Technology, 24(5), 347-352.
Panteli, N., & Sockalingham, S. (2004). Trust and Conflict within Virtual Inter-organizational Alliances: A Framework for Facilitating Knowledge Sharing. Decision Support Systems, 39, 599-617.
Pinkley, R. L. (1990). Dimensions of Conflict Frame: Disputant Interpretations of Conflict. Journal of Applied Psychology, 75(2), 117-126.
Prasad, K., & Akhilesh, K. (2002). Global Virtual Teams: What Impacts Their Design and Performance. Team Performance Management, 8(5/6), 102-112.
Reardon, J., & Hasty, R. W. (1996). International Vendor Relations: A Perspective Using Game Theory. International Journal of Retail & Distribution Management, 24(1), 15-23.
Reid, D. A., Pullins, E. B., & Buehrer, R. E. (2004). Measuring Buyers' Perceptions of Conflict in Business-to-Business Sales Interactions. Journal of Business & Industrial Marketing, 19(4), 236-249.
Sano, N., & Yamaguchi, S. (1999). Is Silence Golden? A Cross-Cultural Study on the Meaning of Silence. In T. Sugiman, M. Karasawa & J. Liu (Eds.), Progress in Asian Social Psychology: Theoretical and Empirical Contributions. NY: Wiley.
Schultze, U., & Orlikowski, W. J. (2001). Metaphors of Virtuality: Shaping an Emergent Reality. Information and Organization, 11, 45-77.
Simmel, G. (1955). Conflict and the Web of Group Affiliations. NY: Free Press.
Suliman, A. M., & Abdulla, M. H. (2005). Towards a High-Performance Workplace: Managing Corporate Climate and Conflict. Management Decision, 43(5), 720-733.
Swierczek, F. W., & Onishi, J. (2002). Culture and Conflict: Japanese Managers and Thai Subordinates. Personnel Review, 32(2), 187-210.
Thomas, K. (1976). Conflict and Conflict Management. In M. D. Dunnette (Ed.), Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally College Publishing Company.
Turnball, P. W., Ford, D., & Cunningham, M. (1996). Interaction, Relationships and Networks in Business Markets: An Evolving Perspective. Journal of Business & Industrial Marketing, 11(3/4), 4-62.
Turnball, P. W., & Wilson, D. (1989). Developing and Protecting Profitable Customer Relationships. Industrial Marketing Management, 18(1), 1-6.
Wong, A., Tjosvold, D., Wong, W. Y. L., & Liu, C. K. (1999). Cooperative and Competitive Conflict for Quality Supply Partnerships between China and Hong Kong. International Journal of Physical Distribution & Logistics Management, 29(1), 7-21.
Ziv, O. (1996). Writing to Work: How Using E-mail Can Reflect Technological and Organizational Change. In S. C. Herring (Ed.), Computer-Mediated Communication: Linguistic, Social and Cross-Cultural Perspectives. Philadelphia: John Benjamins Publishing.
KEY TERMS

Communication: The sharing of information with others.
Computer-Mediated Communication: The exchange of data, information and knowledge across two or more networked computers.
Conflict: Disagreements, disputes and fighting between two or more groups of people.
Email: Electronic mail; the exchange of messages by telecommunication.
Face to Face: Meeting someone directly in the same place.
Inter-organizational Collaboration: Organizations cooperating for specific business purposes.
Karen A. Jehn: The researcher who identified relationship, task and process conflict as the principal types of intragroup conflict.
Virtual Alliance: A network consisting of organisations, groups or individuals that connect with one another through computer-mediated communication.
Chapter XLI
From Coder to Creator: Responsibility Issues in Intelligent Artifact Design
Andreas Matthias
Lingnan University, Hong Kong
ABSTRACT

Creation of autonomously acting, learning artifacts has reached a point where humans can no longer be justly held responsible for the actions of certain types of machines. Such machines learn during operation, thus continuously changing their original behaviour in ways that are uncontrollable by the initial manufacturer. They act without effective supervision and have an epistemic advantage over humans, in that their extended sensory apparatus, their superior processing speed and perfect memory render it impossible for humans to supervise the machine's decisions in real time. We survey the techniques of artificial intelligence engineering, showing that there has been a shift in the role of the programmer of such machines from a coder (who has complete control over the program in the machine) to a mere creator of software organisms which evolve and develop by themselves. We then discuss the problem of responsibility ascription to such machines, trying to avoid the metaphysical pitfalls of the mind-body problem. We propose five criteria for purely legal responsibility, which are in accordance both with the findings of contemporary analytic philosophy and with legal practice. We suggest that Stahl's (2006) concept of "quasi-responsibility" might also be a way to handle the responsibility gap.
INTRODUCTION

Since the dawn of civilization, man has lived together with artifacts: tools and machines he himself has called into existence. These artifacts he has used to extend the range and the quality of his senses, to increase or replace the power of
his muscles, to store and transmit information to others, his contemporaries or those yet to be born. In all these cases, he himself had been the controlling force behind the artifacts’ actions. He had been the one to wield the hammer, to handle the knife, to look through the microscope, to drive a car, to flip a switch to turn the radio on
or off. Responsibility ascription for whatever the machines “did” was straightforward, because the machines could not act by themselves. It was not the machine which acted, it was the controlling human. This not only applied to the simple tools, like hammers and knives, but also to cars and airplanes, remotely controlled planetary exploration vehicles and, until recently, computers. Any useful, traditional artifact can be seen as a finite state machine: its manufacturer can describe its range of expected actions as a set of transformations that occur as a reaction of the artifact to changes in its environment (“inputs”). The complete set of expected transformations is what comprises the operating manual of the machine. By documenting the reactions of the machine to various valid input patterns, the manufacturer renders the reader of the operating manual capable of effectively controlling the device. This transfer of control is usually seen as the legal and moral basis of the transfer of responsibility for the results of the machine’s operation from the manufacturer to the operator (Fischer & Ravizza, 1998). If the operation of a machine causes damage, we will ascribe the responsibility for it according to who was in control of the machine at that point. If the machine operated correctly and predictably (that is, as documented in the operating manual), then we will deem its operator responsible. But if the operator can show that the machine underwent a significant transformation in its state which was not documented in the operating manual (e.g. by exploding, or failing to stop when brakes were applied) then we would not hold the operator responsible any longer, and precisely for the reason that he did not have sufficient control over the device’s behaviour to be able to assume full responsibility for the consequences of its operation. With the advent of learning, autonomously acting machines, all this has changed more radically than it appears at first sight. Learning automata, as we will see, are not just another kind of machine, just another step in the evolution of artifacts from the spear to the automobile. Insofar
as responsibility ascription is concerned, learning automata can be shown to be machines sui generis, in that the set of expected transformations they may undergo during operation cannot be determined in advance, which translates to the statement that the human operator cannot in principle have sufficient control over the machine to be rightly held responsible for the consequences of its operation. Learning automata cause a paradigm shift in the creation, operation and evaluation of artifacts. In the progress of programming techniques from classic, imperative programming to declarative languages, artificial neural networks, genetic algorithms and autonomous agent architectures, the manufacturer/programmer step by step gives up control over the machine's future behaviour, until she finds her role reduced to that of a creator of an autonomous organism rather than the powerful, controlling coder that she still is in popular imagination and (all too often) in unqualified moral debate. In the course of this chapter, we will retrace the crucial points of this technological development. We will see how exactly the shift from coder to creator takes place and what this means for the problem of responsibility ascription for the actions of learning automata. It can be shown that the loss of control over the operation of such machines creates a "responsibility gap" which must somehow be bridged. Since humans cannot have enough control over the machine's behaviour to rightly assume responsibility for it, we will examine the question whether learning, autonomous machines could possibly themselves be ascribed responsibility for their own actions. We will discuss the prerequisites of machine responsibility and see that it does not necessarily mean that we will need to consider machines to be moral agents or even quasi-personal entities. Instead, responsibility ascription to a machine can be done without a shift in the metaphysical status of the machine, using a "functional" approach to responsibility ("quasi-responsibility,"
as put forth by Stahl, 2006): if we understand the concept of responsibility to be a social construct which serves particular social aims, then it seems feasible to apply it to machines, as long as those aims are served and the corresponding social goals fulfilled.
BACKGROUND

Learning Machines

We are surrounded by machines which act on our behalf, more or less autonomously, and quite often we are not even aware of it. All around us are things like electronic noses for banana ripeness determination (Llobet, 1999), collision avoidance systems in automatic submarine navigation (Schultz, 1991a), autonomously flying (Stancliff & Nechyba, 2000), game playing (De Jong, 1988), and web document filtering (Zhang & Seo, 2001) machines and programs, while elevators in high-rise office towers have long been controlled by adaptive, learning programs, which are able to predict and anticipate traffic patterns and to modify their own behaviour to better serve those patterns (Sasaki et al., 1996). In order to understand why some of these machines cannot be dealt with using traditional patterns of responsibility ascription, it will be necessary to look at how these machines are constructed and programmed. Unfortunately, there is still widespread confusion regarding this point, even in the philosophical literature discussing the status of machines and the ethics of man-machine interaction. Only by understanding the broad principles of the construction of learning automata will we be able to see where they differ from traditional artifacts and why they are capable of inducing a paradigm shift in the role of the engineer as well as in the question of responsibility ascription for the consequences of their operation.
In the present context, we can distinguish five main categories of computer software: traditional procedural programs, declarative programming, connectionist architectures (including reinforcement learning systems), genetic algorithms (including genetic programming), and autonomous agents (mobile and immobile). As we will see, each of these technologies represents one further step in the progressive loss of manufacturer and operator control.
Procedural Programming

In this "classic" model of software development, the programmer writes down her program as a series of commands to the machine. Each command triggers a corresponding, well-defined action of the computer system: the transfer of data from one memory location to another, arithmetic operations, comparisons of the contents of variables, input or output of data from or to the operator of the program. Programs written in Basic, C, Cobol, Fortran, Pascal or Java are all built according to this principle (see Figure 1). The reader of such a program can look at the code, "execute" it mentally step by step, and thus anticipate what the machine is supposed to do. The machine, too, does not do anything mysterious or incomprehensible: it just executes the commands one after the other in a well-defined order. An analogy for programming in an imperative language would be cooking from a detailed recipe. The cook does nothing except execute, step by step, the instructions in the recipe, until he reaches the end of it, which should coincide with the completion of the meal described therein. In this programming model, the programmer still has complete control over the machine's behaviour, and every action of the program is really an action of the programmer, as the program itself cannot deviate from the prescribed sequence of operations.
Figure 1. Imperative programming in BASIC and Java. It can easily be seen that the programs consist of commands to the machine (INPUT, PRINT, setFont) which are executed from top to bottom or in loops (DO LOOP UNTIL). This is the common structure of all imperative programming, regardless of the language used. (BASIC code from Wikipedia)
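Since the listing of Figure 1 is not reproduced here, the following minimal sketch (in Python rather than the BASIC or Java of the figure) conveys the same idea: every state change of the machine is an explicit command written down by the programmer.

```python
# A minimal imperative program: the machine does nothing but
# execute these commands, one after the other, in a fixed order.
total = 0
while True:
    line = input("Enter a number (blank line to stop): ")  # read input
    if line == "":
        break                      # leave the loop, as DO...LOOP UNTIL would
    total = total + int(line)      # arithmetic on the contents of a variable
print("Sum:", total)               # output the result
```

A reader can "execute" this mentally, line by line, and predict exactly what the machine will do.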
Declarative Programming

Declarative programming was widely introduced in the 1970s. It aims to change the role of the programmer, in that she no longer describes to the computer how to do things, but rather what the data are and what relations exist between them. A program in a declarative, logic-oriented programming language looks much like a series of statements in predicate logic notation, describing some part of the world. The interpreter of the programming language itself then applies the facts and relations given in the program in order to find solutions to users' queries (see Figure 2). As an analogy, we could imagine describing a family tree to a stranger, telling him how many sisters and brothers we have, their names and ages, and also the data of the parents and their siblings. After we have supplied the listener with this information, he is able to answer complex questions about the family tree (e.g. "how many sisters does Mary have who are older than twenty-five?") by using a process of inference from the known facts. He does this without the need for us to supply him with a step-by-step
recipe for executing this process of inference, because he has "built-in" rules for doing logical operations of this kind. A typical application of the declarative programming paradigm is the expert system, which consists of a knowledge base about a particular domain and an inference engine capable of drawing logical conclusions from the facts stored in the knowledge base. Expert systems are used in areas where the domain knowledge can easily be extracted from a human expert and converted into logical rules: for example, in medical diagnosis or in the diagnosis of motor faults. One of the earliest expert systems was DENDRAL, developed at Stanford in the late 1960s. DENDRAL was designed to infer the structure of organic molecules from their chemical formulas and mass spectrographic information about the molecule. It proved so successful that similar systems are still used in chemical laboratories today. Another famous system, MYCIN, also built at Stanford, was aimed at diagnosing bacterial infections of the blood. MYCIN could provide explanations for its reasoning and handle uncertain input data (Luger, 2005).
Figure 2. A declarative program in PROLOG. Here, the programmer does not issue commands to the machine. Instead, she states a series of facts, from which the machine will derive a program. (PROLOG code from Wikipedia)
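As a rough analogue of the family-tree example above (sketched in Python rather than the PROLOG of Figure 2, with invented names, since the figure's listing is not reproduced here), one can state the facts and leave the inference to generic routines; nowhere is the order of evaluation prescribed:

```python
# Facts: (parent, child) pairs -- the 'knowledge base'.
parents = {("tom", "mary"), ("tom", "ann"), ("mary", "john")}

def children(person):
    # Derived relation: children are read off the stored facts.
    return {c for (p, c) in parents if p == person}

def grandchildren(person):
    # Inference: compose the parent relation with itself.
    return {g for c in children(person) for g in children(c)}

print(grandchildren("tom"))  # {'john'} -- inferred, never stored explicitly
```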
For the present discussion it is only important to note that declarative programming and expert system knowledge bases deprive the programmer of one aspect of control over her program: control over the flow of execution. Unlike with imperative languages, the programmer of a declarative program does not know exactly how it will be executed, in what exact order the clauses and predicates will be applied by the machine in order for it to "prove" the users' queries. Also, as with any extensive system of formal logic statements, the programmer cannot anticipate every possible production from the given axioms and rules of inference, simply because of the sheer number of possible productions.
Connectionist Systems

Connectionist architectures are increasingly used in all domains where the representation of the domain knowledge in the form of clear and distinct symbols and formal rules is not possible. Artificial neural networks consist of hundreds or thousands of artificial "neurons", which are often designed as shown in Figure 3. The neuron, which is simulated in software, has a number of input channels (corresponding, in the biological metaphor, to its dendrites). The input signals are multiplied by adaptable weights (w1-w3), and the sum of the weighted inputs is then used as the parameter of a function which maps it to an output signal. The output signal is in turn used as input to other artificial neurons. Most often such artificial neurons are arranged into layers (see Figure 4). By adapting the weights at its "synapses", the neural network can learn. It is important to see that neural networks do not contain a symbolic representation of knowledge anywhere inside them. In order to learn, they are presented with pairs of input/output patterns, and they adapt the weights of their synapses so that each given input pattern leads to the desired output pattern. This is repeated for all pairs of input and output the system is supposed to learn. After training is completed, the artificial neural net is not only capable of mapping the learned input patterns to the desired outputs; it can also map previously unseen input patterns to the outputs of similar input patterns. This is an important feature: it enables the network to "recognize" and classify noisy and varying input correctly. Accordingly, artificial neural networks are mainly used for pattern recognition tasks: to recognize handwriting in palmtop computing devices, to identify faces of people in video images, to convert the scanned image of a page back to computer-readable text.
Figure 3. Basic structure of a single artificial neuron. The input values i1-i3 are multiplied by weights w1-w3 and the result is used as the parameter of a trigger function f(i). If the function "fires", a signal is output along the axon. This signal can, in turn, be used as an input for other artificial neurons.
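A minimal Python sketch of the neuron described in Figure 3 (the inputs, weights and threshold are arbitrary illustrative values):

```python
def neuron(inputs, weights, threshold=0.5):
    # Weighted sum of the inputs, passed through a step trigger function.
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s >= threshold else 0   # 1 = the neuron 'fires'

# Three inputs i1-i3 with weights w1-w3, as in Figure 3.
print(neuron([1, 0, 1], [0.4, 0.9, 0.2]))  # 0.4 + 0.2 = 0.6 >= 0.5 -> fires
```

Learning consists solely in adjusting the weights; nothing in them admits a symbolic reading.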
In order to "program" a neural network to do such a task, we just present it with a great number of examples, along with the result we wish it to output for every example (the "learning phase"). That is, if we wanted a neural network to learn to recognize hand-written input, we would present it with a few hundred examples of hand-written letters from different writers, and the network would automatically adjust its internal weights so that it is able to match the supplied input patterns to the characters they are supposed to depict. After a few thousand training rounds for each pattern (which would take no more than a few minutes to complete), the network would be able to recognize hand-written letters, although the programmer never told it how to achieve this result. The how is something the network figured out "by itself", by adjusting its internal weights during the learning phase so as to be able to match each input pattern to the desired output. The important thing here is to note that the behaviour of artificial neural networks is not programmed any more. Instead, it is taught by example. Inside the network there is no symbolic representation of any learned facts and rules, but only an ordered set of connection weights, which somehow encode the network's performance.
Figure 4. A three-layer artificial neural network
But those weights cannot be examined by a human in order for her to predict the network's future behaviour; they have no symbolic interpretation. In fact, the teacher of an artificial neural network can never fully predict how the network will process a future input. She can only train the network with as many different sets of inputs as possible, in the hope of covering the whole range of significant variations in the input patterns. But, and in this she is much like a teacher of humans, she cannot look inside the network to see the stored knowledge. As we do with children in school, the teacher of a neural network can only apply tests to the network and evaluate its answers to the test input patterns. Another often-used technology is reinforcement learning (Moriarty, 1999). Here the distinction between the phases of training and production use of the neural net is lifted. The network learns and adapts its reactions inside its final operating environment, by exploring available action alternatives in a trial-and-error fashion and optimising its own parameters according to the results. Thus, the exploration phase is an integral part of the design of the working machine and cannot be separated from it. This is necessary in highly dynamic environments, where
the system has to adapt to continuous change in order to achieve optimal performance. Imagine, for example, a system which controls the flow of traffic on highways. Such a system needs not only to adapt to changing traffic patterns in the course of a day, a week, and a month, but also to the varying availability of the roads themselves: floods, accidents and public works are only some of the causes of change in the infrastructure that is available to the system to work with. A statically trained program would not be able to adapt to such changing conditions while maintaining good performance. Reinforcement learning, on the other hand, introduces trial-and-error into the operating phase. It continuously tries out new alternative actions and evaluates the resulting performance of the system it controls. Good results "reinforce" the system's exploratory reactions; bad results trigger further exploration of other alternative actions. While reinforcement learning systems are able to control a wide range of highly dynamic situations well, they inevitably introduce erroneous decisions into the process of control. "This is quite contrary to the common, traditional understanding that technology, done correctly, must operate free of errors, and that errors are always the errors of the programmer, not of the programmed machine. Reinforcement
learning presents us, for the first time, with the necessity of errors as we know it from living systems: as a system feature, the precondition for learning and adaptive behaviour, and not merely a product flaw” (Matthias, 2004a).
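The trial-and-error principle can be conveyed by a toy sketch (an epsilon-greedy choice between two actions with made-up reward probabilities; this illustrates only the general scheme, not any particular deployed system):

```python
import random

reward_prob = {"route_a": 0.7, "route_b": 0.4}  # hidden environment (invented)
estimate = {a: 0.0 for a in reward_prob}        # the system's learned values
count = {a: 0 for a in reward_prob}

for step in range(10000):
    if random.random() < 0.1:                   # explore: a deliberate 'error'
        action = random.choice(list(estimate))
    else:                                       # exploit the current best guess
        action = max(estimate, key=estimate.get)
    reward = 1 if random.random() < reward_prob[action] else 0
    count[action] += 1
    # Incremental running mean of observed rewards per action.
    estimate[action] += (reward - estimate[action]) / count[action]

print(estimate)  # estimates approach the hidden probabilities 0.7 and 0.4
```

The exploratory choices are occasionally wrong by design; without them the system could never track a changing environment.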
Genetic Algorithms

In contrast to artificial neural networks, which model software after the general principles of operation of the nervous system, genetic algorithms (Holland, 1975) mimic the process of evolution through variation, genetic recombination and selection. The solution to a problem is represented as a chain of symbols that together comprises the "genome" of a virtual organism. If, for example, the problem is to find a way through a maze, a successful solution would be a chain of symbols that denote direction changes: right, right, left, right, left, and so on, until the maze has been successfully traversed (see Figure 5). The solution is arrived at by generating a big initial population of virtual organisms, each with its own random genome of direction changes. These organisms are then put into the virtual maze, where some of them get immediately stuck at the walls, some take a few steps ahead, and some might even take a successful turn or two.
Figure 5. Traversing a maze with a genetic algorithm. Left: the way through the maze. Right: two genomes recombine to produce an offspring which correctly represents the desired solution (a way through the maze). The individual genes denote directions: S: go straight, R: turn right, L: turn left.
The most successful organisms (the ones which take the greatest number of steps before they collide with a wall) "mate" with each other in such a way that their "genetic information" is combined in their offspring, often by a simple cross-over of parental genes: the chains of direction statements of the parents are cut at some point, and the offspring receive parts of the genetic information of both parents. This process is repeated, often over hundreds or thousands of simulated generations, until some individuals contain in their genome a string of symbols which represents one solution to the problem. Since the generations can be simulated at high speed, such a genetic algorithm might even arrive at a solution faster than a conventional programmer, who would first need to solve the problem at hand himself and then implement his algorithm in software. Genetic algorithms are robust: they can be run again and again when conditions change, and they will always evolve some solution to the problem at hand. Unfortunately, there is no guarantee that this will be an optimal solution. Like biological evolution, genetic algorithms can get stuck in relative optima: solutions that are relatively better than others, but not the globally best solutions. Applications of genetic algorithms include the evolution of robot behaviours (Schultz, 1994), the classification of sensory input (Stolzmann et al., 2000), and the navigation of autonomous vehicles (Schultz, 1991b). Genetic algorithms take the programmer another step away from the role of the coder: she never gets to see the actual code that is going to be executed, because the code evolves by itself under the constraints defined by the environment. The programmer only sets up this environment and generates a large number of random individuals, which are then left to themselves, to breed and evolve as biological organisms would do. Here we have clearly reached the stage where the programmer is but the creator of a simulated world, and not a coder anymore. She is no longer in the position to rightly assume responsibility
for the code that is to evolve inside the organisms she has created.
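A compressed sketch of the maze example of Figure 5 (in Python; the fitness function simply counts correct moves along one invented target path, standing in for "steps taken before colliding with a wall", and all parameters are illustrative):

```python
import random

TARGET = "SRSLLSR"   # stand-in for the one path through the maze
GENES = "SRL"        # S: go straight, R: turn right, L: turn left

def fitness(genome):
    # Number of correct moves before the first 'collision' with a wall.
    steps = 0
    for gene, move in zip(genome, TARGET):
        if gene != move:
            break
        steps += 1
    return steps

def offspring(a, b, mutation_rate=0.05):
    cut = random.randrange(1, len(TARGET))      # single-point cross-over
    child = a[:cut] + b[cut:]
    return "".join(random.choice(GENES) if random.random() < mutation_rate
                   else gene for gene in child)

population = ["".join(random.choice(GENES) for _ in TARGET) for _ in range(100)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):   # a way through has evolved
        break
    parents = population[:20]                   # the most successful organisms
    population = [offspring(random.choice(parents), random.choice(parents))
                  for _ in range(100)]

print(generation, population[0])
```

The programmer writes only the environment (the fitness and breeding rules); the string that finally solves the maze was never written by anyone.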
Autonomous Agents

Autonomous agents present no new features compared to the other technologies discussed, except that they are designed to act autonomously, out of the reach and supervision of the programmer. There are both embodied agents (like autonomous, mobile robots, which can physically move around in space and manipulate objects; see Adams, 2000) and pure information agents, which move in a virtual space created inside some computer infrastructure (e.g. Internet search engine crawlers). Two points concerning agents are interesting in the present context. First, agents are per definitionem designed to act, and in the course of their operation they must inevitably interact with other things, people, and social entities (laws, institutions, expectations). Second, agents which have a physical presence constitute a new category of machines: machines that can learn from direct interaction with a real environment and that can in return directly manipulate this same environment. These machines have (contrary, for example, to desktop computers) unmediated access to sensory impressions, their symbolic representation, and subsequent actions that lead to new sensory impressions, and they act in the same environment as humans do (Matthias, 2004a).
The Paradigm Shift: From Coder to Creator

Loss of Control

In the previous sections we have seen how successive stages of technological development lead to an increasing and inevitable loss of the programmer's control over the machine. Increasing,
because first, with declarative programming, control over the program flow is lost. Next, with artificial neural networks, the programmer loses the symbolic representation of her program. All that remains to be inspected is a multitude of synaptic weights, which can no longer be interpreted symbolically. The only way to measure the performance of a neural network (artificial or otherwise) is to test its reactions. Reinforcement learning brings about the loss of the computer's infallibility. Where up to now we could distinguish "correct" from "erroneous" programs, the technique of trial and error employed in reinforcement learning irrevocably blurs this distinction. Even a correct program cannot avoid errors, because errors are the price of its adaptability. Genetic algorithms and genetic programming bring about the final loss of the code as well as of the training situation: instead of programming, the software engineer deploys his program's seeds as randomized virtual organisms in a simulated primordial soup, from which he hopes, in time, to see solutions emerge. He never sees the code of these virtual organisms which represent his solutions; he has never programmed them. And autonomous agents, finally, are designed to act without supervision, so that an erroneous behaviour will be detected by the programmer only long after it has occurred and its consequences have been established, if at all. Since it seems as if the programmer voluntarily gives away control over the machine, one might be tempted to talk of "abdication" instead of "loss" of control. But we should not overlook the fact that using advanced software design technologies is not an arbitrary decision of the programmer, but a necessity forced upon her by the constraints of the problem domain she has to tackle. Some highly dynamic and complex problems cannot be handled other than by the use of neural networks and genetic algorithms. If the programmer wants to stay in business, she has no choice but to adopt these technologies. Thus, it might be said that the responsibility for
adopting them lies with society, which decides to demand that problem domains like street traffic control or the automated diagnosis of diseases be handled by computer programs. Since society demands that the programmer provide an automated solution, the programmer is arguably forced into a situation of reduced control, which justifies talking about loss rather than voluntary abdication of control.
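The claim about the lost symbolic representation can be illustrated with a minimal numpy sketch. It is ours, not the chapter's; the network size (2-4-1), learning rate, and iteration count are arbitrary choices. After training a tiny network on the XOR function, the resulting "program" is nothing but numeric weights, and the only evaluation available is to test its reactions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # XOR inputs
y = np.array([[0.], [1.], [1.], [0.]])                   # XOR targets
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)            # synaptic weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                      # plain gradient descent
    h = sigmoid(X @ W1 + b1)                # hidden-layer activations
    out = sigmoid(h @ W2 + b2)              # network output
    d2 = (out - y) * out * (1 - out)        # output-layer error signal
    d1 = (d2 @ W2.T) * h * (1 - h)          # back-propagated error signal
    W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(axis=0)

# All that remains to be inspected is a multitude of weights ...
print(W1, W2)
# ... and the only way to evaluate the network is to test its reactions:
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round())
```

Nothing in the printed weight matrices says "exclusive or"; the function the network computes can only be read off its behaviour.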
The Epistemic Advantage of Machines

Aside from the inevitability of error as part of the process of learning, modern computers often have another property that distinguishes them from the artifacts of the past: they have an epistemic advantage over their operator as well as their manufacturer, which renders effective supervision impossible. Let's have a brief look at how supervision commonly works. If I supervise a child or a dog, I can do so effectively only if my faculties of understanding the world around me and my capability to predict the outcome of certain actions are more developed than those of the child or animal I am supervising. This epistemic advantage I have over the supervised makes me a suitable supervisor, and at the same time gives me a greater degree of control over the interactions of the supervised with his environment than he would have if left to his own devices. By knowing more than the child, I am in a position to foresee harm both to the child and to the environment, and thus to avert it. This higher degree of control also bestows on me a higher responsibility: commonly parents as well as dog owners are held responsible for the actions of their children and pets, as long as they have been supervising them at the moment the action occurred. This is the same pattern we have seen before: a higher degree of control leads to a higher degree of responsibility. The child's reduced control over the environment means that it carries less (or no) responsibility for its actions.
A common misconception in the discussion about learning machines shows itself in the demand that they should be supervised more effectively, so that negative consequences of the machine's operation can be seen in advance and averted. In fact, such surveillance of the learning automaton is, in many cases, impossible, because it is the machine that, as a rule, has the epistemic advantage over the human operator. Control systems of airplanes and nuclear power plants, robots which assemble electronic parts with micrometre precision, deep space exploration devices, military self-targeting missiles, even internet search engines: these are only a few of the machines that cannot be controlled effectively by a human, because he is in a position of epistemic disadvantage: his senses don't permit him to measure the level of radioactivity inside a nuclear reactor, his speed of reaction is insufficient to control a modern fighter airplane, the precision of his eyes and fingers does not extend to the micrometre level, and he has no sense at all for deep space radiation. Even something as simple as searching the internet can no longer be done by a human unaided. If I enter a search string into my favourite search engine, I will possibly get thousands of hits, and I will never be able to check systematically whether these hits really are the best available, and whether I was presented with the full list of hits or a censored or suboptimal subset. The epistemic advantage of machines is in these cases rooted in their superior processing power, in their possession of sensory equipment humans don't have, and in their almost unlimited and unfailing memory. Because of this, there is no way a human could outperform the controlling computer of a fighter plane or a nuclear power plant: he is a slave to their decisions, because he is physically unable to control and supervise them in real time.
The Responsibility Gap

In the face of these developments, it doesn't seem meaningful (or just) to insist that a programmer
should be responsible for an autonomous, learning program's actions in the same way that a car driver is responsible for the consequences of driving his car. If technological development leads to an inevitable loss of control, and if, as we have seen, such control is a prerequisite of just responsibility ascription, then we must either accept that nobody is responsible for the actions of some classes of learning automata; or we must, alternatively, find some other target to ascribe responsibility to. Or we must, and this would be the ultima ratio, dispense with modern technology of the kind described above altogether. If we accept the existence of such a responsibility gap (Matthias, 2004a), we might ask whether the machines themselves might be suitable targets for responsibility ascription. Unfortunately, the philosophical discussion that has evolved around this topic has often been laden with confusion: confusion about the technical background of learning automata (and its implications) as described above, as well as confusion about what conditions are to be attached to the case of machine responsibility. Could a machine be legally liable? Under which conditions? Is eligibility for moral agency a prerequisite for responsibility? Are the conditions for moral responsibility the same as for legal responsibility, or different? Is personhood necessary for responsibility? Or even the plain fact of being biologically human?
Can Automata Be Ascribed Responsibility?

The Ladder to Personhood

As long as we talk about humans, it is safe to use the concepts of responsibility, culpability, rehabilitation, punishment, intentionality and personhood, which are based on anthropocentric views of what constitutes a (legal, moral, or social) person. It is only when we embark on the
analysis of the non-human agent that we need to clarify these concepts and to point out the implicit anthropocentrisms contained in them. In Matthias (2004b) we proposed that common conditions and phenomena of both the legal and the moral spheres of interaction can each be ascribed to a specific type of agent for which they make sense. We distinguished the “legal person”, the “moral person” (both of which should, in retrospect, have been named “agents” instead of “persons”), the “social person” and, finally, the biological human being, each with its own set of conditions that are to be met. In the present context of responsibility ascription, what we really mean when we talk about machine responsibility is legal responsibility, or the ability to be legally liable for damages caused by the operation of the machine itself. For our purpose of clarifying the legal liability of machine actions, it is not necessary to talk about moral agency or the possible metaphysical personhood of machines (for the difficulties of such an undertaking see, for example, Johnson, 2006). Our aim in the context of the present discussion is to solve the arising problems of responsibility ascription to machines inside the framework of civil law, and not to discuss retribution, resentment or punishment in relation to nonhumans. This would be a completely different field of inquiry, and one which is not likely to lead to usable, practical results any time soon. What concerns us are damages caused by the machine’s operation: and these fall squarely into the realm of civil law, which operates mostly without metaphysical assumptions about the actors involved. Now, to be legally responsible, one does not have to be human. Many legal systems have a concept of corporate entities (associations, companies, states) which can act and carry responsibility for their actions independently of the human beings that take part in the corporate structure. In international relations, for example, countries have rights and obligations that are different from those of the humans who represent them
at any particular moment. It is crucial to see that the volition of the corporate entity, and thus the responsibility for its actions, is not at all derived from the volitions or the individual humanness of its members. Often the members of a corporate entity (e.g. a stock corporation) are themselves not individual humans at all, but other corporate entities: companies or even governments. The volition of a corporate entity (for example of the government of a democratic country) cannot be derived from any mental states of any individual human agent (e.g. its prime minister); instead it is the product of a complex, rule-controlled process, which involves a majority of the individuals participating in the corporate entity (for example in the form of voting), and thus can only be attributed to the corporate system as a whole. Still, every system which is to be ascribed a legally responsible status has to exhibit certain properties. Exactly which these are is not clear, since national laws differ significantly on this point and, additionally, as a rule do not bother to specify explicit conditions for legal responsibility. Instead, the borders between full, partial and absent responsibility are often deliberately left unspecified in order to be drawn individually for each case by the courts. A few points are, nevertheless, clear: a person under the heavy influence of drugs is less responsible for his actions, as is someone who is, for medical reasons, unable to understand the complexity of a situation or its implications. In German law, persons can be conditionally able to assume responsibility for their actions: for example, compulsive gamblers can have reduced responsibility for their actions where gambling is concerned, but be fully responsible otherwise. If we try to locate specific conditions for (legal) responsibility inside the framework of contemporary analytical philosophy, we find five main points, which seem to be necessary as well as sufficient for it (Matthias, 2004b):
• Intentionality (Dennett, 1987): The requirement that an agent must be capable of intentional action is clearly a prerequisite for responsibility. Thus a person is not held responsible for the consequences of involuntary movements, be it a reflex or an action undertaken while sleepwalking.
• Being receptive and responsive to reasons (Fischer & Ravizza, 1993 and 1998) is another prerequisite for legal responsibility. Legally responsible agents must be receptive to reasons for or against a possible action, and they must respond to sufficient reasons against an intended action with a change in their plans and their subsequent behaviour. Rational understanding of the causal relationships between actions and results, as well as the capability of the agent to act according to his understanding, are basic requirements for legal responsibility in law.
• Having second-order volitions (Frankfurt, 1971 and 1993): For example, in German law a person can be legally incapacitated "because he is in a pathological way controlled by the will of others" (Diederichsen, 1984, 150). It is not only freedom to act which the law requires of a legally responsible agent, but freedom of the will, the ability to choose the goals she desires to pursue. Only an agent who has the freedom to act according to her desires and the freedom to choose those desires freely has all the freedom "that it is possible to desire or to conceive" (Frankfurt, 1971, 16).
• Being (legally) sane (Wolf, 1987) means that the agent's second-order volitions must be "sane," that is, they must be compatible with the kinds of second-order volitions other agents in the same social context have. Effectively this requirement puts a third-order instance above the level of second-order volitions, thus exercising social control over the choice of an agent's second-order volitions. Most legal systems recognize this requirement: an agent can be perfectly intentional and reasons-responsive and still be insane, in that his basic value system, which underlies all his individual choices, is not compatible with the basic values of the surrounding social context. Such an agent would generally not be held fully responsible for his actions.
• Being able to distinguish between intended and merely foreseen consequences of actions (Dworkin, 1987): Legal systems often distinguish "intended" from merely "foreseen" consequences of actions, and they presuppose the agent to be able to do so as well; see, for example, the common as well as important distinction made between negligent and wilful homicide in most legal systems.
Though some of these properties have been proposed as being by themselves sufficient for (moral or legal) responsibility, we maintain that this is due to the implicit understanding that we are talking about humans, or at least about systems that are partially similar or functionally analogous to humans. But now we are (in principle at least, or as a thought experiment) able to construct artificial systems which exhibit some of those properties without the others. And thus we see that each of them is necessary for full responsibility, but none of them is, in itself, sufficient. Now, do machines satisfy these five criteria? Intentionality and reasons-responsiveness are exhibited (at least for limited domains of action) by every chess computer. Dennett (1987) has shown how the concept of intentionality can be separated from obscure metaphysical requirements (e.g. those involving mental states, an approach that we find in Johnson (2006) and which is doomed to fail because of the complexity of the underlying mind-body discussion, which cannot be dismissed en passant). Expert systems have a long tradition of stating their reasons for particular choices. Also the concept of foreseen vs. intended consequences is one every planning component in an artificial intelligence system
exhibits. More difficult to grasp technologically is the concept of sanity, because it presupposes a huge amount of knowledge about what kinds of volitions are commonly in use in a particular social context. Such knowledge will probably not be available before the problem of common-sense and everyday knowledge for computers is successfully solved (for first steps in this direction see the various publications of the CYC project, e.g. Lenat, 1995).
Social Responsibility without Personhood

In a very promising approach, Stahl (2006) tries to further simplify the problem of responsibility ascription to sub-personal agents, e.g. computers. He argues that the concept of responsibility need not necessarily be coupled with questions of agency or personhood, but can be seen as fulfilling a social role:

An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves particular social aims. If this is the main aspect of responsibility then the question whether computers can be responsible no longer hinges on the difficult problem of agency but on the possibly simpler question whether responsibility ascriptions to computers can fulfil social goals. (Stahl, 2006, 205)

He then goes on to propose a concept of "quasi-responsibility," which is analogous to (full, metaphysical) responsibility in its social function, but not confined to humans or persons alone. Of course, such a concept would be useless if it did not interface at some points with our real social environment, and thus become part of causal chains which involve "non-quasi" entities, and here lies one of the points that still have to be clarified: if I assign "quasi-responsibility" to a machine, and it causes harm or damage, this damage is obviously not a
"quasi-damage". Accordingly, I cannot expect the machine to undertake only a "quasi-compensation" for the damage caused, or the legal system to provide me with "quasi-laws" to apply to the machine. But if we drop the prefix "quasi" from the consequences, then we must ask ourselves how much "quasi-responsibility" actually differs from the "real" thing.
Future Trends

Increasing use of learning automata in all areas of life will lead to a further widening of the responsibility gap. At the same time, increasing computing power widens the epistemic gap, thus rendering effective supervision of the machine even more unrealistic. Already, one particular internet search engine is responsible for much of the world's knowledge and information filtering, while at the same time, due to the amount of information processed, it cannot be effectively controlled by a human any more. Faster processing speed, coupled with increasing amounts of information, makes human evaluation of a machine's decisions (at least in real time) impossible. At the same time, increasing computing power will enable us to deploy more and more products based on artificial neural net technology, which, by its very architecture, defies effective (symbolic) evaluation of the information stored within. Already we are surrounded by autonomously deciding and acting technology. In areas like traffic control, automated disease diagnosis, and the automated control of flying vessels and submarines, as well as numerous others, we cannot but employ self-learning technology in ever increasing amounts, thereby giving away more and more human control over the machines' decisions and actions. As machines act autonomously and without effective supervision, we will observe an increase in cases where responsibility for the consequences of such actions cannot be justly assigned to a human.
Up to a point, the resulting damage can be distributed to a broad base of a social group's members (for example, through unspecific insurance compensations or government intervention). But as such cases become more numerous, eventually a solution will have to be found which permits us to deal with them inside the regular conceptual framework of civil law (as opposed to ad hoc compensations after the damage has occurred). It remains to be seen whether ascribing a limited kind of (quasi-)responsibility to the acting automaton itself may provide a solution to the problem which is both just and practical to handle in everyday legal transactions.
Conclusion

Some classes of learning, autonomous machines are artifacts sui generis, and they cannot be compared to conventional machines where the ascription of responsibility for the consequences of their operation is concerned. There are various ways to deal with the resulting responsibility gap. If we want to avoid embarking on metaphysical discussions of machine personhood (which tend to involve aspects of the notoriously difficult mind-body problem), we can limit the question at hand to legal issues (where nonhuman responsible entities are already present: corporations, states) and ask whether machines could be ascribed legal responsibility for their actions, leaving the moral issue aside. We find that legal responsibility requires at least intentionality, reasons-responsiveness, second-order volitions, sanity, and the ability to distinguish between foreseen and intended consequences of one's actions. None of these requirements seems to be in principle out of the reach of future machines, but they are certainly not fulfilled at present. Another approach would be to assign "quasi-responsibility" to machines, a subset of "true" responsibility, as proposed by Stahl.
The exact merits of such a solution must be examined further.
References

Adams, B., Breazeal, C., Brooks, R.A. & Scassellati, B. (2000). Humanoid Robots: A New Kind of Tool. IEEE Intelligent Systems and Their Applications: Special Issue on Humanoid Robotics, 15(4), 25-31.

De Jong, K.A. & Schultz, A.C. (1988). Using Experience-Based Learning in Game Playing. Proceedings of the Fifth International Machine Learning Conference, Ann Arbor, Michigan, June 12-14, 1988, 284-290.

Dennett, D.C. (1978a). Intentional Systems. In Brainstorms: Philosophical Essays on Mind and Psychology. MIT Press, 3-22.

Dennett, D.C. (1978b). Conditions of Personhood. In Brainstorms: Philosophical Essays on Mind and Psychology. MIT Press, 267-285.

Dennett, D.C. (1987). The Intentional Stance. Cambridge, MA, and London: MIT Press.

Diederichsen, U. (1984). Der Allgemeine Teil des Buergerlichen Gesetzbuches fuer Studienanfaenger. 5th extended ed. Heidelberg: C.F. Mueller jur. Verlag.

Dworkin, G. (1987). Intention, Foreseeability, and Responsibility. In Schoeman, F. (ed.), Responsibility, Character, and the Emotions: New Essays in Moral Psychology. Cambridge: Cambridge University Press, 338-354.

Fischer, J.M. & Ravizza, M. (1993). Perspectives on Moral Responsibility. Ithaca and London: Cornell University Press.

Fischer, J.M. & Ravizza, M. (1998). Responsibility and Control: A Theory of Moral Responsibility. Cambridge: Cambridge University Press.
Frankfurt, H. (1971). Freedom of the Will and the Concept of a Person. Journal of Philosophy, LXVIII, 5-21.
Sasaki, K., Markon, S. & Makagawa, M. (1996). Elevator Group Supervisory Control System Using Neural Networks. Elevator World, 1.
Frankfurt, H. (1993). Identification and Wholeheartedness. In Fischer, J.M. & Ravizza, M. (eds.), Perspectives on Moral Responsibility. Cornell University Press, 170-187.
Schultz, A.C. (1991a). Using a Genetic Algorithm to Learn Strategies for Collision Avoidance and Local Navigation. Proceedings of the Seventh International Symposium on Unmanned Untethered Submersible Technology. University of New Hampshire Marine Systems Engineering Laboratory, September 23-25, 1991, 213-215.
Holland, J. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press.

Johnson, D.G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195-204.

Lenat, D. (1995). CYC: A Large-Scale Investment in Knowledge Infrastructure. Communications of the ACM, 38(11).

Llobet, E., Hines, E.L., Gardner, J.W. & Franco, S. (1999). Non-Destructive Banana Ripeness Determination Using a Neural Network-Based Electronic Nose. Measurement Science and Technology, 10, 538-548.

Luger, G. (2005). Artificial Intelligence: Structures and Strategies for Complex Problem Solving. 5th ed. Addison-Wesley.

Matthias, A. (2004a). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175-183.

Matthias, A. (2004b). Responsibility Ascription to Nonhumans: Climbing the Steps of the Personhood Ladder. In Ikäheimo, H., Kotkavirta, J., Laitinen, A. & Lyyra, P. (eds.), Personhood: Workshop Papers of the Conference "Dimensions of Personhood" (August 13-15, 2004). University of Jyväskylä, Publications in Philosophy 68.

Moriarty, D.E., Schultz, A.C. & Grefenstette, J.J. (1999). Evolutionary Algorithms for Reinforcement Learning. Journal of Artificial Intelligence Research, 11, 199-229.
Schultz, A.C. (1991b). Using a Genetic Algorithm to Learn Strategies for Collision Avoidance and Local Navigation. Proceedings of the Seventh International Symposium on Unmanned Untethered Submersible Technology. University of New Hampshire Marine Systems Engineering Laboratory, September 23-25, 1991, 213-215.

Schultz, A.C. (1994). Learning Robot Behaviors Using Genetic Algorithms. Proceedings of the International Symposium on Robotics and Manufacturing, August 14-18, Washington, DC.

Stahl, B.C. (2006). Responsible Computers? A Case for Ascribing Quasi-Responsibility to Computers Independent of Personhood or Agency. Ethics and Information Technology, 8, 205-213.

Stancliff, S.B. & Nechyba, M.C. (2000). Learning to Fly: Modeling Human Control Strategies in an Aerial Vehicle. Machine Intelligence Laboratory, Electrical and Computer Engineering, University of Florida. Retrieved June 3, 2007 from http://www.mil.ufl.edu/publications.

Stolzmann, W., Butz, M.V., Hoffmann, J. & Goldberg, D.E. (2000). First Cognitive Capabilities in the Anticipatory Classifier System. Illinois Genetic Algorithms Laboratory Report No. 2000008. University of Illinois, Urbana.

Wolf, S. (1987). Sanity and the Metaphysics of Responsibility. In Schoeman, F. (ed.), Responsibility, Character, and the Emotions: New Essays
in Moral Psychology. Cambridge: Cambridge University Press, 46-62.

Zhang, B.T. & Seo, Y.W. (2001). Personalized Web-Document Filtering Using Reinforcement Learning. Applied Artificial Intelligence, 15(7), 665-685.
Key Terms

Artificial Neural Network: A networked structure, modelled after a biological neural network and implemented in software on a computer. Artificial neural networks enable computers to handle imperfect (noisy) data sets, which is essential for robust performance in advanced recognition and classification tasks (handwriting recognition, weather prediction, control of complex movements in robotic bodies).

Autonomous Agents: Programs or programmed devices which act autonomously, without the supervision of a human, often in a remote location (e.g. on a remote server or another planet). Since such agents are by definition required to operate without supervision, attributing responsibility for their actions to a human is especially difficult.

Declarative Programming: A programming paradigm where the programmer does not specify the machine's behaviour in detail. Instead, she describes the problem to be solved in a kind of predicate logic calculus, leaving the details of the inference process to the machine (see the sketch following these definitions for a contrast with imperative programming).
Epistemic Advantage: In general, the fact that an agent has by design better access to information about the world than another agent. In particular, the fact that some machines are in the privileged position of accessing data about the environment that humans cannot access (for example due to a lack of suitable sensory equipment, e.g. for gamma radiation or ultraviolet light); or that they are able to process information at a speed which transcends the speed of human thought, thus enabling them to handle in real time situations which humans cannot handle without machine aid (e.g. controlling a nuclear power plant, a low-flying fighter airplane, or a subtle orbital manoeuvre in space).

Genetic Programming: A programming paradigm where the program "evolves" as a string of symbols out of other strings of symbols. The "evolution" process mimics the mechanics of biological evolution, including operations like genetic cross-over, mutations, and selection of the "fittest" program variants.

Imperative Programming: A programming paradigm where the programmer describes the machine's actions step by step, thus keeping full control over the machine's behaviour when executing the program.

Learning Machine: A machine which modifies its behaviour after deployment, through adaptation to the environment in which it operates. Since its behaviour at any moment depends not only on the initial programming but also on the environment's inputs, it is in principle not predictable in advance.
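To make the contrast between the declarative and imperative paradigms defined above concrete, the following toy sketch is our illustration, not the chapter's: the facts, the rule format, and the naive forward-chaining routine are invented for the example, and a real declarative program would use a logic language such as Prolog.

```python
# Imperative: we tell the machine step by step how to check a property.
def mortal_imperative(name, humans):
    for h in humans:              # explicit control flow written by the coder
        if h == name:
            return True
    return False

# Declarative (simulated): we state facts and a rule, and let a generic
# inference routine, not the programmer, decide how to apply them.
facts = {("human", "socrates")}
rules = [(("human", "X"), ("mortal", "X"))]   # if human(X) then mortal(X)

def infer(facts, rules):
    """Naive forward chaining: apply rules until no new facts appear."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        for (pred, _var), (head, _) in rules:
            for (p, arg) in list(derived):
                if p == pred and (head, arg) not in derived:
                    derived.add((head, arg))
                    changed = True
    return derived

print(mortal_imperative("socrates", ["plato", "socrates"]))  # True
print(("mortal", "socrates") in infer(facts, rules))         # True
```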
Chapter XLII
Historical Perspective of Technoethics in Education

J. José Cortez
Syracuse University, USA
Abstract

Fundamental democratic principles and values that guide our social relationships have been important concerns in the evolution of this nation's system of formal public schooling. With its increased use of and reliance on advanced technologies, education faces some fundamental challenges that have potentially far-reaching implications for educational institutions, professional teaching strategies and practices, and student learning. This chapter explores the topic of technoethics as an applied field of ethics and research, viewed from a historical perspective of education in the United States and its embrace of technology. The underlying intent is to inform the readers' understanding of the basic concepts of the common good, citizenship, and democratic values that are the underlying precepts associated with the history of public schooling in the United States. Additionally, the author discusses the increasingly critical need for educators to address the social and ethical dilemmas associated with new technological developments and their application to educational settings.
Introduction

The world of technology has undergone transformations that have profound implications for the moral and ethical, as well as the legal and professional, dimensions of educational practice, and for society in general. Just as the inventions of the automobile, printing press, and telescope, and the advent of space exploration, led to revolutionary
changes in culture and societal values that were not predicted at the time of their discovery and initial use, technology has had similar impacts upon society. Milson (2001) states that technology—defined, in a broad sense, as tools to accomplish objectives—can also "incorporate the interrelationship of the tool with life, society, and the environment. Arguably, technology has the ability to impact and transform our conception of
key democratic values, such as liberty, equality, and justice. The emergence of telecommunications, for example, is causing us to reconceptualize liberty as it relates to freedom of speech, freedom of assembly, and privacy" (p. 88). Are printed and online pornography unethical, or are they expressions of free speech and artistic expression? Are the attempts of the Department of Homeland Security in the United States to monitor telecommunications transmissions, Internet Web sites and e-mail, and long-distance phone calls justified in the name of national security, or are these measures an affront to freedom of speech and privacy? What are the implications of parental controls of cable television in the home and Internet access restrictions at public libraries? What threats to intellectual property rights does the Internet pose? These are but several of the many ethical issues that confront society today in its embrace of technology. Likewise, they are important ethical concerns that impact educational institutions, processes, and professional practices.
Ethics Defined

Emmans (2000) defines ethics as a set of moral principles or values that are built upon existing mores and result from a society's ideas and actions. They do not precede the society and the situation in which it finds itself; rather, they are a reaction to the situation, ever changing, and reflect the current state of affairs of the society. Ethics differ from morals in that ethics is a "philosophical reflection on moral beliefs and practices…a conscious stepping back and reflecting on morality" (Hinman, 1984). Additionally, Muffoletto (2003) asserts, "ethics is based upon values grounded on some notions of and for the common good, or what is perceived as good and right (the truth) for individual and community actions" (p. 62).
Background

Increasingly, education has become a critical area in society with a growing need to address the moral and ethical dilemmas associated with new technological developments: its manifold uses of and reliance on advanced technologies create fundamental challenges that have potentially far-reaching implications for educational institutions, professional teaching strategies and practices, student learning, and research methods and processes. However, this is not to say that education as a field, and educational technology as a discipline, have not addressed ethical issues throughout their history in the United States; quite the contrary. Ethics and morals have nearly always been central to education's history, reform movements, and philosophical thought: fundamental democratic principles and values that guide our social relationships have been important concerns in the evolution of this nation's system of formal public schooling. This chapter explores the topic of educational technoethics as an applied field of ethics and research, viewed from the historical and philosophical perspectives of education in the United States. This view of education is important: history opens the mind to the world around us, and through it we can better understand the events, issues, and crises in today's social world. History provides explanations of how our present society has evolved and why it has adopted certain institutions, roles, practices, and ideologies. From this history, we can gain a sense of perspective and a realization of the complexities and impacts brought about by social and technological changes:

• Pedagogy, media and technology, and the means of social communication are posited as requiring ethical analyses in order to be used in a suitable and coherent way in education.
• Advances in educational media and technology are surveyed and juxtaposed in this chapter with an increasingly critical need for educational administrators and teacher-practitioners to effectively address the social and ethical dilemmas associated with new technological developments and their application to increasingly diverse educational settings.
Education's Role in the Making of a New Nation

Formal schooling among the European colonists in the United States focused on the religious beliefs and mores brought here from their homelands. Early schools were religiously affiliated, and moral training was the purpose of formal schooling, long before the country's current system of state-supported public schooling emerged (Urban & Wagoner, 2000; Altenbaugh, 2003).
A Learned Citizenry: Early Concepts of the Common Good, Citizenship, and Democratic Values

The notion of and for the common good emerged early in the history of education in the United States, evolving from the Catholic and Protestant colonists' emphasis on religious education, and from Benjamin Franklin's attributes and values ascribed to the self-made man and useful education in the 1700s, to Thomas Jefferson's call for education for republican citizenship. Altenbaugh (2003) states:

Many of the Founders and early national leaders expressed concern about the ability of the family to successfully prepare citizens for the new United States. They saw a need for mass education that would be systematized and consistent, that is,…serve a nation-building purpose. (p. 45)
For Benjamin Rush, Robert Coram, and Samuel Smith—and later, Jefferson—the particulars were political; republican citizenship was to be effected in large measure through education and grounded in religion. Without this, Rush asserted, there would be no virtue; and without virtue, no freedom (Urban and Wagoner, 2000; Gutek, 2001). Jefferson's view of education took on importance in terms of its role in contributing to the self-advancement of young men in society; and, in his legislative Bill for the More General Diffusion of Knowledge, he conceptualized basic education as being essential for citizenship and a public investment for self-government and human happiness at both the individual and social levels. "Jefferson envisioned education as a necessary foundation for people who governed themselves democratically through representative institutions" (Gutek, 2001, p. 162). "An essential consideration in the minds of those who faced the momentous task of establishing a new nation…the study of history, he [Jefferson] contended, would improve the citizens' moral and civic virtues and enable them to know and exercise their rights and duties" (Urban and Wagoner, 2000). Education was to be general and utilitarian for citizens and provide them with the moral training and civic skills needed to participate in government—gradually evolving from the notion of moral training as the purpose of formal schooling, to one that saw formal schooling as a means of ensuring a civil and stable society in which citizens willingly obeyed the laws (Altenbaugh, 2003). Tyack (1967) states:

After the American Revolution, numerous theorists like Thomas Jefferson, Benjamin Rush, and Noah Webster argued that without a transformed educational system the old pre-Revolutionary attitudes and relationships would prevail in the new nation. Rush said that a new, uniform state system should turn children into 'republican machines.' Webster called for an 'Association of
American Patriots for the Formation of an American Character,' strove to promote uniformity of language, and wrote a 'Federal Catechism' to teach republican principles to school children. Jefferson wanted to create state primary schools to make loyal citizens of the young. In addition, many early theorists wanted a national university to prepare and legitimate elites for leadership. (p. 83-119)

Ethics in education essentially represented a philosophical reflection of the moral beliefs and practices of the times in the United States; often throughout the history of education, it served particular societal needs and interests at given moments in the history of our society, owing to its relevancy for fostering character and civic virtue in children and adults.
Education in the Age of Reason

Jefferson's life coincided with the Age of Reason, or Enlightenment, in Europe, and hence many of his ideas regarding the role of education have their origins in this period. Gutek (2001) states:

The Enlightenment stands out as the era that unleashed the essential intellectual and cultural trends that created a modern worldview….One of the major trends of the Enlightenment was a new way of thinking about nature and the place of a human being in a natural universe…Education was important in that, in the minds of Enlightenment philosophes (philosophers), it prepared people to live according to the principles of nature. (p. 110)

Among the various currents of thought and reform that originated during the Enlightenment was the notion that, for true progress to occur, it would be necessary to free schools from the domination of the established religions and to discontinue looking toward ancient
texts for guidance. Nature was considered the key to understanding and shaping reality—an outward world that humans experienced through their senses and that was orderly in that it followed certain patterns. Therefore, educators merely needed to identify and use the activities that were appropriate to a particular stage of human development: by observing children, it was possible to identify and plot the course of development (Gutek, 2001). Horace Mann, a leader of the Common School Movement in the United States during the 1800s, espoused an educational philosophy that emphasized that children should be educated to respect ethical values and that schools had the duty to prepare hard-working men and women to be industrious and productive, which he viewed as positive moral values. He believed that the well-being of society depended on literate, diligent, and responsible citizens, and that the common school was the cultural agency that transmitted U.S. cultural heritage and democratic values to students through a study of literature and history—instilling a basic morality, but stopping short of religious sectarianism (Gutek, 2001).
Education in the 19th Century and Progressive Reforms

Likewise, Johann Heinrich Pestalozzi, a Swiss educational innovator, developed a natural method of education that brought significant changes to teaching and learning in both Europe and the United States. Pestalozzi asserted that with natural education, all children could develop into morally respectable, economically productive, and socially useful adults. The affective side of human nature, emotional growth, was as important as cognitive development. Neither could proceed in isolation from the other; however, it was important that an emotionally secure environment come first. Pestalozzi conjectured that the development of psychological-emotional security originated in the immediate closeness of the family and then
extended to the more remote and abstract social, economic, and political relationships of the larger community (Walch, 1952). This aspect of Pestalozzi's educational philosophy was based on affective human relationships, a form of moral education reflecting his view of how people developed emotional security and wellness: "Love originated in the satisfaction of basic human instincts rather than in abstract moral principles. Moral and ethical behavior, too, originated in and was a product of the same developmental pattern" (Gutek, 2001, p. 140). Later, among the progressive reformers of the late 19th and early 20th centuries, John Dewey viewed education as building character in learners, a form of social intelligence. Democracy for Dewey was both a means and an end to the building of a good and just society: it was through education that he sought to develop strategies and methods for training students to become disciplined and socially responsible adults—conscientious citizens concerned with the rights of others and the common good, equipped with the knowledge and technical skills to be productive members of society in the context of our modern industrial world. Democracy, thus, becomes an ethical ideal rather than a mere practical accommodation. The social good is joined to a citizen's individual development: the purpose of education is to develop the human capacity to judge and act intelligently, given appropriate enabling conditions (Dewey, 1993; 1997). Dewey also began writing a code of ethics for the teaching profession. "The first national code of ethics for the education profession was adopted by the National Education Association (NEA) on July 1, 1929" (Yeaman, 2005, p. 15).
Opening the Classroom to a Larger World through Media and Technology

The Progressive Era in education also represents a logical starting point for a discussion of how
and where media and technology, and means of social communication, can be posited as requiring ethical analyses. Progressivist reformers challenged the formalized, mechanical, and lifeless instruction described by critics in the 19th century: pedagogical progressives called for learning environments and instructional practices that focused on student interests, opened the classroom to the larger world, and involved students in activities that had intellectual and social outcomes. Another branch of the progressive movement in education addressed administrative and organizational needs and was anchored in the enthusiasm for scientific management and the quest for efficiency (Urban and Wagoner, 2000). A common element in the reforms advocated by both branches of the reform movement is what we now refer to as technology. For most people today, any mention of technology in the classroom brings to mind the use of some electronic device, computerized system, or the Internet. When a teacher uses computers in the classroom for instructional purposes or browses the Internet for new teaching materials, that teacher is using what is commonly referred to as educational technology. In reality, educational technology is not limited to these examples, nor is educational technology a new discipline within education: the modern tools of this field are simply the latest developments in a field that is as old as education itself. Various forms of classroom technology existed prior to the Civil War, the most common of which were educational apparatuses such as timepieces, maps and globes, slates and blackboards, textbooks, and the abacus or numeral frames. These visual apparatuses or aids, in their first role in elementary and secondary schools, served as instructional aids for teachers and also included field trips to museums. Such aids included museum exhibits, charts, photographs, illustrations, lantern slides, and maps (Heinich, Molenda, and Russell, 1989).
As early as 1830, educators began to recognize that teaching apparatuses could make invaluable contributions to the educational field. "Most of these apparatuses were extremely simple and required very little engineering in their manufacture. Although there was knowledge as to the existence of these items, very few found their way into the average classroom of the times" (Anderson, 1961, p. 19). Teachers were slow to accept and integrate reform and new teaching tools into the classroom—prompting reformers like Horace Mann and Henry Barnard to require their schools (in Massachusetts and Connecticut, respectively) to report regularly on apparatuses acquired and their use. In fact, in his Second Annual Report as Secretary of the Board of Commissioners of Common Schools in his state, Henry Barnard was critical of his state for not utilizing teaching apparatuses more effectively (Anderson, 1961).
Emergence of Technology and Technoethics in Education

This situation gradually improved following the Civil War and the emergence of industrialization across the nation, though the unprecedented technological advances of that era had little impact on education. "Few new instructional implements were introduced. It was mainly a period of polishing and streamlining what already existed, though mass production methods did lower prices and extend the benefits of existing apparatuses to more areas" (Anderson, 1961, p. 34). Nevertheless, two inventions of the late 19th century—photography and film—spurred the development of educational technology and, concurrently, created moral and ethical dilemmas associated with new technological developments that would later have far-reaching implications for educational institutions, professional teaching strategies and practices, student learning, and research methods and processes. The 20th century would, in turn, set the stage for the
invention of radio, television, and, later, computer and Web-based technologies.
Photography

We owe the name photography to Sir John Herschel, who first used the term in 1839, the year the photographic process became public in Europe. The word is derived from the Greek words for light and writing (Scharf, 1974). Not all people welcomed this exciting invention; some pundits viewed it in quite sinister terms. During the late 19th century and early 20th century, following the inventions of photography and film, some artists saw in photography a threat to their livelihood, and some even prophesied that painting would cease to exist. A newspaper report in the Leipzig City Advertiser stated:

The wish to capture evanescent reflections is not only impossible... but the mere desire alone, the will to do so, is blasphemy. God created man in His own image, and no man-made machine may fix the image of God. Is it possible that God should have abandoned His eternal principles, and allowed a Frenchman... to give to the world an invention of the Devil? (Leggat, 1995, p. xx)

Statements and letters by John Ruskin in the 1870s and 1880s follow a similar theme—photographs are false, a matter of ingenuity, whereas art is one of genius. He argued the artist must use extreme caution when using photography, though it may serve some of his needs. "Virtually denying his earlier aesthetic principles, late in life…he lamented the 'microscopic poring into' the visible world of nature which the invention of photography served to intensify" (Scharf, 1974, p. 100). Portrait miniaturists were the first group of artists to suffer from the effects of photography. By 1863, a number of miniaturist artists, seeing the writing on the wall, turned to photography for their livelihood, while others cashed in on the fact that the photographic images of that time
were in monochrome, and found a renewed sense of creativity by coloring them in. Not long after Louis-Jacques-Mandé Daguerre developed his daguerreotype process—a photographic process using the camera obscura and unfixed photographic images to obtain the most naturalistic rendering of contour and tone possible—the critic Jules Janin "praised the daguerreotype for its usefulness to the artist 'who does not have the time to draw'. Janin was one of the first to [see], in the new medium, the great possibility of accurately reproducing earlier works of art. This could hardly have been reassuring to engravers, though elsewhere they were confidently told that they would benefit from the discovery of photography as once scribes had from printing" (Scharf, 1974, pp. 26-27). Even noted painters of this time tinkered with the new technology and used it in their work: Antoine Claudet's miniature paintings and portraits made extensive use of the daguerreotype technique in photography, as did the painters Ingres and David Octavius Hill. Manet favored the daguerreotype in his portrait of Méry Laurent and his etchings of Edgar Allan Poe. In fact, his famous painting The Execution of Emperor Maximilian is an interesting example of the marriage of painting and photography—the head of Maximilian alone had been painted in a conventional manner after a photograph. Manet used several photographs, in addition to news releases, to authenticate his paintings—weeks after the actual execution. Such it was with the technology of photography: it made it easier and quicker to reach and capture beauty, but it did not simplify the process of possessing or appreciating it. However, because of the stigma attached to artists who were known to rely on photography, its use was generally concealed and many photographs were destroyed afterwards (Scharf, 1974). These developments, in turn, represented the first of many social challenges and ethical dilemmas associated with this new technological development: ethical concerns about the ability
to manipulate the photographic data, changing the whole meaning of what counts as "truthful" in visual art.
Film

Albert S. Howell, a farm boy from Michigan who had studied engineering at night, and Donald J. Bell, a movie projectionist, invented and developed one of the first precision film projectors in 1907. This development accelerated the pace of technological advances in education, ushering in the visual instruction movement and the emergence of educational technology in the form of visual media and, later, audiovisual communications (Cortez, 2002). The use of film in education was brought about by the success and popularity of illustrated lectures on the Lyceum and Chautauqua lecture circuits, and by the impetus provided by the effectiveness and extensive use of training films during World War I. The first films for instructional use were usually theatrical films of general-purpose or entertainment interest. Later, in the 1920s, as the motion picture industry began to expand, it was thought that theatrical films had educational value as well. Otherwise, most instructional films at the time of World War I and afterwards were for industrial, government, and military training. The earliest forms of educational film were the newsreel, travelogues, and scientific motion pictures (Dent, 1969; Urban and Wagoner, 2000). The first school use of motion pictures was in 1910 in the City of Rochester (NY) public schools, where the school board adopted films for regular instructional use (Newby et al., 2000). Sound recording was integrated with film during the 1920s (Heinich et al., 1989, p. 18). At the start of the 20th century, public schools had established organizations and classroom practices that would be familiar today: schools were divided into grades and separate classrooms; within each classroom, rows of desks faced the teacher and blackboard. Courses of study set the
boundaries and expectations regarding what was to be taught and when. Learning was teacher-centered, and students were passive learners. During the 1920s, teachers were beginning to carry larger teaching loads and, in many instances, had grown dissatisfied with older teaching methods—seeing them as slow, ineffective, and often wasteful. As a result, many educators gradually became receptive to the faster, more direct teaching process provided by instructional motion pictures (Dent, 1969). As educators gradually began to recognize the significance of instructional films, an important new movement in American education—known at that time as visual instruction—developed from the mainstream of instructional technology during the period 1918-1924 (Saettler, 1990). The visual instruction movement was significant during the Progressive Era in education, and film became a satisfactory solution for both pedagogical reformers and efficiency experts. Cuban (1986) states:

Because film was viewed as real and concrete, a medium for breathing reality into the spoken and printed word that stirred emotions and interest while taking up far less instructional time—promoters and school officials joined progressive reformers in introducing motion pictures in classrooms… Classroom use of films became a symbol of progressive teaching practices, just as the microcomputer is today. In the 1920s and 1930s, the black window shades, silver screen, and 16mm projector lent an aura of modernity and innovation to classrooms. (pp. 11-12)
Radio and Television

In 1920, the Radio Division of the U.S. Department of Commerce was established, and it began to license commercial and educational radio stations. Classroom broadcasting to enhance instruction spread rapidly during the decades preceding World War II (Cuban, 1986). During the 1920s and 1930s, radio became the focus of a number of educational experiments. Among these was the Ohio School of the Air in 1929, launched in a joint effort by the State of Ohio, Ohio State University, and a Cincinnati radio station. The growth of instructional radio occurred primarily during the decade 1925-1935, during which the first courses in radio education were established at colleges and universities across the country. However, by the late 1930s and the advent of World War II, educational radio had begun to decline. As educational radio declined, instructional television brought about a general expansion of instructional technology (Heinich et al., 1989). The first non-experimental educational television station was launched by Iowa State University in 1950 (Newby et al., 2000). Molnar (1969) attributes the following statement to Edward R. Murrow: "This instrument [television] can teach, it can illuminate, yes it can even inspire. But it can only do so to the extent that humans are determined to use it to these ends. Otherwise it is merely lights and wires in a box" (p. 59). By the 1950s, television had favorably impacted the communications movement and taken center stage as a medium in education. During the late fifties, instructional television began to receive serious attention from educators and educational broadcasters, and during the 1960s courses geared to this medium were taught by either open- or closed-circuit television, on educational or commercial stations, and in educational institutions (Saettler, 1968).

The Emerging Role of Audio-Visual Media in Education
During the post World War II period and through the early years of the Cold War, the National Defense Education Act (NDEA) of 1958 played a key role in the emergence of instructional technology and its wide spread application in the nation’s schools and institutions of higher education (Saettler, 1990).
As a result, the educational media field began to shift its focus from hardware to the role of media in learning. The instructional theories or models of the postwar period initially focused on the communications process, and the authors of these models indicated that in planning for instruction it was necessary to consider all of the elements of the communications process and not just the medium, as many in the audiovisual field had a tendency to do (Heinich et al., 1989). These models helped to move audiovisual specialists to consider all of the components involved in the communications process. As a result, audiovisual studies began to be conceptualized as something broader than just media: “A convergence of audiovisual sciences, communications theories, learning theories, and instructional design began” (Newby et al., 2000, p. 249). By the end of the 1950s, however, this convergence had failed to take root and gain widespread acceptance in education. Despite the significant implications of communications theory and research, and the fact that the educational circumstances of the time were favorable to the application of mass media or a technology of mass media, in practice little use was made of the convergence of communications and the audiovisual movement. Saettler (1990) states: Aside from the usual resistance of educational institutions and teachers to new ways or means of communicating, the primary reason that educational technology did not incorporate communications within its conceptual framework to any great degree is that behaviorism began to exert its influence in the early 1960s, just about the time that communication was beginning to have some impact on educational technology. (p. 277)
Electronic Media and the Advent of Technoethics in Education

Over the past 30 years there have been major efforts to rediscover and reexamine the ethical dimension of technology across the sciences, social sciences, and humanities. Education represents a prime example of a critical area with a growing need to deal with moral and ethical dilemmas associated with new technological developments: its increased reliance on new technology creates fundamental challenges that impact professional teaching practices, institutions, and processes. The challenge of preparing students for citizenship, as an example, becomes increasingly complex as our notions of the common good and our understanding of key democratic values are transformed in the face of technological advancements and their complexities.
Educational Technology and Technoethics Defined

For some, a definition of educational technology should place emphasis on the tools (media and equipment) used in the classroom; for others, it is the process of applying the tools in teaching practice. For purposes of simplicity and consistency in this chapter, and as a basis for comparing other perspectives, the term educational technology as defined by Roblyer and Edwards (2000) is used: “Educational technology is a combination of the processes and tools involved in addressing educational needs and problems, with an emphasis on applying the most current tools: computers and their related technologies” (p. 6). This definition argues that educational technology should be viewed as “processes” and “tools”, a notion that has relatively widespread support within the educational technology community. Hawkridge (1991) and Gustafson & Branch (2002) have conceptualized the areas of competence
of educational technology into a concept map and instructional development models, respectively—taking into account instructional design, means of education, assessment of audiovisual means, selection of technological resources, technical production, and evaluation. Missing from both the concept maps and the ID models is an ethical area or phase. Technoethics is an applied field of ethics and research focusing on the special problems and dilemmas in society posed by science and technology. Muffoletto (2003) adds that the “concept of ethics within the field of educational technology is a conceptual construct that legitimates certain behaviors and interpretations, while reifying particular social structures and ways of knowing” (p. 62).
Technology and the Transformation of Ethics in Education

Educational technology is, in itself, neither ethical nor non-ethical. Rather, it is the behavior of those in the field, as practitioners or users, that is deemed either ethical or non-ethical. When considering the role and application of technoethics in modern educational contexts in the United States, I submit we first begin with the fundamental question that Milson (2001) poses: “What are the effects of emerging technologies on the fundamental democratic principles and values that guide our social relationships?” As the 21st century evolves, several major issues that concern ethical uses of technology in education have increasingly confronted educators.
Technological Advances

Advances in technology are compelling society to re-examine how technology is viewed. Alain de Botton (2002) categorized photography as a lower expression of the desire for possession of Beauty, implying that photography cannot match drawing or writing in encouraging the
development of one’s eye for detail. He suggests photography is used as a substitute for conscious seeing (thus implying its inadequacy). Yet technology such as photography can invite the enlightened mind to discover new and exciting forms of artistic expression, ever expanding the definition of ‘art’. Cannot newer forms of visual media serve in such a role (in their place in time and context), rather than being limited to simply copying works of art, as the efficient and accurate methods of reproduction and advocates for the democratization of image-making that Ball and Smith (1998) depict them as being? Do not photography and film have descriptive potential in documentary forms of ethnographic studies? Do they not capture a record of material reality, pictorial facts, or just simply a visual postcard, another Kodak moment—press the button and the camera does the rest? Are not these and other questions answered by the nature of the inquiry? In what ways can technology stand in the way of collecting authentic data? (I have in mind not just distorting “reality,” but blocking the “real” perspective altogether.) Audio or video recording might also be suspect in creating the semblance of authenticity while simultaneously expressing realism.
Digital Imagery

The world of digitized images, as an example of technological advancement, has undergone transformations that have profound implications for the moral and ethical, as well as the legal and professional, dimensions of image-producing practices. A key issue concerns the debate that has persisted in mass communications and educational media since the invention of photography: the validity and authenticity of photography, as a media technology, in the expression, representation, and interpretation of reality. Specifically, are visual media simply for establishing some semblance of authenticity or are they genuine means for
expressing the material realism of the experiences that are observed and documented—Emmison and Smith’s (2000) ‘the seen and the observed’? Digital imaging changes the whole meaning of what is “real” and/or “truthful”. (I distinctly recall learning that the photographic images from space made available by NASA to the mass media for public consumption are altered digitally from their original radar or infrared images.) When data can be both manipulated and considered authentic, how can researchers trust the data? Are realism and truth important in all cases? Is it ever legitimate to manipulate digital images to make a point, if one acknowledges the changes that are made? Doesn’t the act of manipulating data in imaging reflect the current state of our technology and culture? What will this mean to researchers of the future, looking back at our time in history? While I understand the inherent ethical concerns about the ability to digitally manipulate photographic data, this makes me also wonder about similar concerns when audio taping and transcribing. I suspect that photography does not have a monopoly on this. In education, digital manipulation crosses a broad spectrum of art, media, education, business, design, engineering, and journalism curricula. Most notably (and in its simplest form), there is the emergence of digital manipulation technologies now available to anyone with a PC, camera phone, or scanner and modestly priced software. Photographic truth, therefore, has the potential to become an elusive, often subjective, concept. At present, the ability to manufacture fraud, plagiarize, and thus violate intellectual property rights practically exceeds the ability of educational administrators and teachers to detect these problems.
The Internet

Likewise, there are issues associated with Web-based technologies: society’s proclivity and
rationale for sharing via the Internet contribute to increasingly frequent instances of plagiarism, theft, and violations of intellectual property rights. Both adults and students think nothing of (and have grown accustomed to) downloading text, images, videos, and music from the Internet—and, in the process and often unknowingly, create ethical dilemmas by not acknowledging the sources of the multimedia that they have embedded in their classroom presentations and assigned papers. After all, these resources are available and free, aren’t they? Shields (1996) asserts that there is little evidence that computer technologies have helped foster, even in postsecondary environments, the ethical sensibilities that would serve to prepare students for responsible lives in a technology-driven civil society. For the most part, cyberspace remains an uncertain, unmanaged, unedited, and under-supervised arena. “The Internet knows no physical boundaries and also no moral or ethical ones” (Emmans, 2000). Parents and educators alike are faced with the prospect of newer breeds of criminals who pose a danger to children, using technology to commit crimes such as hacking into databases and Web sites; distributing pornography, viruses, and worms; downloading copies of images and music illegally; and violating individuals’ privacy, intellectual property rights, and copyright laws. Moreover, the Internet is where anyone can post data online and present them as factual. Basic traits of civic virtue, such as civility, honesty, etiquette, responsibility, caring, and respect for individual rights, privacy, and property, take on largely new meanings and relevancy in the largely anonymous world of cyberspace (Milson, 2001). The challenge of teaching students these basic attributes of citizenship grows when one considers the fact that many of the hackers, cyberpirates, pranksters, and violators of copyright laws are the very individuals that the civic and moral dispositions of technoethics are designed to protect (Milson, 2001). No software, nor policy,
is foolproof; students have always found ways to circumvent rules (Emmans, 2000). How can educators teach, in an effective way, that it is not acceptable to lie online simply because people cannot see you, or to download something that is copyright protected and claim it as one’s own creation? The posting of data online has ethical implications as well: adults have the knowledge and experience to make decisions about the information they access, and to ascertain the validity and reliability of knowledge claims made by those posting data online; young people—especially school-age children—do not (Emmans, 2000).
Classroom Diversity

Diversity in the classroom represents another, more recent, significant challenge in education, and it has an impact on the fundamental question of how one defines the common good. As Russell (2006) states, “we live in a globalised world where individuals draw cultural meanings and ethical values from electronic media and the Internet, as well as from traditional institutions” (p. 140). Even within educational contexts, there exists the power to communicate instantaneously across national borders without the time to develop an understanding of ethical stances and potential problems that might adversely impact teachers and students in developing countries, or their perceptions of the developed countries. In educational technology circles, one frequently hears of these concerns: whether (and how) technology should be introduced to developing countries, the potential for technology to either narrow or widen the gap between the developed and less-developed countries of the world community, and the issues of justice as they relate to the possible implications of technological inequities. Should not classroom diversity in this country prompt similar concerns? Morals vary across diverse social groups in our society as well as across national borders and ethnicities.
The question is (assuming ethics transcends morals): can the common good be generalized into a set of standards for all students in our nation’s classrooms? Increasingly, instructional designers, educational technologists, teacher-educators, and society in general have found that even in smaller cities and towns, school classrooms have become miniature United Nations—given the diversity of races, cultures, and ethnicities that are represented and the numerous languages that are spoken. Teachers interact with and teach students from different countries and from different social and cultural contexts, with varying religious beliefs and values. Perceptions that children gain from their parents and immediate relatives are likely to vary from culture to culture; therefore, they are potentially guided by differing interpretations of what constitutes ethical behavior. How do we, as educators, teach values and ethics without violating the cultural norms and belief systems of racial and ethnic minority, indigenous, and immigrant children and their families? How do educators promote inter-racial and cross-cultural learning and understanding if only one ‘right’ set of values and ethical principles is taught, to the exclusion of those of the differing cultures and ethnicities represented in the classroom or online educational setting? Whose ‘common’ values, then, do we now refer to in education, and how is the ‘common good’ defined? In the classroom, what ethical notions should apply regarding students of color—can a teacher assume that all are African-Americans who understand Black American history and culture? Given the diversity of today’s classrooms, such an assumption would be questionable; upon further examination, some non-white students are from the Sudan, others from the Caribbean, and a few from Muslim countries—all with differing cultural notions of race, ethnicity, and values. Do all students have equitable access to technology, and what is the quality of that access? What ethical standards apply? In using
classroom technology, what of material we might consider visually acceptable and ethical in Western culture, such as pictures of women or girls on the beach in swimsuit attire, encountered while visiting a Web site as part of a classroom lesson in water safety? Such pictures may be considered offensive to the moral and ethical sensitivities of some students (and especially their parents) in the many religious and cultural groups represented in the classroom or computer lab.
Education as a Venue for a Discussion of Technoethics in the 21st Century

Education is a venue for the same ethical debates and conflicts one finds in other venues in society. Just as many factors have influenced and affected ethics in education (and will continue to do so), so too will these factors impact educational technology. While Cortés-Pascual (2005) asserts that technology needs an ethical analysis that should be developed in educational contexts, as one might garner from this brief discussion, there are many more questions and issues that are important to explore and seriously consider beyond education’s increased reliance on technology. What kind of society do we live in today (and will we live in in the future) if we do not establish an appropriate ethical framework within which to educate students in a technological environment? How we attempt to define ethical behaviors has an impact upon determining the truth that will guide our moral decision-making and steer our ethical reflections. And there is the additional consideration of what definition of ethics others will use to determine whether our decision-making and behaviors are ethical or not (Muffoletto, 2003).
Technoethics’ Significance in Education

Educational philosophers, thinkers, and reformers over the years have pointed to education as an important underpinning of good citizenship and the promotion of democratic values. Educators, however, must consider how, in today’s high-tech society, the knowledge, skills, and values needed for civic competence are being redefined by technological advances. In order to make decisions for the common good in the pursuit of democratic principles, citizens in this new millennium will need to understand the effects of emerging technologies on values such as liberty, justice, and equality; the civic dispositions of civility, honesty, responsibility, and ethical behavior must be applied in new ways. If education is to be relevant, “it must address not only how democratic values are transformed and applied in a technological society, but also the ‘habits of the heart’ required of citizens in such a society” (Milson, 2001, p. 89).
The Need for a Technoethic Framework

Technology and the means of social communication are posited as requiring ethical analysis in order to be used in a suitable and coherent way. This idea is materialized in the concept of educational technoethics, which deals with two parameters: the intrinsic values embedded in technology and the means of communication (the objective of technoethics), and their use as mediums for ethical values (the means of technoethics) (Hawkridge, 1991; Nichols, 1994; Gill, 1997). Technology, however, has likely exacerbated and made more complex many of the previously existing conflicts within the field of education regarding how to facilitate learning and what constitutes best teaching practice, and it has contributed significantly to the changing nature of work and ethical behavior in education.
It is within these contexts that technoethics in education can be evaluated and assessed. Technoethics in education, in essence, posits the need to unify the social, philosophical, and educational references that are needed to analyze educational technology in the school curriculum in order to fully educate learners in our current, as well as future, society—guiding current processes so that technology in educational settings serves human beings without losing the ethics of society’s minimum values. Technoethics is a way to establish, on the one hand, a balance between principles and moral convictions in a technological horizon and, on the other hand, the forecast of consequences as the basis for scientific knowledge (Cortés-Pascual, 2005). Without a technoethical framework, we are left with a system of education and a society immersed in efficiency, technological growth, a multitude of technology artifacts, and information saturation. Yet questions regarding the relevance of personal values, moral uncertainty in the culture of images, and the conceptualization of technology as an instrument leading to a higher quality of life will be allowed to go unanswered.
References

Altenbaugh, R. E. (2003). The American people and their education—A social history. Upper Saddle River, NJ: Merrill Prentice Hall.

Anderson, C. (1961). History of instructional technology, I: Technology in American education, 1650-1900. Washington, DC: National Education Association.

Cortés-Pascual, P. (2005). Educational technoethics: As a means to an end. AACE Journal, 13(1), 73-90.

Cortez, J. J. (2002). The origins of technology in education: The role of federal interventions and
national emergencies during the early evolution of instructional technology and media. Unpublished master’s thesis, University of Dayton.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press of Columbia University.

De Botton, A. (2002). The art of travel. New York: Pantheon Books.

Dent, D. (1969). Landmarks in learning: The story of SVE. Chicago, IL: Society for Visual Education.

Dewey, J. (1903). Ethical principles underlying education (as cited in Yeaman, A. R. J., 2005).

Dewey, J. (1997). Democracy and education. Washington, DC: Free Press.

Emmans, C. (2000, Spring). Internet ethics won’t go away. The Education Digest, 66(1), 24-26.

Emmison, M. J., & Smith, P. D. (2000). Researching the visual: Images, objects, contexts and interactions in social and cultural inquiry. London: Sage Publications.

Gutek, G. L. (2001). Historical and philosophical foundations of education—A biographical introduction. Upper Saddle River, NJ: Merrill Prentice Hall.

Hawkridge, D. (1991). Challenging educational technology. ETTI, 28, 2.

Heinich, R., Molenda, M., & Russell, J. D. (1989). Instructional media and the new technologies of instruction (3rd ed.). New York: Macmillan Publishing Company.

Hinman, L. (1984). Ethics update glossary. http://ethics.asusd.edu/Glossary.html (as cited in Milson, A. J., 2001).

Leggat (1995)
Milson, A. J. (2001, Spring/Summer). Fostering civic virtue in a high-tech world. International Journal of Social Education, 16(1), 87-93.

Molnar, A. R. (1969, June). Ten years of educational technology. Educational Broadcasting Review, 3(3), 52-59.

Muffoletto, R. (2003, November/December). Ethic: A discourse of power. TechTrends, 47(6), 62-67.

Newby, T. J., Stepich, D. A., Lehman, J., & Russell, J. D. (2000). Instructional technology for teaching and learning (2nd ed.). Englewood Cliffs, NJ: Merrill.

Nichols, R. G. (1994). Searching for moral guidance about educational technology. Educational Technology, 34(2), 40-48.

Roblyer, M. D., & Edwards, J. (2000). Integrating educational technology into teaching (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Russell, G. (2006). Globalisation, responsibility, and virtual schools. Australian Journal of Education, 50(2), 140-54.

Saettler, P. (1990). The evolution of American educational technology. Englewood, CO: Libraries Unlimited.

Scharf, A. (1974). Art and photography. Baltimore, MD: Penguin Books.

Shields, M. A. (1996, February). Academe and the technology totem. The Education Digest, 61(2), 43-47.

Tyack, D. (1967). Turning points in American educational history. Waltham, MA:

Urban, W., & Wagoner, J. (2000). American education: A history (2nd ed.). Boston, MA: McGraw-Hill Higher Education.

Walch, Sr. M. R. (1952). Pestalozzi and the Pestalozzian theory of education. Washington, DC: Catholic University Press.

Yeaman, A. R. J. (2005, March/April). The origins of educational technology’s professional ethics: Part two. TechTrends, 49(2), 14-16.

Suggested Reading
Basalla, G. (1988). The evolution of technology. Cambridge, England: Cambridge University Press.

Cremin, L. (1980). American education: The national experience, 1783-1876. New York: Harper & Row, Inc.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press of Columbia University.

Dewey, J. (1993). John Dewey and American democracy. New York: Cornell University Press.

Ely, D. P. (1997, Jan.-Feb.). Professional education in educational media and technology. TechTrends, 43(1), 17-22.

Gilfillan, S. C. (1970). The sociology of invention. Cambridge, MA: MIT Press.

Gill, D. W. (1997). Educating for meaning and morality: The contribution of technology. Bulletin of Science, Technology and Society, 17(5-6), 249-260.

Gordon, G. N. (1965, December). The end of an era in American education. Educational Technology, 12(12), 15-19.

Gormly, E. K. (1996, December). Critical perspectives on the evolution of technology in American public schools. Journal of Educational Thought/Revue de la Pensee Educative, 30(3), 263-86.

Gustafson, K. L., & Branch, R. M. (2002). Survey of instructional development models (4th ed.). Syracuse, NY: ERIC Clearinghouse on Information & Technology.
Heinich, R., Molenda, M., & Russell, J. D. (1996). Instructional media and the new technologies of instruction (5th ed.). New York: Macmillan Publishing Company.

Pugh, E. W., & Aspray, W. (1996, Summer). Creating the computer industry. IEEE Annals of the History of Computing, 18(2), 7-17.

Rickover, H. G. (1959). Education and freedom. New York: Dutton.

Saettler, P. (1968). A history of instructional technology. New York: McGraw-Hill Book Company.

Saettler, P. (1997, Jan.-Feb.). Antecedents, origins, and theoretical evolution of AECT. TechTrends, 43(1), 51-57.

Stubblefield, H. W., & Keane, P. (1994). Adult education in the American experience: From the colonial period to the present. San Francisco: Jossey-Bass Publishers.

Weber, E. (1969). The kindergarten: Its encounter with educational thought in America. New York: Teachers College Press.

Yeaman, A. R. J. (2005, March/April). The origins of educational technology’s professional ethics: Part two. TechTrends, 49(2), 14-16.
Key Terms

Daguerreotype Process: Developed by Louis-Jacques-Mandé Daguerre in France, this is a photographic process using the camera obscura and unfixed photographic images to obtain the most naturalistic rendering of contour and tone possible.

Educational Apparatuses: A term describing 18th and 19th century teaching aids
such as timepieces, maps and globes, slates and blackboards, textbooks, and the abacus or numeral frames. These visual apparatuses or aids, in terms of their first role in elementary and secondary schools, also included field trips to museums—serving as instructional aids for teachers. Such aids included museum exhibits, charts, photographs, illustrations, lantern slides, and maps.

Educational Technology: For purposes of this book chapter, this term is defined, in a broad sense, as tools to accomplish objectives and the interrelationship of those tools with life, society, and the environment. Educational technology is a combination of the processes and tools involved in addressing educational needs and problems, with an emphasis on applying the most current tools: computers and their related technologies. The modern tools of this field are simply the latest developments in a field that is as old as education itself.

Educational Technoethics: An applied field of ethics and research focusing on the special problems and dilemmas in society posed by science and technology.

Ethics: A set of moral principles or values, built upon existing mores, that result from a society’s ideas and actions. They do not precede the society and the situation in which it finds itself; rather, they are a reaction to the situation, ever changing, and reflect the current state of affairs of the society. Ethics is based upon values grounded in some notion of and for the common good, or what is perceived as good and right (the truth) for individual and community action.

Visual Instruction: A movement that developed from the mainstream of instructional technology during the period 1918-1924. It was during this early period in the history of educational technology that educators were beginning
to use filmstrips, motion pictures, audio recordings, and radio in educational settings. As a movement, it has its roots in the efforts of reformist educators and theorists who revolted against formalism and verbalism in educational practice during the nineteenth and early twentieth centuries and sought to emphasize the role of the senses in learning.
Chapter XLIII
Podcasting and Vodcasting in Education and Training

Heidi L. Schnackenberg, SUNY Plattsburgh, USA
Edwin S. Vega, SUNY Plattsburgh, USA
Zachary B. Warner, SUNY Plattsburgh, USA
Abstract

On the cutting edge of current technologies are portable media, through which users can download information and take it with them to digest anytime, anywhere. Some of the newest ways of sharing portable information using the Internet are podcasting and vodcasting. Podcasts are a distribution of audio files, such as radio programs or music, over the Web. A derivative of the term (and idea of) podcast is the “vodcast,” also commonly referred to as a video podcast. A vodcast functions in much the same way as a podcast, except that instead of downloading only audio files, users also download corresponding video files to their portable media players. While one might think that podcasting and vodcasting have the ability to revolutionize education and training, these advances are not stand-alone panaceas. However, they do offer an incredible educational advantage in that their multimedia aspects attend to a variety of learning needs.
Introduction

In today’s world, we are bombarded by information all the time. Television, radio, news-
papers, magazines, journals, the Internet, and many more venues transmit information to us 24 hours a day, 7 days a week. Most of these forms of media are restricted to the particular formats
in which they have traditionally been transmitted. Television is an audio/visual medium, radio is audio, newspaper is print and pictures, and, in its inception, the Internet was primarily text and graphics. However, over the last several years this has begun to change. Information was and is being conveyed via the Internet in a variety of multimedia formats. Most news websites, commercial sites, databases, fan sites, etc. offer information in text, pictures, and/or audio/video. While this type of information is free and legal, it does constrain a user to being at a computer terminal with an Internet connection. On the cutting edge of this information blitz is portable media, where users can download information and take it with them to digest anytime, anywhere. One of the most publicized ways of sharing portable information over the Internet is the highly controversial peer-to-peer file sharing, in which users exchange files, most popularly music, over the Internet by either uploading or downloading files from individual terminals. In theory, these files are supposed to be copyright free so as not to have legal ramifications. However, as we know from recent and consistent media coverage of this practice, most peer-to-peer file sharers exchange popular, copyrighted music because it is an inexpensive way to obtain it. Fortunately, many news, television, and radio programs, as well as independent artists, have become savvy to the popularity of free, downloadable, portable information and entertainment. Recently, media outlets like these have been creating and uploading regular audio or audio/video programs that are often free and always legal to download from the host websites. This type of downloadable, portable programming is called Podcasting, for strictly audio shows, or Vodcasting, for audio/video shows. Coinciding with the innovation of these media, a host of legal, ethical, and moral issues arises. When intellectual property and artistic creations become portable, who controls the flow or release of these media? Should information be download-
ed for profit, or should it be free? Is it reasonable to have media available for downloading without restrictions? The creation of podcasting and vodcasting incites unique questions about the ethical use, development, and diffusion of innovative technologies. Because these media are still in their infancy, there are no clear answers to the host of questions their use creates. However, practical use and academic literature do offer some guidance on both the concepts of podcasting and vodcasting and the ethical implications surrounding their use.
Podcasts/Vodcasts and the Literature

While the phenomena of podcasting and vodcasting are fairly recent, some academic literature and research has been published on these media as tools to aid teaching and learning in the classroom. Much of this literature was written by educators who have chosen to employ podcasting and/or vodcasting in their classrooms. Other articles and studies were composed by technology professionals who chose to examine podcasting and vodcasting in terms of education. Duke University made headlines and history in August 2004 when it handed out 20GB Apple® iPod devices to over 1,600 incoming students. The Center for Instructional Technology (CIT) at Duke evaluated the educational benefits of the devices, basing its findings on student and faculty feedback. The findings showed that podcasts created benefits for students such as flexible access to media, better support for individual learners, and greater student interest in the classroom. There was also less reliance on physical materials for faculty as well as students (Belanger, 2005). Not all schools freely give out tools for creating and listening to podcasts, but as Zeynel Cebeci and Mehmet Tekdal point out in their study of audio learning, many students already own MP3 players capable of playing downloaded podcasts
(Cebeci & Tekdal, 2006). Campbell echoes this in his study. He supports the use of podcasting at the university level, saying that students will integrate scholastic podcasts into their lives easily because it is a medium they are already using for pleasure. He writes, “it is only natural that school stuff would mingle with other aspects of [the student’s] daily life” (Campbell, 2005). The two Cukurova University professors also identify some pedagogical benefits of using podcasts for teaching and learning. They state that listening has been the primary learning method for thousands of years and that podcasting may be a positive alternative for students who dislike reading. Cebeci and Tekdal suggest that podcasting does not need to be solely for lectures. They propose that podcast substance could be varied and that clips of songs or speeches could be inserted in with the necessary course content (Cebeci & Tekdal, 2006). Matthews adds that students with disabilities such as visual impairment can greatly benefit from recorded material, as they can listen on their own time as much as necessary (Matthews, 2006). Boulos, Maramba and Wheeler found similar benefits when they examined podcasting as a tool for students of medicine. They recommended studies to examine how to integrate podcasting into e-Learning (Boulos et al., 2005). Saby Tavales and Sotiris Skevoulis of Pace University undertook a study that sought to find a method for seamless integration. They developed scenarios in which a student or an instructor missed a class period. In both cases, students could download the day’s lecture in podcast form. Tavales and Skevoulis believe that this solution to the problem of unplanned absences is applicable at any level of education (Tavales & Skevoulis, 2006). Although many studies show benefits to teachers and learners, podcasting in the classroom does have opposition. Brock Read interviewed an English professor from Georgia College & State University who cautioned against the use of iPods and podcasting without strict directions. He warns that without clear usage guidelines, iPods can
become toys. Students need to be directed if they are allowed to use podcasts as an educational tool (Read, 2005). Another drawback of podcasting is the cost involved. Equipment needs include computers with certain hardware and software requirements and a player such as an iPod. Devices on which to play podcasts can be expensive, and not all universities have the means to provide them to students. Requiring students to provide their own devices could exclude some students (Matthews, 2006). Podcasting also brings up legal concerns. Donnelly and Berge raise the issue of podcast ownership. If an instructor’s podcast follows a textbook, does the textbook publisher own rights to the podcast? If the podcasts are made available only to students in the classroom, such as with a course management system, the potential of podcasting is limited regardless of whether the content is accessed at school or at home. There is also confusion about whether music included in a podcast requires only a composition license or whether a performance rights license is needed, since podcasts are downloaded and not streamed (Donnelly & Berge, 2006). Many benefits are associated with the use of podcasting as a tool for teaching and learning in the classroom. However, there are cautions for potential podcasters as well. The academic literature suggests that podcasting, when used correctly, can be an effective tool useful to both students and instructors. It is clear that more research must be done to answer some of the lingering questions regarding podcasting use for education and the laws surrounding it. Vodcasting shares many similarities with podcasting. Likewise, many of the benefits span both media. Vodcasting, however, has the added video component. This allows for a higher success rate in teaching and learning environments because video, animation and interactive media have been shown to increase attention, motivation and interest (Chan & Lee, 2005). Hürst and Waizenegger mention vodcasting when
they present a variety of approaches for “lecture casting.” In their study, they suggest a type of lecture cast where the recorded audio lecture is accompanied by graphics or video. The recorded lecture becomes a type of vodcast. Their research showed this medium to be very effective for lecture casting (Hürst & Waizenegger, 2006). Linda Herkenhoff researched vodcasting as a supplementary tool for training in the business world. She found that, as an instrument for training and learning, vodcasting allowed trainees to learn at their own pace in whatever environment they chose. Multitasking became a possibility, and students wishing to examine more advanced materials had the means to do so. She also found that vodcasting was a good fit for those who had English as a second language (Herkenhoff, 2006). This is similar to a finding by Robert Godwin-Jones in his study examining “disruptive technologies for language learners.” Godwin-Jones found that language instructors and learners could benefit from disruptive technologies, such as podcasting and vodcasting, because they provide supplementary opportunities for oral communication (Godwin-Jones, 2005). Vodcasting has not escaped criticism for its shortcomings. One of the most common issues is that students are not being trained in how to properly use the devices necessary for creating and playing a vodcast. Instructors are finding inadequate time to teach both the course content and a tutorial on how to use the vodcast technology (Chinnery, 2006). In order to make vodcasting a staple of the classroom, a standardized recording process would have to be developed to ensure a certain level of quality and limit access to copyrighted materials that are meant for classroom use only. The legal issues at hand are almost identical to those surrounding podcasting. It is unclear who owns the podcast or vodcast if there is copyrighted information contained in it (Meng, 2005). Vodcasts remain notoriously more difficult to create than podcasts. They contain downloadable files as well as streaming sources. Also, there
are so many different types of video, varying in format, resolution, et cetera, that users may have trouble with the playability of a vodcast. Another difficulty is found in playback: while the video content of a vodcast can be split into sections or chapters, current handheld devices are unable to take advantage of this navigation feature (Ketterl et al., 2006). Quality is another matter of much discussion. Although devices are increasingly portable, this shrinking of players is resulting in small speakers and screens as the industry standard. The outcome is poor audio-visual quality as well as limited power in the devices (Chinnery, 2006). Vodcasting remains a growing technology with many potential uses for teaching and learning in the classroom setting, as well as in other situations such as employee training. There are controversial issues surrounding vodcasting, including legal questions and concerns about quality. As the technology grows, its implications will become more evident, and the appropriate uses of vodcasting in education will become clear as research continues.
What Is a Podcast? / What Is a Vodcast?

Podcasting is a distribution of audio files, such as radio programs or music, over the Internet using either RSS or Atom syndication for listening on mobile devices and personal computers. A podcast is a web feed of audio or video files placed on the Internet for anyone to subscribe to and download. Podcasters’ websites may also offer direct download of their files, but the subscription feed of automatically delivered new content is what distinguishes a podcast from a simple download or real-time streaming. Usually, the podcast features one type of “show” with new episodes released either sporadically or at planned intervals such as daily, weekly, etc. There are also
podcast networks that feature multiple shows on the same feed. The essence of podcasting is to create audio content for an audience that wants to listen whenever and wherever they desire. The term podcast is a combination of the words “iPod” and “broadcast.” Although the word podcast is associated with the iPod brand of portable media players, podcasts themselves are not limited to a specific brand of mobile device. A derivative of the term (and idea of) podcast is the “vodcast,” also commonly referred to as a video podcast. A vodcast functions in much the same way as a podcast, except that instead of downloading simply audio files, users download video files to their portable media players. Vodcasts, like podcasts, are also generally acquired via subscription and, at this time, commonly for little or no cost. What makes the concepts of pod- and vodcasting novel is that they are not used to promote artists or for-profit works. Currently, podcasts and vodcasts feature lesser-known individuals and their work at no cost, simply to promote ideas and talent to which the public might otherwise not be exposed.
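To make the subscription mechanism concrete, the following is a minimal sketch, in Python, of what podcast-subscription software does behind the scenes: it fetches the RSS feed, looks for each episode's enclosure element, and downloads any episode it has not already saved. The feed URL and directory name are hypothetical, invented for illustration only.

```python
# A minimal podcatcher sketch: poll a feed, download new enclosures.
import os
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.edu/podcast/feed.xml"  # hypothetical feed address

def fetch_new_episodes(feed_url, save_dir="episodes"):
    os.makedirs(save_dir, exist_ok=True)
    with urllib.request.urlopen(feed_url) as response:
        root = ET.parse(response).getroot()
    # Each <item> in an RSS 2.0 podcast feed carries an <enclosure>
    # whose url attribute points at the downloadable media file.
    for enclosure in root.iter("enclosure"):
        url = enclosure.get("url")
        filename = os.path.join(save_dir, url.rsplit("/", 1)[-1])
        if not os.path.exists(filename):  # only fetch episodes we lack
            urllib.request.urlretrieve(url, filename)
            print("Downloaded new episode:", filename)

fetch_new_episodes(FEED_URL)
```

Running such a script on a schedule reproduces the "automatically delivered new content" described above; the downloaded files can then be synchronized to any portable player.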
Creating a Podcast

Detailed and careful planning is required in order to make a podcast. The more organized and well thought-out the planning phase is, the more smoothly the actual production of the podcast will occur. The technology needed to develop a podcast includes: (1) a desktop or laptop computer; (2) a microphone or digital media recorder, such as an iRiver or a MiniDisc recorder; (3) audio editing software, such as Audacity or GarageBand; and (4) an application to help create the “enclosures” of the podcast, such as Podifier. There are many other alternatives to the specific software mentioned above, but for the sake of clarity, only a few names have been selected from a very long list of possibilities. Regardless of preferred platform, the components used to create and generate a podcast are the same.
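The "enclosure" step is the only piece of this workflow that is not ordinary audio work, so a brief sketch may help. The Python script below shows roughly what an enclosure-creating application such as Podifier generates: an RSS 2.0 document whose enclosure elements point at the finished audio files. All titles, URLs, and file sizes here are illustrative assumptions, not references to a real feed.

```python
# Sketch of the feed a podcast-enclosure tool produces (RSS 2.0).
import email.utils
import xml.etree.ElementTree as ET

def build_feed(title, link, description, episodes):
    """episodes: list of (episode_title, mp3_url, size_in_bytes) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = description
    for ep_title, mp3_url, size in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep_title
        ET.SubElement(item, "pubDate").text = email.utils.formatdate()
        # The enclosure element is what subscribers' software downloads.
        ET.SubElement(item, "enclosure", url=mp3_url,
                      length=str(size), type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

print(build_feed("Class Lectures", "http://example.edu/podcast",
                 "Weekly recorded lectures",
                 [("Lecture 1", "http://example.edu/lecture1.mp3", 4200000)]))
```

Uploading this document to a web server alongside the MP3 files is, in essence, publishing the podcast; subscribers' software re-reads the feed and picks up each new item.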
Podcasts cover a very broad spectrum of topics. At the present time, since there are no federal regulations governing this medium, the podcaster (the emcee whose voice is heard on the podcast) does not need to file for a license. This is unlike traditional radio, which must adhere to guidelines set forth by the government. As a result of not needing a license, anyone can generate a podcast, and there are no restrictions as to what topics are discussed. Also unlike traditional radio, once a podcast is available on the web, it is part of a “global” market rather than a “regional” one. This means that a listening audience can literally span the entire world and not just a small local geographic area. As stated above, podcasts are generally free, but there is a movement to make some podcasts accessible only to members who pay specific fees. The music industry is at the forefront of this development: some popular music artists are selling podcasts of their music that fans can download from the Internet for a nominal fee.
Creating a Vodcast

Much like podcasting, vodcasting involves the creation of TV-like programs that can be downloaded to any portable media player that supports video. These types of players include the Video iPod, the Sony PlayStation Portable (PSP), and the Creative Zen Vision W. Downloading video is done the same way as for its audio counterpart. Vodcasts can also be downloaded and stored on a PC and/or laptop. This form of portable media allows users to view everything from news programs, to favorite movies, to homemade productions, as well as instructional videos. Developing a vodcast involves a digital video camera with either FireWire or USB capabilities that will connect to a computer, and video editing software such as Windows Movie Maker or iMovie (which are free to download; commercial
products include Avid, Final Cut Pro, Adobe Premiere, Sony Vegas, and many more). Video footage will need to be converted to 320x240 at 30 frames per second in MPEG-4. If certain editing software doesn’t convert to the MPEG-4 format, a converter can be downloaded from websites such as Softpedia.com, Tucows.com, and Download.com. Examples of how to utilize this format can be seen in corporate training, where one can view standard operating procedures or learn how to use a new piece of software. In the academic realm, instead of having to re-create a live science demonstration a multitude of times for various classes, the instructor can record the demonstration, have the video converted to the proper format, upload it to a server, place a link on a website, and promote the link. The entertainment world is also promoting vodcasting. Videos of many of today’s TV programs and feature films can be found on iTunes for a fee. Vodcasts are also being made available on cell phones.
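As one concrete illustration of the conversion step, the short Python sketch below shells out to the free command-line tool ffmpeg to produce the 320x240, 30-frames-per-second MPEG-4 file described above. The chapter names downloadable converters only generically, so the choice of ffmpeg and the input filename are assumptions made for the sake of the example.

```python
# Sketch: convert raw footage to vodcast-ready MPEG-4 via ffmpeg.
import subprocess

def convert_for_vodcast(source, target="episode.mp4"):
    subprocess.run([
        "ffmpeg",
        "-i", source,            # raw footage from the camera
        "-vf", "scale=320:240",  # resize to 320x240
        "-r", "30",              # force 30 frames per second
        "-c:v", "mpeg4",         # encode the video track as MPEG-4
        target,
    ], check=True)               # raise an error if conversion fails

convert_for_vodcast("science_demo.mov")  # hypothetical input file
```

The resulting file can then be uploaded to a server and linked from a course website, as described above.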
Examples of Podcasts and Vodcasts

Although the music industry may be taking steps to turn a profit from podcasting, it has also helped this phenomenon immensely. Independent producers and unsigned artists are using podcast technology to reach the masses. By the letter of the law, peer-to-peer (P2P) services are legal and podcasts are free. In the last year or so, previously unknown groups have reached superstardom in the mainstream music world simply based on the number of times their music has been downloaded. Other industries are also pushing and promoting podcasts. The athletic industry is a major promoter of podcasts. Websites like www.espn.com give users a choice to either “play” the audio, “download” it to their computer, or go to iTunes to send the broadcast to a portable media device.
The news can now be downloaded as a podcast. Websites such as www.cnn.com offer written text of breaking stories or the option to download a vodcast and watch the news or a news program. IBM and BMW are other companies offering podcasts and vodcasts to their customers. While these industries are targeting customers and the general public, the field of education is following this trend to target students, instructors, parents, and the communities they serve. There are now downloadable podcasts to help students learn how to properly pronounce words in a different language or learn English as a second language. Several historical websites also offer podcasts in which veterans are interviewed and offer personal accounts of the wars and armed forces in which they served. These examples of educational podcasts are rapidly expanding, and vodcasts are also appearing. While it is true that there are no limits to the topics that general podcasts and vodcasts can cover, in education that is also the case as long as the content is appropriate for the targeted age group. An instructor can opt to have his/her lectures available for downloading to a digital media player, to be listened to or watched for review or to make up a missed class session. Science experiments and field trips can inspire discussions that could make great podcasts. These static, singularly transmitted files can then generate further discussions outside of the classroom. Using podcasts and vodcasts in the classroom can give students a first-person point of view of what is happening in “hot spots” around the world. Learners can watch and listen to vodcasts on how the situation in Darfur is progressing from those who are there trying to make a difference. They could also listen to podcasts from both sides of the Israeli-Palestinian conflict in the Middle East. These sorts of podcasts are created by people who live in a world where violence and conflict have become a way of life. Podcasts and vodcasts allow individuals the opportunity to hear various
sides of an issue from a variety of perspectives and geographic locations. Fortunately, education doesn’t end in K-12 schools or in institutions of higher education, and neither do the targeted audiences for educational podcasts and vodcasts. Areas of adult education and corporate training are also endeavoring to use these new media to educate their constituents. In areas such as graphic design, multimedia, and many others, a professional can download either a podcast for listening or a vodcast to see how various techniques are achieved. Similarly, podcasts and vodcasts can be used in developing countries where technology and education are now beginning to take form, shaping the country’s intellectual infrastructure.
Implications of Podcasting/Vodcasting on Education and Training

While one might think that podcasting and vodcasting have the ability to revolutionize education and training, as with virtually all technological innovations, these advances are not stand-alone panaceas. Since podcasts and vodcasts are only one-way transmissions of information, enhanced with sound, still, and/or moving images, by themselves they do not engage the learner. Although many people download these media and learn new things from them, there exists at the very least a motivation to learn something new on the part of the user. In his Hierarchy of Needs, Maslow (1954, 1971) recognizes that motivation is a key factor in the human desire to learn. However, many other elements play a significant part in effective learning. Active engagement of the learner and feedback on behalf of the instructor or instructing body are also key elements in effective teaching and learning. Podcasts and vodcasts alone do not offer either of these elements, but with proper use and interactive integration, an instructor can effectively use these tools to enhance a curriculum or
training. So while podcasts and vodcasts do offer a multitude of new ways to present information, they do not necessarily affect education or the learning environment, because so much more is needed to truly engage a learner than simply transmitting knowledge. By themselves, podcasts/vodcasts may not enhance teaching and learning, but they do offer an incredible educational advantage in that their multimedia aspects can attend to a variety of learning styles and multiple intelligences (Gardner, 1983). This makes differentiated instruction more accessible to learners and easier to manage for instructors. Since individuals learn in a variety of ways, the use of technologies such as podcasts and vodcasts, which offer visual and auditory representations of information, may be a step in the right direction toward overcoming educational barriers to learning. Unfortunately, overcoming social barriers to learning may not be something these technologies facilitate well. Downloading podcasts and vodcasts requires access to a computer with connectivity (preferably high speed). If an instructor offers the opportunity to search for and download these media at school or in the workplace, all learners have access to the programs, not just those with the proper equipment. However, a key part of podcasting and vodcasting is the portability of the information and the “anytime/anywhere” aspect of learning. While downloading at school or the workplace may be a partial solution to accessing these technologies, a learner without a portable media device (which can be fairly expensive) is also restricted to listening to or watching the pod/vodcast at the same terminal to which it has been downloaded. This is a highly restrictive way to access information that was designed from the outset to be a “download and go” sort of activity. Lack of a portable media device may then inhibit learners from using podcasts or vodcasts to enhance their education because of the visible social stigma it places on them by being restricted to using the technology
at a desktop or laptop computer. Conversely, it does seem that many, many people currently have portable media devices, and many more have cell phones. At this time, although vodcasts may not be universally downloadable to a cell phone, virtually all new cell phones have the capability of holding a podcast. (If music can be downloaded to a cell phone, so can a podcast.) So in one sense, podcasting and vodcasting may restrict access to information and knowledge for a certain part of the population. However, the majority of the equipment that a user needs to access these innovations can be obtained from a public library, school, or the workplace. The only piece of equipment that a user may want to (but does not need to) own is a portable media device or a cell phone, both of which are significantly cheaper than a computer with high-speed Internet access. So podcasting and vodcasting are not perfect solutions for eliminating social barriers to learning, but in many regards they may be a more accessible form of information and knowledge than one might initially think.
Legal Issues and Ethical Dilemmas with Podcasting and Vodcasting

The ramifications of podcasting and vodcasting for teaching, learning, and training are rapidly revealing themselves as these technologies become more prevalent in the educational environment. Generally at no cost and open to the public, most podcasts and vodcasts are copyright free and embody Creative Commons (Creative Commons, 2007) principles, whereby information can be used, modified, and shared as long as it is not for personal gain or profit. Over the last few years, many professors and corporate organizations have made lectures and other informational materials available for downloading by their constituents. Often, this information can only be accessed with a password, but it is still at no extra cost to the user. Well-known television and radio programs
(e.g., CNN, NPR) also have free podcasts and/or vodcasts available to their audiences in order to supplement or extend their shows. While each of these examples illustrates proper and legal use of podcasts and vodcasts, misuse also exists. The written laws that govern society could never have foreseen the power technology can bring to the masses or to an individual. With technological advances come uncharted territories, and with these territories come opportunities, many of which are precursors to the subsequent laws that govern them. At times, laws are violated in some way, or no laws exist that address new ideas, happenings, or processes. One industry that thrives on both legal vagaries and new technological advances in media is the adult industry. This highly lucrative and technologically cutting-edge business has made great use of vodcasting and enabled mature content to go mobile. This raises questions of whether written laws on adult content exist for portable media or whether current regulations can be applied to it. Interestingly, this issue is not new, given the way the Internet already provides easy access to such material. Similarly, since podcasting is not regulated by the government in the way that traditional radio is regulated by the Federal Communications Commission (FCC), anyone can generate a podcast or a vodcast. As a result, there is no topic that is off limits and no language that is too colorful. This alone is a significant reason to be careful when bringing this technology into the classroom. One of the most high-profile cases of the misuse of vodcasting involves the Japanese anime industry. Recently, fans were uploading and posting to YouTube (www.youtube.com) a wide variety of anime, from the obscure and hard-to-find to sequences of episodes of some of the most widely known titles. Given that the anime industry in Japan, as in most countries, is a for-profit business worth large sums of money, these illegal uploads (and subsequent illegal downloads) were impacting profit margins. Not only that, but
in Japan the animation industry is highly regulated by government guidelines and laws because it is a very important and deeply embedded part of the culture. From the perspective of the Japanese government, the illegal uploading and downloading of anime involves more than just money; it is a matter of honor. While this specific situation was ultimately resolved to the satisfaction of the Japanese government after they formally requested that the illegally uploaded anime be found and removed from YouTube (most, but not all, of the anime is now gone), the situation poses an intriguing legal and ethical dilemma. Although by their very design the technologies of podcasting and vodcasting were meant to be used freely and legally, they often are not, thereby impacting someone or some entity's opportunity for recognition and money. In addition, and perhaps more interestingly, these illegal uploads and downloads may also inadvertently trample the mores or decorum of a place or culture with which we as users think we have no interaction or influence. While clearly this is not the case, it certainly illustrates the need for caution with podcasting and vodcasting from not only a legal standpoint, but also a cultural one.
Future Trends

Certainly, with every new and upcoming technology there are always potential risks, worries and promises, and podcasting and vodcasting are no exceptions. One of the risks facing this technology is user exposure to unpleasant material. At any point, a raving podcaster can upload his or her tirade for public consumption, or unsuspecting individuals can be caught on an unsavory video that is later downloaded and/or swapped across the Internet. Another risk of the rise of pod/vodcasting is the threat this technology poses to the viability and popularity of other media, such as radio, television, film, and DVDs. This is exemplified by how quickly one can
gain access to content or how one can become a content provider and gain an audience as a podcaster or vodcaster. The result is that mainstream content providers in the areas of radio, television and film have jumped on the podcast/vodcast bandwagon in order to provide a hungry audience with material from their programming. In some cases this material may be free, while in others there are applicable fees. In the coming years, it will be interesting to see how the use of pod/vodcasts and portable media changes the face of such well established entertainment and information sources as television, radio, and film. Over the next several years, the number of podcast users is likely to increase dramatically. If this audience grows at that rapid a pace, what then will happen to the commercial recording, film, and news industries? Will the recording artist who becomes popular via free podcasts and then subsequently gets a recording contract (based on the number of downloads s/he had from the public-at-large) still be able to produce "free" or "creative commons-licensed" work for the masses that helped to get them where they are now? Will the public demand that such free artistry remain available despite a huge commercial industry and profit demands? These questions remain to be answered, but they are certainly worth watching in the future. On a related note, in the education/training world, simplifying the steps to creating a podcast or a vodcast will help to increase the creation and use of educational pod/vodcasts. The technology behind this medium's creation may seem simple to some when it is broken down into manageable components, but the procedures to create a pod/vodcast may also intimidate those who have yet to try their hand at it. With innovators such as Adam Curry, a former MTV VJ and podcasting pioneer, offering tools such as Podshow and Odeo which claim to guide creation, publication, and subscription of podcasts, many formerly hesitant educators may find themselves producing informational podcasts for their students or workshop
participants. While educators and trainers would likely want to make these types of productions freely available to promote learning, individuals like Curry are looking to commercialize podcasting and vodcasting simply because a market exists and there is profit to be made. It is likely that in the future more pod/vodcasts will carry nominal fees for users, while at the same time many more grassroots sites will exist where these types of programs are still free to upload and download. An example of this type of stalemate between industry and high-end media users is the continuing conflict over peer-to-peer file sharing of music. Although music sites have very reasonable fees associated with legally downloading songs, many users still prefer to go underground on the Net and acquire music for free. It will be interesting to see if podcasts and vodcasts follow that same trend. In the future, innovations to look for in pod/vodcasting will include further uses in corporate training to supplement professional development, downloadable physical fitness training routines (see iTrainer – http://www.itrainer.com.au/public/index.aspx), degree-based programs and information supplements and course offerings for portable media devices, specific demographic-based marketing, enhanced podcasting (a combination of a PowerPoint presentation and audio files), and screencasting (a combination of a PowerPoint presentation with video and audio files). Most of these innovations already exist but are not yet widely used. As quickly as podcasting and vodcasting became mainstream, it is likely that these future trends will also become a part of our everyday language and lives.
Conclusion

Although podcasting is a relatively new medium, newer forms are already making their way into the world of digital media. These newer forms are known as "enhanced podcasts." They are designed
to deliver photos and other images in synchronization with the audio of a podcast, much like a slide show. The file size of an enhanced podcast is much larger than that of a standard podcast, and it may therefore take longer to download. Despite newer forms of media constantly flooding the market, podcasting and vodcasting are still experiences that are fresh, current, entertaining, informative and, in many cases, insightful. In many ways, they are a part of living history, where a moment in time is captured for the world to hear, see, and share virtually as soon as it occurs.
References

Belanger, Y. (2005, June). Duke University iPod first year experience final evaluation report. Retrieved April 26, 2007.

Boulos, M. N. K., Maramba, I., & Wheeler, S. (2005). Wikis, blogs and podcasts: A new generation of Web-based tools for virtual collaborative clinical practice and education [Electronic version]. BMC Medical Education, 6(41).

Campbell, G. (2005, November/December). There's something in the air: Podcasting in education. Educause Review, 40, 33-46. Retrieved May 11, 2006, from http://www.educause.edu/ir/library/pdf/erm0561.pdf

Cebeci, Z., & Tekdal, M. (2006). Using podcasts as audio learning objects [Electronic version]. Interdisciplinary Journal of Knowledge and Learning Objects, 2.

Chan, A., & Lee, M. J. W. (2005). An MP3 a day keeps the worries away: Exploring the use of podcasting to address preconceptions and alleviate pre-class anxiety amongst undergraduate information technology students. In D. H. R. Spennemann & L. Burr (Eds.), Good practice in practice: Proceedings of the Student Experience Conference, 5-7 September 2005 (pp. 59-71). Wagga Wagga, NSW: Charles Sturt University.

Chinnery, G. M. (2006). Emerging technologies - Going to the MALL: Mobile assisted language learning [Electronic version]. Language Learning & Technology, 10(1), 9-16.

Creative Commons. (2007). Creative Commons Attribution 2.5. Retrieved May 18, 2007, from http://creativecommons.org/licenses/by/2.5

Donnelly, K. M., & Berge, Z. L. (2006). Podcasting: Co-opting MP3 players for education and training purposes. Online Journal of Distance Learning Administration, 9(3). Retrieved April 29, 2007, from http://www.westga.edu/~distance/ojdla/fall93/donnelly93.htm

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Godwin-Jones, R. (2005). Emerging technologies - Skype and podcasting: Disruptive technologies for language learning [Electronic version]. Language Learning & Technology, 9(3), 9-12.

Herkenhoff, L. M. (2006). Podcasting and vodcasting as supplementary tools in management training and learning. Retrieved April 28, 2007, from http://www.iamb.net/CD/CD06/MS/71_Herkenhoff.pdf

Hürst, W., & Waizenegger, W. (2006). An overview of different approaches for lecture casting. In Proceedings of the IADIS International Conference on Mobile Learning 2006, July 2006.

Ketterl, M., Mertens, R., & Morisse, K. (2006). Alternative content distribution channels for mobile devices. Retrieved April 30, 2007, from http://www2.informatik.uni-osnabrueck.de/papers_pdf/2006_02.pdf

Maslow, A. (1954). Motivation and personality. New York: Harpers Press.

Maslow, A. (1971). The farther reaches of human nature. New York: The Viking Press.

Matthews, K. (2006). Research into podcasting technology including current and possible future uses. Retrieved April 28, 2007, from http://mms.ecs.soton.ac.uk/2007/papers/32.pdf

Meng, P. (2005). Podcasting & vodcasting: A white paper: Definitions, discussions & implications. Columbia: University of Missouri, IAT Services. Retrieved April 27, 2007, from http://edmarketing.apple.com/adcinstitute/wp-content/Missouri_Podcasting_White_Paper.pdf

Read, B. (2005). Seriously, iPods are educational [Electronic version]. The Chronicle of Higher Education, 51(28).

Tavales, S., & Skevoulis, S. (2006). Podcasts: Changing the face of e-learning. Retrieved April 29, 2007, from http://ww1.ucmss.com/books/LFS/CSREA2006/SER4351.pdf
Additional Reading

Cochrane, T. (2005). Podcasting: Do it yourself guide. Wiley.

Colombo, G., & Franklin, C. (2005). Absolute beginner's guide to podcasting. Que.

Dagys, A. J. (2005). Podcasting now! Audio your way. Course Technology PTR.

Farkas, B. G. (2005). Secrets of podcasting: Audio blogging for the masses. Peachpit Press.

Geoghegan, M., & Klass, D. (2005). Podcast solutions: The complete guide to podcasting. Friends of ED (Bk & CD-ROM edition).

Morris, T., Terra, E., Miceli, D., & Domkus, D. (2005). Podcasting for dummies. For Dummies series.

Stolarz, D., & Felix, L. (2006). Hands-on guide to video blogging and podcasting: Emerging media standards for business communication. Focal Press.

Walch, R., & Lafferty, M. (2006). Tricks of the podcasting masters. Que.
Key Terms

Enhanced Podcasting: A combination of a PowerPoint presentation and audio files.

Feeds: Addresses that look much like the URLs found on websites, but end with RSS or XML.

Podcast: A syndicated web feed of audio files available through the Internet.

Screencasting: A combination of a PowerPoint presentation with video and audio files.

Syndication: Allows other sites to display updated content and information via RSS feeds, Atom, and news aggregators.

Vodcast: Also referred to as a video podcast; a syndicated web feed of audio and video files available through the Internet.

Web: A collection of linked hypertext documents accessed via browsers on the Internet.
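To make these terms concrete, the sketch below shows how a feed ties a podcast together: a subscriber (or news aggregator) fetches the RSS document and reads out each episode's audio enclosure. It is a minimal illustration rather than a full aggregator, and the feed URL is a hypothetical placeholder.

```python
# Minimal sketch: fetch a podcast RSS feed and list each episode's audio file.
# The feed URL is a hypothetical placeholder, not a real feed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.edu/podcasts/lectures.rss"  # assumed for illustration

def list_episodes(feed_url):
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    # An RSS 2.0 feed wraps <item> elements in a <channel>; a podcast episode
    # carries its downloadable media file in an <enclosure url="..."> element.
    for item in tree.getroot().iter("item"):
        title = item.findtext("title", default="(untitled)")
        enclosure = item.find("enclosure")
        if enclosure is not None:
            print(title, "->", enclosure.get("url"))

if __name__ == "__main__":
    list_episodes(FEED_URL)
```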
Endnote

1 Order of first authorship is listed alphabetically. Both authors share first authorship.
Chapter XLIV
Technoethics in Schools

Darren Pullen
University of Tasmania, Australia
Abstract

School students are used to digital technology: they blog, create movies for public viewing on the web, create and download music and use instant messaging to communicate with their friends and family. Whilst today's students are technologically capable, perhaps more so than their teachers and parents, they might not know how to legally and ethically interpret and use digital resources and materials. Teachers need to understand the social, ethical, legal and human issues that surround the use of digital technology in schools and then apply that understanding to help students become technologically responsible, and so to ensure that they, the workers of tomorrow, have the critical thinking and communication skills to match their technical skill.
Introduction

Most areas of modern life are affected by digital technology. This pervasive and rapidly developing technology gives us quick and easy access to information, and presents new challenges to society's values and ethics, challenges within our homes, schools and places of work. As the technology develops we need also to develop our laws, policies, personal skills and attitudes to foster its desirable aspects and mitigate its undesirable aspects. In the field of education, teachers must work with students towards the safe, legal and ethical use of digital resources, and in particular of
Internet-based resources. The teacher and the school must ensure that students use digital resources legally and ethically. This chapter aims to highlight current practice and research as they pertain to educational technoethics, and along the way to stimulate thought on the topic. This will be done by exploring selected examples of technoethics in the context of schools. The study of ethical and social issues in relation to technology is clearly interdisciplinary in nature, involving research and practice from a variety of disciplines such as philosophy, theology, history, sociology, anthropology, psychology and education. This chapter will argue that to understand
educational technoethics fully, and to use digital technology effectively and ethically, three related dimensions—the technical, the social and the ethical—must be considered. To understand the techno- and socioethical aspects of using technology, working definitions of education, technoethics and digital technology are needed. In this chapter, education will refer to teaching in the compulsory years of schooling—typically 5–18 years of age—while digital technology will refer to a wide range of computing and communication devices, from stand-alone computers through to 'networked' computers and communication devices. This definition encompasses personal computers, laptops, and mobile digital devices such as Palm Pilots and smart phones, through to networked devices that can be connected together or to the Internet or other networks. Within this book's overall definition of technoethics (TE), for the purposes of this chapter TE refers to the ethical issues and outcomes associated with using digital technology in a school system. As technology evolves we are finding more ways of applying it in our daily lives. For the purposes of this chapter the technology considered is digital technology, within which computers, the Internet and mobile communication devices will be of primary concern. Weiser (1993) professes that the most profound technologies are those that disappear into our everyday operations, to the point of becoming universal as well as invisible. It may be this disappearance that makes some people act unethically. What it means to work and act ethically with technology is itself contested, because people disagree about what is ethical and what is unethical. In its purest form ethics comes from an innate sense of how to behave and underpins our notion of equity. Equity in turn underpins our notion of fairness and justice. An ethical dilemma or an unethical act occurs when individuals with different points of view consider issues parochially and make judgments
based on their own points of view. A point of view is determined by individual characteristics such as race, gender, cultural group, religion, education, socio-economic status and age, to name but a few. For technology, as for other issues, it is important to understand one's own ethical viewpoint, and it is just as important to consider other points of view. This is particularly so because digital technology, in particular Web technology, is global, and the political, cultural and educational levels of people using digital technology are extremely diverse. Thus each individual has their own ethical point of view that is dependent on aspects such as their race, political persuasion, cultural identity and education. Of these aspects, we will argue that it is education that has the greatest potential to inform and influence one's ethical point of view. Educators traditionally have built on the values and beliefs that students are already forming when they come to school. The role of education is therefore to start from the student's initial capabilities and beliefs, and to teach ethics firstly as it relates to the individual and then as it relates to higher societal levels. According to Moor (2005) new technologies never start in a fully mature form, but are continually developed. Technological development goes through three stages: introduction, permeation, and power (Moor, 2005, p.112). Digital computers are arguably in their power stage as they have been widely adopted by society and are used in almost all industries. The Internet and mobile communication devices, however, are in the permeation stage as they are still being developed and adopted, and their full power may be yet to come. Within education, much attention has been focused on the problem of academic dishonesty, in particular on plagiarism from the Internet and the illegal copying of files. At the university level, several studies have shown that academic dishonesty is common (Ashworth, Bannister, & Thorne, 1997; Lathrop & Foss, 2000). Building student understanding and reasoning in ethics has long been a challenge for teachers.
With the spread of Internet use the issue of ethical behaviour and conduct has become more complex. Consider the case of Napster only a few years ago. Whilst most would consider stealing a music CD from a store to be a crime, what is the difference when the music is on the Internet and can be downloaded "freely"? Isn't this a victimless crime? Comparing student beliefs and values about physical entities with those about immaterial entities is important because it allows exploration of a multitude of ethical issues in areas such as copyright law, fair use within copyright, computer use, questionable/situational material/language, and the use of networks, passwords and email (Swain & Gilmore, 2001). Due to the evolving nature of technology and its rapid uptake by consumers, ethical issues emerge slowly and usually after a process or practice has been in place for some time. However, the fundamental basis of ethics—ethical norms or morals—has tended to remain. The ethical or moral norms prohibiting such acts as murder and stealing have not changed. What is changing as technology evolves is the ethics of new practices and of some of the difficulties that come with cultural, political and geographical uses of technology. The ethical issues normally arise only after the technology has been in use for some time and the new practices have become widely adopted or embedded in one's practice. Technoethical considerations in education can be split between three groups of users—students, teachers and school administrators—or segmented into technical areas, as is done in this chapter.
Privacy and the Protection of Personal Information

The use of computers, and particularly of the Internet, has not only increased our proficiency in performing tasks.
It also allows information about us to be kept by various people, for various reasons and from various sources, to a degree not possible or feasible before. Consider the simple act of buying your groceries at a local supermarket and paying for them by credit card. The data collected then not only indicate what food you buy but over time may show a pattern. For instance, suppose you only ever buy a certain brand of goods when it is on discount and the rest of the time you buy generic items. This information could be used by the supermarket in direct personal marketing or 'sold on' by the credit card company to other marketing firms. The data collected could be combined with other data about you held by the credit card company to reveal that you are male, 50 years of age, buy 2 dozen eggs and 10 rashers of bacon a week, have no recorded gym membership and drive a car 10 miles to work (as shown by your parking space payments and your known residential address). This may be taken to imply that you are probably unfit and unhealthy, and to cause your health fund application to be refused or your health care premiums to rise.
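The mechanics of such profiling are trivially simple once separate records share an identifier such as a card number. The sketch below is purely illustrative; every record and the inference rule in it are invented for this example, not drawn from any real system.

```python
# Illustrative only: how records from separate sources, keyed by the same
# card number, can be joined into a profile and mined with a crude rule.
# All data and the rule are invented for this example.
purchases   = {"card-4417": {"eggs_per_week": 24, "bacon_rashers_per_week": 10}}
cardholders = {"card-4417": {"age": 50, "sex": "male", "commute_miles": 10}}
memberships = {"card-4417": {"gym_member": False}}

def build_profile(card_id):
    profile = {}
    for source in (purchases, cardholders, memberships):
        profile.update(source.get(card_id, {}))  # a simple join on the key
    return profile

profile = build_profile("card-4417")
# A hypothetical insurer's rule over the linked data:
if profile["bacon_rashers_per_week"] > 5 and not profile["gym_member"]:
    print("flagged as possibly unfit:", profile)
```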
However, before you become overly concerned, there is the issue of privacy and protection of personal information. The United Nations, in Article 12 of the Universal Declaration of Human Rights, proclaimed that no one shall be subjected to arbitrary interference with their privacy, family, home or correspondence, nor to attacks upon their honour and reputation, and that everyone has the right to the protection of the law against such interference or attacks (United Nations, 1948). This can be viewed as affirming that everyone is entitled to privacy. However, what is privacy? The notion of privacy is a difficult concept and can depend on the context in which an event occurs. In most countries intercepting and reading another's postal mail is a crime, but what about email? Suppose you just happen to see and read someone's email when you pass their terminal, or the email was misdirected to you! The legality of real versus virtual environments raises legislative, moral and cultural questions. Privacy is not easily defined. It has many aspects, such as personal, physical or virtual, and is highly dependent upon culture. Our notion of privacy is linked to our cultural beliefs and identity. Hence dealing with any issue of privacy in the educational domain is contentious, and increasingly so because the privacy laws in many countries are not keeping pace with technological development. Tavani (2007) suggests that technology affects privacy in four ways: by the amount, the speed, the duration and the kind of data that can be gathered, transmitted, retained and transferred (p.118). Technology makes it possible to collect, collate and store data in more ways than was possible before digital computers and in particular before the Internet. Privacy considerations affect a school's dealing with its staff, students, parent body and third parties, and vice versa. Privacy, as we have already noted, is not a clear cut concept. So in a school context it might be useful to contrast private and public, though placing the division must involve subjective judgement. The privacy–publicity contrast also depends on who uses the data, how, and for what reason; in particular, on whether the subject of the personal data has consented to the data being used. These distracting distinctions aside, the consideration of what is public and what is private is vital both for schools and for students. Public and private are not mutually exclusive, as one can be private to some degree when in public. The notion of one's privacy stems from our western moral concept of respecting each other's right to privacy. Even though each day we forgo some of our privacy to interact with others, we still expect our privacy to be respected by those around us. When we are in public, including the workplace and places of learning, our 'right' to privacy remains, but the place is called public because our actions and conversations may be seen and overheard by those we share the space with. So our notion of privacy in public is about
what others can see and hear us do in a public place, and about personal data that are part of the public record—such as our name or what has been written or said about us in the media. Within a school or a place of work, what is public and what is private competes with the school's or employer's rights. An individual's 'rights', and what is private versus public, hinge on anonymity, confidentiality, informed consent and intent. As a rule of thumb, what is done on school computers, locally or remotely, by teachers and students—such as email and content files—should be considered semi-confidential. Being semi-confidential means that the owner of the equipment, or their agent, has the legal right to monitor the use and hence the users of the equipment. When we surf the Web we tend to think that we are anonymous, and for the most part we are. However, if we are asked directly for personal data and we consent, then we have ipso facto given away our anonymity. Such personal data may be given or sold to third parties, and if we have not given our permission for this to occur we have not given our informed consent. We may even lose anonymity unknowingly, as in the case of cookies, which will be discussed in a later section. What conclusions can we draw from the above? One would be that the right to privacy is prima facie, and that personal data should be kept safe and secure and only made available to those with a legitimate need. In the school context, Winston and Saunders (1998) suggested that risk can be avoided by closing the school! As this is probably not an option, schools need to thoroughly consider the legal and ethical concerns of using modern digital technology and the effect of such use on privacy. To start with, the school should have at least a policy on privacy that is closely tied to its technology policies. These policies should be developed by a variety of appropriate people, such as lawyers, parents, teachers and students, and others as legislated by the school governing authority.
These policies should also be continually reviewed as new technology is adopted and as new ethical or moral issues arise. When schools teach students how to use digital technologies, particularly the Internet, teachers need to ensure that students are taught safe practices. As discussed later in this chapter, the Internet and its Web have some great benefits but also some limitations. Students, and in particular novices or the young, need to be made aware that some Internet users have bad intentions and that they should not give personal data to unknown people. To avoid danger, more and more children's websites have no external links. If they need to collect personal data from a child user, such websites typically ask the user to obtain parental consent through "verifiable parental consent" options, for example an email to a parent notifying the parent of the child's request to register with the site. The websites typically also inform the user and parent of what personal data are required, for what purposes, and whether these data are given to third parties. In countries like the USA, federal legislation enforces these practices (Children's Online Privacy Protection Act, 2000). What has been written above should not concern any 'fair minded' person. However, if the students were adults then there might be some ethical concerns about, for example, legal informed consent, privacy laws, and perhaps in some countries freedom of speech and the monitoring and surveillance of Internet users through both overt and covert mechanisms, particularly post-9/11 (see the UK anti-terrorism laws, http://www.homeoffice.gov.uk/security/terrorism-and-the-law/; the USA PATRIOT Act, http://www.lifeandliberty.gov/highlights.htm; and the Australian anti-terrorism act, http://www.comlaw.gov.au/ComLaw/Legislation/Act1.nsf/0/53D2DEBD3AFB7825CA2570B2000B29D5?OpenDocument).
Intellectual Property

The broad definition of intellectual property encompasses copyrighted works, ideas, discoveries and inventions, which could range from a novel or a painting through to a pharmaceutical drug. What these items have in common is that under legislation the owners/creators of the item have their rights protected by law (Lectric Law Library, 2007). Within a teaching and learning environment like a school, the main intellectual property concern is copyright, focussed on the issues of cheating and plagiarism. In recent years the academic research and anecdotal literature have shown that academic misconduct, such as plagiarism in high school, has sharply increased (Ercegodac & Richardson, 2004; McCabe, 2001; Underwood & Szabo, 2003). Moon (1999) found that 60% of UK and American higher education students admitted to some sort of academic dishonesty, and McCabe (2001) found that this trend is increasing. For students, teachers and schools to come to grips with the broad issue of plagiarism they need to understand what ethical issues are involved. Ashworth et al. (1997) looked at how university students view cheating and plagiarism, without assuming that the students had the same understanding of or values on these as their teachers. The study found that students had strong moral values about plagiarism but that they didn't fully understand, for example, how plagiarism differed from paraphrasing. Haziness over plagiarism, according to this report and others (Granitz & Loewy, 2007; Groark, Oblinger & Choa, 2001; Johnston, 1991), not only creates student anxiety but is increasing as digital technologies proliferate. The Ashworth et al. (1997) study also found that plagiarism ranked fairly low in students' overall ethical or moral value systems. To clarify plagiarism, many secondary and tertiary institutions have adopted formal policies on it. Some even require students to sign declarations of authorship or 'academic honesty' when they submit their assignments. The excerpt below from the author's institution shows what might and might not be accepted from students, together with the consequences for offenders:
Plagiarism

Plagiarism is a form of cheating. It is taking and using someone else's thoughts, writings or inventions and representing them as your own, for example:

• using an author's words without putting them in quotation marks and citing the source;
• using an author's ideas without proper acknowledgment and citation; or
• copying another student's work.
If you have any doubts about how to refer to the work of others in your assignments, please consult your lecturer or tutor for relevant referencing guidelines, and the academic integrity resources on the web at http://www.utas.edu.au/tl/supporting/academicintegrity/index.html. The intentional copying of someone else's work as one's own is a serious offence punishable by penalties that may range from a fine or deduction/cancellation of marks and, in the most serious of cases, to exclusion from a unit, a course or the University. Details of penalties that can be imposed are available in the Ordinance of Student Discipline – Part 3 Academic Misconduct, see http://www.utas.edu.au/universitycouncil/legislation/ord9.pdf. The University reserves the right to submit assignments to plagiarism detection software, and may retain a copy of the assignment on the database for the purpose of detecting future instances of plagiarism.
(University of Tasmania, 2007)

My own institution's response to plagiarism clearly defines what it is and the consequences of doing it. However, it would be ethically or socially unjust only to lay down such a procedure. For students to be able to comply with institutional plagiarism requirements, educators must teach them what plagiarism is and also how to write and take notes to avoid unintentional plagiarism. Distinguishing authorship from plagiarism depends on academic teaching staff having a thorough understanding of their field and of the students' work standards. This places a great deal of pressure on staff not only to detect plagiarism but also to deal with it in a transparent and consistent manner. To help with detection, many schools are using academic databases and the World Wide Web.
Most simply, a teacher who suspects that a student may have copied a passage of work can cut and paste a sentence from it into a Web based search engine such as Yahoo or Google. The result may then reveal the source of the exact sentence and its context. If this fails, for example if the student has been mildly paraphrasing, a more refined Web search could well succeed. But, because the teacher may have numerous similar assignments to mark for a class, as well as other assignments for other classes, sophisticated Web searching is usually quite impractical. This is where plagiarism detection databases come in. In recent years secondary and tertiary institutions have adopted several such databases, such as Turnitin (www.turnitin.com), a user-pays system, and WCopyfind (plagiarism.phys.virginia.edu/Wsoftware.html), which is free under the GNU licence agreement (see FLOSS later in this chapter). The ethical aspects for a school or teacher of using plagiarism detection databases are not clear cut. Whilst a school and its teachers have a moral and professional duty to ensure that students are engaged in learning, do they have the legal or moral right to submit a student's work to a third party for checking? Some schools have attempted to address this issue by stating clearly in their course or unit outlines that their teachers can and do routinely submit student work to plagiarism detection databases. This has several ethical implications. Firstly, the teacher (school) needs to be sure that the student has not in fact cited a source for the material in question. Secondly, the teacher should check the validity and accuracy of the cited or discovered sources. Thirdly, the degree of plagiarism must be considered, which raises a very important point. When teachers or administrators use plagiarism detection software, thought needs to be given to the complexity of the subject matter. What teachers may consider to have been plagiarised could in reality come from students working from rote-learned subject matter. Whether this is plagiarism or not is a moot point, but it does
highlight the responsibility of ensuring that students have a thorough understanding of a topic before they write a paper on it. Plagiarism databases also look for textual patterns: words appearing in close proximity to each other in the submitted text are looked for in the database. Sources containing the same or a similar pattern are flagged by the database, and the teacher then has to thoroughly review the student's paper to see if the student has copied from the source, as sketched below.
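As a rough illustration of this kind of proximity matching (and not the actual algorithm of Turnitin or WCopyfind), the following sketch counts how many five-word phrases a submission shares with a candidate source; the sample texts and the flagging threshold are invented.

```python
# A minimal sketch of proximity-based text matching: the fraction of a
# submission's five-word phrases (n-grams) that also occur in a source.
# This is an illustration, not any commercial product's actual algorithm.
import re

def ngrams(text, n=5):
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission, source, n=5):
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

# Invented sample texts for demonstration.
student_essay = ("The industrial revolution transformed the means of "
                 "production across Europe in a single generation.")
web_source = ("Many historians argue the industrial revolution transformed "
              "the means of production across Europe in a single generation.")

score = overlap(student_essay, web_source)
if score > 0.15:  # arbitrary illustrative threshold
    # A high score only flags the pair for human review; the student may
    # have quoted and cited the source legitimately.
    print(f"{score:.0%} of five-word phrases match - review needed")
```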
Another important ethical issue is that of student consent to having their work submitted for verification. This tends to get lost in teachers' and schools' enthusiasm for plagiarism detection software and databases. Most (if not all) plagiarism detection databases keep a copy of submitted student work for future cross-comparison. Students have copyright in their original work, and it would ordinarily be illegal for a third party to keep a copy of such original work. School policies on plagiarism and course disclaimers, which stipulate that student work may or will be submitted to plagiarism detection software, may not adequately cover this legal and ethical issue. What happens if a student refuses to allow the teacher or school to submit their work to a third party? Do they fail? This is one ethical trap whose spring is so tightly wound that it might soon release, and when it does, whose fingers will it catch: the teacher who checked the work, or the school that insisted on student works being copied to third parties? The writer's own institution has a statement on plagiarism which is indicated in all course outlines, and when students submit their assignments they have to fill in and sign an assignment cover sheet. Nowhere in this cover sheet does the student give the university or staff member permission to submit their work to a third party for plagiarism checking. Instead the university states that it has the right to have a student's work sent to such a third party. This raises serious ethical and legal issues about the student's copyright. A definitive answer cannot be given until there is a court ruling or government legislation or regulation on this matter. Plagiarism has been made easier by the Internet and its Web, which make it easy to find and copy digitally represented text and images. This ease of access has created new circumstances and possibilities in our work, study and home lives. Technological expansion has created legislative issues for copyright, and many nations have reworked or are reworking their copyright acts. For example, the USA Congress has passed the Digital Millennium Copyright Act (1998), Australia the Copyright Act (1968) and Europe the European Union Copyright Directive (2001). The American Digital Millennium Copyright Act (DMCA) was enacted to protect any material that can be digitally represented and transmitted. This includes text, pictures, movies, music or any combination of those media. The Act was and still is considered controversial by some groups, such as scientists and academics, because of how it enforces copyright of material rather than the intent of applying copyright to that material. The controversy centres on the copyright holder's use of encryption or passwords to restrict access to material. This makes it illegal to crack the encryption code or password protection, which some critics of the DMCA claim limits or undermines research into digital security measures (DMCA, 1998). Within schools the issue of copyright is normally about plagiarism. If a student downloads an image from the Web and uses it in their own work without citing its source, this is the same as copying text from the Web and passing it off as one's own work. Students need to understand that passing off another's work as one's own is cheating, whether that work is text, images or sound. Schools, teachers and students cannot be conversant with the full extent of the local copyright act, but it is reasonable to expect that schools will educate staff and students on copyright and the reasons for it, and promote some simple rules of thumb for respecting it.
One of the simplest of these rules is to assume that all material is covered by copyright unless it is specifically stated otherwise. A second is that if material is taken from a website, book or other published media and reproduced in an assignment, then the source should be directly acknowledged in the text, in a reference list, or both. A third is that if a student has indirectly used material, then at the least this should be acknowledged in a list of resources used or a bibliography. Another contentious issue concerning modern digital technology in education is that of cheating. Cheating in schools has probably been around for as long as there have been schools. The issue has become more pressing with the use of devices such as mobile phones and small security cameras and microphones to facilitate and disguise cheating. Cheating has come a long way from writing notes on one's hand or pencil case. Schools have reported students using camera phones to take pictures of a test paper, which is then sent to an accomplice who transmits the answers back to the student taking the test (Dahl, 2005; Socol, 2006). Items such as MP3 players like the Apple iPod, smart phones, programmable calculators and small surveillance cameras and microphones can all be easily slipped into test situations and used by those who want to cheat. Aside from the very important moral issues, schools need to know how technology is used for cheating. As the technology develops and students get better at using the latest and smallest devices, teachers and school officials find it harder to cope with cheaters. The more cheating goes uncaught, the freer students feel to do it. This is a vicious cycle. Ethically and professionally, schools have a duty to prevent cheating, which must go hand in hand with clear standards and policies about what cheating is. Honor codes (pledges to be honest) can also help. Yet, while honor codes call for doing the right thing, they may not give students a clear idea of why honesty matters. Teaching TE at school therefore requires schools to address the issues in terms that students can understand and
to then move on to more abstract ideas. Schools should also use the technology to educate the students (Elgin, 2007), and this may mean making assessments more authentic, relying less on facts and knowledge and more on how they can be used. In an interesting twist to the copyright and plagiarism concerns of schools, Macnamara (2007) proposes that the education sector needs to rethink the notion of copyright to prepare students for the future workforce and knowledge economy. Underpinning this rethinking, Macnamara (ibid.) believes that ensuring creativity in the workforce is one of the greatest challenges facing the education sector. To promote creativity the education sector therefore needs to take advantage of copyleft (see www.copyleft.org) as well as respecting traditional copyright. Whilst copyright is concerned with defining the intellectual property rights of the owner or creator, copyleft is about sharing and making materials and software freely available and adaptable. In Europe, software under copyleft is generally referred to as FLOSS, and in America as FOSS. The latter acronym stands for Free and Open Source Software and, as its components suggest, the software can be used for any purpose, redistributed or modified. In the English language the term free has many different connotations, so many programmers, users and advocates of FOSS have coined the name FLOSS (Free/Libre Open Source Software) to highlight the "open" nature of the source code, which can be legally modified and distributed (at cost or no cost). FLOSS' origins in a free cooperative philosophy give it many advantages over similar commercial products. For example, with FLOSS-based products, bugs and glitches in the software can be identified and corrected by a world-wide community of programmers and users, and the software can be adapted and may be cheaper than similar commercial products. Arguments for FLOSS range across pedagogical factors, total cost of ownership and other factors (Hudson & Moyle, 2004). An
extensive database of FLOSS applications in UK schools includes details of 117 Linux-powered servers in the region of Powys (see www.schoolforge.org.uk/index.php/Powys_LEA). Schools and companies are increasingly using FLOSS development techniques: see for instance Microsoft's release of the Office 2003 XML Reference Schemas as part of their intellectual property released for open use (but not necessarily adaptation)1. Free release of specifications increases the chance they will be adopted by more developers, and therefore increases sales of the originator's products. Much FLOSS software adopts formal copyright using a GPL2 licence instead of commercial incentives for the promotion of technological innovation. Anyone can use the source code without charge if they protect any improvements with a GPL licence. Companies can sell the software, providing they continue to make the source code freely available. The result is that the source code stays public and keeps getting refined (Tinker, 2000, p.8). Therefore FLOSS has TE implications.
Several studies have highlighted the potential of FLOSS within the business sector (Briggs & Peck, 2003; Office of Government Commerce, 2004). These studies conclude that FLOSS is a viable and credible alternative to proprietary software for infrastructure applications, such as running servers, and for desktop users. Table 1 shows some commonly used FLOSS alternatives to proprietary software for a number of applications commonly used in schools. The use of FLOSS in teaching and learning depends heavily on four interrelated factors: the technology used, the curriculum, the teacher, and organisational culture. In particular, we need to consider how teachers use technology to deliver the curriculum, as it is they who will drive curriculum reform. Within ET, FLOSS takes on a dimension that makes it more than just a piece of software. It embodies a cultural or philosophical stance that has a deep history of technological and collaborative development.
Table 1. Some examples of FLOSS software which may be suitable for schools

Title | Description & source | Nearest commercial equivalent
Drawing for Children | Simple animated drawing and painting tool (http://www.cs.uu.nl/~markov/kids/draw.html) | KidPix
ArcExplorer | Geographic information system viewer (http://www.esri.com/software/arcexplorer/) | ArcGIS
Stellarium | Star map (http://www.stellarium.org/) | Desktop Universe
West Point Bridge Designer | Design a truss bridge and see if it is strong (http://bridgecontest.usma.edu/download2005.htm) | SAM version 5.1e
GIMPshop | Image editor (http://plasticbugs.com/?page_id=294) | PhotoShop
Sketchup | Computer aided drawing (http://www.sketchup.com/) | AutoCAD or TurboCAD
FreeMind | Mind mapping (http://freemind.sourceforge.net/wiki/index.php/Main_Page) | Inspiration
Finale NotePad | Music composition (http://www.finalemusic.com/notepad/) | Sibelius
Jahshaka | Real-time video editing and effects (http://www.jahshaka.org/) | Premiere Elements
Particularly in school settings, FLOSS offers the chance to move ICT integration on from repetitive Office-centric applications3. This makes ICT integration more accessible to a greater range of subject and age-group specialist teachers and their pupils. The ET aspects of FLOSS centre on the issues of intellectual property and the use of others' work in new and novel ways without the need to acknowledge the source. This is like the use of iPods in schools to reformat the use of data in tests rather than restate facts and figures (Elgin, 2007).
E-mail

The widespread use of the Internet since the early 1990s has brought many business, educational and social advances. It has also brought many ethical and legal concerns. Consider the use of email in schools. A school provides computers, and in some cases email accounts, for its staff and students to use for their work and study. The school owns and maintains the computers and their networks, so it has the right to control both the use of those resources and their content data. On the surface this might look like a clear cut case of legal ownership and the right to control the use of school resources. But it might not be as clear cut as this. Firstly, there is a common misconception that electronic mail, like postal mail, is private and that intercepting or reading another's mail is a criminal offence however it is delivered. This analogy is false for several reasons. Legally, email sent from a school or company computer belongs to that school or company. Unlike postal mail, when an email is sent and received a copy is kept on the sender's and receiver's computers, and on the organisations' mail servers, which are usually backed up and archived. Hence many copies of the email are kept. Email is also different in kind. It tends to be less formal, completed faster, and replied to more quickly than postal mail. This speed of transmission allows for errors or misdirection
of email to occur, and thus, unlike regular mail, email may sometimes be received by the wrong person. This makes it less secure and so less private than postal mail. So what can schools do to ensure privacy of email and reduce disruption or annoyance from nuisance email? Firstly, users need to be told of their rights and responsibilities when it comes to using the school's computers and email accounts. This information is best given as a directive or policy, together with a general statement about email privacy. In wording the policy the organisation should be mindful of what it can and can't do legally, ethically or morally. One organisation might opt for all email needing to be for work-related purposes and to be monitored. Another organisation might opt for all email to be private between the sender and the intended recipients of a message. In either case, consequences for specific breaches of the policy need to be clearly prescribed. Of course in both cases, the law and common sense dictate some exceptions. These exceptions include the organisation's right to check emails for malice, offence or obfuscation. Also excepted are the organisation's right to maintain equipment and resources and the organisation's legal duty of disclosure, for example to protect the rights, property and safety of staff and the general public. Some organisations and many schools filter emails to block spam or unsolicited email, for virus detection, searching, spellchecking, forwarding, auto-responding, flagging messages, sorting, making URLs clickable and identifying certain words or phrases such as "blow up the school". Filtering also enables the school to restrict or in some cases block access to certain email accounts or block certain sender domains. This keeps down spam and unsolicited emails that could slow down network traffic.
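A school mail filter of the kind just described can be reduced to a few rules. The sketch below is a simplified illustration; the blocked domain and the example addresses are invented, not any product's actual rule set.

```python
# A minimal sketch of server-side email triage: block listed sender domains
# and hold messages containing flagged phrases for human review.
# The domain and addresses below are invented examples.
BLOCKED_DOMAINS = {"spam-example.com"}
FLAGGED_PHRASES = ("blow up the school",)

def triage(sender, body):
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in BLOCKED_DOMAINS:
        return "reject"           # never delivered
    lowered = body.lower()
    if any(phrase in lowered for phrase in FLAGGED_PHRASES):
        return "hold-for-review"  # passed to a designated staff member
    return "deliver"

print(triage("teacher@school.example", "Staff meeting moved to 3 pm"))  # deliver
print(triage("offers@spam-example.com", "Cheap watches!!!"))            # reject
```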
The policy should also clearly state that the organisation complies with legal processes, such as search warrants, court orders, or subpoenas seeking account information. Finally, the policy needs to state which uses are acceptable and which are not, and how policy breaches are handled.
The World Wide Web

What applies to school email in part applies also to the use of the World Wide Web at school. As with email, a school needs to have a carefully worded policy clearly specifying what is acceptable and unacceptable use of Web resources and what the consequences are for unacceptable Web use. These could range from blocking sites through to removing Internet privileges. The Web is opening up learning opportunities in education but is also exposing users to its less savoury side. A school's duty of care is to ensure that its staff and students are protected from inappropriate websites. Consider the well documented case of Amy Boyer (EPIC, 2006), a twenty-year-old who was stalked and murdered in Nashua, New Hampshire (USA). The stalker and murderer, Liam Youens, used the Web to get enough personal information about Amy to stalk her to her home. Youens also posted personal information about Amy, and his intentions, on the Web. This case, whilst morally repugnant, does indicate the sinister side of the Web, and it allows us to examine in detail the moral and ethical issues for schools, users, Internet Service Providers (ISPs) and legislators. The Amy Boyer case turns on the amount of, type of, and access to an individual's personal and public information. Information about anyone could be considered public if it is widely known or is in the public domain, for example in newspaper reports. Private information is information which you don't want widely known, for example your tax file or social security number or bank account details. Personal information is often collected by a person or organisation and stored on the Web. A school might take pictures of students at a swimming carnival and publish those images, with the names of the children, in the school newsletter.
This newsletter is subsequently made available on the school's website. The images and names of these children are now public. This might seem harmless, but some users of the Web may have illicit or harmful intent. In Australia recently, Adrian Mayne, a groundsman at a Tasmanian school, used school equipment to download pornography (ABC News, 2007). He then used the school's digital camera to take pictures of female students (mostly infant students) and teachers in the school toilets, through holes he had drilled through the wall, floor and ceilings of the toilet blocks. Again using school equipment, Mayne superimposed the images of staff and students onto the Web-based pornographic images he had collected. Apart from its legal implications, the Mayne case shows that not all those who use the Web, work with children or take their pictures do so with 'pure' intentions. Within schools, one of the biggest Web worries for teachers, parents and school administrators is that of students or staff accessing or otherwise being exposed to perverted images. As the computers and resources belong to the school, censorship is not an issue. The school owns the resource and has the legal right to control when and how this resource is used. As censorship of pornography by a school is quite legal, it should follow that there are no ethical issues involved. However, as with other aspects discussed here, it is not that simple. Schools trying to avoid exposure to pornography, or to other material that they feel is inappropriate, such as discrimination, suicide or bomb-making, are using filtering software such as Child Safe, Cyber Patrol or Net Nanny. The latter tracks websites, chat rooms and news groups. This particular software, like many others, creates a log of users and the websites that they have used, as sketched below. Whilst many of these filtering software programs can be adjusted for different users, ethical concern centres on the tracking of a person's use of the Web without their consent and in some cases without their knowledge.
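In outline, such a filter combines a blocklist with per-user logging, and it is the logging that raises the consent concern. The following sketch is illustrative only; the blocked site, blocked word and logging behaviour are invented placeholders, not any vendor's actual database or design.

```python
# A minimal sketch of blocklist filtering with per-user logging.
# Blocked entries are invented; real products ship updatable databases.
from datetime import datetime

BLOCKED_SITES = {"adult-example.com"}
BLOCKED_WORDS = {"bomb-making"}
access_log = []  # (timestamp, user, url) - kept whether or not access is blocked

def allow_request(user, url, page_text):
    # Every request is logged, which the user may neither know about nor
    # have consented to.
    access_log.append((datetime.now().isoformat(), user, url))
    host = url.split("/")[2] if "//" in url else url.split("/")[0]
    if host in BLOCKED_SITES:
        return False
    return not any(word in page_text.lower() for word in BLOCKED_WORDS)

print(allow_request("student42", "http://adult-example.com/page", ""))  # False
print(len(access_log), "request(s) logged")
```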
So teachers and schools need to consider whether it is ethical to track and log Web use. Within the academic and research literature, no reports on this aspect of school Web use were found. However, in the business community there have been several cases of employees being reprimanded or dismissed for viewing pornography at work (see Day & Gehringer, n.d.; Young & Case, 2004). The ethics of logging use of websites is like the ethics of "cookies". Cookies are data a website stores on a visiting computer to be used by that website on later visits. Cookies have three basic purposes. The first purpose is to track a user's transaction with the website, so that if the user leaves the website before the transaction ends, for whatever reason, the cookie's data allow the transaction to be resumed when the user returns. In this case the cookie is usually stored temporarily on the user's computer and is mainly used by the website to capture personal and financial data. The second purpose is to allow a website to collect data from the user across many transactions. For this, the cookie is stored on the user's hard drive. The similar third purpose is to allow a third party, such as an advertiser, to put cookies on a user's hard drive. If the user selects an advertisement, the data in the cookie can redirect the user's Web browser to other sites, usually as a pop-up. There are several ethical concerns regarding cookies. The first is that the most popular Web browsers used in schools have 'allow all cookies' as a default setting, thereby denying the naive user the choice of rejecting them. This is important because the user does not know how the data collected by the cookies will be used. Perhaps more of an ethical concern is that cookies can be put on a hard drive without the user's knowledge or consent. Data collected by cookies are personal, and their use by others invades privacy, as does a third party putting cookie programs on a user's hard drive without their consent or knowledge.
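The mechanism itself is simple, as the sketch below shows using Python's standard http.cookies module; the cookie name, value and lifetime are invented for illustration.

```python
# A minimal sketch of the cookie mechanism, using Python's standard library.
# The cookie name, value and lifetime are invented for illustration.
from http.cookies import SimpleCookie

# What a server sends: a Set-Cookie header that the browser stores and then
# silently returns on every later visit to the same site.
outgoing = SimpleCookie()
outgoing["visitor_id"] = "a1b2c3"                        # tracking identifier
outgoing["visitor_id"]["max-age"] = 60 * 60 * 24 * 365   # persists for a year
print(outgoing.output())  # Set-Cookie: visitor_id=a1b2c3; Max-Age=31536000

# What a server reads back: the Cookie header from the browser's next request,
# letting the site recognise the returning user without asking them anything.
incoming = SimpleCookie("visitor_id=a1b2c3")
print(incoming["visitor_id"].value)
```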
a school this may lead to more serious invasions by hacking or keystroke logging, which endanger the school’s network. To prevent such invasion of school computers by cookies or other programs, the Web browser default should be set to notify and ask the user’s permission to install cookies. Students and staff should also be told about Web tracking software and the purpose of cookies as an informed user is a better and safer user of the Web. A school using filtering software should have a clear and concise policy on its use and those who manage the school’s filter logs should be clearly accountable for the use or misuse of those data. One of the biggest and fastest growing uses of the Web is for pornography. Here pornography is taken to be any text, images, sounds or combinations thereof which depict people in sexual acts or in situations or poses that teachers and parents would agree should not be shown to students. Whilst this definition is incomplete, it is adequate for discussion of pornography and schools. Many countries have passed laws to protect children from pornography and to prosecute those who distribute or transmit it to people under legal viewing age. Schools use filtering software to hide pornography from students and staff, and have brought in Web use policies indicating what is allowed and forbidden on school computers, and specifying penalties for breaking policy. Filtering raises ethical questions about censorship. Schools typically use filtering software to block undesirable content. Programs such as Net Nanny maintain a database of words to filter out and of websites to hide, both of which are regularly updated by both the software provider and the school. While school staff and parents would not object to hiding pornography, it might be deemed unethical if other material were to be hidden. In the United States of America the Constitutional First Amendment enshrining the right to free speech might make it illegal to do so. Content filtering becomes more complex considering that Internet Service Providers (ISPs)
Content filtering becomes more complex when one considers that Internet Service Providers (ISPs) can also do it. Who decides what is hidden and what is exposed? When schools or parents use a filter (downstream filtering), they have decided what is allowed on their computers. However, when an ISP filters content (upstream filtering), or a government regulates what content is allowed, this raises concerns about freedom of speech. For example, when Yahoo, one of the world's largest search engines, negotiated entry into the restrictive Chinese Internet market, it agreed to follow regulations and restrictions set down by the Chinese government (Schonfeld, 2006). This meant that Web content freely available outside China was not allowed into China. The government, through Yahoo, could also monitor and track Web use. In essence, the government could use Yahoo not only to control Web access but also to act as a pseudo-spy.

Beyond pornography and filtering, ethical questions arise about Web content relating to racism, bomb making and suicide. Two examples illustrate how and why a school might deal with such content. Many people, young and old, use cheap and simple digital video cameras, on mobile phones or otherwise, to record events, mostly special occasions, innocent and otherwise. Because these images are digital, they may easily be sent through email or stored on websites such as YouTube. Two shocking cases recently gripped the Australian public. In one case a group of high school students used their mobile camera phones to record students physically and verbally abusing others in a school classroom. A bullied student was placed in a garbage bin, which was repeatedly hit with a bat and kicked. The bullies then used their mobile phones to send the pictures from student to student across the school and beyond the school gates. This shows how easy it is to transmit digital content, regardless of its nature, particularly within schools (Rout, Metlikovec & Dowsley, 2007).
The ethical and legal aspects of the bullying were compounded by the use of digital technology to publicize it. The second and more disturbing case also concerns a group of teenage boys. The group used an online chat room, partly through school computers, to befriend an intellectually disabled teenage girl and lure her to a local park (The Australian, 2007). At the park the boys systematically humiliated and violated the girl. They used their camera phones to record these acts, which ranged from throwing some of her clothes into a lake through to setting her hair and clothes on fire using a cigarette lighter and an aerosol can. The abuse then became sexual when several of the boys made the girl perform sexual acts on them, all while others filmed the events. The boys then left the humiliated, demoralised and abused girl alone in the park while they sent the images to each other and to friends. Several of the boys then put the images onto YouTube and DVDs, which they advertised on the Web and sold to friends and strangers.

What can we learn from these two cases? Firstly, both feature illegal, immoral and unethical behaviour. Examining their complexities, it can be seen that each involved a number of individuals who went along with the acts, so the question of individual and social responsibility needs to be considered. In both cases the participants had the opportunity to stop the acts from escalating and to report them to the authorities. In a similar way, the individuals who viewed the websites Liam Youens created about Amy Boyer (see www.amyboyer.org) had the opportunity to report their concerns to the authorities. In addition, the ISPs that hosted these websites could have prevented Youens from posting his hateful and ultimately murderous intentions on the Web. However, neither the general public nor the ISPs acted to prevent Youens from posting his websites. In a school context, teachers and school administrators need to monitor carefully the content that staff and students place on school web pages.
As explained in the previous section on the Internet, website content and use in schools need to be carefully monitored. Students should also be taught about the responsible use of the Internet and what to do if they believe Internet information or content violates their moral, religious or legal beliefs or rights. By encouraging students to take a stance against illegal or unethical website content, schools put them in a better position to influence the shape of the Internet and to become more ethical users and producers of Internet materials.
TECHNOETHICS IN AN EVOLVING DIGITAL WORLD

Before discussing future technologies and their TE implications, we need to look at the digital technology divide, which simply refers to the gap between those who can use modern digital technology and those who cannot. In education there will be schools and students who have the latest equipment and there will be those who have little or none. The digital divide in education is not just the obvious international divide; it is also the gap in level of equipment and services between schools within a country, which creates a gap between technologically literate and illiterate students. Because different groups in a society have different levels of access to digital technology, the ethical obligations of each group will consequently differ. The digital divide is an ethical issue just as education and health care divides are. Those without adequate access to and training in digital technologies are disadvantaged, as they would be without education or health care. This analogy is supported by Moss (2002), who holds that people without access to technologies are ultimately deprived of resources vital for their wellbeing, because their access to knowledge and their ability to participate fully in society and economic processes are hindered or greatly diminished. In a liberal society there is an obligation to see that the poor or disadvantaged are not left behind in an increasingly digital world.
To lessen the digital divide, liberal societies provide public access to modern digital technology by making it available, freely or at low cost, in public schools and in public spaces such as libraries and online access centres. The education system tries to bridge the digital divide by providing digital technology within the school, but schools need to do much, much more if they are to succeed. As students progress from preschool through to college, the complexity of subject matter naturally increases, and so does the need for resources to help learn it, typically including material found on the Web. To counter the digital divide in Web access and computers, schools and governments across the globe have strategies that range from doing nothing through to subsidising Internet use and home computers, in some cases giving obsolescent computers to disadvantaged families.

In schools the digital divide goes much deeper than the 'haves and have-nots'. It also affects how people of different races, genders and ability groups can use the available technology. The low rates of technology use by girls and by some ethnic groups, particularly American Hispanics and Australian Aborigines (Digital Divide, 2007), have ethical implications. Some of these differences are not simply a matter of different access to the technology or to education. For example, males and females in developed countries have similar compulsory education participation rates, similar attitudes toward technology, and very similar home and school access to digital technology (Volman & van Eck, 2001). However, the literature and the reality in the classroom indicate that boys tend to use the technology much more in the classroom and workplace (Gehring, 2001; Weinman & Haag, 1999). Ethically, should special efforts be made to encourage the use of digital technology by girls and women? Strategies for increasing usage by females are very similar to those for increasing usage by ethnic minorities.
There are four general aspects to consider. The first is the pedagogical attitude of the school and teaching staff. Leading educational technologists Grabe and Grabe (2007) believe that "tinkering" with technology empowers learners to learn not only about the subject matter but also about the technology. For girls, they believe, such tinkering fosters higher-order thinking skills and moves them beyond simple application experiences, such as using PowerPoint to go from displaying static data to building multimedia presentations or webquests (www.webquest.org). The second aspect is that the technology needs to cater for different learning needs and styles. This may mean that software developers, in particular, need to make programs adapt to less capable users (Ching, Kafai & Marshall, 1998). To make digital technology more equitable, schools need to ensure that their software suits the curricula and learning styles of all their students. This may mean that in mathematics, for instance, the software for boys is 'game' oriented while that for girls is based more on solving problems (Ching, Kafai & Marshall, 1998). The third aspect is that schools need to ensure that students with disabilities can adapt the hardware and software to their use, both of the technology and of their curriculum. This will not usually be easy. Students with visual impairment will find many websites hard to read, though screen reader programs can help if the webpage has been designed for such use. Web designers generally design webpages for older browsers but not always for adaptive devices; this raises questions of equity, and perhaps of discrimination if done deliberately. The World Wide Web Consortium (1999) has issued guidelines to help Web developers ensure that their webpages can be used by adaptive devices. Schools should support the Consortium's initiative firstly by ensuring that their own webpages conform to the guidelines, secondly by notifying website owners if students using adaptive devices have problems with their sites, and thirdly by linking to the Consortium's guidelines to foster websites that are accessible to those with disabilities.
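One check the Consortium's guidelines call for can be made concrete: images without alternative text are invisible to the screen readers mentioned above. A minimal sketch of such a check, using only Python's standard library (a single illustrative test, not a full accessibility audit):

```python
# A minimal sketch of one accessibility check implied by the W3C guidelines:
# flag <img> tags that lack the alt text a screen reader needs. Real audits
# cover far more (headings, labels, contrast, keyboard access, and so on).
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []        # sources of images that lack alt text

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<img src="logo.png" alt="School logo"><img src="map.png">')
print(checker.missing)           # -> ['map.png']
```

A school could run a script of this kind over its own pages before notifying external website owners of problems.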
The fourth aspect is that industry needs a philosophical shift so that diversity in society can be respected and catered for in the workplace. Girls and those from minority groups will then value digital technology in schools because they can see how they will be able to use it in the workplace.
FUTURE TRENDS

In thinking and writing about TE in schools, it appears that as the technology develops, and as our use of it evolves, new TE issues will arise for which we have not prepared. Technology is becoming cheaper, smaller, more mobile and more versatile. For the education sector this means we need to stay conversant with technological change and plan for how it can be used and abused by consumers. This will allow us to better prepare and educate students for an increasingly digital world.
CONCLUSION

The issues that confront us in the digital era are certainly not new, but the contextual complexities associated with new technologies do demand that we revisit and redesign how we work with technology and how we view the ethical issues around its use. Dewey (1972), a leading educational theorist, once wrote: "technological revolution is not a matter of distinguishing technologies from the ways in which we use them, because our technologies are the ways we use them". In writing this chapter we have investigated and pondered this profound statement. Writing over thirty-five years ago, before the advent of mass computing and the Internet revolution, Dewey succinctly observed that it is the user who determines how a technology is used and abused. So we have two choices: take the stance of the English mill workers
of 1811–1812, who burnt weaving mills in an attempt to stop the industrial revolution (giving us the term 'Luddite'), or embrace the technology and educate the populace on how to use it while minimising its harm. The latter is the approach we have adopted in examining current and future issues in education, for we firmly believe that by educating students we put them in a better position to become ethical users and producers of digital resources.
REFERENCES

ABC News. (2007). Child pornographer pleads guilty. Retrieved May 17, 2007, from www.abc.net.au/news/items/200705/1918736.htm?tasmania

Ashworth, P., Bannister, P., & Thorne, P. (1997). Guilty in whose eyes? University students' perceptions of cheating and plagiarism in academic work and assessment. Studies in Higher Education, 22(2), 137–148.

Briggs, J., & Peck, M. (2003). A case study based analysis on behalf of the Office of Government Commerce. Retrieved May 17, 2007, from www.ogc.gov.uk/index.asp?id=2190

Ching, C., Kafai, Y., & Marshall, S. (1998). Give girls some space: Considering gender in collaborative software programming activities. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 1998 (pp. 56–62). Chesapeake, VA: AACE.

Commonwealth of Australia. (2005). Anti-Terrorism Act 2005. Retrieved May 17, 2007, from http://www.comlaw.gov.au/ComLaw/Legislation/Act1.nsf/0/53D2DEBD3AFB7825CA2570B2000B29D5?OpenDocument

Copyleft. (2006). CopyLeft. Retrieved May 17, 2007, from http://www.gnu.org/copyleft/

Dahl, M. (2005, September 24). High-tech cheating comes to high schools. Detroit School News. Retrieved May 17, 2007, from http://www.detnews.com/2005/schools/0509/24/0scho-325779.htm

Day, M., & Gehringer, E. (unknown). Educators and pornography: The "unacceptable use" of school computers. Retrieved May 17, 2007, from http://research.csc.ncsu.edu/efg/ethics/papers/acceptableuse.pdf

Dewey, J. (1972). The collected works of John Dewey. Cited in Hickman, L. (1996). Techne and politeia revisited: Pragmatic paths to technological revolution. Society for Philosophy and Technology, 1(3–4), September. Retrieved May 17, 2007, from http://scholar.lib.vt.edu/ejournals/SPT/v1_n3n4/Hickman.html

Digital Divide Organisation. (2007). Ushering in the second digital revolution. Retrieved May 17, 2007, from www.digitaldivide.org/dd/index.html

Electronic Privacy Information Centre (EPIC). (2006). The Amy Boyer case. Retrieved May 17, 2007, from http://www.epic.org/privacy/boyer/

Elgin, M. (2007, May 12). School iPod ban is cheating students. PC Advisor. Retrieved May 17, 2007, from http://www.pcadvisor.co.uk/news/index.cfm?newsid=9340

Ercegovac, Z., & Richardson, J., Jr. (2004, July). Academic dishonesty, plagiarism included, in the digital age: A literature review. College and Research Libraries. Retrieved May 18, 2007, from http://privateschool.about.com/cs/forteachers/a/cheating.htm

Gehring, J. (2001). Not enough girls. Education Week, 20(35), 18–19.

Government of the United States of America. (1998). The Digital Millennium Copyright Act of 1998. Washington, DC: U.S. Copyright Office. Retrieved May 17, 2007, from www.copyright.gov/legislation/dmca.pdf

Grabe, M., & Grabe, C. (2007). Integrating technology for meaningful learning (5th ed.). New York: Houghton Mifflin Company.

Granitz, N., & Loewy, D. (2007). Applying ethical theories: Interpreting and responding to student plagiarism. Journal of Business Ethics, 72, 293–306.

Groark, M., Oblinger, D., & Choa, M. (2001). Term paper mills, anti-plagiarism tools, and academic integrity. EDUCAUSE Review, 36(5), 40–48.

Hudson, F., & Moyle, K. (2004). Open source software suitable for use in Australian and New Zealand schools: A review of technical documentation. Department of Education and Children's Services, South Australia. Retrieved May 17, 2007, from http://www.mceetya.edu.au/verve/_resources/open_source_aust_nz.pdf

Johnston, D. (1991). Cheating: Reflections on a moral dilemma. Journal of Moral Education, 20, 283–291.

Lathrop, A., & Foss, K. (2000). Student cheating and plagiarism in the Internet era: A wake-up call. Englewood, CO: Libraries Unlimited.

Macnamara, D. (2007, January). Leveraging IP for learning knowledge, a key resource for the 21st century. Australian College of Educators edventures, 8.

McCabe, D. (2001). Students cheating in American high schools. Retrieved May 17, 2007, from http://www.academicintegrity.org/hs01.asp

Moon, J. (1999, September 3). How to ... stop students from cheating. The Times Higher Education Supplement. Retrieved May 17, 2007, from http://72.14.253.104/search?q=cache:2IyJTIUJdhUJ:www.jiscpas.ac.uk/images/bin/underwoodtertiary.doc+%22The+Times+Higher+Education+Supplement+September+3,+1999&hl=en&ct=clnk&cd=1

Moor, J. (2005). Why we need better ethics for emerging technologies. Ethics and Information Technology, 7, 111–119.

Moss, J. (2002). Power and the digital divide. Ethics and Information Technology, 4(2), 159–165.

Office of Government Commerce. (2002). Open source software: Guidance on implementing UK Government policy. Retrieved May 17, 2007, from www.ogc.gov.uk/index.asp?id=2190

Office of Government Commerce. (2004). Government open source software trials final report. Retrieved May 17, 2007, from www.ogc.gov.uk/index.asp?id=2190

Rout, M., Metlikovec, J., & Dowsley, A. (2007, April 13). Bully cam shot by students. Herald Sun. Retrieved May 17, 2007, from http://www.news.com.au/heraldsun/story/0,21985,21546947661,00.html

Schonfeld, E. (2006). Analysis: Yahoo's China problem. CNNMoney.com. Retrieved May 16, 2007, from http://money.cnn.com/2006/02/08/technology/yahoo_china_b20/

Socol, I. (2006, May 25). Stop chasing high-tech cheaters. Inside Higher Ed. Retrieved May 17, 2007, from http://www.insidehighered.com/views/2006/05/25/socol

Swain, C., & Gilmore, E. (2001). Repackaging for the 21st century: Teaching copyright and computer ethics in teacher education courses. Contemporary Issues in Technology and Teacher Education [Online serial], 1(4). Retrieved May 17, 2007, from http://www.citejournal.org/vol1/iss4/currentpractice/article1.htm

Tavani, H. (2007). Ethics and technology: Ethical issues in an age of information and communication technology. MA: John Wiley and Sons.

The Australian. (2007). YouTube banned from schools in bullying crackdown. The Australian online. Retrieved May 17, 2007, from www.theaustralian.news.com.au/story/0,20867,213062975006785,00.html

The Lectric Law Library. (n.d.). Lexicon on intellectual property. Retrieved May 17, 2007, from http://www.lectlaw.com/def/i051.htm

Tinker, R. (2000, September 11–12). Ice machines, steamboats, and education: Structural change and educational technologies. Paper presented at The Secretary's Conference on Educational Technology. Retrieved May 17, 2007, from http://www.ed.gov/rschstat/eval/tech/techconf00/tinkerpaper.pdf

Underwood, J., & Szabo, A. (2003). Academic offences and e-learning: Individual propensities in cheating. British Journal of Educational Technology, 34(4), 467–477.

United Kingdom. (2006). Terrorism Act 2006. Retrieved May 17, 2007, from http://www.homeoffice.gov.uk/security/terrorism-and-the-law/

United Nations. (1948). Universal Declaration of Human Rights. Retrieved May 17, 2007, from http://www.un.org/Overview/rights.html

United States Government Federal Trade Commission. (2000). Children's Online Privacy Protection Act (COPPA). Retrieved May 17, 2007, from http://www.coppa.org/

United States of America. (2001). The USA PATRIOT Act. Retrieved May 17, 2007, from http://www.lifeandliberty.gov/highlights.htm

University of Tasmania. (2007). Faculty of Education unit outline. Launceston: University of Tasmania.

Volman, M., & van Eck, E. (2001). Gender equity and information technology in education: The second decade. Review of Educational Research, 71(4), 613–634.

Weinman, J., & Haag, P. (1999). Gender equity in cyberspace. Educational Leadership, 56(5), 44–49.

Weiser, M. (1993). Some computer science issues in ubiquitous computing. Communications of the ACM, 36(7), 75–84.

Winston, R., & Saunders, S. (1998). Professional ethics in a risky world. In D. Cooper & J. Lancaster (Eds.), Beyond law and policy: Reaffirming the role of student affairs. New Directions for Student Services, 82, 77–94. San Francisco: Jossey-Bass.

World Wide Web Consortium. (1999). Web accessibility initiative page author guidelines. Retrieved May 17, 2007, from http://www.w3.org/TR/WD-WAI-PAGEAUTH

Young, K., & Case, C. (2004). Internet abuse in the workplace: New trends in risk management. CyberPsychology and Behavior, 7(1), 105–111. Retrieved May 17, 2007, from http://www.netaddiction.com/articles/eia_new_trends.pdf
KEY TERMS

Cookies: Messages given to a Web browser by a Web server. The browser stores each message in a text file and sends it back to the server every time it requests a page from that server. The main purpose of cookies is to identify users and, possibly, to prepare customised Web pages for them: instead of a generic welcome page, for example, a returning user might see a welcome page bearing his or her name.
CopyLeft: A general method for making a program or other work free, and for requiring all modified and extended versions of the program to be free as well.

Digital Technology: The word "digital" comes from the Latin digitus (finger) and refers to one of the oldest tools for counting. When information is stored, transmitted or forwarded in digital format, it is converted into numbers, at the most basic machine level into "zeroes and ones". In the context of this chapter the term covers technology that relies on microprocessors: computers and computer-dependent applications such as the Internet, as well as other devices such as video cameras and mobile devices such as phones and personal digital assistants (PDAs).

Education: Education encompasses teaching and learning specific knowledge and skills, and also something less tangible: the imparting of "learning how to learn", or lifelong learning, which is based on knowledge, sound judgement and wisdom. Among its fundamental goals, education imparts culture from generation to generation, in addition to the skills and knowledge required to operate in society.

FOSS/FLOSS: FOSS is an acronym for Free and Open Source Software; as its component names suggest, the software can be used for any purpose, redistributed, or have its source code modified. Because the word "free" has many connotations in English, many programmers, users and advocates of FOSS have coined the term FLOSS (Free/Libre Open Source Software) to refer to the "open" nature of the source code, which can be legally modified and distributed. FOSS or FLOSS should not be confused with software that is free of charge, as products produced using FOSS/FLOSS source code can be offered either at no cost or for a fee.
Intellectual Property (IP): The general name given to the laws covering patents, trade marks, copyright, designs, circuit layouts and plant breeders' rights. Intellectual property laws protect the property rights in creative and inventive endeavours and give creators and inventors certain exclusive economic rights, generally for a limited time, to deal with their creative works or inventions.

Internet Privileges: A set of rules or expectations that an organization enforces for the proper and safe use of the computer and Internet resources for which it is accountable. Typically an Internet use policy covers acceptable and unacceptable behaviours and the consequences of unacceptable behaviour.
ENDNOTES

1. http://www.microsoft.com/interop/osp/default.mspx
2. The General Public License (GNU GPL, or simply GPL) is a widely used free software license; it is the license used by the GNU/Linux operating system. The GPL is the most popular and best-known example of the strong CopyLeft licenses that require derived works to be made available under the same CopyLeft terms.
3. By this we mean school computer use confined to word processors, spreadsheets, presentation software and the like, as opposed to a richer range of software comprising mostly tutorial, simulation and adventure activities.
Section V
Further Reading in Technoethics
Chapter XLV
Moral Psychology and Information Ethics:
Psychological Distance and the Components of Moral Behavior in a Digital World

Charles R. Crowell, University of Notre Dame, USA
Darcia Narvaez, University of Notre Dame, USA
Anna Gomberg, University of Notre Dame, USA
ABSTRACT

This chapter discusses the ways in which moral psychology can inform information ethics. A "Four Component Model" of moral behavior is described involving the synergistic influences of key factors including sensitivity, judgment, motivation, and action. Two technology-mediated domains, electronic communications and digital property, are then explored to illustrate how technology can impact each of the four components believed to underlie moral behavior. It is argued that technology can create a kind of "psychological distance" between those who use technology for communication or those who acquire and use digital property (e.g., software or music) and those who may be affected by such uses (e.g., e-mail recipients or digital property owners). This "distance" potentially impacts all four components of moral behavior in such a way that the usual social or moral constraints operative under normal (non-technology-mediated) circumstances (e.g., face-to-face communication) may be reduced, thereby facilitating the occurrence of unethical activities like piracy, hacking, or flaming. Recognition of the potential deleterious impact of technology on each of the four components leads to a better understanding of how specific educational interventions can be devised to strengthen moral sensitivity, judgment, motivation, and action within the context of our increasingly digital world.
INTRODUCTION

We ignore ethics and computing at our peril! (Rogerson & Bynum, 1995)

Unethical behavior is pervasive and timeless, as is the question of why people do bad things. What makes some people behave morally or ethically and others not? Psychologists interested in moral development have attempted to answer such questions by examining the psychological components of morality, the elements that work in concert to bring about moral behavior (Rest, 1983). Emerging from this work is a model of moral behavior that identifies the joint action of four psychological processes: sensitivity, judgment, motivation, and action (Narvaez & Rest, 1995).

Certainly, the "information age" has been accompanied by its share of technology-related ethical issues and challenges. Interestingly, many (if not most) of these challenges are not fundamentally new (Barger, 2001). Although there may well be exceptions, information technology appears to have created new and different ways to engage in the same kinds of unethical behaviors seen throughout history, from stealing property to invading personal privacy (Johnson, 2001). Because these issues have been studied and analyzed for years in other contexts, it is all the more important for Information Science (IS) researchers and practitioners to be well acquainted with general principles of moral and ethical development. Indeed, it is now well attested that our perceptions of the moral landscape are influenced by developmental and social-cognitive factors (Lapsley & Narvaez, 2004). In order to plan educational interventions that help technology users develop appropriate ethical attitudes and behaviors with respect to their use of information technology, educators can take advantage of a wealth of knowledge about moral development from the field of moral psychology.

The purpose of this chapter is to acquaint those working in the field of Information Science with a psychological perspective on moral or ethical behavior.
In this chapter we examine key psychological processes that are critical for moral behavior, discuss how these processes function in the domain of technology, and suggest strategies to enhance education related to information ethics. At the outset, it is important to draw attention to our use of certain terms. While we make no substantive distinction between the terms "moral" and "ethical," there is an important difference between what may be considered "moral" and what is "legal," or conversely between what is "immoral" and what is "illegal." To be "legal" is to conform one's behavior to the laws established by the societies in which we live. Morality, on the other hand, is a matter of conformity to "divine law" or to codes of conduct derived from principles of right and wrong that transcend societal strictures. There is no automatic correspondence between that which is "legal" and that which is "moral," or vice versa. That is, depending on the society, what many would consider immoral practices may be legal (e.g., prostitution in Nevada), while some illegal practices (e.g., harboring Jewish fugitives in Nazi Germany during World War II) may be quite moral.
FOUR COMPONENT MODEL OF MORAL BEHAVIOR

The Four Component Model (Narvaez & Rest, 1995; Rest, 1983) represents the internal "processes" necessary for a moral act to ensue: moral sensitivity, moral judgment, moral motivation, and moral action. These components are not personality traits or virtues; rather, they are major units of analysis used to trace how a person responds in a particular social situation. The model depicts an "ensemble of processes," not a single, unitary one. Therefore, the operation of a single component does not predict moral behavior. Instead, behaving morally depends upon each process and the execution of the entire ensemble.
Each process involves cognitive, affective, and behavioral aspects that function together in fostering the completion of a moral action. Collectively, the following processes comprise the Four Component Model, presented in logical order:

(1) Ethical sensitivity involves perceiving the relevant elements in the situation and constructing an interpretation of those elements. This first component also includes consideration of what actions are possible, who and what might be affected by each possible action, and how the involved parties might react to possible outcomes.

(2) Ethical judgment relates to reasoning about the possible actions and deciding which is most moral or ethical.

(3) Ethical motivation involves prioritizing what is considered to be the most moral or ethical action over all others and being intent upon following that course.

(4) Ethical action combines strength of will with the social and psychological skills necessary to carry out the intended course of action. This fourth component is dependent both on having the requisite skills and on persisting in the face of any obstacles or challenges that may arise.

When considering moral or ethical behavior, a post-hoc analysis of the situation is often most helpful; in this way, we can point out where the processes might have failed. Consider a young adult who is tempted to download copyrighted music that has been illegally placed on a file-sharing system in violation of the owner's rights. Let's call this young adult "Jim" and examine the four component processes in an effort to understand what might happen. Moreover, let's assume that downloading music for which one has not paid under these circumstances is both illegal and immoral.
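Because the model is an ensemble in which any weak process can derail the whole, it can be pictured as a four-way conjunction. A minimal sketch of that reading follows; the component names come from the model, but the numeric scores and threshold are invented here purely for illustration, since the model itself specifies no such quantities:

```python
# A minimal sketch of the Four Component Model read as an ensemble: moral
# behavior requires every process to succeed, so one weak link fails the
# whole. The 0-1 scores and the 0.5 threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class MoralEpisode:
    sensitivity: float   # perceiving and interpreting the situation
    judgment: float      # reasoning about which action is most moral
    motivation: float    # prioritizing the moral action over other goals
    action: float        # skill and perseverance to carry it out

def behaves_morally(e: MoralEpisode, threshold: float = 0.5) -> bool:
    """No single component predicts moral behavior; all four must clear the bar."""
    return all(score > threshold
               for score in (e.sensitivity, e.judgment, e.motivation, e.action))

# Jim perceives the harm and judges downloading wrong, but his motivation
# is weak, so the ensemble, and the moral act, fails.
jim = MoralEpisode(sensitivity=0.8, judgment=0.7, motivation=0.3, action=0.9)
print(behaves_morally(jim))   # -> False
```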
Ethical Sensitivity

To respond to a situation in a moral way, a person must be able to perceive and interpret events in a way that leads to ethical action.
The person must be sensitive to situational cues and must be able to visualize various alternative actions in response to the situation. A morally sensitive person draws on many aspects, skills, techniques and components of interpersonal sensitivity, including taking the perspectives of others (role taking), cultivating empathy for and a sense of connection to others, and interpreting a situation by imagining what might happen and who might be affected. Individuals with higher empathy for others and with better perspective-taking skills are more likely to behave for the good of others in a manner that is said to be "pro-social" (Eisenberg, 1992). So if Jim, our young adult, has highly developed ethical sensitivity skills, he takes the perspectives of all the people involved in producing the music. He feels empathy for their welfare and a sense of concern for them. He considers the ramifications of downloading copyrighted material, including his own and other people's welfare and reactions.
Ethical Judgment

After Jim has identified the "lay of the land" through an active set of ethical sensitivity skills, he must determine which action to take. Ethical judgment has to do with assessing the possible actions and determining which is the most moral. Hundreds of research studies have demonstrated that individuals (male and female) develop increasingly sophisticated moral reasoning structures with age and experience, especially related to education (Rest, Narvaez, Bebeau, & Thoma, 1999). Jim could use one of several moral schemas (conceptual structures) in deciding what to do. Rest et al. (1999) have identified three schemas individuals access depending on their level of moral judgment development. Using the Personal Interests Schema (common in high school students and younger), Jim would consider what benefits himself the most and perhaps choose to download the music from the file-sharing server.
Alternatively, he might be worried about being caught and having to suffer the consequences, leading him to choose not to download. Given recent news reports that record companies intend to bring lawsuits against those participating in illegal sharing of copyrighted music files over the Internet, Jim's mother might have warned him against doing such things; that she may find out also might deter him, because he wants to be a good son. If his reasoning is even more sophisticated, he would be concerned about societal laws and social order (Maintaining Norms Schema). This would likely deter him, unless he subscribes to some other, non-civil set of laws (e.g., cult norms). Yet even more sophisticated (Postconventional Schema) reasoning would lead Jim to think of ideal social cooperation. At this level, he could behave as an Idealist by seeking an action that he could demand of anyone in his position (Kant's Categorical Imperative), or he could adopt the view of a Pragmatist by choosing his actions according to "what would bring about the greatest good for the greatest number." In either case, at the postconventional level of reasoning, Jim is likely to resist downloading.

In fact, Friedman (1997) has shown that moral sensitivity and reasoning are critical to adolescents' decisions and opinions regarding the acceptability of actions such as violating copyright protection by making illegal copies of computer programs (i.e., pirating) or invading someone's privacy through unauthorized access to his or her computer files (i.e., hacking). Friedman (1997) demonstrated that adolescents who viewed pirating and hacking as permissible did so not out of a lack of respect for property and privacy rights in general, but because they judged computer property to be different from other types of property (see the "Technology and Ethical Behavior" section below), suggesting that moral sensitivity (i.e., assigning moral relevance to some kinds of "property" and not others) was more at issue here than moral judgment.
The difference in question seems to be related to the relative lack of tangibility associated with digital instantiations of things like documents or songs (i.e., computer property) compared to things like bicycles or cars (i.e., physical property).
Ethical Motivation

After deciding that a particular action is the most moral, Jim must set aside other goals and interests to further its completion. He may have developed the dispositional skills necessary to maintain a sense of moral integrity, such as the ability to distract himself from his original (impulsive) goal to download. Jim can acquire these skills more easily if he is already conscientious and has cultivated a sense of responsibility to others, or if he has a religious orientation in which he derives meaning from a power greater than himself. Research suggests that persons who chronically maintain moral standards as central to the self are more likely to interpret situations and react in ways consistent with those standards (Lapsley & Narvaez, 2004). So, if Jim has not developed these qualities, he may give in to his initial impulse to download. In so doing, Jim would elevate other values (e.g., status, power, pleasure, or excitement) above the moral standards related to ethical action.
Ethical Action

The final component of the model comprises the skills that facilitate successful implementation of the moral action. Jim must know what steps are necessary to complete a moral action and must possess the perseverance to follow them. This component may be less salient in our hypothetical situation because it involves a singular personal decision to download or not. But imagine a more complex situation in which Jim has a friend who did illegally download copyrighted material on a campus computer. What should Jim do? If he decides to report the friend, he would need to know what steps to take and would need the motivation to follow through, even if it costs him the friendship.
Recall that the Four Component Model is a set of processes that, working in concert, result in moral behavior. This implies that the course of moral behavior may fail at any point due to a weakness in one or more processes. Some people may function well in one process but may be deficient in another. For instance, Jim may demonstrate great sensitivity but poor judgment skills, or he might make an excellent judgment but fail in follow-through. We next examine the domain of technology to see how it potentially affects information ethics and the four component processes outlined above.
TECHNOLOGY AND ETHICAL BEHAVIOR

While technology itself may not pose fundamentally new ethical challenges, it may well impinge in unique and important ways on one or more components of the model presented above. This, in turn, would be expected to affect ethical behavior. In this section, we briefly review some of the known ways in which technology can exert such influences.
Technology-Mediated Communication and "Psychological Distance"

A growing body of evidence suggests that technology-mediated communications may differ in important ways from face-to-face or other traditional forms of interpersonal interaction. Kiesler, Siegel, and McGuire (1984) have elaborated on this possibility by identifying several ways in which e-mail (perhaps the most used means of computer-mediated communication) may differ from other forms of communication. For instance, e-mail can be relatively rapid and can easily be configured to reach one or many recipients. Since it is predominantly textual, e-mail lacks the kinds of nonverbal cues that accompany face-to-face interactions and is devoid of the information conveyed by voice intonations and inflections.
In addition, e-mail can be viewed as a less personal medium of communication because the recipients are not actually present, leaving the audience either to be "imagined" by the sender or not envisioned at all. Thus, the normal triggers for empathy and interpersonal sensitivity that occur in face-to-face encounters are missing. As Sproull and Kiesler (1991) have noted, the reduced audience awareness during e-mail correspondence, owing to the fact that participants neither see nor hear one another as messages are sent or received, can have a variety of social-psychological consequences on both sides of the communication process. From the sender's perspective, unlike synchronous communication by phone or in person, no information is available as the message is being composed and delivered to guide clarity or stimulate adjustment based on recipient reactions. This can reduce a sender's sensitivity to the "social correctness" of the message and likewise reduce the sender's apprehension about being judged or evaluated by the recipient (Sproull & Kiesler, 1991). Similarly, the ephemeral nature of e-mail can render its recipients less sensitive to the sender's status or position and can compromise their ability to discern any affect or special points of emphasis intended by the sender, at least in the absence of special formatting or "emoticons" (Kiesler et al., 1984; Sproull & Kiesler, 1991). Moreover, the accepted conventions and boundaries that regulate more traditional communication do not necessarily apply to e-mail (Kiesler et al., 1984). This can blur distinctions of traditional importance (e.g., office vs. home, work hours vs. personal time) and can greatly diminish or abolish the use of commonly accepted communication protocols (e.g., letterheads) and other forms of etiquette (e.g., salutations). Also, those who correspond frequently by this electronic means may come to expect diminished response times to e-mail (Kiesler et al., 1984).
As a consequence of its altered social context and norms, computer-mediated communication may be distinctive in at least three important ways (Kiesler et al., 1984). First, when it is asynchronous, like e-mail, without the usual regulatory influence of the feedback inherent in real-time interactions, messages may be more difficult to understand and more challenging to compose with the desired level of clarity. Second, given a reduced sense of status among participants, electronic communications may be less formal and more like peer-to-peer interactions. Third, a reduced sense of audience may depress the self-regulation that is commonplace in more traditional communications, and may therefore render computer-mediated exchanges more open and less inhibited by normal social standards and boundaries.

Apparently, then, computer-mediated communication is less socially constrained than traditional forms of interpersonal interaction. In this way, the technological medium creates a kind of "psychological distance" between communicator and audience (Sumner & Hostetler, 2002). This factor has important implications for behavior within the medium. Of particular interest is the possibility that computer-mediated messages, exchanges, or discussions may be more open and frank than their traditional counterparts. That this might be true was strongly suggested by Weizenbaum's (1976) provocative observations of how people behaved with respect to "Eliza," a computer programmed to simulate a Rogerian psychotherapist. Weizenbaum noted that people appeared quite willing to reveal intimate issues to the computer, perhaps even more so than might be the case with an actual therapist (Sproull & Kiesler, 1991). Subsequent research confirmed that computer-mediated self-disclosure via an electronic survey is indeed qualitatively different from that obtained with a paper-and-pencil questionnaire, seemingly in favor of more open and honest responses, suggesting fewer social inhibitions (Kiesler & Sproull, 1986).
Sumner and Hostetler (2002) reported a similar finding in the context of e-conferencing. Moreover, comparing the efficacy of therapy using face-to-face, audio, and real-time video conferencing modes of communication, Day and Schneider (2002) found that clients participated more in the distance modes than in the face-to-face mode, although therapeutic outcomes were similar across all modes.

Decreased social inhibition may underlie a heightened tendency within computer-mediated communication to engage in ethically undesirable behavior. For example, being less inhibited in electronic communications can lead to a behavior known as "flaming," in which one makes derogatory or off-color comments via e-mail or in a chat room that very likely would not be made in a comparable face-to-face situation (Sproull & Kiesler, 1991). Of course, a reduced threat of physical retaliation also could play a role in activating this behavior. In addition, electronic communications may facilitate another ethically questionable activity, spamming (Johnson, 2001), probably not just because e-mail makes it easy or cost-effective to do, but also because social inhibitions related to initiating unsolicited communications may be reduced. Finally, Johnson (2001) describes a whole category of "virtual actions," such as "cyber-stalking" and "cyber-rape," that probably are influenced at least to some degree by the reduced social constraints associated with computer-mediated communication.

With respect to technology-mediated communication, then, it seems quite reasonable to suppose that its altered social context will impinge importantly on one or more of the four components of the model described above. For example, a technological communication medium that reduces audience awareness likely will decrease Ethical Sensitivity (Component 1). In turn, Ethical Judgments (Component 2) and Ethical Actions (Component 4) associated with this medium could depart from those expected under more conventional modes of communication.
However, the communication process is not the only aspect of human behavior in which ethics may be influenced by technology. Views of what constitutes "property" also may be affected, as discussed in the next section.
Perceptions of Digital Objects and Materials

As noted above, Ethical Sensitivity (Component 1) relates to how situations and objects are perceived. One way to think about the "psychological distance" associated with computer-mediated communication is as a form of altered ethical sensitivity: the interactive rules for face-to-face interpersonal communication are not as easily activated or applied in the cyberworld. Another way in which technology can impact Component 1 is by changing perceptions of what constitutes "property." Mounting evidence suggests that electronically encoded materials or objects are perceived differently than physical materials or objects. For example, Friedman (1997) reported the results of a 1988 study with high school students in which perceptions and ethical judgments about physical and digital objects were compared. Students made a clear distinction between physical objects that were private and those that were not: all students saw a trash receptacle on a street corner as not private, while 97% saw someone's bicycle as private property. Interestingly, however, only 25% of the students believed that a commercially published and copyrighted computer program was private property. Friedman did not find the latter result readily attributable either to a general lack of computer experience among the students or to their lack of knowledge about applicable copyright policies. Instead, a certain domain-specific sensitivity appears to be lacking.

In further assessing the matter of privacy, Friedman (1997) examined student perceptions of information in different locations: an individual's computer files, the contents of a notice tacked on a school bulletin board, and a personal diary.
Almost all students (97%) regarded the diary information as private, whereas everyone regarded the bulletin board notice as not private. In addition, a full third of the students saw the contents of the computer files as not private.

Teston (2002) was interested in determining whether the perception of software as non-private property noted in the Friedman (1997) study also characterized the views of middle school students. In a sample of 264 seventh graders, Teston found that the majority (55%) characterized software as "public" property. In addition, over 58% believed that any property rights of the software developers terminated at the time of purchase by a software user. Like Friedman, Teston found that a majority of participants held this view despite recognizing the applicability of copyright laws. While the percentages of students holding these beliefs about software differed in the Friedman and Teston studies, the fact that the data in these respective studies were collected 10 years apart cannot be ignored. Nonetheless, taken together, these findings reveal that digital instantiations of objects (e.g., programs) or materials (e.g., computer files) are viewed differently than their physical counterparts (e.g., diaries).
Digital Objects and Ethical Judgments

The differential perceptions of digital and non-digital materials reported by Friedman (1997) and Teston (2002) raise this question: how might ethical judgments (mediated by Component 2) differ with respect to these materials? One might expect that behavior considered ethically "wrong" in connection with tangible property or materials could be viewed differently when it comes to digital property or materials. That is, to the extent that digital objects or materials are perceived as less private than their more tangible counterparts, a greater moral permissiveness is likely to be attached to behavior involving those objects or materials.
Both Friedman's (1997) and Teston's (2002) findings confirmed these suspicions. In terms of property, Friedman observed that none of the students in her sample thought it was all right to take someone else's physical property (a bicycle). In contrast, 77% felt it was okay to copy someone else's computer program (i.e., pirate it) for their own use; 47% said it was all right to pirate a program to give to someone else; and 40% even approved of piracy for purposes of making a profit by selling the copies. In addition, 62% thought it was okay to pirate music to give away. With respect to materials, only 3% of the students said it was okay to read someone else's private diary, and only 10% said it was acceptable to read an open letter lying on someone else's desk. But when it came to materials in electronic form, 43% said it was fine to access someone else's computer files if you didn't read them, and 16% said it was okay to access and read someone else's files. Interestingly, however, no one in the sample approved of accessing and changing information in those files.

Teston (2002) found a similar pattern of results with younger adolescents. While only 10% of the students advocated taking someone else's bicycle, 52% thought it was okay to pirate software, and 65% found it all right to pirate music CDs. When the possibility of pirating digital objects via the Internet was explored, even greater latitude was observed: 60% of the students said it was okay to pirate software from the Internet, and 85% found it acceptable to pirate commercial music files in MP3 format. The increased permissiveness associated with digital property was highlighted by Teston's (2002) overall finding that 88% of those who advocated software piracy were opposed to stealing a bicycle. Thus, it seems that perceptions of digital objects and materials, as well as judgments about what constitutes appropriate behavior with respect to such materials, differ from those associated with more tangible objects.
Just as was noted for computer-mediated communication, wherein the electronic medium seems to "distance" communicator from audience, digital instantiations of property (i.e., programs, music, or information) seem to "distance" users from property owners. Consequently, in both cases, a kind of increased permissiveness can arise, resulting in situational behaviors (e.g., flaming, piracy) that may deviate from what would be observed in non-technologically mediated circumstances (i.e., situations involving face-to-face communication or tangible property), wherein more accepted codes of conduct probably would be followed. An interesting question here is the extent to which "distance," and its possible ameliorating effects on normal inhibitions, also may play a role in non-technology-mediated forms of communication in which sender and recipient are somewhat removed from one another (e.g., letters to the editor of a printed newspaper or magazine).
Digital World and Ethical Motivation

As intimated in the previous review, moral motivation (Component 3) can be altered in the digital arena. Whereas a bicycle connotes an "owner," software does not, and the usual rules concerning property rights do not engage. To the extent that people communicate in situations where the medium (e.g., technology) "distances" the person at the "other end" (e.g., a software developer or message recipient), recognition of the need to adhere to the usual norms or standards of conduct appears to be diminished. In turn, this "psychological distance" can alter the perception of consequences and harm to others, thereby increasing the motivational importance of personal interests.

What we have shown here, then, is that technology can influence the processing of morally relevant information by virtue of its distinctive effects on one or more of the processes that guide such behavior. Specifically, we have focused on two domains, communications and personal property, within which behavior seems to be influenced in unique ways when an electronic format is involved.
In these cases, the electronic format acts as if it establishes a kind of "psychological distance" between communicators and their audiences, as well as between people and property owned by others. This "distance" potentially impacts all four component processes involved in ethical action. Ethical Sensitivity can be reduced because the "distance" factor makes it more difficult to empathize with the audience or property owner who ultimately might be affected. Ethical Judgment may be altered because reduced empathy can reorder the priority of possible actions, such that what might be unethical in a different context (e.g., stealing a bike) becomes more acceptable (e.g., pirating software). In turn, Ethical Motivation can change because the "distance" makes it far less obvious who is potentially harmed, thereby elevating personal goals over concern for others, and because the lack of immediate social sanction makes the cyberworld appear more like a lawless free-for-all. Finally, Ethical Action is influenced by a "no harm, no foul" mentality, which can lead to unethical behavior (e.g., flaming, cyber-rape, pirating, illegally downloading MP3s, hacking into personal computer files, or plagiarizing the work of others). Since some aspects of cyberspace, like the Internet, are in the public domain, the "problem of the commons" comes into play. Clearly, many can be (and have been) hurt by the abuses of a few in the cyberworld. The recent rash of annoying or harmful computer viruses and worms is but one marked example of this abuse.

Two further points about the effects of technology on behavior should be noted here. First, we must acknowledge that technology may have many more influences on human action than those we have focused on here; we do not pretend to have offered an exhaustive look at the possibilities. Second, not all of the consequences of technology are bad. Even in terms of the "psychological distance" factor we have identified, there are some instances in which enhanced self-disclosure or a reduced sense of evaluation anxiety mediated by a technological format may in fact be beneficial.
For example, using technologically mediated communication channels, shy patients may feel more comfortable revealing important kinds of information to doctors or therapists. Similarly, students reluctant to participate in class might "open up" using electronic discussion boards or chat rooms.
Information Ethics Education

The field of information ethics is complex and multidimensional (see Johnson, 2001, for a review). We advocate, as have others (e.g., Smith, 1992), that this topic should be well represented in the curriculum of any program dealing with information science. At the same time, however, it is clear that IS majors and professionals are not the only people in need of information ethics education. The pervasive use of technology today by the general public, through the Internet, personal digital assistants (PDAs), and other means, strongly suggests that heightened awareness of information ethics should be engendered across the board. Although the exact ways by which this ambitious goal can be achieved are not immediately clear, the work of Friedman (1997) and Teston (2002) suggests that information ethics education should begin in the early grades. Using the Four Component Model as a framework, we make the following suggestions for learning experiences that can enhance the development of each process within the domain of information ethics. These activities can be adapted for both pre- and post-secondary educational contexts. Due to space limitations, our treatment here is necessarily brief. For more detailed suggestions, we recommend consulting work by Narvaez and colleagues (Narvaez, in press; Narvaez, 2001; Narvaez, Bock & Endicott, 2003; Narvaez, Endicott, Bock & Lies, in press), who have parsed each component process into a set of specific skills. The learning experiences outlined below
presume that a list of information ethics situations has been generated that can be used in discussions about each component, as has been done in other domains (Rest & Narvaez, 1994).
Developing Ethical Sensitivity

To increase ethical sensitivity, students should spend considerable time practicing ethical problem solving in many contexts, with guidance from someone more expert, that is, someone familiar with the ethical landscape of the domain. Students also should spend time interpreting situations (e.g., determining what is happening, perceiving the moral aspects, responding creatively). For situations involving information technology, the gap between communicator and audience, or user and property owner, imposed by the technologically inspired "psychological distance" described above must be narrowed so that proper ethical sensitivity can be achieved. Here we would recommend exercises designed to enhance personal empathy skills, particularly as they relate to technology use. These exercises would focus on highlighting who is affected by personal technology use: Who is on the other end of that communication, or who really owns that resource? How would you react, or what would you expect, if you were in their position? Students might be encouraged to imagine the person on the other end of the communication as someone they know, as usually happens when instant messaging with friends.
Developing Ethical Judgment

To increase ethical reasoning, students should discuss moral dilemmas (hypothetical and real) that bring about cognitive conflict and challenge their thinking; they should discuss their reasoning with peers (especially peers with different viewpoints); and they should practice perspective-taking, both generally and within the technology domain, in order to learn to view
the world from multiple perspectives (Lapsley, Enright, & Serlin, 1989). Ethical reasoning skills include reasoning about standards and ideals, using moral codes (e.g., discerning moral code application), understanding consequences (e.g., predicting consequences), reflecting on process and outcome (e.g., reasoning about means and ends, monitoring one's reasoning, making right choices), and learning to choose environments that support moral behavior. Exercises in this category should enhance the ability to distinguish what is ethical from what is not and to reason about possible actions. Important in this effort would be creating an awareness of the relevant moral and ethical standards in question. For example, in terms of information ethics, students should be exposed to established codes of conduct like the "Ten Commandments of Computer Ethics" (Barquin, 1992). At the very least, such exposure should be accompanied by discussion of these codes in the context of an examination of what behavior is and is not consistent with them.
Developing Ethical Motivation

Ethical motivation skills include cultivating conscience (e.g., developing self-command), acting responsibly (e.g., meeting obligations, being a global citizen), valuing traditions and institutions (e.g., understanding social structures), and developing ethical identity and integrity (e.g., choosing good values, reaching for one's potential). In addition, students should be encouraged to build a self-concept as an ethical person (Grusec & Redler, 1980) and should learn about, and be encouraged to adhere to, personal, professional, and societal codes of ethics. In terms of technology use, these exercises should acquaint users with institutional "fair use" policies, which normally include statements about the consequences of violations, and should allow for exploration of existing mandates (or laws) and consequences related to domains like privacy, intellectual property, and intellectual honesty.
Developing Ethical Action

Ethical action skills include planning the implementation of decisions (e.g., thinking strategically) and cultivating courage (e.g., standing up under pressure). To increase the ability to complete an ethical action, students need to develop ego strength (i.e., strength of will) and specific implementation skills. To increase ego strength, students should learn "self-talk" that encourages them toward a moral goal and distracts them from temptation. They should also know how to mobilize support from others for the ethical action. To increase implementation skills, students need to observe models implementing specific skills, and they need to practice implementing, step by step, a particular ethical action in multiple contexts. For information ethics, a primary focus might be on identifying obstacles and challenges to ethical action: What tends to get in the way of doing what is right, and how can such challenges be managed? Peer pressure, of course, is a perennial challenge in this regard that should be considered at some length.
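For readers who wish to turn these suggestions into concrete lesson plans, the mapping from component to example activities can be summarised compactly. The following is a minimal, hypothetical sketch in Python: the activity descriptions paraphrase the text above, but the dictionary and the lesson_outline function are our own illustrative assumptions, not part of the Four Component Model literature.

```python
# Illustrative sketch only: the component names and activity descriptions
# paraphrase the chapter; this data structure and function are hypothetical
# teaching aids, not part of the Four Component Model itself.

FOUR_COMPONENT_ACTIVITIES = {
    "Ethical Sensitivity": [
        "Guided ethical problem solving across many contexts",
        "Empathy exercises: who is on the other end of this communication?",
    ],
    "Ethical Judgment": [
        "Discussing hypothetical and real moral dilemmas with peers",
        "Examining codes such as the 'Ten Commandments of Computer Ethics'",
    ],
    "Ethical Motivation": [
        "Building a self-concept as an ethical person",
        "Reviewing institutional 'fair use' policies and their consequences",
    ],
    "Ethical Action": [
        "Practicing 'self-talk' toward moral goals (ego strength)",
        "Step-by-step rehearsal of an ethical action in multiple contexts",
    ],
}


def lesson_outline(component: str) -> str:
    """Return a short plain-text outline of activities for one component."""
    try:
        activities = FOUR_COMPONENT_ACTIVITIES[component]
    except KeyError:
        raise KeyError(f"Unknown component: {component!r}") from None
    bullets = "\n".join(f"  - {activity}" for activity in activities)
    return f"{component}:\n{bullets}"


if __name__ == "__main__":
    for name in FOUR_COMPONENT_ACTIVITIES:
        print(lesson_outline(name))
        print()
```

Such a lookup table could, for instance, be extended with grade levels or assessment prompts; the point is only that the model's four processes give a natural organising structure for curriculum materials.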
Conclusion

In this chapter, we have argued that information ethics can be informed by moral psychology: specifically, by the Four Component Model of moral behavior. Moreover, we have examined some of the ways in which technology may impinge on the components of moral action through the creation of "psychological distance." Further research is needed to study such questions as how a sense of social embeddedness can be facilitated and how "psychological distance" can be reduced in the cyberworld. For example, in technology-mediated communication, can "psychological distance" be reduced by incorporating visual representations of the audience through photos, video, or digital representations (i.e., avatars)? There is no doubt that technology use will continue and even escalate with time. Therefore, it is imperative to examine continuously the ways in which our understanding of technology's impact and implications for personal and societal behavior can be guided by principles derived from other fields of study. Establishing clear ties between the fields of moral psychology and information ethics is a good place to start.
References

Barger, R. (2001). Is computer ethics unique in relation to other fields of ethics? Retrieved September 9, 2003, from http://www.nd.edu/~rbarger/ce-unique.html

Barquin, R. C. (1992). In pursuit of a 'ten commandments' for computer ethics. Computer Ethics Institute. Retrieved September 9, 2003, from http://www.brook.edu/dybdocroot/its/cei/papers/Barquin_Pursuit_1992.htm

Day, S., & Schneider, P. L. (2002). Psychotherapy using distance technology: A comparison of face-to-face, video, and audio treatment. Journal of Counseling Psychology, 49, 499-503.

Eisenberg, N. (1992). The caring child. Cambridge, MA: Harvard University Press.

Friedman, B. (1997). Social judgments and technological innovation: Adolescents' understanding of property, privacy, and electronic information. Computers in Human Behavior, 13(3), 327-351.

Grusec, J., & Redler, E. (1980). Attribution, reinforcement, and altruism: A developmental analysis. Developmental Psychology, 16, 525-534.

Johnson, D. G. (2001). Computer ethics (3rd ed.). Upper Saddle River, NJ: Prentice Hall.

Lapsley, D. K., Enright, R. D., & Serlin, R. (1989). Moral and social education. In J. Worell & F. Danner (Eds.), The adolescent as decision-maker: Applications to development and education (pp. 111-143). San Diego, CA: Academic Press.
Lapsley, D., & Narvaez, D. (2004). A social-cognitive view of moral character. In D. Lapsley & D. Narvaez (Eds.), Moral development: Self and identity (pp. 189-212). Mahwah, NJ: Erlbaum.

Kiesler, S., Siegel, J., & McGuire, T. W. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39, 1123-1134.

Kiesler, S., & Sproull, L. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402-413.

Narvaez, D. (2001). Moral text comprehension: Implications for education and research. Journal of Moral Education, 30(1), 43-54.

Narvaez, D. (in press). The Neo-Kohlbergian tradition and beyond: Schemas, expertise and character. In C. Pope-Edwards & G. Carlo (Eds.), Nebraska Symposium Conference Papers, Vol. 51. Lincoln, NE: University of Nebraska Press.

Narvaez, D., Bock, T., & Endicott, L. (2003). Who should I become? Citizenship, goodness, human flourishing, and ethical expertise. In W. Veugelers & F. K. Oser (Eds.), Teaching in moral and democratic education (pp. 43-63). Bern, Switzerland: Peter Lang.

Narvaez, D., Endicott, L., Bock, T., & Lies, J. (in press). Foundations of character in the middle school: Developing and nurturing the ethical student. Chapel Hill, NC: Character Development Publishing.

Narvaez, D., & Rest, J. (1995). The four components of acting morally. In W. Kurtines & J. Gewirtz (Eds.), Moral behavior and moral development: An introduction (pp. 385-400). New York: McGraw-Hill.
Rest, J. R. (1983). Morality. In P. H. Mussen (Series Ed.), J. Flavell & E. Markman (Vol. Eds.), Handbook of child psychology: Vol. 3. Cognitive development (4th ed., pp. 556-629). New York: Wiley.

Rest, J. R., & Narvaez, D. (Eds.). (1994). Moral development in the professions: Psychology and applied ethics. Hillsdale, NJ: Lawrence Erlbaum.

Rest, J. R., Narvaez, D., Bebeau, M., & Thoma, S. (1999). Postconventional moral thinking: A neo-Kohlbergian approach. Mahwah, NJ: Erlbaum.

Rogerson, S., & Bynum, T. (1995, June 9). Cyberspace: The ethical frontier. The Times Higher Education Supplement, No. 1179, p. iv.

Smith, M. W. (1992). Professional ethics in the information systems classroom: Getting started. Journal of Information Systems Education, 4(1), 6-11.

Sproull, L., & Kiesler, S. (1991). Connections: New ways of working in the networked organization. Cambridge, MA: MIT Press.

Sumner, M., & Hostetler, D. (2002). A comparative study of computer conferencing and face-to-face communications in systems design. Journal of Interactive Learning Research, 13(3), 277-291.

Teston, G. (2002). A developmental perspective of computer and information technology ethics: Piracy of software and digital music by young adolescents. Dissertation Abstracts, 62, 5815.

Weizenbaum, J. (1976). Computer power and human reason. San Francisco, CA: Freeman.
This work was previously published in Information Security and Ethics: Concepts, Methodologies, Tools, and Applications, edited by H. Nemati, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter XLVI
A Critical Systems View of Power-Ethics Interactions in Information Systems Evaluation

José-Rodrigo Córdoba
University of Hull, UK
Abstract

Current developments in information systems (IS) evaluation emphasise stakeholder participation in order to ensure adequate and beneficial IS investments. It is now common to consider evaluation as a subjective process of interpretation(s), in which people's appreciations are taken into account to guide evaluations. However, the context of power relations in which evaluation takes place, and the ethical implications of those relations, have not been given full attention. In this article, ideas from critical systems thinking and Michel Foucault's work on power and ethics are used to define a critical systems view of power to support IS evaluation. The article proposes a system of inquiry into power with two main areas: (1) the deployment of evaluation via power relations, and (2) dealing with ethics. The first element addresses how evaluation becomes possible. The second goes in depth into how evaluation can proceed when informed by ethical reflection. The article suggests that inquiry into these relationships should contribute to extending current views on power in IS evaluation practice, and to reflection on the ethics of those involved in the process.
Introduction

It has been argued extensively in the literature on information systems (IS) evaluation that failures in the implementation of information systems occur due to a lack of consideration of the different (e.g., softer) aspects that influence information systems adoption
(Hirschheim & Smithson, 1999; Irani, 2002; Irani & Fitzgerald, 2002; Irani, Love, Elliman, Jones, & Themistocleous, 2005; Serafeimidis & Smithson, 2003). Among these aspects, the issue of ethics also gains importance, yet few evaluation approaches consider it explicitly (Ballantine, Levy, Munro, & Powell, 2003). When evaluating the implementation of information systems, there is still a need to consider the context of human relations within which evaluation takes place (Walsham, 1999), and more specifically, the nature and impacts of power relations (Doolin, 2004; Gregory, 2000; Introna, 1997). This consideration has also been noted in the realm of systems thinking, but there is a dearth of approaches to deal with the complexities of power (Gregory & Jackson, 1992; Jackson, 2000). In IS evaluation, power has been mainly considered as a "contextual," "political," or "external" variable (Serafeimidis & Smithson, 1999), and its impacts in practice (for instance, regarding the treatment of ethical issues) are far from clear. Power is often understood as "politics" (Bariff & Galbraith, 1978), as "interests playing" or struggle between parties (Walsham, 1993), and is associated with dynamics of organisational change that are said to be difficult to manage (Lyytinen & Hirschheim, 1987). These connotations can limit a better understanding of the nature of power in IS evaluation and of how practitioners can act in relation to it. Awareness of the nature of power for intervention has been a subject of discussion in critical systems thinking, a set of ideas and methodologies that aim to clarify stakeholders' understandings prior to the selection and implementation of intervention methods in situations of social design (Flood & Jackson, 1991b; Jackson, 2000; Midgley, 2000). Using the commitments of critical systems thinking to critical awareness, pluralism, and improvement, as well as Michel Foucault's ideas on power and ethics, this article extends current understandings of power to inform IS evaluation. The article proposes a relational view of power that is dynamic, transient, and pervasive, and which influences, and is influenced by, individuals' ethics. With this view, the article defines a "system of inquiry" with two elements of analysis for IS evaluation: (1) exploring the deployment of evaluation via power relations; and (2) dealing with ethics. With these areas, different manifestations of power can be accounted for and related
in evaluation interventions. In addition, inquiry into these areas enables the people involved to reflect on the ethics of their own practices. The article is structured as follows. Critical systems thinking is introduced in relation to three commitments that can inform systems thinking and practice. Then, information systems (IS) evaluation as interpretation(s) is described and reviewed in relation to how the issue of power is currently being addressed. It is argued that a critical, pluralistic, and ethically oriented view of power is needed. To build up this view, the article presents the basic tenets of Michel Foucault's work on power and ethics, highlighting implications for IS evaluation. A system of inquiry into power for IS evaluation is then defined, and its relevance for evaluation practice discussed.
Critical Systems Thinking

This article stems from UK-based systems research and practice, in which there is a variety of systems methodologies containing principles, ideas, and methods to facilitate intervention for social improvement (Checkland, 1981; Flood & Jackson, 1991b; Flood & Romm, 1996; Jackson, 2000, 2003; Midgley, 2000; Stowell, 1995). The use of systems ideas has also pervaded the information systems (IS) field. It is now accepted that a systemic view of IS practice, one that looks at different elements of activity in organisational, social, and technical domains, can contribute to making sense of a variety of efforts in the IS field (Avison, Wood-Harper, Vidgen, & Wood, 1998; Checkland, 1990; Checkland & Holwell, 1998). This view also shares a common idea with other systems research movements elsewhere that conceive of an information system as part of an organisational system (Mora, Gelman, Cervantes, Mejia, & Weitzenfeld, 2003). In the UK, the popularity of systems thinking is also reflected in the use of soft systems methodology (SSM) as a learning tool
(Checkland, 1981) and its applications in several areas of information systems. These include information requirements definition (Checkland, 1990; Checkland & Scholes, 1990; Lewis, 1994; Wilson, 1984, 2002), systems development (Avison & Wood-Harper, 1990), intervention methodology (Clarke, 2001; Clarke & Lehaney, 2000; Midgley, 2000; Ormerod, 1996, 2005), and professional practice (Avison et al., 1998; Checkland & Holwell, 1998). Despite this popularity, however, it has been argued that the use of some methodologies like SSM can help reinforce the 'status quo' in a situation if they are not used in a critical and informed manner (Jackson, 1982; Mingers, 1984). Jackson (1992) argues that the practice of information systems can be further developed if systems-based interventions are not guided by only one type of rationality, methodology, or research paradigm, and if assumptions about the 'status quo' in a situation of social design are critically reviewed. Using systems ideas, practitioners should be able to foster creativity, complementarity, and social responsibility. Jackson and others have developed a collection of ideas, methodologies, and approaches under the name of "critical systems thinking" (Flood & Jackson, 1991b; Flood & Romm, 1996; Gregory, 1992; Jackson, 2000, 2003; Midgley, 2000; Mingers, 1992, 2005; Mingers & Gill, 1997; Ulrich, 1983). Critical systems thinking (CST) has been defined as a continuous dialogue between systems practitioners who are concerned with the issue of improvement (Midgley, 1996). As an evolving set of ideas, it contains a variety of notions that aim to foster continuous stakeholder reflection prior to the selection and implementation of planning and design methods. Within critical systems thinking, Midgley (1996) distinguishes three common and inter-related commitments to guide the efforts of researchers and practitioners. These commitments are: (1) critical awareness, continuously re-examining taken-for-granted assumptions in a situation
(including those inherent to systems methodologies); (2) pluralism (or complementarism), using a variety of ideas and approaches in a coherent manner to tackle the complexity of the situation; and (3) improvement, ensuring that people advance in developing their full potential by freeing them from potential constraints such as the operation of power. The commitments of critical systems thinking have been put into practice in different ways. For instance, there is a system of systems methodologies (Jackson & Keys, 1984) to help those involved in an intervention choose the most adequate systems methodologies to tackle a problem situation according to the methodologies' own strengths and weaknesses. In addition to methodology choice, creativity can be fostered when thinking about problem situations with the use of metaphors, and reflection is included to enable learning and understanding through the use of methodologies (Flood & Jackson, 1991a; Flood & Romm, 1996). More recently, systems practice has been enriched with generic principles to ensure that intervention is guided by continuous critique, the use of different methods, and the definition of local and temporary improvements (Jackson, 1999, 2003; Midgley, 2000). An emerging (UK- and non-UK-based) slant on critical systems thinking is that developed by Ulrich (1983, 2003) and Midgley (2000) on boundary critique. According to them, our processes of producing knowledge about a situation are bounded by a number of assumptions about purpose(s), clients, theories, methodologies, methods, and other aspects related to an intervention. These assumptions are intimately linked to systems boundaries. Here the idea of a system is that of an intellectual construction that guides analysis and decision-making (Churchman, 1970, 1979). According to Ulrich and Midgley, such boundaries and their underpinning assumptions need to be identified, analysed, and debated with the people involved in relation to their value content, so that individuals can make more informed decisions
regarding the implications of privileging some boundaries at the expense of others. In line with the above, the issue of power has been discussed at length in critical systems thinking, and it has been argued that power can inhibit individuals' own reflection about the conditions that influence their own improvement (Flood, 1990; Flood & Romm, 1996; Valero-Silva, 1996; Vega-Romero, 1999). Power has not been defined in a unique way. It has been associated with phenomena of coercion, which affect relationships between stakeholders (Gregory & Jackson, 1992; Jackson, 2000). Critique of the systems boundaries adopted for analysis and decision making in a social situation has been enhanced with the idea that such boundaries are the result of the operation of power and its manifold manifestations (Flood, 1990; Midgley, 1997; Vega-Romero, 1999). Despite acknowledging the importance of power for systems practice, critical systems thinking says little about how practitioners can identify and act in relation to power issues in intervention. Although this could be attributed to the diversity of meanings of power (and hence to an interpretation of a commitment to pluralism), there is a need to provide further insights into the nature of power and how reflection about it can be developed in practice, if a commitment to improvement in social situations is to be honoured. In this article, we use the above commitments of critical systems thinking to develop a view of power for intervention. With this view, we generate a "system" (i.e., a "whole") of inquiry into power that aims to follow these commitments. We apply our view and system to the domain of information systems (IS) evaluation in order to provide guidance to practitioners on how to identify and manage power in evaluation practice. In the next section we review the practice of IS evaluation in relation to power.
Information Systems Evaluation

In general terms, information systems (IS) evaluation is about assessing the continuous value that systems and communication technologies give to organisations and individuals (Irani & Love, 2001; Parker, Benson, & Trainor, 1988; Piccoli & Ives, 2005; Remenyi & Sherwood-Smith, 1999). IS evaluation is still considered a "wicked" phenomenon (Farbey, Land, & Targett, 1999), a "thorny" and complex process (Irani, 2002; Serafeimidis & Smithson, 2003) that is difficult to carry out, given the different aspects that affect its outcomes. To date, there are a number of approaches and techniques that are used to support successful evaluation of IS and technology investments prior to, during, or after their implementation, although a strong focus on financial techniques still remains (Irani, 2002; Parker et al., 1988; Serafeimidis & Smithson, 1999). In IS evaluation, it has also been argued that success depends on the usefulness of evaluation processes and outcomes in informing managerial decision-making. This usefulness has been related to the identification of the different issues (i.e., financial, ethical, organisational, and cultural) that affect IS implementation, so that these are promptly and adequately addressed (Avison & Horton, 1992; Ballantine et al., 2003; Doherty & King, 2001; Hirschheim & Smithson, 1999; Irani, 2002; Irani & Love, 2001; Symons & Walsham, 1988). With the inclusion of a variety of issues in IS evaluation, a growing concern is the usefulness that evaluation will have for those individuals involved in and affected by it (Irani, 2002; Irani & Love, 2001; Serafeimidis & Smithson, 1999). People would like to benefit from being involved in an evaluation or from using evaluation outcomes. Therefore, individual perceptions have become relevant, and researchers have suggested that IS evaluation can be better understood as a continuous and subjective process of interpretation(s) (Hirschheim & Smithson, 1999; Smithson &
Tsiavos, 2004; Walsham, 1999). In other words, evaluation is a process of "experiential and subjective judgement, which is grounded in opinion and world views, and therefore challenges the predictive value of traditional [IS] investment methods" (Irani et al., 2005, p. 65) (brackets added). For Walsham (1999), IS evaluation processes are about understanding and learning through stakeholders' perspectives and actions; stakeholder participation can contribute to minimise resistance to IS implementation (Walsham, 1993). The idea of IS evaluation being a subjective process is expanded by Serafeimidis and Smithson (2003), who argue that IS evaluation "is a socially embedded process in which formal procedures entwine with the informal assessments by which actors make sense of their situation" (p. 253, emphasis added). They identify the following roles of IS evaluation:

1. Control, meaning that evaluation is and becomes embedded in traditional procedures of organisational appraisal. IS evaluation processes adhere to existing hierarchies and accepted ways of assessing and monitoring investments. The aim of IS evaluation is to deliver value to the business. Financial techniques that appraise the contribution of information systems and technologies to business strategies are preferred to any other type of evaluation approach (Serafeimidis & Smithson, 1999). In control-evaluation, traditional channels of communication are used. Participation of stakeholders contributes to minimise the risks related to investments and to ensure commitment. However, those people who benefit from controlling other individuals can use evaluation to advance their own interests.

2. Sense making, or clarifying any implications that IS investments and projects could have for stakeholders. Informal communication complements formal communication. In sense-making evaluation, establishing a common language helps those leading the evaluation (evaluators) and those taking part (evaluands) to share their expectations and concerns about IS investments or projects. Sense-making evaluation, though, does not exclude the possibility that the revealing of meanings can be used for political purposes or to advance the evaluators' own interests (Legge, 1984; Weiss, 1970).

3. Social learning, or fostering the creation, storing, and exchange of knowledge. Stakeholders can take part in this exchange and contribute so that they reduce any uncertainty about the implementation and success of information systems. In social learning, evaluators facilitate the exchange of knowledge through interactions with stakeholders (for example, by promoting conversations about how systems will address people- or business-related expectations). The selection of what type of knowledge is relevant for evaluation can become an instrument of political influence (e.g., directed to achieve particular objectives), as can the ways in which this knowledge is disseminated or exchanged.

4. An exploratory exercise, helping organisations to clarify their strategic direction and promote change. Those involved in IS evaluation develop new ways of appraising and monitoring the value that systems have for organisations. This requires thinking creatively. In doing so, people involved in evaluation can contribute to shifting the existing balance of power: they can challenge those who advocate evaluation techniques based solely on financial benefits or traditional accounting and reporting techniques.
In each of the above orientations on IS evaluation, the perceptions and actions of stakeholders can be used to reinforce or shift balances of power, but power itself has not yet been defined. The wider (non-IS) literature on evaluation suggests that situations of disadvantage or conflict can be addressed via more participation or empowerment (Guba & Lincoln, 1989; Mertens, 1999; O'Neill, 1995; Weiss, 1970, 1998). Moreover, it is suggested that evaluators should "sign in" with disadvantaged groups and ensure that their concerns, claims, and issues are adequately considered and listened to in the evaluation process (Guba & Lincoln, 1989). However, as Gregory (2000) contends, participative evaluation approaches can easily overlook the operation of power and how it can contribute to generating and maintaining the very conditions that enable or inhibit participation. By trying to address imbalances in participation, evaluators may well be privileging their own power as experts or facilitators, or inadvertently reinforcing the power of those who are in managerial control of a situation (Wray-Bliss, 2003). For Gregory (2000), the problem of participation in evaluation can only be approached through a wider understanding of power and its operation in the practices that prohibit or promote such participation. There need to be considerations of the context of power in which evaluation is taking place, as well as of the role of those involved in it as part of evaluation practice. Table 1 contains a summary of four different notions of power that can be related to the IS evaluation roles discussed above. These notions
are drawn from existing classifications in the IS literature (Dhillon, 2004; Horton, 2000; Jasperson et al., 2002) and elsewhere (Lukes, 1974; Oliga, 1996). As seen in the table, it is common to associate power with tangible or distinguishable resources (i.e., information), skills, or authority that some people have and use to control others (Bariff & Galbraith, 1978; Horton, 2000). Power can also be associated with institutional structures, so that its use can reinforce, perpetuate, or resist existing organisational hierarchies and "games" (Bloomfield & Coombs, 1992; Dhillon, 2004; Markus, 2002). Or power can be seen as the influence that the actions of particular individuals have on the behaviour of others (Handy, 1976; Walsham & Waema, 1994). This includes, for instance, the influence that IS experts have over systems users (Horton, 2000), the political skills involved (Checkland & Scholes, 1990), or the style that managers use to define, implement, and evaluate IS plans (Walsham & Waema, 1994).

The views presented above show individual notions, as if power had different but non-intersecting manifestations. Nevertheless, power could be an intertwining of capacities, influences, or resources. These views say very little about how power comes to be considered as such in a situation, in other words, how power is deployed. In IS practice, it has been acknowledged that the explicit exercise of power can contribute to systems implementation (Markus, 2002; Serafeimidis & Smithson, 2003; Walsham & Waema, 1994). However, this does not fully consider the often indistinguishable, unintended, contradictory, and complex consequences of power in IS/IT implementations in a context of intervention (Jasperson et al., 2002; Robey & Boudreau, 1999). Therefore, it can be argued that IS evaluation faces a similar problem to critical systems thinking: that of not providing enough guidance to practitioners on how to identify and act in relation to power as a multifarious and complex issue that affects any action for improvement. It is necessary to consider a critical view of power in which power is studied in its deployment (how, why), rather than taken as a given. The view also needs to be pluralistic, in order to include different manifestations and forms of power, as well as the relationships between them. Moreover, an alternative view of power should help practitioners to explore possibilities for improvement in action in relation to power relations. To develop this view in line with the commitments of critical systems thinking, Michel Foucault's ideas on power and ethics are now presented.

Table 1. Power in orientations for IS evaluation

IS Evaluation as (Serafeimidis & Smithson, 2003) | Power as | Manifestations
Control | Resources (Bariff & Galbraith, 1978) | Authority, skills, information, use of technology.
Sense-making | Capacity (Markus, 2002) | Structures that facilitate (or inhibit) communication.
Social-learning, exploratory | Influence (Checkland & Scholes, 1990; Handy, 1976; Walsham & Waema, 1994) | Expertise and styles used to facilitate (or inhibit) knowledge exchange and change.
Relational | All of the above | In the relations between people (Foucault, 1984a), as a backdrop (Horton, 2000), and in the conditions that make evaluation possible.
Foucault on Power

Michel Foucault's work on the history of Western civilisation provides relevant insights into the problem of the human subject, be it individual or collective. For Foucault, the main question in modern society is how human beings are constituted as subjects (Foucault, 1982a, 1982b). His aim is to show connections between what counts as knowledge, the power relations used to make it valid, and the ethical forms that support its deployment. This, for Foucault, is a way of developing critique in contemporary society (Foucault, 1980b). For Foucault, the meaning of a "subject" is twofold: "someone subject to someone
else by control and dependence, and tied to his own identity by a conscience or self-knowledge" (Foucault, 1982a, p. 212). Both meanings in this definition suggest a form of power which subjugates and makes one subject to it (Foucault, 1982a). This suggests that power operates in different ways (targeting individuals and/or groups), influencing the ways in which people relate to themselves and to each other. According to Foucault, the end result of processes of production of knowledge is the potential operation of forms of "normalisation" in society, which constrain our behaviour and limit our freedom as individuals. The set of analyses of how people become normalised is called by Foucault "subjectivity" (Foucault, 1977). With his historical analyses, Foucault also shows that the ways individuals define themselves and relate to others have been contingently defined, contested, and deployed via power relations as "the ways we fashion our subjectivity" (Bernhauer & Mahon, 1994, p. 143). Subjectivity refers to the practices we perform on ourselves, and this includes what we consider ethical, as will be shown later. In Foucault's analyses, one can find different definitions of power that also show power's dynamic nature in society (Foucault, 1980b). Power can be identified in the relations between people, between actions influencing other actions. Power means power strategies through which individuals try to define, determine, or guide the conduct of others (Foucault, 1984a). Power also helps to deploy some forms of knowledge at a particular moment in time whilst obscuring others, so that certain practices prevail as the valid ones. Power can be seen as a "total structure of actions brought to bear upon possible actions: it incites, it induces, it seduces, it makes easier or difficult" (Foucault, 1982a, p. 220). For Foucault (1980b, 1984b), power is not an objective issue; it can only be identified in its operation through the relations that it establishes, maintains (including resisting), or creates between individuals. Power is an analytical device that
helps us to understand how we have been constituted as the subjects we currently are in our relations with ourselves and others. Such relations are mobile, transient, and dynamic; they target single individuals or entire populations; their operation occurs across institutions and at different levels (micro, macro) in society. New forms of power emerge that reinforce, support, undermine, or resist previous ones, and this happens at any level (e.g., individual, micro, and macro). In Foucault's work, power is present where there is freedom and is essential to regulating relations between individuals in society (Foucault, 1984a). Power can be used intentionally, but the consequences of doing so cannot be fully determined (Foucault, 1984a, 1984b). Foucault's work has been used in the realm of information systems to understand the effects of information systems planning and implementation on managerial practices (Ball & Wilson, 2000; Córdoba & Robson, 2003; Doolin, 2004; Horton, 2000; Introna, 1997). For instance, Introna (1997) suggests that Foucauldian notions of power help to identify "obligatory passage points" in the design and implementation of information systems, as sets of relations that determine what types of information, and the practices associated with its management, count as organisationally accepted. According to Bloomfield and Coombs (1992), such awareness can also help IS practitioners to map and better understand the conditions that enable the implementation of systems in an organisation. For Doolin (2004), the Foucauldian concept of power can help explain how people can resist or react to existing implementation practices, and how implementation is the by-product of many different organisational factors, some of which emerge in opposition to the implementation itself. In these accounts, the issue of ethics has not been explicitly addressed using Foucault's ideas (Burrell, 1988); this will be revisited later in the article. From the above discussion, we elaborate a fourth notion of power to support IS evaluation
(see last row of Table 1). In this notion, power operates in the relations between individuals. It includes different manifestations as well as the conditions and relations that make possible the existence and use of power as a resource, structure, or influence in evaluation as previously discussed. These different manifestations of power not only generate potential constraints that inhibit action (including the evaluation itself), but also opportunities that will make action feasible according to the “landscape” of possibilities that individuals are part of (Brocklesby & Cummings, 1996; Foucault, 1980b). As will be seen in the next section, these possibilities can be better defined in relation to the ethics of individuals.
Ethics

According to Brooke (2002), some authors see Foucault as failing to provide a concrete space within which debate can take place, given the ever-presence of power even in resistance to it. In particular, Foucault's acceptance of the idea that "Yesterday's resistance can become today's normalisation… which in turn can become the conditions for tomorrow's resistance and/or normalisation" (Darier, 1999, p. 18) is seen as lacking any normative content and thus generating ambiguity or confusion (Rowlinson & Carter, 2002; Taylor, 1984). A question arises about how one can then discern and decide on ethical issues in evaluation (Ballantine et al., 2003). This question gains importance in light of the critical systems commitment to improvement mentioned before. In response to the potential ambiguity of power analyses, more structured ways of dealing with questions of ethics in IS evaluation, like the one presented by Ballantine et al. (2003) (based on Habermas), can provide alternative and systematic answers. These alternatives focus on reviewing and developing spaces for equal debate about ethical issues, as well as providing general rules for examining or conducting debate. In contrast to these alternatives, for Foucault it is essential to explore the conditions that led debate and inequalities to emerge in the first place. These conditions could be unique to a context of intervention (Brooke, 2002), including those that enable participation in IS evaluation to take place. To address the above question, there is a still largely unexplored area in Foucault's work that needs to be made more explicit, and that is ethics. The central theme of Foucault's work is not power but the human subject: how we have been constituted as the individuals we are (Chan & Garrick, 2002; Foucault, 1982a). According to Foucault (1977), any action in relation to power cannot be considered exterior to power relations, so that any debate on issues (including ethical ones) in evaluation inevitably takes place in relation to power relations. Therefore, we need to look at power relations from the inside (Brooke, 2002). Foucault's analyses aim to show how subjects position themselves in situations according to what they think is ethical (Darier, 1998; Foucault, 1977). In his study of the history of sexuality, Foucault says:

Morality [ethics] also refers to the real behaviour of individuals to the rules and values that are recommended to them…the manner in which they respect or disregard a set of values… (p. 25)…those intentional and voluntary actions by which men not only set themselves rules of conduct, but also seek to transform themselves, to change themselves in their singular being. (Foucault, 1984b, p. 10)

This means that it is possible for subjects to make strategic use of their freedom (Foucault, 1984a, 1984c) and to use it to "no longer being, doing or thinking what we are, do, or think" (Foucault, 1984c, p. 46). Foucault is aware that we need continuously to recognise the limits of our actions, and what is no longer necessary (or is dangerous) for our constitution as autonomous subjects, and to act accordingly. He says:
The question, in any event, is that of knowing how the use of reason can take the public form that it requires, how the audacity to know can be exercised in broad daylight, while individuals are obeying as scrupulously as possible. (Foucault, 1984c, p. 37)

This means that, in the light of power relations in a particular context of intervention, it is possible to develop a reflexive and ethically oriented practice of individual freedom. Ethical practice becomes a way of providing direction to action for improvement, an opportunity to (re)develop forms of ethics within what is possible in relation to power relations. This aspect is discussed further when proposing a system of inquiry into power for IS evaluation in the next section.
Towards a System of Inquiry into Power for IS Evaluation

From the above discussion on power and ethics, two important implications can be derived to inform the definition of a critical systems view of power for IS evaluation. First, the inclusion of power requires considering it as a backdrop (Horton, 2000) of relations against which any IS evaluation orientation can be studied. Any manifestation of power (as a resource, capacity, structure, or influence) in IS evaluation should be considered the by-product and medium of power relations operating in a context of intervention, with these relations having varied implications (for instance, economic, political, social, and cultural). Identification and analysis of how power relations operate would help those involved in evaluation to reflect on how they become subjects of evaluation activities and what they can do about it. The above does not mean that power should be avoided, but rather that its possibilities and constraints should be used strategically according to what individuals consider relevant to do (Brocklesby & Cummings, 1996) in relation to what has been institutionally unfolded and accepted as IS evaluation (Smithson
& Tsiavos, 2004). For those involved in evaluation, analysis of power requires them to reflect on their participation in the power relations that make evaluation (im)possible and that facilitate or inhibit the unfolding of events. It becomes necessary to explore the origins and deployment of the power relations in which IS evaluation has arisen as a process to be carried out. Secondly, the analysis of power relations as a system requires ethical awareness from those involved about the ethical issues that they adopt, debate, or resist in IS evaluation; this also includes the ethical values adopted to analyse power itself. This requires direct intervention from the "inside" of evaluation. Foucault proposes to study power continuously in order to see the limitations of its "normalising" ethical systems, and how power can also offer possibilities for action as people see fit (Brocklesby & Cummings, 1996) or find ethically appropriate (Vega-Romero, 1999). In other words, Foucault proposes to study and reflect on the internal conditions that can make ethical action possible in IS evaluation, in order to define the "battleground" and the possibilities for further action. Considering the above, the following is the definition of a system of inquiry into power to support IS evaluation, as presented in Figure 1. The system is composed of two areas interacting with each other and informing the existing role(s) of the evaluation process as described by Serafeimidis and Smithson (2003). This analysis brings together an understanding of evaluation as a series of interpretations, as described by Smithson and Tsiavos (2004), and a way of reflecting on ethical issues in IS evaluation, as proposed by Ballantine et al. (2003), so that those involved in evaluation can reflect on power from their own participation. The areas of inquiry are:

1. Exploring the deployment of IS evaluation. Analysis of power in relation to forms of being, knowing, and acting consists of locating how power relations contribute to deploy (implement) or undermine IS evaluation activities. The purpose is to identify how evaluation became possible and accepted as such, and how it progresses. This type of analysis requires unveiling power relations at different levels (for instance, economic, social, political), understood as maps of actions influencing other actions (Foucault, 1984a), that constitute the definition, approval, and unfolding of the evaluation under study. A good starting point, or "point of entry," for analysing power is to see how it helps in the deployment of accepted evaluation roles (i.e., as control, sense-making, social learning, and exploration) (Serafeimidis & Smithson, 2003); in other words, to study how these roles came into being, and the wider relations that made them possible and valid. The analysis can then be complemented or developed with the following questions (Córdoba & Robson, 2003): How is it that evaluation is defined and approved? How does it engage those involved? What role(s) for evaluation are accepted? Through which mechanisms and justifications? How do activities in evaluation become successful or unsuccessful? How are evaluations institutionally completed or abandoned?

2. Dealing with ethics. As said before, for Foucault (1977), one cannot be exterior to the power relations one is analysing or intervening in. Therefore, analyses should also show how individual subjects position themselves in situations (Darier, 1998; Foucault, 1977). This consideration should lead those involved in IS evaluation to consider what is ethical for them to do in relation to power, and to go beyond the idea of interpretations. Analysis of power should also yield insights as to what behaviours and actions are ethically acceptable or unacceptable (including the analysis itself as a practice that is guided by ethical values), and what to do about them. Those involved in evaluation can decide to adopt a critical stance and go beyond what is established, to imagine new forms of being and acting (Foucault, 1984c). This could mean that the purpose and nature of evaluation are redefined according to what people consider ethical to do in a context of intervention. Using Foucault's (1984b) elements of analysis of ethics, those involved in evaluation can formulate the following questions to help them decide on how to treat ethical issues: In the dominant role(s) of evaluation, what part of our behaviour (thinking, acting) do we need to be ethically concerned with? Through which evaluation activities (including the analysis of power) ought we to show our ethical behaviour? What individual activities do we need to work on to become ethical? Most importantly, what type of ethical subjects do we want to be in relation to existing power? Answers to these questions can yield further insights as to how to define action to carry on with evaluation activities.

Figure 1 shows how these two areas of analysis are related. Applied to the deployment of IS evaluation through power relations, analysis of power (e.g., how is evaluation deployed?) can trigger the identification of ethical issues for those involved. Using this analysis, individuals can then identify and reflect upon their ethics and how to develop them by considering what has been deployed as ethically acceptable. This can place people in a better position to define their possibilities and constraints for action according to existing power relations. As new issues of concern emerge in an evaluation process, further analysis of power and forms of ethics is required, as the interactions between the elements of Figure 1 show.

Figure 1. A system of inquiry into IS evaluation [the figure depicts two interacting areas informing IS evaluation: (1) studying evaluation deployment as control, sense-making, social learning, or exploratory, through power relations and forms of power (resources, structures, influences); and (2) dealing with ethics, organised around the question "What do we want to be as subjects?"]
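As a practical aid, the guiding questions of the two areas can be collected into a simple checklist, for instance to structure a reflection workshop with those involved in an evaluation. The sketch below, in Python, is our own hypothetical illustration, not an instrument from Córdoba's framework: the InquiryArea structure and the checklist function are assumptions made for exposition, while the questions themselves come from the discussion above.

```python
# Minimal sketch (an illustration, not part of the framework itself): the two
# areas of inquiry and their guiding questions rendered as a plain-text
# checklist. Names and structure are assumptions for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class InquiryArea:
    name: str
    questions: List[str] = field(default_factory=list)


DEPLOYMENT = InquiryArea(
    name="1. Exploring the deployment of IS evaluation",
    questions=[
        "How is it that evaluation is defined and approved?",
        "How does it engage those involved?",
        "What role(s) for evaluation are accepted, through which mechanisms and justifications?",
        "How do activities in evaluation become successful or unsuccessful?",
        "How are evaluations institutionally completed or abandoned?",
    ],
)

ETHICS = InquiryArea(
    name="2. Dealing with ethics",
    questions=[
        "What part of our behaviour (thinking, acting) do we need to be ethically concerned with?",
        "Through which evaluation activities ought we to show our ethical behaviour?",
        "What individual activities do we need to work on to become ethical?",
        "What type of ethical subjects do we want to be in relation to existing power?",
    ],
)


def checklist(areas: List[InquiryArea]) -> str:
    """Render the areas and their guiding questions as a checklist."""
    lines: List[str] = []
    for area in areas:
        lines.append(area.name)
        lines.extend(f"  [ ] {question}" for question in area.questions)
    return "\n".join(lines)


if __name__ == "__main__":
    print(checklist([DEPLOYMENT, ETHICS]))
```

A facilitator might print this checklist at the start of an evaluation meeting and revisit it as new issues emerge, mirroring the iterative interaction between the two areas described above.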
An Example

As an example of how to use the above elements of inquiry, let us consider an IS evaluation process in which financial control and communication to stakeholders are seen as essential (Irani, 2002; Serafeimidis & Smithson, 2003) in order to guarantee compliance with organisational auditing procedures. In this context, evaluation can be seen as a control mechanism, more specifically as a way of exerting control over IS investments (or perhaps as a way of enabling financial officers to exert control over the rest of the organisation). The deployment of evaluation as an accepted process can have many manifestations. These could include, for instance, the continuous exercise of formal authority (e.g., via established practices of reporting to finance officers), the traditional use of financial skills and resources to get evaluation activities "done" (e.g., by an influential chief financial officer), or the emerging pertinence of financial matters in IS investment decisions (e.g., a sound business case with "numbers" that now needs to be elaborated before approval). These manifestations could be the by-product of previous practices (e.g., a history of financial success or failure in the organisation). With this understanding of evaluation as a deployment of power relations, those involved in evaluation could then proceed to reflect on how a particular issue (e.g., communication) and its
treatment can be dealt with. This issue can then be considered "ethical," and the expected behaviours or ways of thinking about it identified. People involved in evaluation could decide, for instance, not to pay any more attention to requests to analyse or communicate (financial) progress to other stakeholders, or to use existing communications to raise a different set of ethical issues (e.g., confidentiality, quality, etc.). Decisions can follow people's desire to become ethically different (e.g., more professional in their practice) or to be "seen" as ethical (and thus to use the power available to make themselves known). These decisions need to be examined in the light of their potential consequences for individuals and their organisations, and of any effect that can be foreseen (for instance, excessive professionalism could generate a desire for people to become "professionally accredited"). In this particular case, the emergence of new issues to be discussed in evaluation (for example, due to new business practices related to improvement in customer service), or of new ways of conducting evaluation in the context of intervention (e.g., those seen as more "professional"), can then trigger further analyses of how these elements are being deployed and how they need to be managed. Although this example is brief, it illustrates the type of analysis that can be conducted and the actions that could result to improve the practice of IS evaluation. The example can also prompt evaluation practitioners to reflect on the scope of their analyses by considering manifestations and effects of power at different levels (economic, social, "political," etc.).
Conclusion

In this article, a review of the issue of power in critical systems thinking and information systems evaluation has been undertaken in order to define an alternative view of it. It has been found that existing interpretations of power as operating "externally" to those involved in evaluation leave individuals with little guidance on how to identify and act in relation to it. Using the commitments of critical systems thinking and Foucault's ideas on power and ethics, the article develops a view of power and a system of inquiry into how it can be analysed in IS evaluation. The system enables practitioners and others involved in evaluation to be critically aware of the influence of power in deploying evaluation. It also allows for the inclusion and study of different manifestations of power and the relations between them. Using this system, practitioners can inquire into how evaluation becomes possible through power relations. Inquiry should lead practitioners to reflect on ethical issues associated with IS evaluation and to develop their own actions to improve their practice according to what they consider ethical to do. In comparison with other perspectives on power, Foucault's ideas can prompt those involved in evaluation to study the power conditions of the evaluation itself before establishing any possibility of dialogue or debate. This can help them to frame their actions within the possibilities and constraints given by power relations in the context where they are immersed. In evaluation practice, there is still a need to compare the study of power from this perspective with others. We see an opportunity to combine the use of the proposed system of inquiry with the use of systems methodologies to promote participative IS evaluation. We hope the view of power developed in this article contributes to opening up further opportunities for dialogue and research between critical systems thinking and information systems.

References
Avison, D., & Horton, J. (1992). Evaluation of information systems (Working Paper). Southampton: University of Southampton, Department of Accounting and Management Science.
Avison, D. E., & Wood-Harper, A. T. (1990). Multiview: An exploration in information systems development. Henley on Thames, UK: Alfred Waller (McGraw-Hill Publishing Company).

Avison, D., Wood-Harper, A. T., Vidgen, R. T., & Wood, J. R. G. (1998). A further exploration into information systems development: The evolution of Multiview2. Information Technology and People, 11(2), 124-139.

Ball, K., & Wilson, D. (2000). Power, control and computer-based performance monitoring: A subjectivist approach to repertoires and resistance. Organization Studies, 21(3), 539-565.

Ballantine, J., Levy, M., Munro, I., & Powell, P. (2003). An ethical perspective on information systems evaluation. International Journal of Agile Management Systems, 2(3), 233-241.

Bariff, M., & Galbraith, J. R. (1978). Intraorganizational power considerations for designing information systems. Accounting, Organizations and Society, 3(1), 15-27.

Bernhauer, J., & Mahon, M. (1994). The ethics of Michel Foucault. In G. Gutting (Ed.), The Cambridge companion to Foucault (pp. 141-158). Cambridge, UK: Cambridge University Press.

Bloomfield, B., & Coombs, R. (1992). Information technology, control and power: The centralization and decentralization debate revisited. Journal of Management Studies, 29(4), 459-484.

Brocklesby, J., & Cummings, S. (1996). Foucault plays Habermas: An alternative philosophical underpinning for critical systems thinking. Journal of the Operational Research Society, 47(6), 741-754.

Brooke, C. (2002). What does it mean to be “critical” in IS research? Journal of Information Technology, 17(2), 49-57.

Burrell, G. (1988). Modernism, post modernism and organizational analysis: The contribution
of Michel Foucault. Organization Studies, 9(2), 221-235.

Chan, A., & Garrick, J. (2002). Organisation theory in turbulent times: The traces of Foucault’s ethics. Organization, 9(4), 683-701.

Checkland, P. (1981). Systems thinking, systems practice. London: John Wiley and Sons.

Checkland, P. (1990). Information systems and systems thinking: Time to unite? In P. Checkland & J. Scholes (Eds.), Soft systems methodology in action (pp. 303-315). Chichester, UK: John Wiley & Sons Ltd.

Checkland, P., & Holwell, S. (1998). Information, systems and information systems: Making sense of the field. Chichester, UK: John Wiley and Sons.

Checkland, P., & Scholes, J. (1990). Soft systems methodology in action. Chichester, UK: John Wiley and Sons.

Churchman, C. W. (1970). Operations research as a profession. Management Science, 17, B37-B53.

Churchman, C. W. (1979). The systems approach and its enemies. New York: Basic Books.

Clarke, S. (2001). Information systems strategic management: An integrated approach. London: Routledge.

Clarke, S., & Lehaney, B. (2000). Mixing methodologies for information systems development and strategy: A higher education case study. Journal of the Operational Research Society, 51, 542-566.

Córdoba, J. R., & Robson, W. D. (2003). Making the evaluation of information systems insightful: Understanding the role of power-ethics strategies. Electronic Journal of Information Systems Evaluation, 6(2), 55-64.

Darier, E. (1998). Time to be lazy: Work, the environment and modern subjectivities. Time & Society, 7(2), 193-208.
Darier, E. (1999). Foucault and the environment: An introduction. In E. Darier (Ed.), Discourses of the environment (pp. 1-33). Oxford: Blackwell. Dhillon, G. (2004). Dimensions of power and IS implementation. Information & Management, 41, 635-644. Doherty, N., & King, M. (2001). An investigation of the factors affecting the successful treatment of organisational issues in systems development. European Journal of Information Systems, 10, 147-160. Doolin, B. (2004). Power and resistance in the implementation of a medical management information system. Information Systems Journal, 14(4), 343-362. Farbey, B., Land, F., & Targett, D. (1999). Moving IS evaluation forward: Learning themes and research issues. Journal of Strategic Information Systems, 8(2), 189-207. Flood, R. L. (1990). Liberating systems theory. New York: Plenum Press. Flood, R. L., & Jackson, M. C. (1991a). Total systems intervention: A practical face to critical systems thinking. Systems Practice, 4, 197-213. Flood, R. L., & Jackson, M. C. (Eds.). (1991b). Critical systems thinking: Directed readings. Chichester: John Wiley and Sons. Flood, R. L., & Romm, N. (1996). Diversity management: Triple loop learning. Chichester: John Wiley and Sons. Foucault, M. (1977). The history of sexuality volume one: The will to knowledge (Vol. 1). London: Penguin. Foucault, M. (1980a). Truth and power. In P. Rabinow (Ed.), The Foucault reader: An introduction to Foucault’s thought (pp. 51-75). London: Penguin.
Foucault, M. (1980b). Two lectures. In C. Gordon (Ed.), Power/knowledge: Selected interviews and other writings by Michel Foucault (pp. 78-108). New York: Harvester Wheatsheaf.

Foucault, M. (1982a). Afterword: The subject and power. In H. Dreyfus & P. Rabinow (Eds.), Michel Foucault: Beyond structuralism and hermeneutics (pp. 208-226). Brighton: The Harvester Press.

Foucault, M. (1982b). On the genealogy of ethics: An overview of work in progress. In P. Rabinow (Ed.), The Foucault reader: An introduction to Foucault’s thought (pp. 340-372). London: Penguin.

Foucault, M. (1984a). The ethics of the concern of the self as a practice of freedom (R. Hurley et al., Trans.). In P. Rabinow (Ed.), Michel Foucault: Ethics, subjectivity and truth: Essential works of Foucault 1954-1984 (pp. 281-301). London: Penguin.

Foucault, M. (1984b). The history of sexuality volume two: The use of pleasure. London: Penguin.

Foucault, M. (1984c). What is enlightenment? (C. Porter, Trans.). In P. Rabinow (Ed.), The Foucault reader: An introduction to Foucault’s thought (pp. 32-50). London: Penguin.

Gregory, A. (2000). Problematizing participation: A critical review of approaches to participation in evaluation theory. Evaluation, 6(2), 179-199.

Gregory, A., & Jackson, M. C. (1992). Evaluation methodologies: A system for use. Journal of the Operational Research Society, 43(1), 19-28.

Gregory, W. J. (1992). Critical systems thinking and pluralism: A new constellation. Unpublished doctoral dissertation, City University, London.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications.
Handy, C. (1976). Understanding organizations. Aylesbury: Penguin.

Hirschheim, R., & Smithson, S. (1999). Evaluation of information systems: A critical assessment. In L. Willcocks & S. Lester (Eds.), Beyond the IT productivity paradox (pp. 381-409). Chichester, UK: John Wiley and Sons.

Horton, K. S. (2000). The exercise of power and information systems strategy: The need for a new perspective. Proceedings of the 8th European Conference on Information Systems (ECIS), Vienna.

Introna, L. D. (1997). Management, information and power: A narrative of the involved manager. Basingstoke: Macmillan.

Irani, Z. (2002). Information systems evaluation: Navigating through the problem domain. Information & Management, 40, 11-24.

Irani, Z., & Fitzgerald, G. (2002). Editorial. Information Systems Journal, 12(4), 263-269.

Irani, Z., & Love, P. E. (2001). Information systems evaluation: Past, present and future. European Journal of Information Systems, 10(4), 189-203.

Irani, Z., Love, P. E., Elliman, T., Jones, S., & Themistocleous, M. (2005). Evaluating e-government: Learning from the experiences of two UK local authorities. Information Systems Journal, 15(1), 61-82.

Jackson, M. C. (1982). The nature of soft systems thinking: The work of Churchman, Ackoff and Checkland. Journal of Applied Systems Analysis, 9, 17-29.

Jackson, M. C. (1992). An integrated programme for critical thinking in information systems research. Information Systems Journal, 2, 83-95.

Jackson, M. C. (1999). Towards coherent pluralism in management science. Journal of the Operational Research Society, 50(1), 12-22.
Jackson, M. C. (2000). Systems approaches to management. London: Kluwer Academic/Plenum Publishers. Jackson, M. C. (2003). Creative holism: Systems thinking for managers. Chichester, UK: John Wiley and Sons. Jackson, M. C., & Keys, P. (1984). Towards a system of system methodologies. Journal of the Operational Research Society, 35, 473-486. Jasperson, J. S., Carte, T., Saunders, C. S., Butler, B. S., Croes, H. J. P., & Zheng, W. (2002). Power and information technology research: A metatriangulation review. MIS Quarterly, 26(4), 397-459. Legge, K. (1984). Evaluating planned organizational change. London: Academic Press. Lewis, P. (1994). Information systems development: Systems thinking in the field of information systems. London: Pitman Publishing. Lukes, S. (1974). Power: A radical view. London: Macmillan. Lyytinen, K., & Hirschheim, R. (1987). Information systems failures - A survey and classification of the empirical literature. Oxford Surveys in Information Technology, 4, 257-309. Markus, M. L. (2002). Power, politics and MIS implementation. In M. Myers & D. Avison (Eds.), Qualitative research in information systems. London: Sage. Mertens, D. (1999). Inclusive evaluation: Implications of transformative theory for evaluation. American Journal of Evaluation, 20(1), 1-14. Midgley, G. (1996). What is this thing called CST? In R. L. Flood & N. Romm (Eds.), Critical Systems Thinking: Current Research and Practice (pp. 11-24). New York: Plenum Press. Midgley, G. (1997). Mixing methods: Developing systemic intervention. In J. Mingers & A.
Gill (Eds.), Multimethodology: The theory and practice of combining management science methodologies (pp. 249-290). Chichester, UK: John Wiley and Sons.

Midgley, G. (2000). Systemic intervention: Philosophy, methodology and practice. New York: Kluwer Academic/Plenum.

Mingers, J. (1984). Subjectivism and soft systems methodology: A critique. Journal of Applied Systems Analysis, 11, 85-113.

Mingers, J. (1992). Technical, practical and critical OR: Past, present and future? In M. Alvesson & H. Willmott (Eds.), Critical management studies (pp. 90-112). London: Sage.

Mingers, J. (2005). ‘More dangerous than an unanswered question is an unquestioned answer’: A contribution to the Ulrich debate. Journal of the Operational Research Society, 56(4), 468-474.

Mingers, J., & Gill, A. (1997). Multimethodology: The theory and practice of combining management science methodologies. Chichester, UK: John Wiley & Sons Ltd.

Mora, M., Gelman, O., Cervantes, F., Mejia, M., & Weitzenfeld, A. (2003). A systemic approach for the formalization of the information systems concept: Why information systems are systems? In J. Cano (Ed.), Critical reflections in information systems: A systemic approach (pp. 1-29). Hershey, PA: Idea Group Publishing.

Oliga, J. (1996). Power, ideology and control. New York: Plenum.

O’Neill, T. (1995). Implementation frailties of Guba and Lincoln’s fourth generation evaluation theory. Studies in Educational Evaluation, 21(1), 5-21.

Ormerod, R. (1996). Information systems strategy development at Sainsbury’s supermarket using “soft” OR. Interfaces, 16(1), 102-130.

Ormerod, R. (2005). Putting soft OR methods to work: The case of IS strategy development for the UK Parliament. Journal of the Operational Research Society, 56(12), 1379-1398.

Parker, M. M., Benson, R., & Trainor, H. E. (1988). Information economics: Linking business performance to information technology. Englewood Cliffs, NJ: Prentice Hall.

Piccoli, G., & Ives, B. (2005). IT-dependent strategic initiatives and sustained competitive advantage: A review and synthesis of the literature. MIS Quarterly, 29(4), 747-776.

Remenyi, D., & Sherwood-Smith, M. (1999). Maximise information systems value by continuous participative evaluation. Logistics Information Management, 12(1/2), 145-156.

Robey, D., & Boudreau, M. (1999). Accounting for the contradictory organizational consequences of information technology: Theoretical directions and methodological implications. Information Systems Research, 10(2), 167-185.

Rowlinson, M., & Carter, C. (2002). Foucault and history in organization studies. Organization, 9(4), 527-547.

Serafeimidis, V., & Smithson, S. (1999). Rethinking the approaches to information systems evaluation. Logistics Information Management, 12(1/2), 94-107.

Serafeimidis, V., & Smithson, S. (2003). Information systems evaluation as an organizational institution: Experience from a case study. Information Systems Journal, 13, 251-274.

Smithson, S., & Tsiavos, P. (2004). Re-constructing information systems evaluation. In C. Avgerou, C. Ciborra, & F. Land (Eds.), The social study of information and communication technology: Innovation, actors and contexts (pp. 207-230). Oxford: Oxford University Press.
Stowell, F. (1995). Information systems provision: The contribution of soft systems methodology. London: McGraw-Hill.

Symons, V., & Walsham, G. (1988). The evaluation of information systems: A critique. Journal of Applied Systems Analysis, 15, 119-132.

Taylor, C. (1984). Foucault on freedom and truth. Political Theory, 12(2), 152-183.

Ulrich, W. (1983). Critical heuristics of social planning: A new approach to practical philosophy. Berne: Haupt.

Ulrich, W. (2003). Beyond methodology choice: Critical systems thinking as critically systemic discourse. Journal of the Operational Research Society, 54(4), 325-342.

Valero-Silva, N. (1996). A Foucauldian reflection on critical systems thinking. In R. L. Flood & N. Romm (Eds.), Critical systems thinking: Current research and practice (pp. 63-79). London: Plenum.

Vega-Romero, R. (1999). Care and social justice evaluation: A critical and pluralist approach. Hull: University of Hull.
Walsham, G. (1993). Interpreting information systems in organisations. Chichester, UK: John Wiley and Sons.

Walsham, G. (1999). Interpretive evaluation design for information systems. In L. Willcocks & S. Lester (Eds.), Beyond the IT productivity paradox (pp. 363-380). Chichester, UK: John Wiley and Sons.

Walsham, G., & Waema, T. (1994). Information systems strategy and implementation: A case study of a building society. ACM Transactions on Information Systems, 12(2), 150-173.

Weiss, C. (1970). The politicization of evaluation research. Journal of Social Issues, 26(4), 57-68.

Weiss, C. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21-34.

Wilson, B. (1984). Systems: Concepts, methodologies, and applications. Chichester, UK: John Wiley and Sons.

Wilson, B. (2002). Soft systems methodology: Conceptual model and its contribution. Chichester, UK: John Wiley and Sons.

Wray-Bliss, E. (2003). Research subjects/research subjections: Exploring the ethics and politics of critical research. Organization, 10(2), 307-325.
This work was previously published in the Information Resources Management Journal, Vol. 20, Issue 2, edited by M. Khosrow-Pour, pp. 74-89, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter XLVII
Ethical Issues in Web-Based Learning

Joan D. McMahon
Towson University, USA
Introduction

If you were to survey course syllabi on your campus, you would probably find the standard syllabi to include:

• Course title and number
• Instructor’s name and contact information
• Course objectives
• A list of required and recommended readings/materials
• A detailed outline of the topics for consideration
• Detailed descriptions of assignments and due dates
• Percentage of final grade
• A schedule of topics by date
You would also find a campus curriculum or departmental committee that initially approves such courses. Once the course is approved, it is not usually subject to review or scrutiny by the campus, unless the department requests a course change.
Meanwhile, faculty who teach the course change the syllabus at will based on new material in their discipline, changes in textbooks, and so forth. This is encouraged so that the students get the most up-to-date information in the discipline. If faculty switch courses, retire, or resign, then their syllabus is passed on to the successor to revise, again at will. There seems to be little or no systematic accounting of the legitimacy of the course as originally approved against the course now taught. Department chairs are supposed to do this. Many take their responsibility for quality control seriously; many others delegate it to a capable administrative assistant who may not know enough about the subtleties of the curriculum to recognize that an inconsistency exists.
What Is the Overall Ethical Problem?

The problem is that course information is now being posted to the Web, thus creating problems with
values, rights, and professional responsibilities, specifically in curricular quality control, advising, intellectual property rights, and succession planning (University of Washington, 1996). What is the harm in not having quality control in developing and posting courses on the Web? This is best addressed through a series of questions about rights and values, and is illustrated in Figure 1.

1. Has the delivery mode of the Web changed the approved course’s integrity? How does faculty pedagogical style affect course integrity? Has the course changed from the campus’s officially approved version? What is the professional responsibility of the faculty and the department in keeping courses current and still protecting curriculum integrity? How does one handle and value course updates without changing the course? This is a departmental problem.

2. Do students have the right to get what they “pay” for? From an advising perspective, does the course reflect what is “advertised” in the campus catalog, so that those seeking credit for the course elsewhere are assured that the course description in the official catalog is the same course taught or desired? This is an institutional problem.

3. How are the intellectual property rights of the faculty valued and protected by posting course material on the Web? This is an institutional problem.

4. How will successive faculty comply with the course integrity, whether they put their material on the Web or not? This is a departmental problem.

Figure 1. Curricular quality control
There need to be policies or procedures in place that allow faculty to update their syllabi routinely within the framework of an approved course process, in order to address the ethics of advising, course and curriculum integrity, intellectual property, and succession planning. With the advent of courses being developed online, and faculty now able to easily announce to the world that “I have my syllabus on the Web,” this plethora of ethical issues arises.
The Course Integrity Problem

The course integrity problem stems from the overall issue of quality control. Faculty are encouraged to keep up to date in their discipline and pass this on to their students. Currency of intellectual thought is valued. Yet the bureaucratic process of reapplying for course “approval” whenever a course/syllabus is revised, or each time it is taught
by a new faculty member, would be untimely. Curricular policies typically do not allow for easy and quick updates within the framework originally approved. The Web and online learning only exacerbate the problem. Faculty can and do change their courses quickly and without bureaucratic approval. Ethically, how are they being professionally responsible? This feeds another problem of altering courses that might not match catalog descriptions. The process of course approval and revision needs to be reexamined at the departmental level; for now, many faculty are caught in the middle.
The Advising Problem

When a campus prints a catalog of approved course descriptions, it is implied that the course syllabi have been approved by the required committees. If students want to take a course on another campus or want their credits reviewed for transfer, it is the official course description in the catalog that is assessed. If a syllabus has been changed or updated so often by different faculty that it no longer reflects the approved course description, then there is an ethical problem of “misrepresentation.” Similarly, students who wish to enroll in a course, using the official course catalog description, fully expect that the course will reflect what is described. Students drop the course when they realize the course description and the actual course syllabus do not match: this was not the course they thought it was going to be. This issue denies students the fundamental right of “buying” what is advertised. Is there an issue of public trust being violated? These are issues that need to be raised institutionally.
The Intellectual Property Problem and Academic Freedom

With each new iteration of a course, the syllabus has taken on the personality and pedagogical style of the professor. What worked for one instructor
does not work for his/her successor. Their pedagogical styles and beliefs are reflected in how and what they teach, and how they assess the learning in their assignments. So they revise the course to reflect their own styles and personalities, and are permitted by academic freedom to do so (Hecht, Higgerson, Gmelch, & Tucker, 1999, p. 138). When faculty members change the course by changing the emphasis of the content, the type of assignments, and the new pedagogical material that helps students learn, they are developing their own intellectual property in the course. Who owns the intellectual property of the course is the subject of much current ethical debate. On some campuses, if you are an employee of the institution, the institution owns the intellectual property because the faculty member was “hired” to develop and teach it. This is “work for hire” (Kelley & Bonner, 2001). Other campuses hold that the faculty members own the intellectual property of the course. The course itself—that is, the course description—and the originally approved syllabi are the intellectual property of the institution. The faculty member’s interpretation of how to teach the course, the emphasis on the content, how the learning is assessed, and so forth is the intellectual property of the faculty member. The ethical problem is: who owns what? And for how long? Is it the course or the syllabus that is sacred? What is posted to the Web for public access? And what is “secured” information to be obtained through enrollment fees, if any, that protects the intellectual property of the faculty? These are ethical issues for institutions to address.
The Succession Planning Problem

If a faculty member switches courses, goes on sabbatical, resigns, or retires, what happens to the course integrity? Does the course go with the faculty member who developed it? If given a copy of a previous syllabus, will successor faculty recognize it as the one originally sanctioned by the department, be able to recognize multiple
modifications to it, or recognize the customized interpretation and intellectual property of the previous instructor? Do they know they can customize the course to fit their own scholarly interpretation without compromising the course and curriculum integrity? Again, is it the course or the syllabus that is sacred? The face-to-face version may look different from the online version. How will these successor faculty be coached on keeping the original course integrity intact while offering their own customized interpretation online? Who in the department will coach and monitor them?
What Are Some Solutions?

An ethical problem is one that can be solved with a win-win solution if people think through it long enough to figure out what to do. Can campuses support or justify their actions within the context of their values, rights, and professional responsibilities? (See Table 1.) Using the Web, campuses can now do several things: (1) protect the intellectual integrity of the curriculum if a faculty member leaves or no longer teaches a course, which in turn affects advising; (2) protect the intellectual property of the faculty so that only those who are “buying” the credits actually get to use it; and (3) hold the departments accountable for the accuracy of the course offerings. To do these, two simple solutions are recommended:

1. Publish a “generic one-page common course syllabus” on the campus Web site reflecting the approved generic syllabus.
2. Develop a customized “schedule” of the course, which changes from semester to semester depending on who teaches it.
Publish a Generic Common Course Syllabus That Does Not Change Over Time

This is a one-page document that has been approved by the department and campus curriculum committee or other required approving bodies. It can be posted on a campus Web site and provides enough information to viewers about whether the course meets their needs. This should be placed on top of any instructor’s own interpretation of the course. No matter who teaches the course, this is a “guarantee.” It only changes with official departmental/campus approval. The campus “owns” this course. It is a building block of a curriculum and is not subject to changes in staffing. It is a course preview and could include the following elements (a schematic sketch follows the list):

1. Heading that specifies department, course number, course title, and institution
2. Approved course rationale
3. Approved course catalog description
4. Prerequisites
5. Approved course objectives or goals
6. Approved content outline of about 10 topics
7. Possible learning assessments/assignments
8. Required departmental standards/readings/materials
9. Required campus policies on cheating/plagiarism, attendance, and so forth (or these can be addendums)
10. An approved statement on disabilities assistance
11. Approved campus plagiarism policy
12. Campus attendance policy
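To make the separation concrete, the following minimal sketch models the approved syllabus as an immutable record. It is purely illustrative and not part of the chapter's proposal; the class and field names are our own invention. Freezing the record mirrors the rule that the generic syllabus changes only through official approval.

```python
from dataclasses import dataclass
from typing import List

# Illustrative model of the campus-owned "generic common course syllabus".
# frozen=True mirrors the rule that this record changes only through
# official departmental/campus approval, never at instructor whim.
@dataclass(frozen=True)
class GenericSyllabus:
    department: str
    course_number: str
    course_title: str
    institution: str
    rationale: str
    catalog_description: str
    prerequisites: List[str]
    objectives: List[str]
    content_outline: List[str]       # roughly 10 approved topics
    possible_assessments: List[str]
    required_materials: List[str]
    campus_policies: List[str]       # plagiarism, attendance, disabilities, etc.

syllabus = GenericSyllabus(
    department="Instructional Technology",
    course_number="EDTC 601",
    course_title="Ethics of Web-Based Learning",
    institution="Example University",
    rationale="Approved rationale text.",
    catalog_description="Approved catalog description.",
    prerequisites=["EDTC 501"],
    objectives=["Analyze ethical issues in online course delivery."],
    content_outline=[f"Topic {i}" for i in range(1, 11)],
    possible_assessments=["Essays", "Portfolio"],
    required_materials=["Departmental reading list"],
    campus_policies=["Plagiarism policy", "Attendance policy"],
)

# Any attempt to mutate the approved record fails, e.g.:
# syllabus.course_title = "New Title"   # raises dataclasses.FrozenInstanceError
```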
Table 1. Ethical issues reflecting values, rights and professional responsibilities

Ethical Issue: Course Integrity
Value(s): Information delivered should be current.
Rights: What are the rights of faculty and students to provide and learn up-to-date information?
Professional Responsibilities: What are the policies and procedures to allow for updates within approved course structures?

Ethical Issue: Advising
Value(s): The institution values curricular quality.
Rights: What rights do students have for accurate representation of what is printed versus what is delivered?
Professional Responsibilities: What is the responsibility of the institution and department in providing accurate information?

Ethical Issue: Intellectual Property
Value(s): Faculty have unique teaching styles and pedagogical instructional design preferences.
Rights: What rights do faculty have to carry their intellectual interpretation of the course to another campus? What rights does the campus have to protect curricular integrity?
Professional Responsibilities: What are the policies on “work for hire,” and how are the intellectual property rights of the faculty and the institution handled? How long can an institution use the intellectual property of a faculty member who leaves the institution?

Ethical Issue: Succession Planning
Value(s): Faculty grow and move on. Successor faculty should be of equal or greater quality.
Rights: What rights do faculty have to develop their own intellectual property (academic freedom)? What rights do successors have to customize their interpretation of the approved course and not necessarily copy an old syllabus from someone else?
Professional Responsibilities: To what extent should adjunct faculty and other successor faculty comply with approved course syllabi? Who monitors the quality of what these successors teach?
Develop a Customized “Schedule” of the Course Which Changes from Semester to Semester Depending on Who Teaches It

A course schedule reflects whatever is changed in a course from semester to semester. If different faculty teach the course each semester, the course itself will not change, but rather the scholarly interpretation of the course will be dependent on who teaches it (Palloff & Pratt, 1999, p. 88). It reflects their teaching style and educational philosophy. It is their intellectual property, the way they interpret and view the course based on the students they serve at the time. They can carry this course with them to another campus and adapt it to the needs of a different student body. It protects the fundamental rights of the faculty while complying with the course requirements of the institution. Several faculty teaching the same course ensures that the course objectives, content, and assessments are covered, while giving academic freedom to the faculty to implement their creativity and scholarship in unique ways in their customized version of the “schedule.” It could include the following (see the sketch after this list):

1. Instructor name, credentials, and contact information
2. Office hours
3. Room assignments
4. A statement of their personal assumptions about learning (their educational philosophy about teaching and learning)
5. Their current list of required and recommended readings
6. Their interpretation of the topics (more detail than the general outline)
7. A timetable for implementation of topics and assignments
8. Their interpretation of the learning assessments/assignments and weighting of such
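Continuing the illustrative sketch from the previous section (again, all names are hypothetical rather than the chapter's), the instructor-owned schedule can be modeled as an ordinary mutable record that references the approved course only by its identifier: semester-to-semester revisions touch this record, never the frozen syllabus.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative model of the instructor's customized "schedule": a mutable
# record the instructor owns, revises each semester, and can carry to
# another campus. The approved course is referenced by identifier only.
@dataclass
class CourseSchedule:
    course_number: str               # links back to the approved generic syllabus
    instructor: str
    contact: str
    office_hours: str
    room: str
    teaching_philosophy: str
    readings: List[str] = field(default_factory=list)
    topic_interpretation: List[str] = field(default_factory=list)
    timetable: Dict[str, str] = field(default_factory=dict)   # week -> topic/assignment
    assessment_weights: Dict[str, float] = field(default_factory=dict)

schedule = CourseSchedule(
    course_number="EDTC 601",
    instructor="Dr. A. Instructor",
    contact="instructor@example.edu",
    office_hours="Tue 10:00-12:00",
    room="HU-204",
    teaching_philosophy="Learning through dialogue and reflection.",
)

# Semester-to-semester revisions are confined to the schedule:
schedule.readings.append("An article published this year")
schedule.timetable["Week 1"] = "Topic 1 and course orientation"
schedule.assessment_weights = {"Essays": 0.6, "Portfolio": 0.4}
```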
These are two relatively simple solutions to a complex set of ethical problems. Other innovative ethical solutions are possible, given the right processes to generate win-win solutions (Joseph & Conrad, 1995). Regardless, the solutions will call for departmental faculty collaboration, faculty training, workshops, seminars, policy analysis, and perhaps external peer reviews.
Long Range Implications

Ethically, the long-range issue affects curricular quality control, course integrity, and potentially accreditation. But the implementation of such a simple strategy as recommended above, the adoption of a one-page generic common syllabus and a changeable schedule, affects departmental decision making to the core. Having a departmental discussion about each course, what the central elements are and how they are a part of the larger curriculum whole, is an insightful exercise in interpersonal group dynamics, organizational development, and change management. These cannot be handled in a single faculty meeting. With the number of online courses being developed within departments, many faculty will simply say that “the delivery method” has changed and there is no need to review the online version of the course. But an in-depth analysis of the course in that delivery method may show the problems mentioned earlier. Palloff and Pratt (1999) pose guiding questions to consider in developing courses online and how they might differ from their face-to-face counterparts (p. 109). Regardless, online course assessment should be considered in light of its impact on the department’s curriculum. Another long-range implication is that of succession planning. If adjunct faculty step in for regular faculty, what assurances are there that the course will remain “intact”? What does “intact” mean now? If a faculty member resigns
or retires, what happens to the course and curriculum integrity? Having a generic syllabus available for successor faculty provides them with the guidance and structure needed to develop their own materials in the course. The generic common course syllabus theoretically forms the basis for accreditation review.
Conclusion

With the advent of online courses, departments will have to reexamine what it means to offer a course in this new delivery method. They will have to think through the issues of values, rights, and professional responsibilities, specifically in curricular quality control, advising, intellectual property rights, and succession planning. The delivery of courses online and visible worldwide brings fundamental ethical problems to the forefront. They need to be grappled with by departments, curriculum committees, and other approval bodies. What an opportunity we have to dialog about these issues!
References

Hecht, I., Higgerson, M., Gmelch, W., & Tucker, A. (1999). The department chair as academic leader. Phoenix, AZ: The American Council on Education and The Oryx Press.

Joseph, V., & Conrad, A. P. (1995). Essential steps for ethical problem-solving. Retrieved from http://www.socialworkers.org/pubs/code/oepr/steps.htm

Kelley, K., & Bonner, K. (2001). Courseware ownership in distance education: Issues and policy models. Unpublished manuscript, University of Maryland University College, USA.

Palloff, R., & Pratt, K. (1999). Building learning communities in cyberspace. San Francisco: Jossey-Bass.

University of Washington. (1996). Recommended core ethical values. Retrieved from http://www.engr.washington.edu/~uw-epp/Pepl/Ethics/ethics3.html
This work was previously published in Flexible Learning in an Information Society, edited by B. H. Khan, pp. 209-217, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Chapter XLVIII
We Cannot Eat Data:
The Need for Computer Ethics to Address the Cultural and Ecological Impacts of Computing

Barbara Paterson
Marine Biology Research Institute, Zoology Department, University of Cape Town, South Africa
Abstract

Computer ethicists foresee that as information and communication technology (ICT) increasingly pervades more and more aspects of life, ethical issues increasingly will be computer-related. This view is underpinned by the assumption that progress is linear and inevitable. In accordance with this assumption, ICT is promoted as an essential component of development. This notion ignores the cultural origin of computing. Computer technology is a product of the Western worldview, and consequently, the computer revolution is experienced differently by people in different parts of the world. The computer revolution not only threatens to marginalize non-Western cultural traditions, but the Western way of life also has caused large-scale environmental damage. This chapter argues that computer ethics has to critically analyze the links between computing and its effects on cultural diversity and the natural environment and proposes that the Earth Charter can function as a framework for such holistic research.
Introduction

Computer ethics is a fast-growing and increasingly important field of practical philosophy. Deborah Johnson (1999) predicts that because the majority of moral problems will be computer ethics issues, computer ethics will cease to be a special field of ethics (Bynum, 2000). Kristina Gòrniak-
Kocikowska (1996) predicts that the computer revolution will give rise to a revolution of ethics and that computer ethics will become a global ethics relevant to all areas of human life. Bynum and Rogerson (1996) and Moor (1998) suggest that the second generation of computer ethics should be an era of global information ethics. These views seem to ignore the reality that the effects
of the computer revolution are experienced differently by people in different parts of the world. While for some the challenge is to keep up with the continuous new developments, others are still struggling to put in place the infrastructure that may allow them to ride the waves of the information tide and participate in its benefits. Nelson Mandela has stated that the gap between the information rich and the information poor is linked to quality of life and that, therefore, the capacity to communicate is likely to be the key human right in the 21st century (Ng’etich, 2001). However, at the beginning of the new century, the digital divide between industrialized nations and the developing world is immense. Eighty percent of worldwide Internet activity is in North America and Europe (Gandalf, 2005), although these areas represent 19% of the worldwide population. The ratio of Internet users to nonusers in developing countries is 1:750, compared to 1:35 in developed countries (Ng’etich, 2001). Although Internet usage in Africa grew by 429.8% between 2000 and 2005, it only represents 2.5% of worldwide usage, and only 2.7% of Africans are Internet users (World Internet Usage Statistics, 2005). In Africa, poverty and illiteracy prevent many people from accessing computer technology. Many believe that these hurdles are simply a question of income per capita and infrastructure (Anyian-Osigwe, 2002; Grant, Lewis, & Samoff, 1992). In the first world, computing is experienced as a crucial element in the competitive market and, therefore, also is promoted in the third world as a vital part of development. The notion that computers are the solution to bridge the gap between the rich and the poor overlooks the fact that computers are the product of a particular worldview promoting values such as efficiency, speed, and economic growth (Berman, 1992; Bowers, 2000). Computer use requires people to act and think in a prescribed unified way (Heim, 1993, as cited in Gòrniak-Kocikowska, 2001, 2004; Kocikowski, 1999, as cited in Gòrniak-Kocikowska, 2001,
2004). Not only is there the danger that the computer revolution will marginalize cultural traditions other than the Western one, but the Western way of life also has precipitated environmental degradation to the extent that we are now facing an environmental crisis of global warming, natural resource depletion, and accelerated species extinction. The term West can have different meanings, depending on its context. Western is no longer simply a geographical distinction but also a cultural and economic attribute. Here, West is used to refer to societies of Europe and their genealogical, colonial, and philosophical descendants, such as the United States, Australia, or Argentina. The term Western culture will be used to refer to the common system of values, norms, and artefacts of these societies, which has been shaped by the historic influence of Greco-Roman culture, Christianity, the Renaissance, and the Enlightenment. Different cultures have unique ways of storing, representing, and transmitting knowledge, such as mythologies, storytelling, proverbs, art, and dance. These modes cannot all be equally well-represented through computerization, but all are equally valid and deserve to be preserved (Hoesle, 1992). We cannot assume that computer technology will solve the complex social and environmental problems at hand. In order to adequately address these complex issues, a diverse body of knowledge is required, and we simply cannot afford to lose any sources of knowledge. Although there is an increasing gap between the first and third worlds, and the environmental crisis presents some of the most pressing and difficult moral issues (Hoesle, 1992), there is only a small body of research in computer ethics that addresses the problem of the digital divide between the first and third worlds and the relationship between computing and the environmental crisis (Capurro, 1990; Floridi, 2001; Gòrniak-Kocikowska, 2004). Just as computerization is a product of the West, most computer ethics is explored and defined by Western scholars.
Although writers such as Johnson (1997) and Gòrniak-Kocikowska (2004) acknowledge that computer technology was created from within a particular way of life, most current computer ethics research ignores the cultural origins of the technological determinist stance. This stance is prevalent in writing on computing and also affects computer ethics (Adam, 2001; Winner, 1997). The evolutionary view of progress and its impact on the human world leaves other cultures no choice but to adopt computerization and to assimilate the values that are embedded in computerization (Bowers, 2000). Moreover, current computer technology adopts a logico-rational paradigm that often relies on convergences by eliminations and aggregations. It is not clear that computers will develop in such a way so as to mimic or represent values or value choices that invoke those elements at the core of any human being. This consequence might not only lead to information colonialism but also may arguably reinforce a worldview that is ecologically unsound. In this chapter, it is argued that computer ethics has to critically address the links between computing and its effect on cultural diversity and the natural environment. A framework for global ethics is being provided in the form of the Earth Charter, which has been developed in a decadelong global process in order to realize a shared vision of a sustainable global society. It is suggested that computer ethics make use of this framework in order to carry out the holistic research that is required to see how information and computer technology can be used responsibly to create a future that is ecologically and economically sustainable and socially and culturally just.
Background

The field of computer ethics is underpinned largely by the assumption that technological progress is linear and inevitable. This observation has been made by Adam (2001) for the more popular writ-
ings and those dealing with professionalism, but it is equally true for the more academic computer ethics research. For example, James Moor (1998) states that the “computer revolution has a life on its own. … The digital genie is out of the bottle on a world-wide scale.” Kristina Gòrniak-Kocikowska (1996) predicts that the computer revolution will affect all aspects of human life and that, consequently, computer ethics will become the global ethics of the future not only in a geographic sense but also “in the sense that it will address the totality of human actions and relations.” The notion of a computer revolution carries the theme of a technology out of control, continuously developing while humans are limping along, hardly able to keep up with the innovations but always looking forward to the inevitable next step in computing power (Curry, 1995). In this view, technological progress is unavoidable and determines society rather than being determined by human need. Innovation becomes a goal in its own right (Veregin, 1995). This technological determinism takes the objectivity of the world for granted and ignores the complex relationships between society and technology, thus obstructing analysis and critique of technological development (Adam, 2001). Technological determinism is supported by the tool-based model of technology, which carries the implication that technology develops independently of social and scientific contexts, thus ignoring that technological innovation takes place against a background of social context and profit motives and that every technology imposes limits on thought and action (Pickles, 1995; Veregin, 1995; Winograd & Flores, 1986). The more technology becomes an integral and indispensable part of life, the greater is its influence (Veregin, 1995). The determinist stance is further reinforced by the increasing complexity of information systems and the increasing dependency on the technical knowledge of experts who understand them. For the average person, a
computer system resembles a black box whose inner workings remain a mystery. Furthermore, technological determinism informs an evolutionary view of progress that places different cultures in a competitive struggle. Seeing computerization as inevitable means that nontechnological cultures must adopt computerization or become extinct (Bowers, 2000). The Declaration of Principles formulated at the World Summit on the Information Society affirms that “the Information Society should be founded on and stimulate respect for cultural identity, cultural and linguistic diversity, traditions and religions, and foster dialogue among cultures and civilizations” (WSIS, 2004). Nonetheless, Gòrniak-Kocikowska (2004, p. 3) predicts that the computer revolution will lead to “a takeover and ruthless destruction of traditional values of local cultures by the new digital civilisation.” The computer is a product of Western civilization, and the field of computer ethics is dominated by Western scholars who tend to overlook problems outside their cultural experience. This ethnocentrism marginalizes the need to consider the long-term implications of displacing diverse cultural narratives.
The Computer as a Product of Western Culture

The computer has its roots in 16th- and 17th-century Europe, an era of increased mechanization and increased focus on mathematics. The mechanic philosophy, which emerged during the Renaissance and the scientific revolution, was based on the assumption that sense data are discrete and that problems can be analyzed into parts that can be manipulated by mathematics. For Hobbes, the human mind was a machine, and to reason was to add and subtract (Merchant, 1980). The binary system and its significance for machines were advocated by Leibniz in the latter half of the 17th century (Freiberger & Swaine, 2003). As more and more processes of daily life were being
mechanized, the desire also to automate cognitive processes such as calculation came naturally. The computer is the result of the effort to achieve both high-speed and high-precision automatic calculation and a machine capable of automatic reasoning (Mahoney, 1988). The classical view of reality is still influential in Western common sense thought and in the notion that Western science produces objective, value-free, and context-free knowledge (Merchant, 1980). Although “it is an inherent characteristic of common-sense thought … to affirm that its tenets are immediate deliverances of experience,” common sense is an organized cultural system composed of conclusions based on presuppositions (Geertz, 1983, p. 75). The mechanistic metaphor has shaped Western culture’s view of nature, history, society, and the human being (Merchant, 1980). This metaphor has influenced the birth of economics with Smith’s (1976) “The Wealth of Nations,” which analyzed market economies as self-governing mechanisms regulated by laws and giving rise to an orderly society. The scientific revolution has brought about a strong focus on quantification and computation. The industrial revolution and its emphasis on increasing production through mechanization has given rise to a strong focus on economics, attested by the development of both capitalism and Marxism. Twentieth-century information theory, the mathematical representation of transmission and processing of information, and computerization manifest the view that problem solving is essentially the manipulation of information according to a set of rules (Merchant, 1989, p. 231). The method of computer science is formalization (i.e., symbolization of real-world phenomena so they can be subjected to algorithmic treatment). The computer is thus a result and a symptom of Western culture’s high regard for abstraction and formalization; it is a product of the mathematician’s worldview, a physical device capable of operation in the realm of abstract thought.
Epistemological Issues of Computerization

One cornerstone of the rationalistic tradition is the correspondence theory of language (Winograd & Flores, 1986). This theory has influenced thinking about computers and their impact on society. Quantification and computer representations are taken as models of an objective reality. Whereas humans have developed a complex system of languages to interpret, store, copy, or transmit information that they receive in analog format through their senses, computers facilitate the external processing of information in digital format by representing it in binary form. Thus, computers reduce experience to numerical abstraction. As a consequence, the natural and social worlds are treated as being made up of discrete and observable elements that can be counted and measured. Reality is reduced to what can be expressed in numbers (Berman, 1992). Using a computer is an isolated activity of the individual. Although computers have revolutionized communication in terms of scope and speed, computers are not conducive to cooperation and teamwork. Human-computer interaction generally is characterized by a one-user-per-computer ratio. Collaboration with other computer users generally means division of a task into a linear sequence of subtasks, which can be addressed by single individuals. Furthermore, computer-based experience is a partial experience predominantly limited to the visual. The manipulation of objects displayed on the screen emphasizes the interaction between the active subject and the passive object, which is characteristic of a scientific mind that has been exposed as being male and disembodied (Keller, 1985). This interaction is underpinned further by a domination logic that objectifies both the natural world and people as other (e.g., women or indigenous people) to justify their exploitation (Merchant, 1980). The disengagement between subject and object reinforces a psychological distance between
the individual and the social and natural environment (Veregin, 1995; Weizenbaum, 1976). Computer-based experience is individualistic and anthropocentric and no longer influenced by geographic space. Computerization creates an alternative, unnatural environment, or infosphere (Floridi, 2001). The computer has created a different world in which most activities involve information technology; in addition, the concepts regarding these activities are shifting and becoming informationally enriched (Moor, 1998). Digitization has become a worldview in itself (Capurro, 1990). Such views may adequately describe the experiences of people whose lives are permeated by computerization but emphasize how different these experiences are from the realities of the majority of people on the planet. The value dualism of hard vs. soft information ignores the reality that the computerized abstraction of the world is not absolute but rather dependent on cultural context, scientific paradigm, and technological feasibility. The digital world excludes and obscures important aspects of the realms of society and natural environment. Contrary to the individualism of computerized experience, African thought emphasizes the close links among knowledge of space, of self, and one’s position in the community. Although African traditions and cultural practices are diverse, there are underlying affinities that justify certain generalizations (Wiredu, 1980). In African philosophy, a person is defined through his or her relationships with other persons, not through an isolated quality such as rationality (Menkiti, 1979; Shutte, 1993). African thought does not know the sharp distinction between the self and a world that is controlled and changed. The world is a place in which people participate in community affairs. In fact, participation is the keystone of traditional African society. Participation integrates individuals within the social and natural networks of the world. The members of a community are linked by participation, which becomes the meaning of personal and collective
unity and connects people both horizontally and vertically, the living and the dead as well as the physical environment that surrounds them (Mulago, 1969). Setiloane (1986) calls participation “the essence of being”; “I think, therefore I am” is replaced by “I participate, therefore I am” (Taylor, 1963, p. 41). The individual’s personhood is dependent on the community, but the continuation of the community depends on the individual. The life of the ancestors is continued through the individual; the individual life is continued through the dependents (Mulago, 1969). Unlike the Western concept of community, the African meaning of community does not refer to an aggregation of individuals (Menkiti, 1979) but prioritizes the group over the individual while still safeguarding the dignity and value of the individual (Senghor, 1966). Shutte (1993) explains that the notions of a person as a free being and that of being dependent for one’s personhood on others are not contradictory. Through being affirmed by others and through the desire to help and support others, the individual grows, personhood is developed, and personal freedom comes into being. African thought sees a person as a being under construction whose character changes as the relations to other persons change. To grow older means to become more of a person and more worthy of respect. In contrast to Western individualism and its emphasis on the rights of the individual Menkiti (1979) stresses that growth is a normative notion: “personhood is something at which individuals could fail” (p. 159). The individual belongs to the group and is linked to members of the group through interaction; conversation and dialogue are both purpose and activity of the community. Consequently, African socialism aims to realize not the will of the majority but the will of the community (Apostel, 1981) and, therefore, rejects both European socialism and Western capitalism because both are underpinned by subject-object dualism, which produces relationships between a person and a thing rather than a meeting of forces. Subject-object dualism, as is reinforced through
computerization, alienates the individual from others. While Western rational thought values individuality, African tradition is afraid of solitude and closed individuality, and values solidarity, consensus, and reconciliation. Users who share the cultural assumptions embedded in computer technology are not aware of the inherent bias, but members of other cultures are aware that they have to adapt to different patterns of thought and culturally bound ways of knowing (Bowers, 2000; Duncker, 2002; Walton & Vukovic, 2003; Winschiers & Paterson, 2004). The uncritical acceptance of the computer obscures its influence on the user’s thought patterns. The digital divide is seen as a problem of providing access to technology. However, it may be that the digital divide is an expression of the dualisms inherent in the cultural concepts underpinning computerization. Hoesle (1992) stresses that the main issue of contrast between industrialized countries and the third world is cultural. Capurro (1990) warns that a critical awareness of how information technology is used to manipulate both ourselves and the natural environment is necessary. Such awareness must include the linkages between progress in developing countries, which goes hand-in-hand with more widespread use of computers and the emergence of Western individualism and subjectivism (Bowers, 2000) and impacts on the environment.
The Linkages Between Information Technology, Development, and the Environment

Information technology generally is seen as an essential component of development and strongly promoted by international development agencies. It is assumed that information technology enhances both economic development and democratic practices. This promotion implies that unless developing countries apply information technology and
join the fast train of the computer revolution, they will be left behind (Berman, 1992). Publications on ICT development in Africa describe African societies as “lagging behind” (Schaefer, 2002), while ICT structures in Africa are compared to “the North American picture … in the first half of the twentieth century” (Kawooya, 2002). This line of argument not only supports technological determinism but also subscribes to a development ideology that is based on a particular concept of history. This concept is as linear as the concept of technological progress, assuming that every society has to go through the same stages until it reaches the same economic levels as countries considered developed.
Information Technology and Democracy

The notion that information technology contributes to a more democratic society is based on the assumption that information technology is nondiscriminatory in that it potentially provides equal opportunities to everyone. However, equal access does not ensure equal benefit (Neelameghan, 1981). In addition, this assumption ignores the existing imbalances and asymmetric societal structures that do not allow everyone to participate equally (Adam, 2001; Grant Lewis & Samoff, 1992; Veregin, 1995). Information technology is not inherently neutral but is linked to power from which moral implications arise: “Those who filter and package information for us in the [global information infrastructure] will hold enormous power over us” (Johnson, 1997). As a study by Introna and Nissenbaum (2000) suggests, the criteria by which search engines filter information are not transparent, and particular sites systematically are excluded. This conclusion runs counter to the democratic value associated with the Internet. As long as the public sphere continues to prioritize information from the North over information generated in the South (Lor & Britz,
2002), it is questionable whether people in Africa and other developing nations actually do have access to appropriate and useful information. Simply providing the infrastructure to tune in to the North-to-South information flow cannot bridge the gap between the information-rich and the information-poor. The information-poor have to generate information for themselves and about themselves. Thus, the issue raised by Johnson (1997) of power being exercised by those who filter and package information over those who receive this information is a pertinent one for Africa and other non-Western countries whose voices are marginalized in the global information scenario. Although the scope of online communication allows people to engage with a vast number of other people all over the world, Bowers (2000) argues that the learning of moral norms is of higher importance to democratic decision making than is access to information. Global communication, although spanning a much broader geographic scope, tends to join like-minded people who share a common interest. Online communities provide little opportunity for participants to understand the needs of others and their responsibility to them (Adam, 2001; Bowers, 2000; Johnson, 1997). The more time people spend communicating online with like-minded people, the less time is spent communicating with those whose geographic space they share. As a result, people engage less in debate and tend toward already formed biases. The paradox of growing insularity through increasing connectivity prompts Johnson (1997) to raise an old question: Is democracy possible without shared geographic space? In the African context, in which multi-ethnicity is the norm rather than the exception and tribalism and ethnic discrimination are a major obstacle to democracy, the question is extremely pertinent. In light of Johnson's and Bowers' analyses, there is a danger that computer-mediated communication may exacerbate the existing ethnic divide in many African countries by fostering insularity.
Information Technology and Economic Development

Ogundipe-Leslie (1993) argues that development itself is characterized by cultural imperialism and ethnocentrism and interferes with the natural internal processes in the society to which it is introduced. Development is based on the assumption that every society shares the same values that characterize developed societies, such as efficiency, speed, and competitiveness. These values are evoked through references to the information superhighway that will lead African countries into a future of material security and possession of particular goods and services that are typical of industrial societies. This eurocentric point of view is used as a standard of measurement so that societies that do not conform are perceived not as different but as primitive, traditional, or underdeveloped (Ogundipe-Leslie, 1993). The notion of development as economic upliftment ignores a nation's values, aspirations, beliefs, and patterns of behavior. As a consequence, measures to develop according to the Western economic model interfere with the natural internal processes in society and uproot the individual and collective lives of the people. Ogundipe-Leslie (1993) criticizes development not only for its cultural imperialism and ethnocentrism but also for ignoring the social costs that people have to pay for the interruption of their social dynamics. Computerization and particularly the global information infrastructure increasingly enforce Western ideology onto other cultures. Computerization is shaping consciousness and bodily experience to accept computer mediation as normal; consequently, computer illiteracy is considered socially abnormal and deficient, and those who do not use computers are regarded as less developed and less intelligent. Capurro (1990) confirms that the influence of modern information technology is shaping all aspects of social life. For example, in the industrialized world, in which computerization is permeating society, it is becoming apparent that
computerization rationalizes humans out of work processes (Bowers, 2000). Increasingly fewer people do more complex tasks in a shorter time, thus excluding more people from this process. One of Africa's hidden and untapped assets is its human resources (Britz, 2002). Computerization is seen as a means to provide information, education, and economic opportunity to all people to overcome the problem of unemployment. There is, however, the danger that computer technology in Africa might further reinforce and enlarge the gap between the advantaged and the disadvantaged. Berman (1992) highlights the origins of computer technology by stressing that during and after World War II and the subsequent period of expansion of the welfare state, economic planning data were used by the state to gain control over people. African countries inherited the concept of authoritarian state control from colonial rule. The computer's logical structure, which emphasizes hierarchy, sequence control, and iteration, reflects the structure of bureaucratic organizations. This focus on control is obscured by the assumption of scientific objectivity. Hence, the power exercised through computer application is hidden behind the appearance of expert decision making. The computer is a technology of command and control (Berman, 1992). Not only do computers reinforce authoritarianism, but they also become a symbol of advanced development and efficiency. But because computers narrow the scope to quantifiable information, indigenous knowledge is further marginalized, which makes it difficult for the state to take the qualitative aspects of social structure and culture into account. While some level of ICT may be important for development in the sense of improved well being and decreased suffering of African people, no level of ICT will be a sufficient condition for fulfilling these hopes. The perception that ICTs are necessary for economic development ignores the fact that technology invents its own needs. The multitude of new consumer products and the rate at which they are introduced indicate that producers create needs
where none existed (Veregin, 1995). The global ICT market is characterized by ever-decreasing intervals between software releases that force African countries to invest continuously in new software in order to avoid a further widening of the digital gap (Winschiers & Paterson, 2004).
The Environment

Bowers (2000) warns that the globalization of computer-based culture is not only a form of colonialism but that the cultural assumptions and lifestyles reinforced by the digital culture are ecologically problematic. Merchant (1980) stresses that the mechanistic view of nature sanctions exploitative environmental conduct. The mechanistic philosophy renders nature dead rather than a living, nurturing organism. As a consequence, cultural constraints, which previously restricted destructive environmental conduct, lost their force and were replaced with the machine metaphor (i.e., images of mastery and domination that sanctioned the exploitation of nature). While rural communities are aware that environmental conditions are unpredictable and that scarcity is a possibility, the modern Western way of life is based on the false assumption that progress does not depend on the contingencies of natural systems. Thus, the predominant challenge of the 21st century will be the environment. The notion of knowledge and information as strategic resources points toward the link between computerization and the industrial revolution and its main characteristic: the transformation of use value into exchange value (Capurro, 1990). Computerization commodifies information and anything else that falls under its domain. In this sense, the computer revolution "represents the digital phase of the Industrial revolution … it perpetuates the primary goal of transferring more aspects of everyday life into commodities that can be manufactured and sold, now on a global basis" (Bowers, 2000). There are "connections between computers, cultural diversity and the
ecological crisis" (Bowers, 2000). The market has become a universal principle encompassing all forms of human activity and commodifying the relationships among people and between human beings and the environment. The mapping and subdivision of lived space based on national or global grids nullify places of local meaning while playing an important part in the functioning of the capitalist economy by creating space as an exploitable resource (McHaffie, 1995). Geographic information systems (GIS) have further depersonalized this process. The parallel between information and nature is not accidental; both are considered resources. Nature is a shared resource that is vital for human survival, and information is a resource that is shared among people. Critics of Western-style development, such as Vandana Shiva (1989), have pointed out how the commodification of natural resources, which is typical of the Western paradigm, excludes natural systems from the economic model. In this paradigm, a river in its natural state is not considered productive unless it is dammed. The natural system has to be modified in order to produce value; the use value of the river has to be transformed into exchange value. The preoccupation with quantification contributes not only to the commodification of nature but also to the widespread acceptance of data as the basis of thought. Unless information can be computerized, it is not considered valuable; this undermines the importance of indigenous knowledge. The global computer revolution perpetuates the assumption underlying development ideology that it is merely a question of time until developing countries reach the level of industrialized nations. Hoesle (1992) asserts that this assumption cannot possibly be fulfilled. The ecological footprint of the so-called developed nations is not only far heavier than that of the third world but also is unsustainable. It therefore would be an ecological impossibility for the whole world to adopt the same lifestyle. Hoesle (1992) infers that because "the [Western] way of life is not
universalizable [it is] therefore immoral." He questions the legitimacy of a world society built according to Western values that have brought humankind to the verge of ecological disaster. Computerization is instrumental in shaping the ecological problems we are facing because it decontextualizes knowledge and isolates it from the ambiguities and complexities of reality. The rationalistic tendency to fragment complex holistic processes into a series of discrete problems leads to the inability to address ecological and social issues adequately. Popular cyberlibertarian ideology wrongly assumes that information technology provides free and equal interactions and equal opportunities that neutralize asymmetric social structures (Adam, 2001; Winner, 1997). ICTs can be truly beneficial to Africans only if they support the African concept of community and counteract insularity. Beyond focusing on the advantages of ICTs, the negative consequences of computerization observable in industrialized nations, such as the rationalization of labor, must be avoided. The current patterns of production (i.e., the high frequencies of hardware and software releases) are ecologically and socially unsustainable and need to change (Winschiers & Paterson, 2004). Hoesle (1992) therefore stresses the need to bring values back into focus and to recognize humans as part of the cosmos. Modern subjectivism and the sectorial and analytic character of scientific thinking have almost forgotten the advantages of a holistic approach to reality (Hoesle, 1992). The value and legitimacy of Africa's rich tacit knowledge have been undermined because this knowledge is largely informal and does not fit the computer-imposed data formats (Adeya & Cogburn, 2001; Harris, Weiner, Warner, & Levin, 1995). Cultures are reservoirs of expression and symbolic representations with a truth claim of their own and, thus, need to be preserved (Hoesle, 1992). However, Lor and Britz (2002) call attention to the limited contribution that information generated
in the South is making to the global knowledge society and point to the bias of the public sphere toward information generated in the North. Only a minute proportion of Internet hosts is located in Africa, although most countries on the continent have achieved connectivity to the Internet (Maloka & le Roux, 2001). The pressing issue is not providing access to technology in order to turn more people into receivers of information that was created elsewhere and may not be useful to them but, as suggested by Capurro (1990), to find ways that African countries can promote their identities in information production, distribution, and use. In terms of a global information ecology, he stresses the importance "of finding the right balance … between the blessings of universality and the need for preserving plurality" (Capurro, 1990, p. 130). In order to find this balance, a great conversation is necessary that transcends the limitations of discourse among members of particular social groups (Berman, 1992; Moor, 1998). Such a global dialogue must be cross-sectoral, cross-cultural, and transdisciplinary. Capurro (1990) reminds us that the electronic revolution is only a possibility that has to be inserted responsibly into existing cultural and social contexts in order to produce the necessary knowledge pluralism to address the complex social and ecological issues we are facing. To meet the need for a global dialogue that addresses the ethical requirements of truly sustainable development, a global ethical framework has been developed in the form of the Earth Charter.
The Earth Charter

Development of the Earth Charter (www.earthcharter.org) began in 1987 with the Brundtland Report, "Our Common Future," which called for a Charter for Nature that would set forth fundamental principles for sustainable development (WCED, 1987). The Earth Charter was addressed again during
the 1992 Rio Earth Summit and taken forward when Maurice Strong and Mikhail Gorbachev launched the Earth Charter Initiative in 1994. In a decade-long participatory and consultative process involving all major religions and people from different cultures and all sectors of society, the present list of principles of the Earth Charter was developed and finalized in 2000. The Earth Charter consists of a preamble, 16 principles, numerous subprinciples, and a conclusion suggesting "the way forward." The preamble expresses the conviction that the future depends on the choices we will make and that "we must join together to bring forth a sustainable global society founded on respect for nature, universal human rights, economic justice, and a culture of peace." The Earth Charter locates humanity as part of the cosmos and stresses the interdependencies among people and between people and nature. The preamble emphasizes that the foundations of global security are threatened by patterns of consumption and production, which undermine communities and cause environmental degradation. The Earth Charter emphasizes that every individual shares the "universal responsibility" of facing the challenges of using knowledge and technologies to build a just, democratic, and ecologically sound future. The Earth Charter principles address four themes: respect and care for the community of life; ecological integrity; social and economic justice; and democracy, nonviolence, and peace. As the "way forward," the Earth Charter calls for the development of a sustainable way of life based on a "collaborative search for truth and wisdom" in which cultural diversity is valued.
The Earth Charter and Computer Ethics

Because computing is the product as well as the extension of a way of life and worldview that largely has caused the environmental crisis, and because computers are directly linked to the concept of third-world development, computer
ethics has to locate itself more explicitly within the broader context of environmental issues on the one hand and development ethics on the other. The Earth Charter framework not only helps to address particular issues in light of their compliance with ecological integrity and respect for nature, but also stresses the gap between rich and poor and the global responsibility to address poverty. By stating that "when basic needs have been met, human development is primarily about being more, not having more," the Earth Charter prioritizes qualitative over quantitative criteria for measuring development. The Earth Charter and the WSIS declaration subscribe to the same values: peace, freedom, equality, solidarity, tolerance, shared responsibility, and respect for nature. The two documents do not replace each other but are compatible. While WSIS focuses on ICT development, the Earth Charter is much broader in scope, thus complementing and strengthening the WSIS declaration. Being global both in terms of content and scope (Dower, 2005), the Earth Charter is a proposal for a system of global ethics. It encompasses human rights as well as less formalized principles for ecological, social, and political development. If ICTs are to be an integral part of this future, it is vital that computer ethics takes cognizance of the existence of this global ethical framework and examines how computing can be inserted into this vision of a sustainable and just future. The Earth Charter asserts the interconnectedness of people and the environment and affirms the wisdom of different cultural traditions, while at the same time confirming the contribution of humanistic science and sustainable technology. The preamble to the Earth Charter highlights the current environmental and social crisis but notes that "these trends are perilous—but not inevitable." In other words, the Earth Charter does not subscribe to technological determinism but rather declares that "the choice is ours." The choices we are making as individuals as well as
communities determine the future. To change the course of current patterns of thought and behavior is a matter of human willpower and creative energy. It involves a change of attitudes, worldviews, values, and ways of living, such as consumption and production habits. Unlike the WSIS declaration, the Earth Charter addresses not only states but also the broader public. The global ethic formulated in the Earth Charter provides guidelines for behavior and action. But it is important to realize that the Earth Charter is not to be understood as final. It provides a framework and catalyst for reflection and discussion. As such, this framework is of value to computer ethics. It is the role of computer ethics to guide ICT development toward a sustainable future. By addressing whether the Earth Charter can be endorsed by computer professionals, computer ethics can examine the Earth Charter's justification. The results of such an examination will be fruitful for both the Earth Charter and computer ethics. The vision for a sustainable future that is set forth in the preamble to the Earth Charter can guide the development of a vision for the role of computing and ICT in the future. The Earth Charter can serve as an ethical values framework for improving progress toward sustainability, designing codes of conduct for both professional practice and education, and designing accountability systems. The Earth Charter principles set out under the heading Ecological Integrity can help to guide computing and ICT development in terms of environmental performance, which, for example, would refer to issues concerning energy and emissions, both in the production and the use of computer technology; the materials used; the use of resources in production cycles; and the disposal of computing technology, hazardous substances, and so forth. The social impacts of computing (e.g., the danger of increased insularity of users or the rationalization of labor) can be addressed by the principles under the heading Social and Economic Justice. The Earth Charter
principles grouped under Democracy, Nonviolence, and Peace provide guidance for addressing issues such as the impact of computing on community participation, on the well being of the community, on the community environment, and on quality of life. Today's globalized world is a multicultural world. However, the field of computer ethics is dominated by Western perspectives. It is necessary to overcome this eurocentric tendency by examining the implications of computer technology from different cultural paradigms. The Earth Charter, on the other hand, has been developed by people from various cultural contexts. Computer ethicists may examine how computing can either support or violate the principles and values stated in the Earth Charter. Using the Earth Charter as a framework for addressing ethical issues arising in computing will enable computer ethicists to examine these issues from the perspectives of different cultural backgrounds as well as in terms of their implications for the environment and the development of a sustainable global society. Using the Earth Charter to address particular computer ethics issues will help to put them in a larger global context, supporting a more critical and inclusive examination without giving in to technological determinism or information colonialism. Rather than accepting the current destructive tendencies of industrial civilization and imposing them on developing nations, the Earth Charter encourages the reinvention of industrial civilization through changes of cultural orientation and extensive revision of systems, practices, and procedures. As such, the Earth Charter is closely linked to development ethics (Dower, 2005). Computer ethics needs to acknowledge the linkages between computing, development, and environmental conduct. The Earth Charter provides a tool for the development of a computer ethics that is global in both content and scope and contributes toward a sustainable future for all.
Conclusion
There are several positions in computer ethics that assert the global character of computer ethics. These views, however, ignore the observation that the digital divide is a divide between a minority of people whose lives are permeated by computerization and a majority of people whose lives largely are unaffected by the computer. These views also seem to ignore that computerization itself is a product of a particular culture and worldview. In spite of the advantages that computerization and information technology have to offer, there is the danger of traditional worldviews and cultural practices being transformed and replaced with Western values embedded in the technology. This replacement is a form of information colonialism and a threat to the environment that endangers human survival. Górniak-Kocikowska (2004) predicts that although it would be desirable for the emergence of a new global ethic to be a participatory process of dialogue and exchange, it is more likely that Western cultural values and worldviews will be imposed through computerization. It is the responsibility of computer ethics to prevent such ethnocentrism. To prevent computerization from enforcing the adoption of a Western worldview, a broad cross-cultural dialogue is necessary. Such dialogical ethical research requires a balanced framework that takes cultural diversity and the need for ecological sustainability into account. A suitable framework for global ethical dialogue is already in place in the form of the Earth Charter, a set of principles that lays down an inclusive ethical vision that recognizes the interdependencies of environmental protection, human rights, equitable human development, and peace.
Acknowledgment

Les Underhill, Britta Schinzel, Tim Dunne, and John Paterson read earlier drafts of this chapter.

References

Adam, A. (2001). Computer ethics in a different voice. Information and Organization, 11, 235-261.
Adeya, C. N., & Cogburn, D. L. (2001). Globalisation and the information economy: Challenges and opportunities for Africa. In G. Nulerns, N. Hafkin, L. Van Audenhoven, & B. Cammaerts (Eds.), The digital divide in developing countries: Towards an information society in Africa (pp. 77-112). Brussels: Brussels University Press.
Anyiam-Osigwe, M. C. (2002). Africa's new awakening and ICT. Toward attaining sustainable democracy in Africa. In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 36-46). Jefferson, NC: McFarland.
Apostel, L. (1981). African philosophy: Myth or reality. Gent, Belgium: Story-Scientia.
Berman, B. J. (1992). The state, computers, and African development: The information non-revolution. In S. Grant Lewis & J. Samoff (Eds.), Microcomputers in African development: Critical perspectives (pp. 213-229). Boulder, CO: Westview Press.
Bowers, C. A. (2000). Let them eat data: How computers affect education, cultural diversity, and the prospects of ecological sustainability. Athens: The University of Georgia Press.
Britz, J. J. (2002). Africa and its place in the twenty-first century. A moral reflection. In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 5-6). Jefferson, NC: McFarland.
Bynum, T. W. (2000, Summer). A very short history of computer ethics. Newsletter of the American Philosophical Association on Philosophy and Computing. Retrieved July 2005, from http://www.southernct.edu/organizations/rccs/resources/research/introduction/bynum_shrt_hist.html
Bynum, T. W., & Rogerson, S. (1996). Introduction and overview: Global information ethics. Science and Engineering Ethics, 2, 131-136.
Capurro, R. (1990). Towards an information ecology. In I. Wormell (Ed.), Information quality: Definitions and dimensions. Proceedings of the NORDINFO International Seminar "Information and Quality" (pp. 122-139). Copenhagen.
Curry, M. R. (1995). Geographic information systems and the inevitability of ethical inconsistencies. In J. Pickles (Ed.), Ground truth (pp. 68-87). London: The Guilford Press.
Dower, N. (2005). The earth charter and global ethics. Worldviews: Environment, Culture, Religion, 8, 15-28.
Duncker, E. (2002). Cross-cultural usability of the library metaphor. In Proceedings of the Second Joint Conference on Digital Libraries (JCDL) of the Association of Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers Computer Society (IEEE-CS) 2002, Portland, Oregon (pp. 223-230).
Floridi, L. (2001). Information ethics: An environmental approach to the digital divide. Philosophy in the Contemporary World, 9(1). Retrieved June 2005, from www.wolfson.ox.ac.uk/~floridi/pdf/ieeadd.pdf
Freiberger, P. A., & Swaine, M. R. (2003). Computers. In Encyclopædia Britannica 2003 [CD-ROM]. London: Encyclopedia Britannica.
Gandalf. (2005). Data on Internet activity worldwide (hostcount). Retrieved July 2005, from http://www.gandalf.it/data/data1.htm
Geertz, C. (1983). Local knowledge. Further essays in interpretive anthropology. New York: Basic Books.
Górniak-Kocikowska, K. (1996). The computer revolution and the problem of global ethics. Science and Engineering Ethics, 2, 177-190.
Górniak-Kocikowska, K. (2004). The global culture of digital technology and its ethics. The ETHICOMP E-Journal, 1(3). Retrieved May 2005, from http://www.ccsr.cse.dmu.ac.uk/journal
Grant Lewis, S., & Samoff, J. (1992). Introduction. In S. Grant Lewis & J. Samoff (Eds.), Microcomputers in African development: Critical perspectives (pp. 1-24). Boulder, CO: Westview Press.
Harris, T. M., Weiner, D., Warner, T. A., & Levin, R. (1995). Pursuing social goals through participatory geographic information systems. Redressing South Africa's historical political ecology. In J. Pickles (Ed.), Ground truth (pp. 196-222). London: The Guilford Press.
Heim, M. (1993). The metaphysics of virtual reality. New York: Oxford University Press.
Hoesle, V. (1992). The third world as a philosophical problem. Social Research, 59, 227-263.
Introna, L., & Nissenbaum, H. (2000). Shaping the Web. Why the politics of search engines matter. The Information Society, 16, 169-185.
Johnson, D. G. (1997). Is the global information infrastructure a democratic technology? Computers and Society, 27, 20-26.
Johnson, D. G. (1999). Computer ethics in the 21st century. In Proceedings of ETHICOMP99, Rome, Italy.
Kawooya, D. (2002). The digital divide. An ethical dilemma for information professionals in Uganda? In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 28-35). Jefferson, NC: McFarland.
Keller, E. F. (1985). Reflections on gender and science. New Haven, CT: Yale University Press.
Kocikowski, A. (1999). Technologia informatyczna a stary problem totalitaryzmu [Information technology and the old problem of totalitarianism]. Nauka, 1, 120-126.
Lor, P. J., & Britz, J. J. (2002). Information imperialism. Moral problems in information flows from south to north. In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 15-21). Jefferson, NC: McFarland.
Mahoney, M. S. (1988). The history of computing in the history of technology. Annals of the History of Computing, 10, 113-125.
Maloka, E., & le Roux, E. (2001). Africa in the new millennium: Challenges and prospects. Pretoria: Africa Institute of South Africa.
McHaffie, P. (1995). Manufacturing metaphors. Public cartography, the market, and democracy. In J. Pickles (Ed.), Ground truth (pp. 113-129). New York: The Guilford Press.
Menkiti, I. A. (1979). Person and community in African traditional thought. In R. A. Wright (Ed.), African philosophy (pp. 157-168). New York: University Press.
Merchant, C. (1980). The death of nature: Women, ecology, and the scientific revolution. San Francisco: Harper and Row.
Moor, J. (1998). Reason, relativity, and responsibility in computer ethics. Computers and Society, 28, 14-21.
Mulago, V. (1969). Vital participation: The cohesive principle of the Bantu community. In K. Dickson & P. Ellingworth (Eds.), Biblical revelation and African beliefs (pp. 137-158). London: Lutterworth.
Neelameghan, A. (1981). Some issues in information transfer. A third world perspective. International Federation of Library Associates (IFLA) Journal, 7, 8-18.
Ng'etich, K. A. (2001). Harnessing computer-mediated communication technologies in the unification of Africa: Constraints and potentials. In E. Maloka & E. le Roux (Eds.), Africa in the new
millennium: Challenges and prospects (pp. 77-85). Pretoria: Africa Institute of South Africa.
Ogundipe-Leslie, M. (1993). African women, culture and another development. In S. M. James & A. P. A. Busia (Eds.), Theorising black feminism: The visionary pragmatism of black women (pp. 102-117). London: Routledge.
Pickles, J. (1995). Representations in an electronic age. Geography, GIS, and democracy. In J. Pickles (Ed.), Ground truth (pp. 1-30). New York: The Guilford Press.
Schaefer III, S. J. (2002). Telecommunications infrastructure in the African continent. 1960-2010. In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 22-27). Jefferson, NC: McFarland.
Senghor, L. (1966). Negritude—A humanism of the 20th century. Optima, 16, 1-8.
Setiloane, G. M. (1986). African theology: An introduction. Johannesburg: Skotaville Publishers.
Shiva, V. (1989). Staying alive: Women, ecology, and development. London: Zed Books.
Shutte, A. (1993). Philosophy for Africa. Cape Town: University of Cape Town Press.
Smith, A. (1976). An inquiry into the nature and causes of the wealth of nations (R. H. Campbell & A. S. Skinner, Eds.). Oxford, UK: Clarendon Press.
Taylor, J. V. (1963). The primal vision: Christian presence amid African religion. London: S.C.M. Press.
Veregin, H. (1995). Computer innovation and adoption in geography: A critique of conventional technological models. In J. Pickles (Ed.), Ground truth (pp. 88-112). London: The Guilford Press.
Walton, M., & Vukovic, V. (2003). Cultures, literacy, and the Web: Dimensions of information "scent." Interactions, 10, 64-71.
WCED (1987). Our common future. Report of the World Commission on Environment and Development (WCED) (pp. 323-333). New York: Oxford University Press.
Weizenbaum, J. (1976). Computer power and human reason: From judgement to calculation. New York: W.H. Freeman.
Winner, L. (1997). Cyberlibertarian myths and the prospect for community. ACM Computers and Society, 27, 14-19.
Winograd, T., & Flores, F. (1986). Understanding computers and cognition. Norwood, NJ: Ablex Publishing Corporation.
Winschiers, H., & Paterson, B. (2004). Sustainable software development. In Proceedings of the 2004 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries (SAICSIT 2004)—Fulfilling the promise of ICT, Stellenbosch, South Africa (pp. 111-115).
Wiredu, K. (1980). Philosophy and an African culture. Cambridge: Cambridge University Press.
World Internet Usage Statistics. (2005). Data on Internet usage and population. Retrieved November 2005, from http://www.internetworldstats.com/stats1.htm
WSIS. (2004). The world summit on the information society: Declaration of principles. Retrieved November 2005, from http://www.itu.int/wsis
This work was previously published in Information Technology Ethics: Cultural Perspectives, edited by S. Hongladarom and C. Ess, pp. 153-168, copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter XLIX
Current and Future State of ICT Deployment and Utilization in Healthcare: An Analysis of Cross-Cultural Ethical Issues

Bernd Carsten Stahl, De Montfort University, UK
Simon Rogerson, De Montfort University, UK
Amin Kashmeery, University of Durham, UK
Abstract

The ever-changing face of ICT can render its deployment rather problematic in sensitive areas of application, such as healthcare. The ethical implications are multifaceted and have diverse degrees of sensitivity from culture to culture. Our essay attempts to shed light on these interplaying factors in a cross-cultural analysis that takes into account prospective ICT development.
Preamble

Satisfactory provision of healthcare is central to our quality of life. At the same time, healthcare is a central cost factor in our personal as well as public expenditure. Healthcare systems in
different countries face different challenges and provide different levels of services. It is probably fair to say that there is no one model that can address or overcome all issues. It is probably also fair to say that most healthcare systems are trying to use technology in order to address some of the
problems they are facing. Among these problems, one can find issues of cost minimization, consistency of care provision, quality control, labor saving, and a variety of others. This chapter will explore the relationship of culture, ethics, and the use of information and communication technology (ICT) in healthcare. As this suggests, we will not be able to do justice to the intersection of these four topics. Instead, the chapter will attempt to identify some dominant issues that are of relevance today. The main purpose of this chapter is to develop a framework that will allow us to understand how culture can shape the perception of the ethicality of the use of ICT in healthcare. It is meant to provide the foundation upon which we can build valuable empirical research. We are interested specifically in the question of whether there are cultural differences with regard to the perception of ICT in healthcare between individuals from cultures in a non-Western setting and those from European, specifically British, culture. Given the size and complexity of the topic, we will use this chapter to outline some of the relationships among the main concepts and to identify areas worthy of research. Following basic definitions of pertinent concepts, the chapter will start by discussing the relationship between culture and health informatics and then proceed to describe some of the ethical issues of health informatics. These two strands of thought then will be combined to develop the concept of cultural influence on the perception of the ethics and morality of health information systems. We then will describe several scenarios that will make clear what kinds of issues we expect to encounter. Establishing the descriptive and theoretical part of our topic will pave the way for developing methodological considerations that are pertinent to empirical research and to the cultural impact and ethics of ICT use in healthcare.
Ethics and Morality

It long has been established that, for the sake of practicality and application, a distinction should be made between ethics and morality. Morality can be defined as the set of acceptable social rules that are adhered to in a given community. Following this, one can define ethics as the theory of morality. Ethics then can be used to describe, define, and justify morality. This distinction is not required by the etymology of the concepts, and it is not always used in English language writings on ethics. It is more widely adhered to in continental European philosophy (Stahl, 2004). We nevertheless believe it to be useful because it can help us distinguish between fundamentally different issues. Morality is a social fact and can be observed through the use of established social science methods. For example, we can observe patients in hospitals and find out whether they believe that a certain action is good or bad, whether they believe that a certain use of technology is acceptable or not. This question is fundamentally different from the ethical question of why the use of a technology is good or bad. While most individuals follow a morality of which they are aware, many of us rarely engage in explicit ethical reflection. That means that patients' ethical convictions, while important for their moral attitudes, often are implicit and much harder to determine. This has methodological consequences that will be discussed later. Another reason the distinction between ethics and morality is important for our project is that it roughly corresponds to the difference between descriptive and normative research. One can undertake purely descriptive research on the social fact of morality, but when it comes to ethical justifications and normative suggestions, one changes the level of analysis. This is important for researchers to reflect on, and the conceptual distinction will make it easier for us to do so.
Culture and Informatics

This section will briefly review the relationship between culture and health informatics. For this purpose, it is useful to state why we are speaking of health informatics rather than information systems, software engineering, or computer science. The reason is that the term informatics is more inclusive and aims at use and social context rather than at the technical artefact itself. This is particularly pertinent, given that technology applications in healthcare are never ends in themselves or pure gadgets, but are always there to facilitate the aims of providing care. Such care also is always embedded in situations, cultures, and communities, which must be reflected by the technology. The very term informatics, therefore, indicates that we are looking at a wider picture. Interestingly, a large proportion of academic and practitioner publications dealing with ICT in healthcare uses the term. Having thus explained the use of the term healthcare informatics, this section now will discuss the concept of culture and its relationship to health informatics.
Culture

Culture is a multifaceted concept that is hard, if not impossible, to define. For our purposes, we can start with an understanding of culture as the totality of shared meanings and interpretations of a given group. This repository of shared understandings and interpretations of the world is represented by emblems whose meanings and interpretations members of the same culture share (Castells, 2000; Galtung, 1998; Ward & Peppard, 1996). The exchange of meanings and the agreement on appropriate interpretations of emblems breed skills of communication, which begin with primitive means but evolve the more the communities develop. Eventually, communities become communication aficionados to the extent that the very existence of the culture depends on it. The nature of the culture will be reflected in
the nature of communication, with information or meaning being implicit or explicit. This often is referred to as low context communication and high context communication (Nance & Strohmaier, 1994). This, in turn, underlines the social nature of cultures. This is a rather wide understanding of culture that requires further specification. It is useful, however, because it allows an understanding of culture as a multiple phenomenon with areas of overlap and frequent change. For example, it accommodates cultures of different reach, such as organizational culture and national culture. Most organizations will have some particularities that are meaningful to their members and that outsiders cannot access easily. This is particularly so within a culture of collectivism (Nance & Strohmaier, 1994). Organizations thus fulfill the definition of culture, and they arguably require a culture in order to facilitate their long-term survival (Robey & Azevedo, 1994). A similar description can be found for national cultures; namely, that they are the collection of things, ideas, and techniques, including institutions, which a society needs to persist (Gehlen, 1997). It should be clear that such a definition of culture will not allow easy delimitations and distinctions. Most individuals will be members of a variety of cultures (e.g., work, sports clubs, ethnic groups, families, nations, and regions). These memberships may be mutually reinforcing, but they also may be contradictory. An important aspect of culture is that it has a normative function. This means that cultures contain an idea of how things should be and how their members are expected to behave. This means that they are inherently utopian and imply a good state of the world (Bourdil, 1996). There are different ways in which the normative character of cultures is transmitted. One of these ways is what we usually call ethics or morality. These refer to the norms that are accepted in a given culture and the justification of such norms. This also can be translated in terms of values that are
implicit in all cultures. Therefore, one can say that culture is a "value-concept" (Weber, 1994, p. 71). A related and very important aspect is that of tenet. Tenets and creeds constitute what we call meta-ethics. Their essence is normative guidance, usually imposing values and principles on the communities that share belief in them. Some religions are so comprehensive that their creeds and tenets can collectively govern all aspects of the lives of individuals, including their interactions with other members of the community. To such communities, the only ethical rules they will abide by are those ordained by their religion. All of this should render it clear that cultures are linked deeply to questions of identity. On an individual level, the question "Who am I?" is answered by a collection of narratives. These narratives draw on the cultures of which the individual is a member. Despite wide classificatory debate among relativists, utilitarians, teleologists, and deontologists, it remains a fact of life that the basic morality of the human species that commands decent conduct (i.e., do not lie; do not steal; do not deceive; etc.) is connate and natura insitus and, therefore, to that extent universal. Culture, one thus can claim, is a universal ingredient of human existence. Clashes of cultures can evolve when interests conflict and desires intersect. Amid the perplexities of today's modern life, including those of technological applications, paradigm shifts might redefine postulates. When that happens, some cultures might tend to impose their compromised values on others, thereby leading to contradictory influences on identity and to cognitive dissonance, which can lead to pathological developments. This point will be addressed in this chapter, since the description of distorted cultural values and their influence on the perception of ethics in health informatics constitutes part of an ongoing debate.
Culture and ICT

The last section indicates that there is a close link between culture and technology. If culture imposes necessary conditions for reproduction, for instance, then it becomes clear that reproduction technology and culture will be mutually dependent. By analogy, understanding technology as a rational approach to the environment, which typically uses artifacts for the subordination of nature, it is safe to say that culture is one of the human constants. The close link of technology and culture extends to different types of technology but, most notably, to the most important and prominent technologies of a given culture. Early agricultural cultures can be characterized by their use of ploughs or other technologies that allowed them to develop agricultural production. Similarly, the cultures we live in today are characterized by their relationship to ICT. Talk of the "information society," the "global village," or similar constructs indicates that we are aware of the importance of ICT for our culture. ICT also is linked to the other defining technologies of our age, such as biotechnology and nanotechnology, but also to modern developments of more traditional technologies such as mechanical production technologies. ICT thus allows the functioning of social institutions in Western societies such as the UK. Public administration and economic activities would look different without it. On the other hand, the social institutions we find in our cultures allow the use and development of ICT. Apart from this high level of interdependency between culture and ICT, there are also links that are less visible but as important. If we go back to the definition of culture as a set of shared meanings, norms, and interpretations, then we will see that culture strongly influences our perception of ICT and of the uses we believe to be acceptable. Again, this is a two-way relationship in which technology also affects the repository of available signs and their interpretations. For example, we find it normal
to speak of humans as information processing machines and to compare our cognitive functions with those of a computer. This indicates that technology has found its way into the set of symbols and metaphors that we believe to be meaningful. The issue becomes more complicated when culture and, consequently, ethics plays on several notes or acts on a multitude of fronts. The ethics of ICT in healthcare demands macro- and micro-analyses not only of the impact of ICT application on societal values but also of the combined impacts of the forces interplaying where the ICT and healthcare domains overlap.
Ethics and Healthcare

Healthcare procedures touch most of us on many different occasions. They are there during the most existential moments of our lives, from birth right through to death. They can affect our well being directly by providing remedies and alleviating pain and indirectly by offering us the certainty that we will be taken care of when needed. In light of the importance of healthcare for our physical and mental well being, it is easy to say that healthcare and ethics are closely related. But, nevertheless, it will be helpful to clarify the concepts used and to indicate some of the areas that we believe to be of relevance to our research.
Value-Based Practice (VBP) vs. Evidence-Based Practice (EBP) in Healthcare

Contemporary applications of ICT expand across the spectrum of healthcare fields. New developments in areas such as ICT are still unfolding and will continue to do so for some time to come. This situation usually creates what could be described as a policy vacuum (Moor, 1985). Applications of ICT in healthcare systems lead to yet a further step of ambiguity and uncertainty. This is because a
policy vacuum breeds an ethics vacuum. In order for action policies to be formulated, a conceptual framework needs to be created through appropriate analysis of the situation in question. This is very much the case in healthcare. In healthcare settings, current and future extensive use of ICT undoubtedly would result in a new state of affairs, which needs to be conceptualized in order for it to be given the legal, moral, and ethical codes that would keep its deployment acceptable in legal, moral, and ethical terms. Over the past five decades, the world has been going through the initial, introductory phase of communication technologies, followed immediately by a boom in information technology applications and then a convergence of computing, communication, and media technologies. We currently are witnessing the new phase, which is still pervading diverse aspects of our lives in an unprecedented interfusion. Thus, this phase merits the title permeation stage. The ethical dimensions of healthcare ICT deployment under such circumstances can best be elucidated if investigations cover areas where vulnerable groups constitute the matrix. These mainly include neonates and infants, the elderly, palliative care patients, and mentally ill patients. ICT could transform our concepts to the extent that the question is no longer "Would it enhance healthcare?" but rather "What is healthcare?" When that happens, we then will realize how intangible healthcare has become. It is in such areas that ambiguity is evident as to whether health decision making should be the product of values or facts. In terms of systematic categorization, recent researchers divide this domain into two distinct major subsections; namely, value-based practice (VBP) and evidence-based practice (EBP). Bill Fulford, one of its prominent advocates, defines VBP as the theory and skills base for effective healthcare decision making, where different (and, hence, potentially conflicting) values are in play (Fulford, 2004). On the other side of the debate, Ronald Levant, an advocate of EBP, describes
the EBP initiative as a movement that strives to achieve accountability in medicine, psychology, education, public policy, and even architecture. He maintains that professionals are required to base their practices to whatever extent possible on evidence (Levant, 2005).
How Relevant Is the VBP/EBP Debate to Health Informatics?

Values in a broader sense are "standards of behaviour" (Waite, Hawker, & Soanes, 2001). But this definition falls short of providing even a framework if values are to be used for applied purposes, such as healthcare practice. In that sense, Sackett, Straus, Scott-Richardson, Rosenberg, and Haynes (2000) maintain that patients' values specifically mean their unique individual preferences, concerns, and expectations. In practical terms, what these bring to clinical encounters should be integrated for the purpose of making sound clinical decisions that would serve these patients' interests. Combining the two definitions, it would appear that standards of behavior can become a function of preferences, concerns, and expectations; in short, the interests of all parties involved. Other definitions go as follows, singling out people's interests: Value-based practices (VBPs) are practices that are grounded in people-first values, such as choice, growth, personhood, and so forth (Anthony, 2004). Seedhouse (2005) in his recent work sets out his vision for a democratic future for healthcare decision making in which the values of all stakeholders in the healthcare system will be taken into consideration. Values, being a subjective domain, require practice skills and methods of delivery when applied as a tool for healthcare decision making. Interaction between patients and providers is an essential part of the healthcare process. Value-based practice takes such activity into account and considers its proper application a responsibility that the provider should strive to achieve. Among subheadings relevant to informatics and ICT, knowledge and communication stand
out in this context. These are two terms used for information retrieval, acquisition, and accumulation. Information usually is acquired through first-hand narratives, polls, surveys, and media reports. In order for communication to be effective in terms of value-based practice in healthcare, the human factor is indispensable. Elements such as attentive listening, empathy, sympathy, and reasoning are attributes only of human beings. Methods such as Internet polls, postal surveys, and camera surveillance are potential recipes for communication failure, which, by the same token, will impact the value-based assessment and decision making. The knowledge thus acquired might have a wider impact of a negative nature not only on the users (patients) but also on other groups involved in the process, such as managers, social workers, insurers, and so forth (Colombo, Bendelow, Fulford, & William, 2003). As interests vary from person to person and from group to group, the question as to how conflict is handled becomes inevitable. In fact, this is the paradox of value-based practice in healthcare. Therefore, the term value ought to be analyzed further into its micro-dimensions, which constitute a spectrum ranging from the abstract sense of ethics, through self-fulfilment criteria (wishes, desires, needs, etc.), right up to principles and beliefs. Another feature that renders the value concept problematic is the fact that values are not static; they change with time and can be modified under certain circumstances. This situation is exacerbated by the fact that some cultures allow such changes to take place, while others allow them only within a very narrow margin. Also, the attitudes toward such changes can bear different connotations. They can be defined as developments in the positive sense of the word, but they also can be defined as degradation. Evidence-based practice, on the other hand, can be executed with the least reliance on the human factor. Artificial intelligence, expert systems, and diagnostic software are vivid examples.
When sufficient material for decision-making precursors is at hand, opinions based on facts and evidence come into play. As mentioned above, EBP advocates concentrate their concern on accountability and the pursuit of fact and evidence to verify it (Levant, 2005). Others have their own agenda for EBP implementation, which, in their view, should be guided by the recovery-oriented process and its values (Stultz, 2004), a matter that would transform EBP into VBP as well. It is worthwhile to consider briefly the status of these considerations in terms of ethics and morality as outlined previously. If morality consists of the accepted norms and if ethics are the justifications, then values would seem to be part of both areas. Values are those things that we value and, thus, can be immediate generators of moral norms. On the other hand, values also can be part of the justificatory context of morality. The introduction of the term value to the debate also raises another issue: What are we to do if values contradict one another? As already stated, basing medicine on evidence is an immensely value-laden starting point. It implies assumptions about the nature of reality and about our ability to access this reality. Evidence-based practice is thus value-based, even if this is not often recognized. Another problem of the concept of values concerns competing and contradictory values. When we speak of value-based medicine, we should realize that there is no single value and no coherent set of values that could guide this. The question thus arises: which values should we choose among competing ones? An answer to this question would lead us beyond the current chapter and would have to go back to ethical theory. Very briefly, one solution could be the introduction of a hierarchy of values that would help us to identify which values we should prefer in case of a value conflict (Stahl, 2003).
Is Ethical Impact Proportional to Technology Sophistication?
In order to answer this question, it would be sensible first to define sophisticated ICT applications
in healthcare and what degrees of sophistication are meant. Two broad areas are frequently identified by prominent bodies such as UNESCO: telehealth and telemedicine. "The former … [includes] health services, education and research supported by ICT, while the latter refers more specifically to medical care and procedures offered across a geographical distance and involving two or more actors in collaboration, often in interdisciplinary teams" (UNESCO, 2005). As such, both are seen as related to healthcare informatics. Equity, as a basic concept in all aspects of life, is a value that should be observed by all parties involved in any given setting in which spheres of interest overlap and lines of rights intersect. It is particularly so in healthcare systems. If we consider the situation of healthcare in a third world setting, inhabitants of remote rural areas who hardly have any care at their disposal would find equity a luxury that they cannot afford. ICT can benefit people who inhabit these isolated areas. In terms of observing, for example, the value of equity in healthcare under such circumstances, telemedicine applications are of particular benefit in that they do the following:
• Enhance access to better diagnosis for all people through computerized techniques.
• Enable online consultation with specialists, thereby reducing cost to the benefit of care providers.
• Enable follow-up through easy feedback on the efficiency of the prescribed treatment.
• Allow local medical professionals a chance to receive training without having to move to more urbanized areas.
• Allow the establishment of databases for easy access to medical records.
Operations such as these could expand to an international level, reaching as far as the technology can geographically go and thereby surpassing limits beyond which control becomes increasingly difficult. It is this degree of sophistication that
could cause concern. Amidst these vast operations, it would become obvious that values such as confidentiality would be liable to breach, with fewer and fewer possibilities for control. Confidentiality as a personal requirement, however, is dwarfed in comparison to security at a national or communal level. An explanation might be that wide-scale research, such as clinical trials, can yield an enormous amount of information. If such information, compiled by research conducted on whole communities, is of a sensitive nature, such as DNA and genes, the breach of information confidentiality can become alarming. Genes can carry information that reveals traits common in the genetic pool of the whole community, to the extent that their very existence could be at risk. Future bio-weapons fall into this category. This is supported by the fact that pharmacogenetics is a reality, which means that certain individuals, groups, or communities respond more readily to certain drugs than others, depending on their genetic makeup. This is being vigorously researched in the pharmaceutical domain for enhancing therapeutic and medical treatment. By the same token, however, individuals, groups, or whole communities can be specifically afflicted with disease through certain drugs or chemicals, depending on their genetic makeup. This is very much the case in societies whose building blocks are tribes or clans. We will explore this aspect further later on.
Cultural Influence on the Ethics of ICT Use in Healthcare: Paradoxes from Western and Non-Western Settings
In this section, we will choose some pertinent characteristics of the cultures that we hold to be representative and contrast them with certain uses of technology. The section will end with a
collection of scenarios that will elucidate some issues of concern and shed more focused light on their complexity.
Issues in British Culture
British culture, as a vivid Western example of dynamic liberalism and utilitarianism, can serve the purpose of contrasting Western vs. non-Western settings. The liberal tradition translates into a high regard for the individual and the belief that social phenomena can be reduced to the sum of individual ones. This is important for healthcare, because the individual's rights are considered of primary importance, whereas collective considerations tend to be viewed as secondary. At the same time (and closely related to liberalism), British culture is strongly influenced by utilitarianism. This means that it is a generally accepted ethical principle to sum up all utilities and disutilities of a given act and to make decisions by comparing aggregate utilities. Utilitarianism is often vulgarized into a cost-benefit analysis in which the methodological problem of measuring utilities is replaced by measuring financial costs and benefits. This means that cost-benefit considerations are deemed appropriate in ethically charged situations. It also means that there is an intrinsic contradiction between the two main pillars. While utilitarianism is based on a methodological individualism and is thus compatible with liberalism, it is also deeply collectivist, because the rights of the individual can be (morally) overridden by the overall collective utility. Another aspect of mainstream British culture is that it is modernist, meaning that it relies on and trusts reason and science. While there is some resistance to this modernist view, it probably is safe to say that in mainstream British discourses, scientists are regarded as reliable and trustworthy, and the results of scientific research are seen as valid. This links to utilitarianism, which can be seen as the attempt to render ethics scientific.
Science is justified because it will help bring about a greater sum of happiness. It also means that there is an intrinsic bias toward evidence-based medicine and, by association, healthcare, because this is based on the scientific approach. Considerations of value are not seen as equally valid.
Examples of Issues in British Culture
On the basis of liberalism, utilitarianism, and modernism, British culture is fundamentally appreciative of new technologies. This is true for technology in healthcare as well. New healthcare technologies generally are described as positive and benevolent. There is, however, a stream of literature and research that looks at the intrinsic contradictions that grow out of the traditional view of technology in healthcare. Berg, whose work was done in the Netherlands and is transferable to the UK, describes some of these issues. The modernist view of ICT assumes that there is one governing rationale and that technology can be used accordingly to further the well-being of patients. Doctors use technology to help and heal patients. This overlooks the fact that modern societies are much more complex. One explicit reason for the use of ICT is thus to support organizational issues (Berg, 1999). Such an approach overlooks the fact that healthcare is a complex system with a multitude of conflicting actors and interests. But even if it works, ICT can then be used to change the way in which healthcare workers and patients interact. Technology can lead patients to dislike doctors and nurses. On the other hand, it also can widen access to health services. Technology, which formally structures processes, also will lead to bureaucracy, which produces costs and, thus, is not always desirable from the utilitarian point of view. Another interesting problem can be found in the intersection between healthcare, technology, and rationality. The modernist view of linear and individual rationality that objectively can determine desirable solutions (which is also the basis of
evidence-based medicine) is not just problematic because it underestimates the complexity of organizations. It is, to some degree, self-contradictory, because it requires for its survival the very ad hoc and pragmatic activities that it sets out to replace (Berg, 1997). More importantly, it also can be seen as an ideology that promotes particular interests. Using the case of a new online service, NHS Direct, Hanlon et al. (2005) argue that "the supposed dominance of this technocratic consciousness hides class, gender and jurisdictional struggles" (p. 156). The Electronic Patient Record is a good example of these issues. Fairweather and Rogerson (2001) argue for a morally appropriate balance between the various moral standards that are in tension in the field of electronic patient records (EPRs). EPRs can facilitate doctor-patient relationships. However, at the same time, they can undermine trust and so harm the doctor-patient relationship. Patients are becoming increasingly reluctant to tell their own doctor everything that is relevant. A number of moral principles and the question of consent to release records need to be considered.
Issues in Non-Western Culture
Social norms differ from one community to another in different parts of the world. What is acceptable and permissible somewhere might not be so somewhere else. Therefore, healthcare planners and strategists must have a clear vision of what would and what would not trigger sensitivities in the process of healthcare delivery and decision making. For instance, the vital communication element previously mentioned for good value-based practice can become totally defective if carried out in a male-to-female setting in which social norms do not accept it. A similar attitude is expected in situations such as vaginal swabbing or artificial insemination, the meta-ethics being tenet-rooted. These are against the social norms and religious codes of many world populations. Muslims, for example, who consti-
tute just over a quarter of the world's population (http://www.islamicweb.com/begin/results.htm), have attitudes that are overwhelmingly governed by Shari'a codes of conduct. These ordain many aspects of life, including those that fall within the sphere of healthcare. Questions concerning issues such as the permissibility of a male healthcare provider examining a female patient (or vice versa) are hot debate topics. This will be investigated further in the scenarios given next. The influence of culture on the perception of health information and communication technologies raises particular issues in a Middle Eastern setting (women, tribal structures, etc.). Further to the points mentioned previously, issues of an ethical dimension can be exacerbated by cultural influences. In the following section, we will try to take the reader through selected scenarios, some hypothetical and others compiled from real life in parts of the world that have entirely different attitudes toward practices seen in Western settings as acceptable, the effect being cultural.
Some Scenarios
Scenario 1: Outcry to the King
Ali takes his wife, who is in labor, to a university teaching hospital. Shocked to learn from the receptionist of the obstetrics and gynecology department that the attending physician is male, not female, he reluctantly leaves the hospital and sends a bitter letter of complaint through the media to the highest authority in the country: the king. Let us imagine how the situation would be if this scenario were repeated, but the wife had complications and required consultation and on-air monitoring via telemedicine.
Scenario 2: 30 Years in Pursuit of a Female Orthopedist
Yassir writes on May 18, 2005, to a medical forum asking for help. His mother has been suf-
fering from debilitating orthopedic problems for 30 years and is reluctant to be seen by a male orthopedist; there is no female specialist in the area where they live. Yassir's mother could not be helped, even by telemedicine intervention, so long as the hands that would touch her were those of a man, as she put it. The patient received numerous messages of support and sympathy. Scenarios 1 and 2 are in total compliance with the Islamic code of conduct. The Islamic Jurisprudence Council of the Mecca-based Muslim World League issued, in its 14th session, convened in January 1995 in Mecca, a Fatwa (dictum) emphasizing the impermissibility of healthcare professionals attending patients of the opposite gender. The Fatwa allowed for a margin of permissibility only under circumstances of absolute necessity. As is the case in all other similar situations, the degree of necessity is left to the individual to evaluate. In our two scenarios, the persons in question did not categorize their situation as an absolute necessity and, therefore, abided strictly by the given Fatwa.
Scenario 3: Miss L. and the Monitoring System
Miss L. is admitted to the hospital in the summer of 2003. During her stay in a single room on the surgery ward, a young male in a professional outfit made frequent visits to her, paving the way for a relationship to develop between the two. Eventually, kissing and hugging took place. Not realizing that the monitoring system was active, she enjoyed the encounters. Soon, another man appeared and showed her photographs of her intimate encounters, threatening to make them public on the World Wide Web unless she gave in to his demands, which turned out to be sheer sexual blackmail. Scenario 3 is a typical example of technology being abused as a powerful tool with which opportunistic people might fulfill
their desires. The perpetrator knows that Miss L. faces a very difficult situation, as the cultural and social values of her society would not approve of her behavior.
Scenario 4: Mr. A.F. and the Monitoring System Again
Mr. A.F. was admitted to the hospital as a private patient and stayed in a private, luxury room for two weeks for pulmonary infection treatment. On one occasion during his convalescence, his wife, who was visiting him at the time, happened to be alone with him in the room. He locked the door and had a very steamy, intimate encounter with her, without realizing that the monitoring system was active. The hospital management, while reiterating that they were doing their job, soon acknowledged the unfortunate incident, offered an apology, and promised to destroy the film that carried the embarrassment, reminding Mr. A.F. that the essential function of hospital rooms is healthcare, along with receiving visitors for the purpose of the patient's welfare, but nothing beyond that. Mr. A.F. was adamant in not accepting the apology and insisted on suing the hospital. His argument was based on the conviction that his welfare extended to the activity he performed and that the hospital should have warned him beforehand of the monitoring system and what activities they had in mind for monitoring. He also maintained that had the monitoring system been run by a human being, he or she would have stopped filming the action immediately. Leaving it up to a machine led to the embarrassment, for which the hospital should be held responsible.
Scenario 5: Genetic Screening and the XL Clan of the LL Tribe
Tribes in Middle Eastern regions and in some other parts of the world constitute the main building blocks of many societies, extending across geographical boundaries and crossing
political borders. They share common ancestry and, therefore, a gene pool. The chronicle of this ancestry extends deep into history. Qahtani tribes, for instance, are named after Noah's descendant Qahtan, and Adnani tribes after Adnan, one of the descendants of Ishmael, son of Ibrahim. Their branches and subsections are numerous and extend throughout the Arabian peninsula and beyond. For instance, about 70 tribes can now be identified in the UAE alone. Through the tribe and its hierarchy, the individual has the right of protection by the tribe and is obliged to abide by its rules. Disputes among members of the same tribe are dealt with by heads of clans, leaders of the subtribes, or the chief (sheikh) of the tribe. Verdicts and rulings thus formulated are binding on all parties. These stem from traditional tribal conventions and practices (urf), known to everybody. The traditional tribal system even makes a young man's own choice of his bride largely immaterial, as it strongly advocates first and second cousin marriages. The first option is usually the daughter of his paternal uncle. It is, therefore, not surprising that some of these clans, which have been living in isolated, remote areas for millennia, have an exceedingly high rate of consanguinity and, hence, are expected to have a reasonably distinct genetic makeup (http://www.al-bab.com/yemen/soc/manea1.htm). This is dependent on population frequencies of specific alleles, though it is not to be taken as race-specific, as no extensive studies have been made available thus far to define race based on genetics. It also should be made clear that, at the writing of this chapter, the tribes in question have not been subject to population genetics studies that clearly establish 1% frequencies of certain alleles, which is by definition a polymorphism. Under such conditions, culture can put into action factors that are not reckoned with in the West. The sensitivity of information in a healthcare setting, such as patients' records and stigmatization, can form a combination of devastating effects in
that culture. Saudi Arabia, for example, recently has introduced nationwide mandatory premarital screening tests, the social impact of which is yet to unfold, with the possibility of an unpleasant outcome if the tribal structure of the society is taken into consideration. The positive side of these tests is self-explanatory. In recognition of the high incidence rates of genetic diseases such as sickle cell anemia and thalassaemia in some regions of the country, such measures no doubt would reduce these rates. However, in the long run, with the accumulation of more and more genetically related information, whole clans and tribes could be stigmatized and girls with certain genetic traits victimized (in terms of spinsterhood), if procedures are not properly executed and/or information systems are not efficiently run and managed (Kashmeery, 2004). Our scenario is hypothesized as a future projection. The XL clan extends across the borders of three neighboring countries. In abidance by genetic screening rules set by their government, their branch on the western side of the borders had to consider allowing such tests to be performed on its members. Within a few months, a trend was established from the compiled data that the clan members have a NOTCH4 gene triplet repeat polymorphism. Without realizing the significance of the finding, the medical record facility did not impose tight security measures on the results. An abstract was leaked in a bona fide manner to the local media, which published a layman's report on the procedure and praised its underlying policy. In academic circles, the impact was different. This polymorphism was known to have some association with a serious psychological disorder: schizophrenia (Wei & Hemmings, 2000). Rumors spread swiftly, blowing the issue out of proportion. The chief at the top of the hierarchy of the tribe on the eastern side of the border was upset by the news, which reached him anecdotally, and ordered the XL clan chief not to cooperate with the genetic screening scheme. The XL clan, its chief, and its individual members owe their loyalty to the tribe more than to
the state. They all decided overwhelmingly to boycott the screening schemes, current and future, and thereby came into conflict with local authorities, who stood by their agenda and work plans. The issue assumed national proportions, following confrontations and arrests. Incidents of social unrest began to reach police records wherever members of the XL clan were engaged, at work or in social activities. The XL clan became more and more isolated and alienated, with rates of intermarriage with other groups of society falling rapidly, leaving stigmatized women haunted by the state of spinsterhood, which is a woman's nightmare in that society. The impact of these developments spilled over the borders to the east and south, where other subsections of the same tribe live, and the scenario repeated itself, forcing members of the LL tribe to go through the same ordeal.
Scenario 6: The Bed-Ridden Elderly in the Care of Extended Family
At the age of 82, Mr. S.K. had been bed-ridden for three years due to leg muscular dystrophy. He also was diabetic and hypertensive and, therefore, needed close health and nursing care. Values, culture, and tradition would not allow his family even to discuss the principle of admitting him to a nursing home. The social norms where Mr. S.K. lived demanded that he be looked after by his nearest of kin. To facilitate such a stipulation, members of the extended family usually live together in large premises. Mr. S.K.'s three sons were living with him in such a setting and managed to share the responsibility of his care. They were grown-up professionals engaged in diverse occupations ranging from diplomacy and university professorship to high-ranking civil service. None of them was ever heard complaining or expressing the least bit of discontent, except for their admitted lack of expertise in some aspects of the care they were practicing. Despite their continuous pursuit of knowledge from physician friends of theirs, they have
always felt that the scheme would have worked more efficiently had there been handy, simple software that gave guidelines for executing their tasks in a more professional way. Their worries always peaked during the night, when they were in their own rooms, and while they were all away at work, for fear of not being there to help if needed. They used to hypothesize an emergency situation and always felt that an adequate monitoring system, connecting them simultaneously to the patient's healthcare professionals, who could intervene at the right moment, would have perfected the scheme. These shortcomings were dwarfed by the great advantage of having their father looked after in the comfort of his own home, surrounded by members of his own family, who would do anything to please him, reiterating to him time and again that they did so with pleasure and passion. Of course, he did not know that there are millions of parents in other parts of the world who go through the ordeal of leaving their homes and their loved ones when they desperately need them, and of losing their property in order to cover the expenses of nursing homes, where they might face a fate they frequently read about in the media. He didn't know that. But what he knew very well was that looking after him and preserving his dignity, no matter how demanding that might be, are ordained by tenet and are a debt carried with pride from generation to generation.
Epilogue
Given the review of the literature and the scenarios just elaborated, the reader should have an idea of what sort of issues we expect to find. The reader also will realize that the research we are suggesting is at the crossroads of a number of disciplines and theories. So far, this chapter contains a collection of thoughts that is meant to support the contention that research in the cultural aspects
of the ethical properties of ICT in healthcare is desirable. We have refrained from developing a specific theory that will explain the relationship between culture, ethics, and ICT in healthcare. Instead, we intend to investigate these matters from the starting point outlined previously, but we will keep an open mind to issues that have not yet been raised. To some degree, we thus propose to follow a grounded theory approach that aims to develop theory inductively from observation (Glaser & Strauss, 1967). The purpose of this chapter was to outline an area of research between culture, ethics, and ICT. We hope that the chapter succeeded in persuading the reader that such a project is worthwhile. Given the early stage of the research, we expect the chapter to provoke vigorous debate and hot discussion about this topic—something that will help us to develop these considerations further. It is also a call for other researchers with similar interests to contact us in order to develop collaborative ties.
References

Anthony, W. A. (2004). The principle of personhood: The field's transcendent principle. Psychiatric Rehabilitation Journal, 27, 205.

Berg, M. (1997). Problems and promises of the protocol. Social Science & Medicine, 44(8), 1081-1088.

Berg, M. (1999). Patient care information systems and health care work: A sociotechnical approach. International Journal of Medical Informatics, 55, 87-101.

Bourdil, P. (1996). Le temps. Paris: Ellipses/Édition Marketing.

Castells, M. (2000). The information age: Economy, society, and culture, volume I: The rise of the network society (2nd ed.). Oxford: Blackwell.

Colombo, A., Bendelow, G., Fulford, K. W. M., & William, S. (2003). Evaluating the influence of implicit models of mental disorder on processes of shared decision making with community-based multidisciplinary teams. Social Science & Medicine, 56, 1557-1570.

Fairweather, N. B., & Rogerson, S. (2001). A moral approach to electronic patient records. Medical Informatics and the Internet in Medicine, 26(3), 219-234.

Fulford, K. W. M. (2004). Ten principles of values-based medicine. In J. Radden (Ed.), The philosophy of psychiatry: A companion (pp. 205-234). New York: Oxford University Press.

Galtung, J. (1998). Frieden mit friedlichen Mitteln—Friede und Konflikt, Entwicklung und Kultur. Opladen: Leske + Budrich.

Gehlen, A. (1997). Der Mensch: Seine Natur und seine Stellung in der Welt (13th ed.). Wiesbaden: UTB.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: de Gruyter.

Habermas, J. (1981). Theorie des kommunikativen Handelns (Band I/II). Frankfurt a. M.: Suhrkamp Verlag.

Hanlon, G., et al. (2005). Knowledge, technology and nursing: The case of NHS Direct. Human Relations, 58(2), 147-171.

Kashmeery, A. (2004). Who owns the human genes? An East-West cross-cultural analysis. Oxford Research Forum Journal, 2(1), 81-85.

Levant, R. F. (2005). Evidence-based practice in psychology. Monitor on Psychology, 36(2). Retrieved April 19, 2005, from http://www.apa.org/monitor/feb05/pc.html

Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266-275.

Nance, K. L., & Strohmaier, M. (1994). Ethical accountability in the cyberspace. In Ethics in the Computer Age (pp. 115-118). Gatlinburg, TN: ACM.

Robey, D., & Azevedo, A. (1994). Cultural analysis of the organizational consequences of information technology. Accounting, Management and Information Technologies, 4(1), 23-37.

Sackett, D. L., Straus, S. E., Scott-Richardson, W., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). Edinburgh: Churchill Livingstone.

Seedhouse, D. (2005). Values based health care: The fundamentals of ethical decision-making. Chichester, UK: Wiley & Sons.

Stahl, B. C. (2003). The moral and business value of information technology: What to do in case of a conflict? In N. Shin (Ed.), Creating business value with information technology: Challenges and solutions (pp. 187-202). Hershey, PA: Idea Group Publishing.

Stahl, B. C. (2004). Responsible management of information systems. Hershey, PA: Idea Group Publishing.

Stultz, T. (2004). Model transformation: From illness management to illness management and recovery. ACT Center of Indiana, 3(4), 2.

UNESCO. (2005). Retrieved May 5, 2005, from http://www.unesco.org/webworld/observatory/in_focus/010702_telemedicine.shtml

Waite, M., Hawker, S., & Soanes, C. (2001). Oxford dictionary, thesaurus and wordpower guide. Oxford, UK: Oxford University Press.

Ward, J., & Peppard, J. (1996). Reconciling the IT/business relationship: A troubled marriage in need of guidance. Journal of Strategic Information Systems, 5, 37-65.

Weber, M. (1994). Objectivity and understanding in economics. In D. M. Hausman (Ed.), The philosophy of economics: An anthology (2nd ed., pp. 69-82). Cambridge: Cambridge University Press.

Wei, J., & Hemmings, G. P. (2000). The NOTCH4 locus is associated with susceptibility to schizophrenia. Nature Genetics, 25, 376-377.
This work was previously published in Information Technology Ethics: Cultural Perspectives, edited by S. Hongladarom and C. Ess, pp. 169-183, copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter L
Emerging Technologies, Emerging Privacy Issues
Sue Conger, University of Dallas, USA
Abstract
With each new technology, new ethical issues emerge that threaten both individual and household privacy. This chapter investigates issues relating to three emerging technologies—RFID chips, GPS, and smart motes—and the current and future impacts these technologies will have on society. The outcome is a set of issues relating to the use of these technologies that will require social discussion and resolution in the coming decades.
Background
New data losses of millions of individuals' personal information occur almost daily (Albrecht, 2002; Clarke, 1999; CNet, 2006). As losses amass, the realization grows that personal information privacy (PIP) is no longer managed by either individuals or the companies that collect the data. Research to date proposes that PIP is the responsibility of individuals, who forge contracts with corporations for protection of their data (Smith, 2004); that it is the responsibility of government to protect the individual from corporate abuses (OECD, 2000, 2003, 2006; Swire, 1997); or the responsibility of corporations to manage internal use (Cheung et al., 2005; Culnan, 1993; Culnan & Armstrong, 1999; Smith et al., 1996). These
views are all corporate-centric, but threats have expanded beyond the corporation to its data-sharing partners, resulting in data aggregation and sales that are largely unregulated and uncontrolled (Conger, 2006; Conger et al., 2005). Dictionary.com has several definitions of privacy, as shown in Table 1. These definitions leave one with a clear expectation that individuals control their own physical visibility to the world. The legal definition further includes privacy in "personal matters." Privacy can be thought of from several points of view (cf. OECD, 1998; Smith, 2004). On the one hand, the question is how the individual's inherent right to privacy can be protected, for example, by legislation. On the other hand, the individual has a contractual right of privacy, to
Table 1. Definitions of privacy

pri·va·cy (http://www.dictionary.com, based on Random House Unabridged Dictionary, 2006)
1. the state of being private; retirement or seclusion.
2. the state of being free from intrusion or disturbance in one's private life or affairs: the right to privacy.
3. secrecy.
4. Archaic. a private place.

pri·va·cy (The American Heritage® Dictionary of the English Language, Fourth Edition, Copyright © 2000)
1. a. The quality or condition of being secluded from the presence or view of others. b. The state of being free from unsanctioned intrusion: a person's right to privacy.
2. The state of being concealed; secrecy.

pri·va·cy (WordNet® 2.1, © 2005 Princeton University)
1. The quality of being secluded from the presence or view of others.
2. The condition of being concealed or hidden.

pri·va·cy (Merriam-Webster's Dictionary of Law, © 1996)
Freedom from unauthorized intrusion: state of being let alone and able to keep certain esp. personal matters to oneself—see also EXPECTATION OF PRIVACY, INVASION OF PRIVACY, privacy interest at INTEREST 3b, RIGHT OF PRIVACY; Griswold v. Connecticut and Roe v. Wade in the IMPORTANT CASES section.

privacy (Kernerman English Multilingual Dictionary, 2006)
The state of being away from other people's sight or interest. Example: in the privacy of your own home.
control interactions with the world, including the release of private information such as address and social security number. In the past, privacy concerns were limited to protecting one's credit card, home, or mailbox from theft. Privacy research in that period focused on collection, unauthorized secondary use, ownership, accuracy, and access (Conger & Loch, 1995; Culnan, 1993; Culnan & Armstrong, 1999; Loch & Conger, 1996; Smith et al., 1996). Most research never stated what data was collected, or described only a limited domain of data relating to a given transaction and demographics, thereby oversimplifying the breadth of data that might be collected (cf. Chen & Barnes, 2007; Cheung et al., 2005; Cheung & Lee, 2004/2005; Culnan & Armstrong, 1999; Doolin et al., 2005; Drennan et al., 2006). Now, users of the Internet worry that "personally revealing information about them is automatically generated, collected, stored, interconnected and put to a variety of uses" (OECD, 1998, p. 11). To accommodate the changes enabled by Internet technologies, a more complete view of
the current state of PIP in business-to-consumer (B2C) transactions (see Figure 1) describes how an individual, the 1st party, comes to transact with a company, the 2nd-party vendor/provider (Cheung et al., 2005; Conger et al., 2006). Each unshaded box in Figure 1, and the arrows depicting the relationships between them, represents an area in which significant research has already been conducted, and incorporates the bodies of work summarized in Culnan and Armstrong (1999) and Cheung et al. (2005). Part of the individual's decision includes what data to provide to the 2nd party, based on the expected life and use of that data, the perceived reasonableness of the data collected, expected benefits, and expectations of corporate use of the collected data (Conger et al., 2005). A decision to transact is based on an idiosyncratic evaluation of risk versus reward versus trust (Chen & Barnes, 2007; Dinev & Hart, 2006; Gallivan & Depledge, 2003; Geffen et al., 2003; Malhotra et al., 2004). Violate, or appear to violate, any of the decision factors, and transactions will
Figure 1. Information privacy model (Conger et al., 2005)
not be enacted (Gaudin, 2007; Gauzente, 2004; Holes, 2006; McKnight et al., 2004; Mutz, 2005; Wang et al., 1998). The shaded boxes, and the arrows depicting their interrelationships, represent areas in which little or no research has been published. After a transaction is complete, the information is shared with any number of legal data-sharing entities: the 3rd-party data user, a known external data-sharing partner, for example, a credit reporting company such as Experian, which shares data with 2nd-party permission. Companies such as Experian generate their revenues by matching consumer information to transaction information, profiling consumers, and reselling the expanded information. The Experians of the world are not necessarily the problem, unless their use of or access to data violates their legal and contractual agreements. The greater vulnerabilities arise from Experian's data-sharing partners, the 4th parties. Third-party organizations resell or provide their information through legal requests to 4th-
party organizations. Problems arise when 4th-party partners use data without 1st-party and/or 2nd-party permission. Such partnerships might involve governmental pre-emption of data (ACLU, 2003; Ahrens, 2006; Cauley, 2006; DARPA, 2002; Myers et al., 2005; Seffers, 2000; Stanley & Steinhart, 2003; Waller, 2002), data aggregators, or legitimate data-sharing partners of the 3rd party who violate the terms of their agreements. There is no actual way for a 3rd party such as Experian to ensure proper use, since compliance is self-reported. Further, government co-option of data has come under increasing scrutiny as violating constitutional privacy provisions (Zeller, 2005). The U.S. is not alone in co-opting Internet search and other records about its citizens. Canadian authorities have had a legal right to "e-mail, Internet activity, and other electronic records" since 2004 (Smith, 2004). China uses Yahoo Internet e-mail data as the basis for jailing dissidents (Goodman, 2005).
Other 4th-party organizations are non-government companies that, unknown to the user, install malware, spyware, or other technically legal software on Internet users' machines. In one study of 4.5 million Web pages, Google found that about 10 percent had spyware install programs (Anonymous 13, 2007). In another study, AOL and the National Cyber Security Alliance found that 91 percent of respondents did not even know they had spyware on their PCs (AOL/NSCA, 2005). Spyware can monitor keystrokes, report file information back to the "mother" company, or perform more malicious acts on one's PC. Many spyware companies sell the information they collect to data aggregators. Data aggregators also obtain information from public sites for marriage, legal, driving, property ownership, and other situations to build even more complete dossiers on individuals. Their clientele are both legitimate businesses and questionable entities who use the data unbeknownst to the 1st or 2nd parties from which it emanated. The nebulous cloud with fuzzy boundaries identifies the last category: 5th-party data invaders. These 5th-party users are unintended, unwanted, and often unethical and/or illegal users of vendor data, operating out of bad faith, malice, or grave negligence (cf. Sherman, 2007). Fifth-party data users obtain data without the permission or knowledge of their sources, which may be 1st, 2nd, 3rd, or 4th parties (ACLU, 2003; Albrecht & McIntyre, 2004; Carlson, 2006; CNet, 2006; Stanley & Steinhart, 2003). People who steal computers and who leak names, addresses, and, for example, financial information are in this category (Zeller, 2005). Fifth parties are active. From ChoicePoint's famous identity theft scam in February 2005 through December 2006, there were 438 thefts, hacks, or leakages of consumer information, in which 335 organizations reported losses of over 181 million individual accounts with social security information (Anonymous 9, 2007; Dunham, 2006;
PRC, 2007; Reuters, 2006; Scalet, 2006). The 103 organizations that either did not report or did not include SSNs would approximately double the number of transgressions (Anonymous 9, 2007). In the first three months of 2007, the number of compromised SSNs approximately doubled to 360 million as losses and leakages continued (PRC, 2007). These problems are not unique to the U.S. In 2007, the Commission of the European Communities began an effort to create policies on cyber-crime because of "growing involvement of organized crime groups in cyber-crime" (CEC, 2007, p. 2). One significant difference is that in the EU, many issues that are 5th-party in the U.S. are criminalized and prosecuted (CEC, 2002). Many countries are experiencing increasing transgressions of all types (Denning & Baugh, 1997; Dolya, 2007; Gouldson, 2001; Hempel & Töpfer, 2004; HRW, 2006; ICAS, 2005; Kane & Wall, 2005; McCullagh, 2005A, 2005B; McMillan, 2006; Meller, 2006; Newitz, 2004; Woo, 2006; Wright et al., 2006). Emerging technologies threaten to change the scope of privacy issues again, from issues relating to transactions to issues relating to constant, unanticipated surveillance and the concomitant sharing of that data with virtually the whole world. Data mining and business intelligence software allow heretofore unprecedented abilities to combine and permute information from multiple sources, providing not just product usage and contextual information but information on household individuals, their usage patterns, and their lifestyles. This is not a new thought. The ACLU published its RFID Position Statement in 2003 with an impressive, international list of signers (ACLU, 2003). Yet organizations promulgating new technology use, such as the Internet Home Alliance and its members, provide only the sanguine view of RFID and its uses, ignoring the privacy issues completely (IdTechEx, 2007; IHA, 2005).
New Technologies Challenge Personal Information Privacy (PIP)
Three technologies will further reduce personal privacy in the coming 10 years if nothing is done to stem the wholesale collection and sale of personal information. The technologies are radio frequency identification (RFID) chips, global positioning systems (GPS), and smart motes. These three technologies were selected because each is in a different stage of maturation, but each promises to change privacy issues and extend the challenges individuals face in protecting their personal information. Rogers (1985) defined an S-curve of innovation that projects market growth over time (see Figure 2). GPS, introduced in 1978 and imbedded in all U.S. cell phones, is a maturing industry, with the U.S. market for hand-held GPS expected to reach $22 billion in 2008 and the global market about double that, at $60 billion (David, 2003; ETRI, 2005). Therefore, GPS is toward the top of the diffusion growth curve and represents a relatively mature technology. RFID, developed in the 1940s, is still finding its market. While it has had significant press, primarily
Figure 2. S-curve of innovation and emerging technologies
because of privacy concerns, the technology is still in a growth phase, represented in the figure as mid-way up the S-curve. Smart motes enjoy limited commercialization and are still under development. As such, they are the least mature technology and sit at about the beginning growth inflection point on the S-curve. Each technology is described in the next section, followed by its privacy issues. The state of remedies for the potential transgressions is described next, along with the status of current approaches. Finally, future needs are defined for further development of solutions to problems arising from these technologies.
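The diffusion S-curve that Figure 2 depicts is commonly modeled with a logistic function. The minimal sketch below illustrates that idea only; the saturation level, growth rate, and midpoint are invented parameters, not estimates from the sources cited above.

```python
# Illustrative logistic model of Rogers' diffusion S-curve.
# K (market saturation), r (growth rate), and t0 (inflection point)
# are assumed values for demonstration, not empirical estimates.
import math

def adoption(t, K=100.0, r=0.9, t0=6.0):
    """Cumulative adoption (percent of eventual market) at time t (years)."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Rough relative positions of the three technologies on the curve:
for label, t in [("smart motes (early)", 2), ("RFID (mid-growth)", 6), ("GPS (maturing)", 10)]:
    print(f"{label}: {adoption(t):.1f}% of eventual market")
```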
RFID
Developed in the 1940s, RFID stands for Radio Frequency IDentification, a technology that uses wireless computer chips to track items at a distance (Anonymous 5, 2002; Anonymous 10, 2007). An RFID system requires two basic elements: a chip and an interrogator. An RFID chip is composed of the following (see Figure 3):
• An antenna or coil
• A capacitor that captures and uses energy from a transceiver
• A transponder (RF tag) programmed with unique information (Anonymous 2, 2007; Alien Technology, 2007)
The interrogator is a powered radio frequency transceiver that, from a separate device, gives power to passive RFID chips in the form of radio waves. For passive RFID, transceivers are slow (see Figure 4), reading about 20 items from less than 10 feet in about three seconds (Anonymous 5, 2002). Active RFID tags contain a battery, memory, and the ability to continuously monitor and record sensor inputs; they are re-programmable and vulnerable to viruses and software attacks (Miller, 2006; Ricker, 2006).
Figure 3. RFID chip
Figure 4. RFID capabilities. Adapted from Anonymous 5 (2002) and IDTechEx (2007)

Communication Range
  Active RFID: 300+ feet
  Passive RFID: < 10 feet

Tag Collection
  Active RFID: 1,000s of tags; over 7 acres; moving at 100 MPH (reduces the rate)
  Passive RFID: 100s of tags; < 10 feet; < 3 MPH

Sensor Capability
  Active RFID: continuous monitoring and recording; sensor input; date/time stamps sensor events; amenable to hostile environments
  Passive RFID: reader powered to read and send sensor value; no date/time stamp

Data Storage
  Active RFID: 128KB read/write data storage; search/access capabilities; re-programmable
  Passive RFID: 128 bytes data storage

Media
  Active RFID: plastic or other durable media
  Passive RFID: paper, file or other printable media

Startup Cost
  Active RFID: reader - $800; tags - $50
  Passive RFID: reader - $400; printer - $1,600; tags - < $.05
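To make the contrast in Figure 4 concrete, the toy simulation below models a reader portal that only "sees" tags within the type-specific range and collects passive tags far more slowly than active ones. The ranges and collection rates are rough readings of Figure 4; the class and function names are our own invention, not part of any real RFID toolkit.

```python
# Toy model of the active/passive contrast in Figure 4.
# Ranges and per-second rates are approximated from the figure:
# passive reads ~20 items in ~3 seconds, active reads 1,000s of tags a minute.
from dataclasses import dataclass

RANGE_FT = {"passive": 10, "active": 300}
TAGS_PER_SEC = {"passive": 20 / 3, "active": 1000 / 60}

@dataclass
class Tag:
    tag_id: str
    kind: str          # "passive" or "active"
    distance_ft: float

def read_portal(tags):
    """Return the tags a portal reader can see, plus a rough collection time."""
    readable = [t for t in tags if t.distance_ft <= RANGE_FT[t.kind]]
    seconds = sum(1.0 / TAGS_PER_SEC[t.kind] for t in readable)
    return readable, seconds

tags = [Tag("pallet-1", "active", 250.0),
        Tag("shirt-7", "passive", 4.0),
        Tag("badge-3", "passive", 40.0)]   # too far for a passive read
seen, secs = read_portal(tags)
print([t.tag_id for t in seen], f"~{secs:.2f}s")
```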
Transceivers for active RFIDs, which can be as small as 2x3 inches, are installed in thousands of door portals and can read data from thousands of tags a minute, from 300+ feet (100 meters), moving at 100 mph (Anonymous 5, 2002). Active RFID are amenable to hostile environments such as ship cargo holds, deserts, or warehouses. Prices of RFID chips have steadily dropped for the past 10 years and, with the development of RFID printers, the price is now under $.03 for a passive chip. The antennae on printed chips use conductive ink and are virtually invisible (Anonymous 10, 2007; IdTechEx, 2007; McCarter, 2006; McGrath, 2006). RFID chips range from passive to active1. Passive RFID do not transmit until a reader "requests" data from a chip, have no energy source imbedded in the chip, and are limited to being read from about 10 feet (three meters). Active RFID transmit continuously and are readable by any reader within 30 meters, about 90 feet. RFID are used in every conceivable product, including metals, liquids, textiles, plastics, pharmaceuticals, and others. For instance, they are imbedded in
children's pajamas, robots to guard playgrounds, and even shaving gear (Alien Technology, 2007; Gilbert, 2005; IDTechEx, 2007; Olsen, 2007; Sullivan, 2005). The global market for RFID is huge and growing, including governments and businesses in every industry. Sales are predicted to hit $1 trillion by 2012 (IdTechEx, 2007; McCarter, 2006; McGrath, 2006). The price of RFID chips has been falling between 5 percent and 10 percent a year for six years (Murray, 2006), enticing new users every year. Many experts predict a world with RFID chips "incorporated into … everyday objects … wherein virtually everything is networked through the Web" (Murray, 2006; Stibbe, 2004). The new generation of RFID chips coming on the market contains electronic product codes (EPC), 96-bit codes capable of uniquely identifying everything on the planet (EPC Global, 2005; Stibbe, 2004). And there are some people who think that recording everything about a person may have some use (Bell, 2004). RFID is not all bad. RFID can "accelerate, simplify, and improve the handling of data" (Schuman, 2006, p. 2). Repetitive counting tasks, such as taking a physical inventory, morph from onerous, backbreaking, days-long work to a walk down each aisle. Shrinkage, the euphemism for stolen, lost, or otherwise missing goods, transforms from a rising percentage of retail costs to almost a thing of the past, simply by placing RFID sensors at all ingress/egress points. In addition, shoplifters who might "hide" goods on their person while they check out other goods will be discovered and charged for the items. Traffic jams are reduced by toll tags and transit tickets. Luggage at airports and containers in shipping yards are easily identified and experience speedier processing with embedded RFID. Similarly, transactions are speeded by Speed Pass® and other smart cards with embedded RFID, as is identification and return of stray animals that have been "chipped."
These clearly are desirable outcomes of RFID use for corporations and society.
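The 96-bit EPC mentioned above is what makes such item-level identification possible. As an illustration, the sketch below encodes and decodes one published 96-bit layout (GID-96, which splits the code into an 8-bit header, a 28-bit general manager number, a 24-bit object class, and a 36-bit serial number); the field values used here are invented.

```python
# Sketch of the GID-96 EPC layout: 8-bit header, 28-bit general manager
# number, 24-bit object class, 36-bit serial (96 bits total). Field widths
# follow the EPCglobal tag data standard; the example values are made up.

GID96_HEADER = 0x35  # header value defined for GID-96

def encode_gid96(manager: int, obj_class: int, serial: int) -> int:
    assert manager < 2**28 and obj_class < 2**24 and serial < 2**36
    return (GID96_HEADER << 88) | (manager << 60) | (obj_class << 36) | serial

def decode_gid96(epc: int) -> dict:
    return {
        "header": epc >> 88,
        "manager": (epc >> 60) & (2**28 - 1),
        "object_class": (epc >> 36) & (2**24 - 1),
        "serial": epc & (2**36 - 1),
    }

epc = encode_gid96(manager=12345, obj_class=777, serial=987654321)
print(hex(epc), decode_gid96(epc))
```

The 36-bit serial field alone allows roughly 69 billion distinct items per object class, which is what gives a single tag read its power to single out one garment, one package, or one person's belongings.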
Global Positioning Systems
The next technology that threatens privacy, global positioning system (GPS) technology, is available as stand-alone devices or imbedded in devices such as cell phones and autos. Ivan Getting envisioned GPS in the 1950s while he was a researcher at Raytheon Corporation working on systems for guided missiles. Getting left Raytheon and started his own company, where he realized his dream for GPS in the 1960s (Anonymous 12, 2007).
GPS is enabled by a constellation of 27 earth-orbiting satellites, 24 of which are in active operation at any one time. Each satellite circles the globe twice a day, with orbits arranged so that at any time, anywhere on earth, there are at least four satellites "visible" in the sky. A GPS receiver is a device that locates at least three satellites, determines its distance to each, and uses this information to deduce its own location (Anonymous 1, 2007; Brain & Harris, 2007).
Location identification is based on a mathematical principle called trilateration: the location of a single point in space relative to its distance from three other known points. The GPS receiver calculates location and distance from each of the satellites by timing how long it takes a signal to come from each. Figure 5 shows how the location of Denver is found by plotting its location along with known device locations at Boise, Tucson, and Minneapolis. By computing the difference in time from each satellite to each known point, an exact fourth point (Denver in Figure 5) is identified accurately to within ten feet (Anonymous 1, 2007; Brain & Harris, 2007).
Figure 5. Trilateration example
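A minimal two-dimensional sketch of the trilateration computation described above follows. Real GPS receivers solve the three-dimensional problem from four satellite ranges (the fourth resolving receiver clock bias); the coordinates and distances below are invented for illustration.

```python
# 2-D trilateration: locate an unknown point from its distances to three
# known points. Subtracting the circle equations pairwise yields two linear
# equations, solved here with Cramer's rule.
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1       # zero if the three known points are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# The unknown point is (3, 4); distances are measured from three "satellites".
sats = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist((3, 4), s) for s in sats]
print(trilaterate(sats[0], dists[0], sats[1], dists[1], sats[2], dists[2]))
# -> (3.0, 4.0)
```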
GPS can tell you how far you have traveled, how long you have been traveling, your current and average speeds, and the estimated time of arrival at current speed. Further, a "bread crumb" trail showing where you have traveled is available to track your trip. GPS tracking data can be stored inside the unit, or sent to a remote computer by radio or cellular modem. Some systems allow the location to be viewed in real time on the Internet with a web browser (Anonymous 1, 2007; Brain & Harris, 2007).
Enhanced 911 (E-911) telephone service required that, as of October 2001, GPS locators be installed in every cell telephone in the U.S. and be readable by operators when cell calls to 911 are made. E-911 enables location-based commerce (L-comm) to help phone companies recoup the cost of E-911 service. Using L-comm, one might program his phone to call when he is within two blocks of a Starbucks. Stores will be able to push advertisements to people in the area for special sales. Or, retailers might track how many cars pass their location and how many stop, without letting the car passengers know they are being watched (Said & Kirby, 2001).
Location tracking requires that an RFID tag be placed on the object to be tracked. Then, the GPS receiver locates the object. Together, RFID and GPS enable location identification of anything on earth: the RFID identifies the individual, and the GPS tracks them (PRC, 2006).
On the positive side, E-911 allows the finding of cars under water, children stranded with only a phone, and so on. E-911 also makes stolen goods a thing of the past because everything can be tracked. Yet, as the PC, PDA, RFID, GPS, telephone, and fast Internet access miniaturize into one pocket-sized device, the last vestiges of PIP location information disappear.
Figure 6. Smart mote on a U.S. penny (Kahn & Warnecke, 2006)
Smart Motes
Now we turn to the technology that has implications not just for household privacy but also for industrial espionage and loss of privacy in everything, for everyone, everywhere: smart motes. Smart motes, also known as smart dust, will eventually be sand speck-sized sensors, or "motes," each of which is a "complete, wireless subsystem," running TinyOS and outfitted with an antenna connector, serial interface, analog inputs, and digital inputs and outputs (dustnetworks.com, 2006). Currently they are highly miniaturized micro-electromechanical devices (MEMS) with imbedded intelligence that pack wireless communication into the digital circuitry (Anonymous 8, 2003; Culler & Mulder, 2004; Warneke & Pister, 2004).
Each mote digitizes the data collected by an attached sensor and uses an onboard radio to send it in packet form to neighboring motes. Motes are compact, ultra-low-power, wireless network devices that provide modular connections to a wide range of sensors and actuators. The unique low-power capabilities of Smart Mesh enable network topologies where all motes are routers and all motes can be battery-powered (dustnetworks.com, 2006). Smart motes are already used in energy and military applications, but they have capabilities similar to RFID with the added benefit that they can be engineered to adhere to desired surfaces (see Figure 6), imbedded invisibly in paint, aerosoled into a situation, ingested by humans and animals, reprogrammed dynamically from "home," and report "in" to "home" by piggybacking through any handy wireless network. Thus, motes have infinite capacity to invade and erode privacy. The smart mote on a penny in Figure 6 evidences the current state of the art. Capable of sensing an area of 11.7 mm³ while itself a minuscule 4.8 mm³ in volume, the "Berkeley mote" is solar powered with bi-directional wireless communication and sensing. From university lab to commercialization took about four years, but the demand for these small devices is going to be boundless when they become a few millimeters smaller and a few centimeters more powerful. Smart motes are a form of nanotechnology that will be the width of a few strands of hair and cost less than $5 each to create. Smart motes are sprayable, paintable, ingestible mechanisms that form self-organizing, intelligent networks programmed to perform some task, usually surveillance of some type. In addition to obvious problems with organizational security, they can remove any vestige of personal privacy as well. If inhaled, motes might even report on the inner health of individuals (Anonymous 7, 2004; Anonymous 11, 2006; Requicha, 2003; Warneke & Pister, 2004).
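A toy sketch of the mesh behavior described above: because every mote is a router, a reading can hop mote-to-mote until it reaches a base station. Simple breadth-first flooding stands in here for a real low-power routing protocol, and the mote names and radio links are invented.

```python
# Toy multi-hop routing in a mote mesh: a sensing mote's packet is flooded
# breadth-first through neighboring motes until it reaches the base station.
from collections import deque

def route_to_base(links, source, base):
    """Return the first (shortest-hop) path from a sensing mote to the base."""
    frontier, seen = deque([[source]]), {source}
    while frontier:
        path = frontier.popleft()
        if path[-1] == base:
            return path
        for neighbor in links.get(path[-1], ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(path + [neighbor])
    return None                     # mesh is partitioned; no route exists

# Hypothetical mesh: radio links between nearby motes.
links = {"m1": ["m2", "m3"], "m2": ["m4"], "m3": ["m4"], "m4": ["base"]}
print(route_to_base(links, "m1", "base"))   # ['m1', 'm2', 'm4', 'base']
```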
If collected data are reported back to a "database integration" company that matches credit and demographic information with IRS, INS, personal movement, medical, biogenetic, financial, and household information, the magnitude of the privacy losses becomes clear.
Threats
Threats to PIP relate to all three of the exemplar technologies. The privacy concerns fall into several different categories:
• Invisibility (ACLU, 2003; Anonymous 4, 2006; Chromatius, 2006; Coffee, 2004; PRC, 2003; Rosenzweig et al., 2004; Stanley & Steinhart, 2003)
• Massive data aggregation (ACLU, 2003; Faber, 2007; OECD, 2000, 2003, 2006; Owne et al., 2004; PRC, 2003; Said & Kirby, 2001; Stanley & Steinhart, 2003)
• Individual tracking and profiling (Chestnut, 2006; Gibbons et al., 2003; OECD, 2006; PRC, 2003; Rosenzweig et al., 2004; Saponas et al., 2006; Stanley & Steinhart, 2003)
• Theft (Coffee, 2004; OECD, 2006; Konidala et al., 2005)
• Data corruption and infrastructure threats (Faber, 2007; Konidala et al., 2006; Mohoney, 2006; Naraine, 2006; OECD, 2000, 2003, 2006)
• Health risks (Albrecht & McIntyre, 2004; Singer, 2003)
RFID has been written about more than the other two technologies and, therefore, dominates the threat discussion. However, the threats presented apply to all three technologies. Each privacy concern is discussed next.
Invisibility
Consumers have no way of detecting imbedded RFID, which can be as small as the dot on the letter "i". RFID are capable of being "printed" with virtually imperceptible antennae, which are becoming historic artifacts anyway (see Figure 7). Because transceivers (RFID readers) can be imbedded in building, road, street corner, traffic light, container, truck, ship, aircraft, or other infrastructures, RFID can be read everywhere. In supermarkets, door portals contain RFID readers that can identify the shopper through smart cards; then, floor sensors track movements while hidden cameras, microphones, and heat sensors monitor the shopping experience. The individual shopper, if not already identified, is identified at checkout via loyalty card or payment method if not paying cash. Thus, RFID and the other surveillances have led to eavesdropping concerns (Albrecht & McIntyre, 2004).
GPS devices are more visible than RFID, but cell phone GPS is not visible, and users forget they are being tracked as the technology blends into daily use (PRC, 2006). Motes, especially ingested, painted, or implanted ones, are invisible and may be unknown recorders of corporate actions, troop strengths, or medical states (Chromatius, 2006; Kitchener, 2006; Singer, 2003; Williams, 2006).
Data Aggregation
Once data is collected, it can be matched to credit card transactions and other demographic information. Third-party data users of PIP already aggregate data from many sources: click streams, personal movements, health or biological data, and criminal, court proceeding, genealogical, transaction, or financial history (Conger et al., 2005). RFID is the first tracking technology to allow snapshots of movements for products like clothes, appliances, medicine, and food. By linking these with the existing dossiers on individuals, privacy further erodes. As GPS and motes mature and report all whereabouts, privacy is retained only for non-computing (and non-TV, non-iPod) actions in one's home. It is only a matter of time before all genetic markers and DNA information are aggregated with the rest to form a complete record of humans and their activities. Aggregation, per se, is the enabler of other problems: who has access, and what they do with it. The next items address these issues.
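A minimal sketch of the record-matching step described above, joining hypothetical RFID sightings, transactions, and demographics on a shared loyalty-card key to assemble a profile; all names and data below are invented.

```python
# Toy record linkage: three data sources keyed by the same loyalty-card ID
# are joined into a single dossier, illustrating how aggregation works.
rfid_sightings = [
    {"card": "L-42", "item": "EPC:shirt-7", "store_zone": "pharmacy"},
    {"card": "L-42", "item": "EPC:test-kit", "store_zone": "pharmacy"},
]
transactions = {"L-42": [{"sku": "test-kit", "paid_with": "visa-1234"}]}
demographics = {"L-42": {"zip": "75201", "household_size": 3}}

def build_profile(card_id):
    """Join the three sources on the shared loyalty-card key."""
    return {
        "card": card_id,
        "movements": [s for s in rfid_sightings if s["card"] == card_id],
        "purchases": transactions.get(card_id, []),
        "demographics": demographics.get(card_id, {}),
    }

print(build_profile("L-42"))
```

Even this trivial join reveals sensitive inferences (repeated pharmacy-zone visits plus a test-kit purchase), which is why aggregation, rather than any single data source, is what the chapter identifies as the enabler of the other threats.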
Figure 7. RFID tags can be embedded anywhere and are invisible. http://www.revenews.com/wayneporter/archives/EPC-RFID-TAG.jpg
Individual Tracking and Profiling

The same capabilities that solve business problems, like shrinkage and taking inventory, allow any person or company with an RFID reader to "spy" on individuals' behavior, clothes, and so on. Like the label in Figure 8, RFID tags can be invisible to the naked eye, can be washed and dried and still function, and continue sending their information for years (Albrecht, 2002; Albrecht and McIntyre, 2004; Anonymous 14, 2007; OECD, 2000, 2003, 2006; PRC, 2003). But, unlike bar codes, RFID can be customized for each item and can both send and write information, including the credit card information of purchases, frequent shopper IDs, PINs, and so on (see Figure 8, a Calvin Klein label with RFID inside). Related to eavesdropping is the illicit tracking of individuals, whether employees, friends, or strangers, simply because it can be done. Illicit tracking is difficult to detect since readers are everywhere, and identifying which reader enabled the illicit activity is nearly impossible unless a crime is committed and the perpetrator is apprehended with an RFID reader on his person (Denning, 2001; McAfee, 2005; OECD, 2006).

Figure 8. Calvin Klein RFID label. cryptome.org/mystery/calvin-closeup.jpg
One bizarre example of this was uncovered by the U.S. Department of Defense, which, in 2006, reported that contractors in Canada had been passed hollowed-out quarters with RFID chips inside (see Figure 9). The inference was that the contractors' movements were traced while they worked in Canada, but details of the incident have not been published and the claims were retracted by the U.S. Government (Anonymous 6, 2007; Associated Press, 2007; Briadis, 2006; Bronskill, 2007). In addition, the ACLU alleges that US citizens, post-2001, are "unknowingly becoming targets of government surveillance" (Anonymous 10, 2007). The post-2001 "erosion in protections against government spying and the increasing amount of tracking being carried out by the private sector (which appears to have an unappeasably voracious appetite for consumer information) and the growing intersection between the two" (Albrecht, 2002) have led to an increase in surveillance of all types by all types of organizations and the business partners with whom they share data. The U.S. Government is one of the most prolific gatherers of data, provoking outcries regularly. The major incidents are summarized in Table 2.

Figure 9. Spy coin. From http://a.abclocal.go.com/images/wjrt/cms_exf_2005/features/sci_tech/cia_spy_coin200.jpg
Table 2. U.S. Government data collection incidents

2000: DOD and the FBI sought to develop "real-time" Internet intrusion devices "without becoming intrusive," and an accompanying database, to fight network intrusions (Seffers, 2000, p. 1).

2002: The Total Information Awareness (TIA) program began in the Defense Advanced Research Projects Agency (DARPA) as six-plus projects to collect information from Internet and phone sources to create a 360° view of individuals and their relationships with others. The six projects were: Genesys, to build a database to hold all of the collected information; "TIDES: Translingual Information Detection, Extraction and Summarization," to populate the database with e-mail, transaction, and other digital information (DARPA, 2002, p. 11); "EARS: Effective Affordable Reusable Speech to Text," to take phone or other audio recordings and convert them to digital text (DARPA, 2002, p. 11); "EELD: Evidence Extraction and Link Delivery," to analyze and extract potentially illegal activities from the integrated TIDES and EARS data (DARPA, 2002, p. 12); "WAE: Wargaming the Asymmetric Environment," for bio-surveillance (DARPA, 2002, p. 13); and "GENOA II: [for] collaborative response," to frame questions integrating the above through a query engine (DARPA, 2002, p. 14). As of 2002, an initial version of TIA was in operation. The goal of TIA, according to DARPA's Office of Information Awareness, "is to revolutionalize … U.S. ability to detect, classify, and identify foreign terrorists and decipher their plans" (Waller, 2002, p. 3). The system was called a "self-made public-relations disaster" (Waller, 2002, p. 1) that was likened to Orwell's 1984 and the Nazi Gestapo. Major newspapers also sounded alarms (Waller, 2002). The ACLU described the project as providing "government officials with the ability to snoop into all aspects of our private lives without a search warrant or proof of criminal wrongdoing" (ACLU, 2003; Stanley & Steinhart, 2003). In February 2003, Congress, as part of an omnibus bill, banned TIA funding pending further explanation of scope, purpose, and access to TIA and a privacy impact assessment (Waller, 2002). The ACLU, EPIC, and other privacy organizations mounted an anti-TIA campaign that was partly successful. TIA as a government project disappeared … but was successfully outsourced to a data aggregator that developed and deployed the database for DOD (ACLU, 2003; Waller, 2002).

2005: Lisa Myers, NBC news correspondent, reported on Department of Defense (DOD) collection of "domestic intelligence that goes beyond legitimate concerns about terrorism or protecting U.S. military installations" (Myers, 2005, p. 2). One field in the database contained the official assessment of incident threat; at least 243 non-threatening, legal incidents, such as a Quaker meeting in Florida, were stored along with legitimate threats (Myers, 2005).

2006: Donald Rumsfeld, then Secretary of Defense, sponsored the multi-year development of the Joint Advertising, Marketing Research and Studies (JAMRS) database, "the largest repository of 16-25 year-old youth data in the country, containing roughly 30 million records" (JAMRS.org, 2005). "DOD was in violation of the Federal Privacy Act for over two years" (Ferner, 2005, p. 1) while JAMRS was under development, and for numerous use, contractor non-disclosure, and data disclosure violations (Rotenberg et al., 2005).

2006: The National Security Agency (NSA) was found to have amassed "tens of millions" of phone call records since 2001, with "the agency's goal 'to create a database of every call ever made' within" the U.S. (Cauley, 2006, p. 1). Ostensibly to identify terrorists, NSA "gained a secret window to the communication habits" of about 200 million Americans that included identifying information (Cauley, 2006, p. 1). The NSA actions violated the Foreign Intelligence Surveillance Act (FISA) of 1978, which was developed to protect U.S. citizens from illegal eavesdropping and under which an 11-member court must approve all surveillance warrant requests. George Bush, U.S. President in 2006, signed an executive order waiving the need for a warrant (Cauley, 2006).

2006: The FBI and Department of Justice (DOJ) asked Google, MSN, AOL, and Yahoo to turn over current files and to retain data on surfing queries and click streams (Ahrens, 2006). Google did not comply, and some public debate ensued (Vaas, 2006).

2006: The U.S. Government was reported to track all cell phone calls (UPI, 2006).

2006: In a postal reform bill signed into law in December 2006, President Bush added a phrase declaring the post office's right to open mail "as authorized by laws for foreign intelligence collection" (Memott, 2007), thus widening the collection of information about U.S. citizens without criminal wrongdoing or legal due process.
Thus, the U.S. Government is an active and prolific transgressor of data collection and aggregation for purposes of tracking and monitoring everyone in its databases. When public outcries have thwarted public attempts at this collection of data, the agencies outsource or otherwise hide the activities. Lest one think the U.S. Government is the only transgressor, the International Campaign Against Mass Surveillance, a multi-national consortium sponsored by Canada, the Philippines, the U.K., and the U.S. with support from over 100 non-profit privacy rights organizations in more than 35 nations, also opposes European, Australian, South American, African, and Asian uses of RFID, biometric passports, and other means of tracking, data mining, and surveillance that, it predicts, will lead to global surveillance and the loss of individual privacy (ICAMS, 2005).
Theft

PIP theft involves two types of transgressions. The first is data theft via clandestine RFID readers built into door portals, highway systems, street corners, and other public places to read the RFID chips that pass by (Faber, 2007). Anyone with a smart card, building radio card, toll tag, and so on is being scanned and read, that is, eavesdropped on, daily (OECD, 2006). This hidden eavesdropping is a form of theft enabled by the invisibility of the devices. For instance, you walk into a mall and have RFID chips read in your Calvin Klein shirt, your Levi's jeans, your Jockey underwear, and your Nike shoes (Albrecht & McIntyre, 2004; Newitz, 2006). Then, you purchase a purse (also with an embedded RFID chip) and pay for it with your credit card. All of those chips' readings can be matched with your credit card information and used to build, not a profile of your predicted activities, but a dossier of your actual activities. On the way home, you ride the subway and are jostled by someone who has a hand-held reader in his pocket. He scans all of your chips, including those in your smart bank card, and the robber now has all of your information, RFID and identifying, and can steal your identity (OECD, 2006). One study published in the Washington Post documented 112 points of tracking in a single day by phone, e-mail, camera, and RFID credit cards, toll tags, and clothing tags (Nakashima, 2007).2 When RFID become ubiquitous, which is estimated to happen by 2012, we could be tracked as many as 1,000 times per day. If the person had a cell phone, further tracking of the path between points of rest and the amount of time spent at each location would also have been stored. Further, motes would enrich the data with bio-readings to indicate arousal states, eye movements, or other physical attributes, food ingestion, and medical state, and could record all conversations. The other type of theft is "cloning," in which the cloner palms a coil that is the antenna for a wallet-sized cloning device "currently shoved up his sleeve. The cloner can elicit, record, and mimic signals from smart card RFID chips. [The cloner] takes out the device and, using a USB cable, connects it to his laptop and downloads the data from the" card for processing (Newitz, 2007, p. 1). The OECD identifies cloning as one of the most serious problems with unprotected RFID since it is virtually undetectable and difficult to trace (OECD, 2006).
One California college student uses an RFID reader about the size of a deck of cards to scan items and download their data to his computer, where he can change prices and then walk back through a store to write the new prices to desired items (Newitz, 2005). Similar exploits, foiling building security by spoofing RFID hotel room keys, spoofing smart cards with RFID tags, overriding RFID tag information to obtain, for instance, free gas, and cloning implanted RFIDs, have all occurred (Newitz, 2005).
Data Corruption and Infrastructure Threats

The same methods used for theft can be put to more malevolent uses: corrupting data and thereby creating havoc in computer infrastructures around the globe. Active RFID chips suffer from a buffer overflow bug, similar to those in Microsoft's operating systems, that can be used to inject a virus or obtain the contents of the chip (Ars Technica, 2006). Researchers at a conference demonstrated how to inject malware into RFID chips, thereby disrupting not just chip operation but the readers as well (Naraine, 2006). Active RFID, GPS, and smart mote operating systems are amenable to viruses and hacks. The RFID chip used in Netherlands passports, the same chip used in US e-Passports, was hacked by a high schooler in four hours (Ricker, 2006). Viruses injected into the operating system of an active RFID or smart mote network have the potential to cause havoc in the injected environment (Rieback, 2006). Suppose, for instance, that a suitcase contains an infected active RFID chip. The chip is read as the traveler enters the airport and promptly infects the airport systems. The infection then spreads to all luggage similarly equipped with active RFIDs, and so on, as the virus propagates around the globe (OECD, 2006). Not only are all of the chips in the luggage now unusable, but every computer that has read the infected chips is also compromised. The re-programming job involved with such mobile devices will be endless as infected luggage continues to proliferate and spread the problem.

RFID, GPS, and smart motes change the perimeter of the network and, thus, require different security measures (Chromatius, 2006; Rieback, 2006). Because RFID, GPS, and smart motes all eventually interact with computers, all of the threats posed from one computer to another are present. Denial of Service (DOS) attacks, in which the radio frequency channel is jammed with "noise" to prevent communication, are added to the spoofing, cloning, and viruses discussed above. Currently the RFID environment is "open," so all readers are considered authentic and allowed to read a chip. This gives out the maximum information to the maximum number of readers, but it also opens the chip to being read, spoofed, cloned, and so on by unauthorized sources. Malicious RFID reading might become a new form of corporate espionage, carried out by employees who simply carry an RFID jammer or spoofer in a pocket at work. In addition to the buffer overflow problems discussed above, spurious data attacks that take advantage of poorly coded database queries, for instance, are an additional source of vulnerability. These vulnerabilities reside not only in the RFID chip-reader interface, but also in the RFID reader, the RFID middleware interface, any interfaces with EPC software, and any EPC connectivity (Konidala et al., 2006). There are technical responses to each of these threats, none of which are present in current RFID designs.

As nanotechnology techniques become more sophisticated, smart motes will get both smaller and smarter. Eventually, they will be reprogrammable "bot" armies that are global, self-reproducing, and capable of currently unimaginable intelligence (Anonymous 4, 2006; Brenner, 2006; Pelesko, 2005; Singer, 2003; Warneke and Pister, 2004; Warneke et al., 2001; Williams, 2006). In addition, such intelligence won't be used against just individuals; it will be turned against corporations and governments as well (Singer, 2003).
Health Threats

RFID readers, GPS phones, and motes all emit electromagnetic energy over the airwaves, covering areas as large as six acres (Albrecht & McIntyre, 2004). Readers are already installed and operational in walls, floors, doorways, shelving, cars, and security and payment devices, with RFID slated for imminent release in clothing, refrigerators, medicine bottles, and other everyday objects (Albrecht & McIntyre, 2004, p. 51). Yet, as of January 2007, there was no published research on the impacts of this electromagnetic energy on human health and well-being (OECD, 2006). The jury is therefore still out on whether RFID, GPS, motes, and similar devices will prove harmless. As mote technology matures, motes' ability to manipulate life forms may emerge and wreak new kinds of havoc (Pelesko, 2005; Singer, 2003; Warneke et al., 2001). Someone who is able to detect motes' presence might also be able to repurpose them for destructive uses. Millions of motes, working together, might be able to take over the execution of, for instance, government or nuclear facilities, or research computers, reprogramming them to any end.
Solutions

The potential solutions are corporate self-regulation, legislation and legal remedies, and technical controls. Each is discussed in this section, followed by an assessment of their effectiveness to date.
Corporate Self-Regulation
Several groups have developed guidelines for self-regulation, making it easier on companies not to have to develop their own (Grant, 2006). One group, the Center for Democracy and Technology, issued standards for corporate self-regulation, recommending that:

• Vendors should notify customers of RFID tags in items and tell them how, if possible, to turn off the tags
• Vendors that collect personally identifiable data should "disclose how that data will be employed"
• Vendors should provide "'reasonable' access to information collected"
• Vendors should notify consumers of RFID presence before completing a transaction (all Goss, 2006, p. 1).
In 2005, Electronic Product Code Global (EPC Global), the standards organization that created the EPCglobal 96-bit code for RFID, proposed industry self-regulatory guidelines that include:

• Notices on packages of RFID tagging
• Education of consumers to recognize EPC tags
• Choice information about discarding, removing, or deactivating RFID chips in products
• Record use, retention, and security guidelines proposing that RFID chips not "contain, collect, or store any personally identifiable information" (all OECD, 2006, p. 24).

No industry likes legislated regulation, so companies always profess to be able to self-regulate.
Legislation

Legislative means of regulating RFID exist in Europe via Directives 95/46/EC and 2002/58/EC and in the U.S. via Section 5 of the FTC Act (OECD, 2006). In addition, the OECD's 1980 Guidelines on the Protection of Privacy and Transborder Flows of Personal Data contain eight principles that also apply to RFID privacy:

1. Collection limitation: Data collection should be limited and conducted with consumer consent.
2. Data quality: Data quality should be maintained; data should exist for a single purpose and be destroyed afterward.
3. Purpose specification: The purpose for data collection should be specified to the consumer before the transaction, and the data collection, is completed.
4. Usage limitation: No sharing or disclosure should occur without consumer consent.
5. Security safeguards: Safeguards should protect against loss and unauthorized access, modification, or disclosure.
6. Openness: Consumers should be able to know who controls the data and all aspects of its storage, maintenance, and use.
7. Individual participation: Individuals have the right of open access to all information collected about them.
8. Accountability: A "Data Controller" responsible for compliance with the guidelines should be maintained.
While there have been fines and some interdictions for RFID violations, they are after the fact and largely ineffective. The OECD guidelines are strictly followed for transborder data flows, because of EU legislation, but not for RFID or other clandestine data collection. In addition, the Asia-Pacific countries and Europe have comprehensive privacy laws that cover most situations in which data collection, aggregation of the collected data, and use of collected data all apply. The U.S. has a mélange of laws that react to specific situations rather than a comprehensive privacy strategy.
Legal Protection

In response to "technological changes in computers, digitized networks, and the creation of new information products," privacy law attempts to protect "against unauthorized use of the collected information and government access to private records" (BBBOnLine, no date, p. 1). Thirty-four states have notification laws (Wernick, 2006). Typically, the state laws cover combinations of an individual's name with unencrypted data items ranging from social security number to DNA profile. However, statutes exclude information available to the public in federal, state, or local records. California created a State Office of Privacy Protection in 2000 and has enacted laws that protect citizens' privacy across many facets of their lives. State regulations, for example, include limits on retrieval of information from automobile "black boxes" (California Department of Consumer Affairs, 2006, p. 1), disclosure of personal information on drivers' licenses, protection of the confidentiality of library circulation records, and bans on embedding social security numbers on "a card or document using a bar code, chip, magnetic strip…" (p. 3). The State also defines a "specific crime of identity theft" (p. 4). Similarly, U.S. Federal privacy laws afford privacy protection of cable subscriber information and drivers' license and motor vehicle registration records, "prohibit persons from tampering with computers or accessing certain computerized records without authorization," require protection of medical records, and so on (BBBOnLine, no date, p. 3).

Legal recourse is also available under some conditions that are more abstract than, for example, protecting disclosure of specified transactions. To enact a transaction, the individual discloses personal information based on an assumption of trust in a specific relationship with the recipient of the data. A tort of breach of confidentiality offers legal recourse when that trust is broken (Cate, 2007; Solove, 2004, 2006). The problem is that although many statutes address privacy protection in many facets of individuals' lives, governments have the power to "trump" those laws (Smith, 2004; Stanley, 2004).
Once data integration occurs in the context of a short-term emergency, such as ferreting out terrorists, individual privacy cannot be restored. In fact, known transgressions of the HSA by the government have led to records of innocent parties being propagated through generations of federal databases of suspected terrorists (Gellman, 2005).

The EU has the most actionable and protective privacy laws in the world. The protections of the European Union Directive on Data Protection of 1995 include, for instance:

• "Personal information cannot be collected without consumers' permission, and they have the right to review the data and correct inaccuracies.
• Companies that process data must register their activities with the government.
• Employers cannot read workers' private e-mail.
• Personal information cannot be shared by companies or across borders without express permission from the data subject.
• Checkout clerks cannot ask for shoppers' phone numbers." (Sullivan, 2006)
Each of the 26 EU countries has adopted its own laws, some of which vary from the above provisions; however, all provide basic individual privacy rights that protect individuals from corporations. Further, the EU law has been duplicated, with fewer limitations, in Australia and many Asian countries as privacy rules were adopted by the Asia-Pacific Economic Cooperation (APEC). Both sets of laws are more comprehensive than U.S. law, seeking to positively support individual privacy rights rather than being, as in the U.S., reactive and piecemeal (Sullivan, 2006). In addition, it is generally believed that U.S. citizens distrust the government more than corporations and that this distrust has caused the piecemeal law adoptions and a general erosion of basic privacy rights (Sullivan, 2006).
In spite of the legal protections in Europe, personal privacy is threatened (Meller, 2006) and government surveillance systems are being deployed throughout whole countries, such as England and France (Wright et al., 2006). Further, other countries are experiencing increasing numbers of 4th and 5th party transgressions (Chatterjee, 2006; Woo, 2006).
Technical Controls

Finally, there are potential technical solutions, such as:

• Disabling RFID during the check-out process
• Signal encryption (Konidala et al., 2006)
• A "privacy bit" on each RFID chip that would restrict access to tag information to legitimate readers (OECD, 2006; Konidala et al., 2006); a sketch of the idea follows this list
• A consumer-worn, privacy-enhancing technology to block reading of hidden RFID in clothing (OECD, 2006).
Konidala et al. (2006) developed a detailed report on the vulnerabilities of RFID at each stage of the technology and their interactions, six stages in all. They further detailed technical mitigations of each risk for each level of technology. Thus far, however, these mitigations have proven too costly for, or beyond the technical capability of, current RFID technology without sacrificing size, weight, or cost in the process.

A temporary reprieve may come in industry because of the sheer volume of data:

Consider the scenario where a major retail chain will be tagging all its goods in all its stores, at the single item level. The number of tagged items in this scenario can easily reach 10 billion or more. This means that the data identifying the 10 billion items amounts to 120 gigabytes (10 billion x 12 bytes per tag). If these items were read once every 5 minutes somewhere in the supply chain, they would generate nearly 15 terabytes of tracking data every day (120 gigabytes x 12 times per hour x 10 hours per day). That's 15 terabytes of additional data generated by one retail chain every day. Using this formula, 10 major retailers tagging and tracking every item will generate 150 terabytes of data. This is bigger than the estimated 136 terabytes of data from 17 million books in the U.S. Library of Congress. (West et al., 2005)

This reprieve will not help with governments, who can raise taxes to fund their data gathering projects.
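The arithmetic in the quoted scenario can be checked in a few lines; the figures below are the scenario's own assumptions, using decimal units:

```python
GIGABYTE = 10**9
TERABYTE = 10**12

items = 10_000_000_000           # 10 billion tagged items
bytes_per_tag = 12               # a 96-bit EPC is 12 bytes
id_data = items * bytes_per_tag
print(id_data / GIGABYTE)        # 120.0 gigabytes of identifiers

reads_per_hour = 12              # one read every 5 minutes
hours_per_day = 10
daily = id_data * reads_per_hour * hours_per_day
print(daily / TERABYTE)          # 14.4, i.e., "nearly 15 terabytes" per chain per day
print(10 * daily / TERABYTE)     # 144.0, roughly the 150 terabytes for 10 retailers
```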
Assessment of Control Effectiveness

The reality of all of these solutions is that Internet technology capabilities, "frequent shopper" cards, RFID reporting, and GPS have so whetted organizations' appetites for data that organizations are no longer content with transactional information. Now, organizations desire the "ubiquitous understanding of on- and off-line consumer purchase behavior, attitudes and product usage" afforded through these technologies (Albrecht, 2002). From a practical perspective, one has to ask whether we really "need to know where [a product] is every five seconds in your supply chain" (Schuman, 2006, p. 2). So far, companies have answered "yes" in resounding numbers.

The first solution to privacy threats, corporate self-regulation, is not being used (Culnan, 2003; Schwaig et al., 2005). Evidence amasses of corporate transgressions of the innocence and trust of ordinary shoppers. For instance, Gillette and Procter & Gamble embedded cameras, microphones, heat sensors, RFID, and other technology into store shelves to be alerted when shoppers lifted an item from a shelf. Company employees recorded and monitored the shopping behavior (Albrecht and McIntyre, 2004). Similar stories have been published about incidents at Tesco in England and in other parts of Europe (OECD, 2006). Companies clearly are not self-regulating.

Legislative regulation, at least in the U.S., loses credibility when U.S. Government organizations (DOD, FBI, DOJ, and the post office) amass significant information about everyone in the U.S. (Ahrens, 2006; Cauley, 2006; DARPA, 2002; Ferner, 2005; Seffers, 2000; Stanley & Steinhart, 2003; Waller, 2002). Similarly, China uses search engine records to build legal cases against suspected subversives (Goodman, 2005), and the Council of Europe, in 2000, mandated that member countries "have domestic laws requiring (Internet) service providers to cooperate in both the collection of traffic data and the content of communications" (Taylor, 2004).

Some research suggests "pay for data" programs for industry (Laudon, 1996) or "pay for privacy" programs for individuals (Alba et al., 1997). Both of these models bear further investigation for social acceptability and resolution of implementation issues. Further, the allegation that privacy can be good for business is worth future research consideration (cf. Poneman, 2006). If providing improved 1st party privacy can drive profits for 2nd parties, then privacy will become a priority without legislation or other coercion.

Technical controls appear to be the most effective solution, but control implementation is at executive discretion, and executives balance their desire for consumer information against the consumer privacy its collection violates. At the moment, the privacy side of the balance appears to be losing. The most effective legislation, then, may be to mandate technical solutions to problems such as buffer overflows. With legislated technical solutions, manufacturers and software developers would be legally responsible for their software transgressions and could, therefore, be fined and/or sued when problems occur. Instead, without legislated technical solutions, the same errors proliferate through generations of shoddy software and provide the basis for many 4th and 5th party invasions.
Conclusion

Most of the transgressions relating to new technologies are extensions of older transgressions, for which the discussions of ethics and proper behavior started long ago. Emerging technologies make those discussions more urgently needed, or privacy will be a thing of the past for all but the few who live off the grid. Of the possible solutions to privacy issues, corporate self-regulation and legislative regulation are unlikely to be effective. Technical controls, while articulated and appearing complete, have not been implemented to date. Legislated technical solutions and responsibility appear to have the most potential for a lasting solution.
References

ACLU (2003, November 30). RFID position statement of consumer privacy and civil liberties organizations. American Civil Liberties Union (ACLU).

ACLU (2003, May 19). Total information compliance: The TIA's burden under the Wyden amendment, a preemptive analysis of the government's proposed super surveillance program. American Civil Liberties Union (ACLU).

Ahrens, F. (2006, June 3). Government, Internet firms in talks over browsing data. Washington Post, p. D3.

Alba, J., Lynch, J., Weitz, B., Janiszewski, C., Lutz, R., Sawyer, A. & Wood, S. (1997). Interactive home shopping: Consumer, retailer, and manufacturer incentives to participate in electronic marketplaces. Journal of Marketing, 61, 38-53.
Albrecht, K. (2002). Supermarket cards: The tip of the retail surveillance iceberg. Denver University Law Review, 79(4, 15), 534-554.

Albrecht, K. and McIntyre, L. (2004). RFID: The big brother bar code. ALEC Policy Forum, 6(3), 49-54.

Alien Technology (2007, January 10). RFID tags. Retrieved from http://www.alientechnology.com/products/rfid-tags/

Anonymous 1 (2007). Global positioning system. Wikipedia.

Anonymous 2 (2007). Radio frequency identification. Wikipedia.

Anonymous 3 (2007). Smart card. Wikipedia.

Anonymous 4 (2006). Smart dust. Wikipedia.

Anonymous 5 (2007). Part I: Active and passive RFID: Two distinct, but complementary, technologies for real-time supply chain visibility. Retrieved from http://www.autoid.org/2002_Documents/sc31_wg4/docs_501-520/520_18000-7_WhitePaper.pdf

Anonymous 6 (2007). Pocketful of espionage: Beware the spy coins. CNN. Retrieved from http://www.cnn.com/2007/US/01/11/spy.coins.ap/index.html

Anonymous 7 (2004). Heterogeneous sensor networks. Intel. Retrieved from http://www.intel.com/research/exploratory/hetergeneous.htm

Anonymous 8 (2003, June 11). What is smart dust, anyway? Wired Magazine, pp. 10-11.

Anonymous 9 (2007). Data loss database. Retrieved from http://attrition.org/dataloss/

Anonymous 10 (2007). What is RFID? Retrieved from http://www.spychips.com/what-is-rfid.html

Anonymous 11 (2006, August 22). SmartDust & ubiquitous computing. Nanotechnology News. Retrieved from http://www.nanotech-now.com/smartdust.htm

Anonymous 12 (2007). Famous inventors: GPS. Retrieved from http://www.famous-inventors.com/invention-of-gps.html

Anonymous 13 (2007, May 11). Google searches web's dark side. BBC News. Retrieved from http://news.bbc.co.uk/2/hi/technology/6645895.stm

Anonymous 14 (2007). What is a smart card? Retrieved from http://computer.howstuffworks.com/question322.htm

AOL/NCSA (2007, December). AOL/NCSA online safety study. AOL and the National Cyber Security Alliance. Retrieved from http://www.staysafeonline.info/pdf/safety_study_2005.pdf

APEC (2005). APEC privacy framework. Asia-Pacific Economic Cooperation (APEC). Retrieved from www.ag.gov.au/.../$file/APEC+Privacy+Framework.pdf

Associated Press (2007, January 10). There's an undercurrent of espionage in that currency: Canadian coins with transmitters planted on U.S. defense contractors, baffling both countries.

Attrition.org (2007, March 3). Data Loss Archive and Database (DLDOS). Attrition.org. Retrieved from http://attrition.org/dataloss/

BBBOnLine (2008). A review of federal and state privacy laws. BBBOnLine, Inc. and the Council of Better Business Bureaus, Inc. Retrieved from http://www.bbbonline.org/UnderstandingPrivacy/library/fed_statePrivLaws.pdf

Bell, G. (2004). A personal digital store. Communications of the ACM, 44(1), 86-94.

Brain, M. and Harris, T. (2007, January). How GPS works. Retrieved from http://electronics.howstuffworks.com/gps.htm

Brenner, S. (2006, March 19). Bugs and dust. Cybercrim3. Retrieved from http://cyb3rcrim3.blogspot.com/2006/03/bugs-and-dust.html

Briadis, T. (2006). U.S. warns about Canadian spy coins. Physorg.com. Retrieved from http://www.physorg.com/news87716264.html

Bronskill, J. (2007, January 14). Spy coin caper loses currency. Ocnus.net. Retrieved from http://www.ocnus.net/artman/publish/article_27531.shtml

California Department of Consumer Affairs (2006, February 14). Privacy laws. California Department of Consumer Affairs Office of Privacy Protection. Retrieved from http://www.privacy.ca.gov/lawenforcement/laws.htm

Carlson, C. (2006, February 1). Unauthorized sale of phone records on the rise. eWeek.

Cate, F. H. (forthcoming). The failure of fair information practice principles. In Consumer protection in the age of the 'information economy.'

Cauley, L. (2006, May 11). NSA has massive database of Americans' phone calls. USA Today. Retrieved from http://www.usatoday.com/news/washington/2006-05-10-nsa_x.htm

CEC (2002). Creating a safer information society by improving the security of information infrastructures and combating computer-related crime. Council of the European Communities (CEC). Retrieved from http://europa.eu.int/ISPO/eif/InternetPoliciesSite/Crime/CrimeCommEN.html#1.Opportunities%20and%20Threats%20in%20the%20Information%20Society

Chatterjee, S. (2006, October 3). India BPOs under scanner after TV channel exposes data leak. International Business Times. Retrieved from www.ibtimes.com/.../nasscom-kiran-karnik-hsbc-data-privacy-cyber-crime-channel-4-bpo-call-center.htm

Chen, Y-H. and Barnes, S. (2007). Initial trust and online buyer behavior. Industrial Management and Data Systems, 107(1), 21-36.
Chestnut, A. (2006, March 9). Cell phone GPS tracking-privacy issues. eZine Articles. Retrieved from http://ezinearticles.com/?Cell-Phone-GPS-Tracking---Privacy-Issues&id=159255

Cheung, C.M.K. and Lee, M.K.O. (2004/2005). The asymmetric effect of Web site attribute performance on Web satisfaction: An empirical study. E-Service Journal, 3(3), 65-105.

Cheung, C.M.K., Chan, G.W.W., and Limayem, M. (2005). A critical review of online consumer behavior: Empirical research. Journal of Electronic Commerce in Organizations, 3(4), 1-19.

Chromatius (2006, February 19). Dust: A ubiquitous surveillance technology. Blog Critics Magazine. Retrieved from http://blogcritics.org/archives/2006/02/19/111529.php

Clarke, R. (1999). Internet privacy concerns confirm the case for intervention. Communications of the ACM, 42(2), 60-66.

CNET News (2006). Three workers depart AOL after privacy uproar. Retrieved August 23, 2006, from http://news.com.com/Three+workers+depart+AOL+after+privacy+uproar/2100-1030_3-6107830.html

Coffee, P. (2004, April 19). Privacy concerns dog RFID chips. eWeek.

Conger, S. & Loch, K. (1995). Ethics and computer use. Communications of the ACM, 38(12), 30-32.

Conger, S., Mason, R.O., Mason, F., and Pratt, J.H. (2005, August). The connected home: Poison or paradise. In Proceedings of the Academy of Management Meeting. Honolulu, HI.

Conger, S., Mason, R.O., Mason, F., and Pratt, J.H. (2006, December 10). Legal sharing, shadow sharing, leakages—Issues of personal information privacy have changed. In Proceedings of the IFIPS 8.2 OASIS Meeting. Milwaukee, WI.

Culler, D.E., and Mulder, H. (2004, August 2). Sensor nets/RFID. Intel. Retrieved from http://www.intel.com/research/exploratory/smartnetworks.htm

Culnan, M. (2000). Protecting privacy online: Is self-regulation working? Journal of Public Policy & Marketing, 19(1), 20-26.

Culnan, M.J., & Armstrong, P. (1999). Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science, 10, 104-115.

Culnan, M.J. (1993). How did they get my name? An exploratory investigation of consumer attitudes toward secondary information use. MIS Quarterly, 17(3), 341-363.

DARPA (2002). DarpaTech 2002 Symposium: Transforming fantasy. U.S. Defense Applied Research Projects Agency.

David, L. (2003, November 5). Satellite navigation: GPS grows up, market lifts off. Space.com. Retrieved from http://www.space.com/businesstechnology/technology/satcom_gps_overview_031105.html

Denning, D. (2001). Is cyber terror next? Social Science Research Council. Retrieved from http://www.ssrc.org/sept11/essays/denning.htm

Denning, D. and Baugh, W. (1999). Hiding crimes in cyberspace. Information, Communication and Society, 2(3), 251-276.

Dictionary.com (2007). Definitions of privacy. Retrieved from http://www.dallasnews.com/sharedcontent/dws/bus/stories/092306dnbushp.7a661b5.html

Dillon, G. & Torkzadeh, G. (2006). Value-focused assessment of information system security in organizations. Information Systems Journal, 16, 293-314.
Dinev, T. & Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17(1), 61-80.

Dolya, A. (2007, April 18). Internal IT threats in Europe 2006. CNews.ru. Retrieved from http://eng.cnews.ru/cgi-bin/oranews/get_news.cgi?tmpl=top_print_eng&news_id=246325

Doolin, B., Dillon, S., Thompson, F. & Corner, J. (2005). Perceived risk, the Internet shopping experience and online purchasing behavior: A New Zealand perspective. Journal of Global Information Management, 13(2), 66-88.

Drennan, J., Mort, G. & Previte, J. (2006, January-March). Privacy, risk perception, and expert online behavior: An exploratory study of household end users. Journal of Organizational and End User Computing, 18(1), 1-22.

Dunham, W. (2006, May 22). Personal data on millions of U.S. veterans stolen. Computerworld. Retrieved from http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9000678

Dustnetworks.com (2006). Technology overview. Retrieved January 10, 2007, from http://dustnetworks.com/about/index.shtml

EPC Global (2005, September). Guidelines on EPC for consumer products. Retrieved from http://www.epcglobalinc.org/public/ppsc_guide/

EU (1995). Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Council of the European Union (EU).

Faber, P. (2007, January 9). RFID strategy—RFID privacy and security issues. Industry Week. Retrieved from http://www.industryweek.com/ReadArticle.aspx?ArticleID=13371

Ferner, M. (2006, February 4). Pentagon database leaves no child alone. Counterpunch. Retrieved from http://www.counterpunch.org/ferner02042006.html

Gallivan, M.J. & Depledge, G. (2003). Trust, control and the role of interorganizational systems in electronic partnerships. Information Systems Journal, 13, 159-190.

Gaudin, S. (2007, April 11). Security breaches cost $90 to $305 per lost record. Information Week. Retrieved from http://www.informationweek.com/shared/printableArticle.jhtml?articleID=19900022

Gauzente, C. (2004). Web merchants' privacy and security statements: How reassuring are they for consumers? A two-sided approach. Journal of Electronic Commerce Research, 5(3), 181-198.

Gefen, D., Karahanna, E. & Straub, D. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51-90.

Gellman, B. (2005, November 6). The FBI's secret scrutiny. The Washington Post, A01.

Gibbons, P., Karp, B., Ke, Y., Nath, S., and Seshan, S. (2003, October-December). IrisNet: An architecture for a worldwide sensor web. Pervasive Computing, 2(4), 22-33.

Gilbert, A. (2005, July 8). Will RFID-guided robots rule the world? CNet News.com. Retrieved from http://news.com.com/2102-7337_3-5778286.html

Goodman, P.S. (2005, September 11). Yahoo says it gave China Internet data. Washington Post Foreign Service, A30. Retrieved from http://www.washingtonpost.com/wp-dyn/content/article/2005/09/10/AR2005091001222.html

Goss, G. (2006, May 1). RFID standards released by IT vendors, privacy groups. News Service.

Gouldson, T. (2001, July 27). Hackers and crackers bedevil business world. Computing Canada, 27(16), 13.

Greenaway, K. & Chen, Y. (2006). Theoretical explanations for firms' information privacy. Journal of the Association for Information Systems-Online, 6, 1.

Hempel, L., & Töpfer, E. (2004, August). CCTV in Europe. Center for Technology & Society and UrbanEye.net.

Hoffman, L.J., Lawson-Jenkins, K., and Blum, J. (2006). Trust beyond security: An expanded trust model. Communications of the ACM, 49(7), 94-101.

Hoffman, T. (2003, March 24). Smart dust: Mighty motes for medicine, manufacturing, the military and more. Computerworld.

Holmes, A. (2006, March 25). The profits in privacy. CIO Magazine, 19(11), 3. Retrieved from http://www.cio.com/archive/031506/privacy.html

Horton, M. (2005, February 9-10). The future of sensory networks. Xbow.com. Retrieved from http://www.xbow.com

HRW (2006). The race to the bottom: Corporate complicity in Chinese Internet censorship. Human Rights Watch (HRW). Retrieved from http://www.hrw.org/reports/2006/china0806/china0806webwcover.pdf

Hutcheson, R. (2006, October 17). U.S. Government has long history of abusing personal information. Common Dreams News Center. Retrieved June 26, 2007, from http://www.commondreams.org/headlines06/0513-04.htm

ICAMS (2005, April). The emergence of a global infrastructure for mass registration and surveillance. International Campaign Against Mass Surveillance (ICAMS). Retrieved from http://www.i-cams.org/ICAMS1.pdf

IDTechEx (2007, January 12). The RFID knowledgebase. IDTechEx. Retrieved from http://rfid.idtechex.com/knowledgebase/en/nologon.asp

Internet Home Alliance (IHA) (2005). Industry summaries. Retrieved from http://www.internethomealliance.org/resrch_reports/industry_summaries.asp

JAMRS.org (2005). Joint advertising market research & studies. Retrieved from http://www.jamrs.org/

Kahn, J.M., and Warneke, B.A. (2006, August 22). Smart dust and ubiquitous computing. Nanotechnology Now.

Kane, J. & Wall, A. (2005). Identifying the links between white-collar crime and terrorism. U.S. Department of Justice.

Ke, G. S., and Karger, P. A. (2005, November 8). Preventing security and privacy attacks on machine readable travel documents (MRTDs). IBM Document RC 23788.

Kitchener, G. (2006, March 16). Pentagon plans cyber-insect army. BBC News. Retrieved from http://www.bbc.co.uk/2/hi/americas/480342.stm

Konidala, D., Kim, W-S., and Kim, K. (2006). Security assessment of EPCglobal architecture framework (White Paper #WP-SWNET-017). Daejeon, Korea: Auto-ID Labs.

Laudon, K. (1996). Markets and privacy. Communications of the ACM, 39(9), 92-105.

Lazarus, D. (2006, June 21). AT&T rewrites rules: Your data isn't yours. San Francisco Chronicle. Retrieved July 9, 2007, from http://sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2006/06/21/BUG9VJHB9C1.DTL&type=business

Lichtblau, E. (2006, September 25). Europe panel faults sifting of bank data. The New York Times.

Liptak, A. (2006, August 2). U.S. wins access to reported phone records. The New York Times.
Loch, K., & Conger, S. (1996). Evaluating ethical decision making and computer use. Communications of the ACM, 39(7), 74-84.

Malhotra, N., Kim, S. & Agarwal, J. (2004). Internet users' information privacy concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research, 15(4), 336-355.

McAfee (2005). Virtual criminology report: North American study into organized crime and the Internet. McAfee, Inc. Retrieved from www.softmart.com/mcafee/docs/McAfee%20NA%20Virtual%20Criminology%20Report.pdf

McCarter, C. (2006, January 19). RFID market $2.71Bn in 2006 rising to $12.35Bn in 2010. RFIDTimes.org. Retrieved from http://rfidtimes.org/2006/01/rfid-market-271bn-in-2006-rising-to.html

McCullagh, D. (2005a, April 19). New RFID travel cards could pose privacy threat. C/Net. Retrieved from http://www.news.com/2102-1028_3-606574.html

McCullagh, D. (2005b, January 13). Snooping by satellite. C/Net. Retrieved from http://www.news.com/2102-1028_3-5533560.html

McGrath, D. (2006, January 26). RFID market to grow 10 fold by 2016, firm says. EE Times. Retrieved June 3, 2007, from http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=177104240

McKnight, H., Choudhury, V., & Kacmar, C. (2004). Dispositional and distrust distinctions in predicting high and low risk Internet expert advice site perceptions. E-Service Journal, 3(2), 35-59.

McMillan, R. (2006, August 6). Defcon: Cybercriminals taking cues from Mafia, says FBI. Computerworld Security. Retrieved from http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=cybercrime_hacking&articleId=9002230&taxonomyId=82

Meller, P. (2006, May 30). Officials downplay EU data privacy concerns. NetworkWorld.com, IDG News Service. Retrieved from http://www.networkworld.com/news/2006/053006-officials-downplay-eu-data-privacy.html

Memott, M. (2007, January 4). Bush says feds can open mail without warrants. USA Today. Retrieved from http://blogs.usatoday.com/ondeadline/2007/01/bush_says_feds_.html

Miller, P. (2006). German hackers clone RFID e-passports. Retrieved from http://www.engadget.com

Mohoney, D. (2006, September 1). Same threats, different technology. MRT Magazine. Retrieved from http://mrtmag.com/mag/radio_threats_different_technology/

Mutz, D. (2005). Social trust and e-commerce: Experimental evidence for the effects of social trust on individuals' economic behavior. Public Opinion Quarterly, 69(3), 393-416.

Myers, L., Pasternak, D., Gardella, R., and the NBC Investigative Unit (2005, December 14). Is the Pentagon spying on Americans? MSNBC. Retrieved from http://www.msnbc.msn.com/id/10454316/

Nakashima, E. (2007, January 16). Enjoying technology's conveniences but not escaping its watchful eyes. Washington Post.

Naraine, M. (2006, March 15). Dutch researchers create RFID malware. eWeek.

Newitz, A. (2006, November 30). Nike + iPod = surveillance. Wired Magazine. Retrieved from http://www.wired.com/science/discoveries/news/2006/11/72202

Newitz, A. (2004). The RFID hacking underground. Wired Magazine, 13(12). Retrieved from http://www.wired.com/wired/archive/14.05/rfid_pr.html
OECD (2000). Guidelines on the protection of privacy and transborder flows of personal data. Organization for Economic Co-operation and Development (OECD).

OECD (2003). Privacy online: Policy and practical guidance. Organization for Economic Co-operation and Development (OECD).

OECD (2006). Report on cross-border enforcement of privacy laws. Organization for Economic Co-operation and Development (OECD).

Ohkubo, M., Suzuki, K., & Kinoshita, S. (2005). RFID privacy issues and technical challenges. Communications of the ACM, 48(9), 66-71.

Olsen, S. (2007, January 9). RFID coming soon to scooters, diapers. ZDNet. Retrieved from http://www.zdnet.com.au/news/hardware/soa/RFID_coming_to_scooters_diapers/0,130061702,339272981,00.htm?ref=search

Owne, E. et al. (2004, October 29). GPS—Privacy issues. M/Cyclopedia. Retrieved from http://wiki.media-culture.org.au/index.php/GPS_-_Privacy_Issues

Pelesko, J. A. (2005, July 24-27). Self assembly promises and challenges. In Proceedings of the International Conference on MEMS, NANO and Smart Systems 2005 (pp. 427-428). IEEE.

Poneman, L. (2006, July 15). Privacy is good business. CIO Magazine.

PRC (2003, November 20). RFID position statement of consumer privacy and civil liberties organizations. Privacy Rights Clearinghouse (PRC). Retrieved from http://www.privacyrights.org/ar/RFIDposition.htm

PRC (2006, October). When a cell phone is more than a phone: Protecting your privacy in the age of the super-phone. Privacy Rights Clearinghouse (PRC). Retrieved from http://www.privacyrights.org/fw/fw2b-cellprivacy.htm

PRC (2007, February 24). A chronology of data breaches. Privacy Rights Clearinghouse (PRC). Retrieved from http://www.privacyrights.org/ar/ChronDataBreaches.htm

Requicha, A. (2003, November). Nanorobots, NEMS, and nanoassembly. Proceedings of the IEEE, 91(11), 1922-1933.

Reuters (2006, September 26). GE: Laptop with data on 50,000 staffers stolen. Reuters News Service.

Ricker, T. (2006). Dutch RFID e-passport cracked—US next? Retrieved from http://www.engadget.com

Rieback, M., Crispo, B., and Tanenbaum, A. (2006). RFID malware: Truth vs. myth. IEEE Security and Privacy, 4(4), 70-72.

Rogers, E. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.

Rosenzweig, P., Kochems, A., and Schwartz, A. (2004, June 21). Biometric technologies: Security, legal and policy implications. The Heritage Foundation. Retrieved from http://www.heritage.org/research/homelanddefense/lm12.cfm

Rotenberg, M., Hoofnagle, C., Kapadia, D., Roschke, G. (2005, July 15). The Pentagon recruiting database and the Privacy Act (Memo). Electronic Privacy Information Center.

Said, C. and Kirby, C. (2001, March 19). GPS cell phones may cost privacy. San Francisco Chronicle.

Saponas, T.S., Lester, J., Hartung, C., and Kohno, T. (2006, November 30). Devices that tell on you: The Nike+iPod sport kit (Working paper). University of Washington. Retrieved from http://www.cs.washington.edu/research/systems/privacy.html
Scalet, S. (2006, February 3). The never-ending ChoicePoint story. CSO. Retrieved June 26, 2007, from http://www.csoonline.com/alarmed/02032006.html

Schuman, E. (2006, February 27). The RFID hype effect. eWeek.

Schwaig, K., Kane, G. C., & Storey, V. C. (2005). Privacy, fair information practices and the Fortune 500: The virtual reality of compliance. SigMIS Database, 36, 49-63.

Seffers, G. (2000, November 2). DOD database to fight cybercrime. Federal Computer Week. Retrieved from http://www.fcw.com/fcw/articles/2000/1030/web-data-11-02-00.asp

Sherman, T. (2007, May 10). 'New strain' of terrorism: Hatched in the U.S. Newark Star-Ledger.

Singer, M. (2003, October 24). Smart dust collecting in the enterprise. Retrieved from http://siliconvalley.internet.com/news/

Smith, G. (2004, August 23). Police to seek greater powers to snoop. Globe and Mail. Retrieved from http://www.theglobeandmail.com/servlet/Page/document/v5/content/subscribe?user_URL=http://www.theglobeandmail.com%2Fservlet%2Fstory%2FRTGAM.20040823.wxpolice23%2FBNStory%2FNational%2F&ord=2967516&brand=theglobeandmail&force_login=true

Smith, H. (2004). Information privacy and its management. MIS Quarterly Executive, 3(4), 201-213.

Smith, H., Milberg, S., & Burke, S. (1996). Information privacy: Measuring individuals' concerns about organizational practices. MIS Quarterly, 20(2), 167-196.

Solove, D. J. (2004). The digital person. NYU Press.

Solove, D. J. (2006, January). A taxonomy of privacy. University of Pennsylvania Law Review, 154(3), 477-560.

Stanley, J. and Steinhart, B. (2003, January). Bigger monster, weaker chains: The growth of an American surveillance society. American Civil Liberties Union.

Stanley, J. (2004, August). The surveillance-industrial complex: How the American government is conscripting businesses and individuals in the construction of a surveillance society. American Civil Liberties Union.

Stibbe, M. (2004, February). Feature: Technologies that will change our lives. Real Business. Retrieved from http://www.realbusiness.co.uk/ARTICLE/FEATURE-Technologies-that-will-change-our-lives/1035.aspx

Sullivan, B. (2006, October 19). 'La difference' is stark in EU, U.S. privacy laws. MSNBC. Retrieved from http://www.msnbc.msn.com/id/15221111/

Sullivan, L. (2005, July 18). Apparel maker tags RFID for kids' pajamas. Information Week, p. 26.

Swire, P.P. (1997). Markets, self-regulation, and government enforcement in the protection of personal information. In Privacy and self-regulation in the information age (pp. 3-20). Washington, DC: US Department of Commerce.

Takahashi, D. (2007, January 30). Demo: Aggregate Knowledge knows what you want to buy. San Jose Mercury News. Retrieved from www.mercextra.com/blogs/takahashi/2007/01/30/demo-aggregate-knowledge-knows-what-you-want-to-buy/

Taylor, G. (2004). The Council of Europe cybercrime convention: A civil liberties perspective. Electronic Frontier Australia. Retrieved from http://www.crime-research.org/library/CoE_Cybercrime.html

UPI (2006, May 11). Report: NSA tracking all U.S. phone calls. GOPUSA.com. Retrieved May 11, 2006, from http://www.gopusa.com/news/2006/May/0511_nsa_phonesp.shtm
Vaas, L. (2006, January 25). Government sticks its fingers deeper into your data pie. eWeek Magazine. Retrieved from http://www.eweek.com/print_article2/0,1217,a=170019,00.asp

Waller, J.M. (2002, December 24). Fears mount over 'total' spy system: Civil libertarians and privacy-rights advocates are fearful of a new federal database aimed at storing vast quantities of personal data to identify terrorist threats—Nation: homeland security. Insight Magazine. Retrieved from http://www.findarticles.com/p/articles/mi_m1571/is_1_19/ai_95914215

Warneke, B., Last, M., Liebowitz, B., Pister, K.S.J. (2001). Smart dust: Communicating with a cubic-millimeter computer. Computer, 34(1), 44-51.

Warneke, B.A. & Pister, K.J.S. (2004, February 16-18). An ultra-low energy microcontroller for smart dust wireless sensor networks. In Proceedings of the Int'l Solid-State Circuits Conf. 2004 (ISSCC 2004). San Francisco. Retrieved June 26, 2007, from www-bsac.eecs.berkeley.edu/archive/users/warneke-brett/pubs/17_4_slides4.pdf

Wernick, A.S. (2006, December). Data theft and state law. Journal of American Health Information Management Association, 40-44.

West, A. C., Smith, C. D., Detwiler, D. J., and Kaman, P. (2005, November). Trends in technology (AGA CPAG Research Series, Report #3).

Westervelt, R. (2007, February 23). Data breach law could put financial burden on retailers. SearchSecurity.com.

Williams, M. (2006, March/April). The knowledge: Biotechnology's advance could give malefactors the ability to manipulate life processes—and even affect human behavior. MIT Technology Review. Retrieved from http://www.technologyreview.com/Biotech/16485/

Woo, R. (2006, July 6). Privacy crises in Hong Kong and how the privacy commissioner is dealing with them. Paper presented at the Privacy Laws and Business Conference, Cambridge, England. Retrieved from http://www.pcpd.org.hk/english/files/infocentre/speech_20060705.pdf

Wright, D., Ahoenen, P., Alahuhta, P., Daskala, B., De Hert, P., Delaitre, S., Friedewald, M., Gutwirth, S., Lindner, R., Maghiros, I., Moscibroda, A., Punie, Y., Schreurs, W., Verlinden, M., Vildjiounaite, E. (2006, August 30). Safeguards in a world of ambient intelligence. Fraunhofer Institute Systems and Innovation Research.

Zeller, T., Jr. (2005, May 18). Personal data for the taking. The New York Times.

Endnotes

1. There are actually five classes of RFID: dumb passive; passive with some functionality and/or encryption; semi-passive, which use broadband communication; active, which communicate via broadband and peer-to-peer, with both tags and readers; and active, which can give power to the three passive classes and communicate wirelessly (Sensitech, http://www.sensitech.com/pdfs/Beyond_Passive_RFID.pdf, 2003).

2. The article reported 74 points of tracking but omitted the tracking from deleted e-mails, another 38.
This work was previously published in Computer Security, Privacy, and Politics: Current Issues, Challenges, and Solutions, edited by R. Subramanian, pp. 232-270, copyright 2008 by IRM Press (an imprint of IGI Global).
Chapter LI
Ethics of “Parasitic Computing”: Fair Use or Abuse of TCP/IP Over the Internet?

Robert N. Barger, University of Notre Dame, USA
Charles R. Crowell, University of Notre Dame, USA
Abstract

This chapter discusses the ethics of a proof-of-concept demonstration of “parasitic computing.” A “parasite” computer attempts to solve a complex task by breaking it up into many small components and distributing the processing of these components to remote computers that perform this processing without the knowledge or consent of those owning the remote computing resources. This is achieved through the use of the TCP/IP Internet protocol and, in particular, the checksum function of this protocol. After a discussion of similar exploits, the ethical issues involved in this demonstration are analyzed. The authors argue that harm should be the standard for determining if parasitic computing is unethical. They conclude that a revised notion of the rights of ownership is needed when dealing with the shared nature of the Internet. Suggestions for future research are offered.
Introduction

This chapter will examine some of the issues raised by a proof-of-concept demonstration of “parasitic computing” reported in the journal Nature (Barabasi, Freeh, Jeong, & Brockman, 2001). In this type of computing, a “parasite” computer
attempts to solve a complex task by breaking it up into many small components and distributing the processing related to those components over a number of separate remote computers. While the parasitic procedure represents a form of distributed computing, it differs importantly from other well-known examples such as the
Search for Extraterrestrial Intelligence (SETI) Project (SETI@home, 2003). The distributed computing utilized in SETI involves volunteers from around the world who allow their local computers to be used for ongoing analysis of vast amounts of data obtained from a radio telescope constantly scanning the heavens. SETI allows anyone with a computer and Internet connection to download software that will read and analyze small portions of the accumulated data (SETI@home, 2003). In effect, SETI has created a supercomputer from millions of individual computers working in concert. Like SETI, parasitic computing takes advantage of the power of distributed computing to solve complex problems, but the parasite computer induces “participating” computers, already connected to the Internet, to perform computations without the awareness or consent of their owners.

By their own admission, Barabasi et al. (2001) were aware of the ethical issues involved in their demonstration of parasitic computing. On the project Web site they state: “Parasitic computing raises important questions about the ownership of the resources connected to the Internet and challenges current computing paradigms. The purpose of our work is to raise awareness of the existence of these issues, before they could be exploited” (Parasitic Computing, 2001). In this chapter, we will begin to explore these “important questions” by focusing on the type of exploitation inherent in parasitic computing and by considering some of the ethical issues to which this new form of computing gives rise.
Background

The proof-of-concept demonstration reported by Barabasi et al. (2001) involved a single “parasite” computer networked to multiple “host” Web servers by means of the Internet. The underlying communication between the parasite and hosts followed the standard TCP/IP protocol.
Within this context, the parasite exercised a form of covert exploitation of host computing resources, covert because it was accomplished without knowledge or consent of host owners, and exploitation because the targeted resources were used for purposes of interest to the parasite, not necessarily the host owners. Covert exploitation of networked computing resources is not a new phenomenon (Smith, 2000; Velasco, 2000). In this section, we will review a few common examples of covert exploitation including some that take advantage of known vulnerabilities in the Internet communication process.
Internet Communication Protocols

The Internet evolved as a way for many smaller networks to become interconnected to form a much larger network. To facilitate this interconnection, it was necessary to establish standards of communication to insure uniformity and consistency in the ways by which a computer attached to one part of the Internet could locate and exchange information with other computers located elsewhere. These standards, known as “protocols,” emerged through the influence of the Internet Society, the closest thing the Internet has to a governing authority. The de facto standard that has emerged for Internet communication is a family of protocols known as the Transmission Control Protocol/Internet Protocol (TCP/IP) suite (Stevens, 1994). This TCP/IP standard helps to insure certain levels of cooperation and trust between all parties employing the Internet.

As shown in Figure 1, the TCP/IP protocol suite usually is represented as a layered stack where the different layers correspond to separate aspects of the network communication process (Stevens, 1994). The bottommost link layer in the stack corresponds to the physical hardware (i.e., cables, network cards, etc.) and low-level software (i.e., device drivers) necessary to maintain network connectivity. The middle two layers represent the network and transport layers, respectively.
Figure 1. Layers of the TCP/IP protocol (adapted from Stevens, 1994, Figure 1.1)
Roughly speaking, the network layer is responsible for making sure that the “packets” of information being sent over the network to a remote computer are being sent to the proper destination point. Several different forms of communication are employed by this layer, but IP is the main protocol used to support packet addressing. The transport layer, just above the network layer, uses TCP as its main protocol to insure that packets do, in fact, get where they are supposed to go. In essence, at the sending end, the TCP layer creates and numbers packets, forwarding them to the IP layer, which figures out where they should be sent. At the receiving end, the TCP layer reassembles the packets received from the IP level in the correct order and checks to see that all have arrived. If any packets are missing or corrupt, TCP at the receiving end requests TCP at the sending end to retransmit. The top layer of the stack contains the application services that users employ to initiate and manage the overall communication process, applications like file transfer (FTP), e-mail (POP and SMTP), and Web browsing (HTTP).
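To make the idea of layering concrete, the sketch below mimics how each layer wraps the data handed down from the layer above. The header formats here are invented for illustration and are far simpler than the real HTTP, TCP, and IP headers.

```python
# A deliberately toy view of TCP/IP-style encapsulation: each layer
# prepends its own (invented) header to the payload from the layer above.
def application_layer(message: bytes) -> bytes:
    return b"HTTP " + message                 # toy application header

def transport_layer(segment: bytes, seq: int) -> bytes:
    return seq.to_bytes(4, "big") + segment   # toy TCP-like sequence number

def network_layer(packet: bytes, dst: str) -> bytes:
    return dst.encode() + b"|" + packet       # toy IP-like destination address

wire = network_layer(transport_layer(application_layer(b"GET /"), seq=1),
                     dst="10.0.0.2")
print(wire)  # b'10.0.0.2|\x00\x00\x00\x01HTTP GET /'
```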
Worms, Viruses, and Trojan Horses

Covert exploitation of computing resources has taken many forms over the years, some more
malicious than others. Perhaps the most nefarious examples are those involving what is called “malware,” short for malicious software, designed to damage or disrupt a system (Wiggins, 2001). Malware often takes the form of worms, viruses, or Trojan horses, problems that have become all too common in recent years and do not need to be explored further here. Suffice it to say that while there is some debate about the precise distinctions among these variants of malware (Cohen, 1992), it is clear that all operate covertly to insinuate themselves into computer systems for purposes of exploitation.
IP-Related Vulnerabilities

Prior to the widespread use of networks and the Internet, the spread of malware among stand-alone computers was dependent upon transfer by means of removable media like floppy disks. With the advent of networking, and the attendant increase in e-mail usage, many other methods became available for gaining unauthorized access to computing resources. While e-mail still may be the most common method used to achieve the spread of malware (Wiggins, 2001), certain forms of covert exploitation associated with vulnerabilities in the TCP/IP protocol have been known for some time (Bellovin, 1989). Space limitations preclude a detailed treatment of this matter here, but three categories of vulnerability will be mentioned: IP spoofing, denials of service, and covert channels. Each represents exploitation of the “trust” relationships Barabasi et al. (2001) describe as being inherent in the TCP/IP protocol.

IP spoofing, as described by Velasco (2000), is a method whereby a prospective intruder impersonates a “trusted” member of a network by discovering its IP address and then constructing network packets that appear to have originated from this source. Other network computers may then accept those packets with little or no question because they seem legitimate and further authentication is not mandatory under the TCP/IP protocol
(i.e., trust is assumed). While the technical details of this approach are rather intricate, involving both the impersonation process itself as well as a method for disabling TCP acknowledgments sent back to the system being impersonated, intruders have used this technique to establish communications with remote computers, thereby potentially “spoofing” them into further vulnerabilities and/or unauthorized access (Velasco, 2000).

Denials of service involve malicious attempts to degrade or disrupt the access of network members to a particular host by consuming the TCP/IP resources of the host or the bandwidth of the network itself (Savage, Wetherall, Karlin, & Anderson, 2000). Like IP spoofing, denials of service usually exploit TCP/IP trust and also normally involve some effort to conceal the identity of the perpetrator. An important protocol vulnerability here is based on how the TCP layer responds to an incoming request from the network for communication. TCP trusts that such requests are legitimate and, therefore, upon receipt, it automatically reserves some of its limited resources for the expected pending communication. By flooding a host with bogus requests and withholding follow-up from the presumed senders, a malicious perpetrator literally can “choke” the host’s communication capacity by keeping its resources in the “pending” mode. Alternatively, by flooding the network with bogus traffic, a perpetrator can consume bandwidth, effectively preventing legitimate traffic from reaching its intended destination.

Covert channels are forms of communication hidden or disguised within what appears to be legitimate network traffic (Smith, 2000). Using such channels, intruders may be able to gain unauthorized access to networked computing resources. As Smith (2000) indicates, certain Internet protocols, like TCP/IP, are susceptible to this potential problem. For example, information directed to or from the TCP layer is marked with a unique identification “header.” Generally, in the TCP/IP suite, each layer has its own distinct
header. Certain spaces within these headers may be “reserved for future use,” and therefore may not be checked or screened reliably. This space thus offers a vehicle by which a covert channel could be established. Data placed in this channel normally would not be subject to scrutiny within the TCP/IP protocol and therefore might be used for malicious purposes. Smith (2000) has reviewed several examples of this kind.
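The mechanism is easy to see in miniature. The sketch below invents a six-byte header with a "reserved" field and hides a 16-bit word there; the format is hypothetical and much simpler than a real TCP header, but the principle is the one Smith (2000) describes: unscreened header space can carry unsanctioned data.

```python
import struct

# Toy header: 16-bit packet id, 16-bit payload length, and a 16-bit field
# marked "reserved for future use." The layout is invented for illustration.
HEADER_FMT = "!HHH"

def build_packet(pkt_id: int, payload: bytes, covert_word: int = 0) -> bytes:
    header = struct.pack(HEADER_FMT, pkt_id, len(payload), covert_word)
    return header + payload

def read_covert(packet: bytes) -> int:
    _, _, covert_word = struct.unpack(HEADER_FMT, packet[:6])
    return covert_word

pkt = build_packet(1, b"ordinary-looking traffic", covert_word=0xBEEF)
assert read_covert(pkt) == 0xBEEF  # the hidden word rides along unscreened
```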
Other Covert Exploits

Bauer (2001) has identified several other forms of covert exploitation involving Internet protocol features. Unlike the circumstances described above, these exploits are not malicious and appear to be largely harmless, yet they represent unauthorized uses of networked computing resources. Many of Bauer’s examples apply to the uppermost layer of the TCP/IP protocol, and therefore involve application services like e-mail and HTTP rather than the inner workings of TCP/IP itself. For example, one way to exploit e-mail systems as a means of temporary storage is by sending self-addressed mail through an open mail relay system and then disabling receipt until desired. For at least some period of time, the relay system thus will serve as a temporary storage unit. A similar exploit can be accomplished using a Web server that is instructed to store information of interest to the server owner within cookies on the computers of those who browse to a particular webpage hosted on the server. Assuming they will eventually return to the site, the people with the cookies on their computers are unwittingly providing temporary storage to the server owner.
Parasitic Computing

As noted above, the proof-of-concept demonstration of parasitic computing reported by Barabasi et al. (2001) essentially was an experiment in distributed computing in which a complex problem
was decomposed into computational elements that each had a binary, yes or no, outcome. The parasitic computer then “out-sourced” these elements to multiple Web servers across the Internet. Each server receiving an element unwittingly performed its task and reported its binary outcome back to the parasite. The “participating” servers were induced to perform their tasks through another form of TCP/IP vulnerability. As Barabasi et al. (2001) note, their demonstration was predicated on the fact that “the trust-based relationships between machines connected on the Internet can be exploited to use the resources of multiple servers to solve a problem of interest without authorization” (p. 895). To understand how this was done, we need to look again at the TCP/IP protocol.
TCP Checksum Function

One feature of the TCP protocol that is very important to the Barabasi et al. (2001) implementation of parasitic computing is the checksum property. Checksum is that part of TCP layer operation that is responsible for insuring integrity of packet data being sent over the Internet. Before a packet is released to the IP layer (Figure 1) of the sending computer, TCP divides the packet information into a series of 16-bit words and then creates a one’s complement binary sum of these words. The resulting so-called “checksum” value is a unique representation of the totality of information in that packet. The bit-wise binary complement of this checksum is then stored in the TCP header before the packet is sent. When the packet arrives at the receiving computer, the TCP layer there performs its own binary sum of all the information in the packet including the checksum complement. If the packet was received without corruption, the resultant sum should be a 16-bit value with all bits equal to 1 since the original checksum (i.e., the total arrived at by the sending computer) and its exact complement would be added together forming a unitary value (see Barabasi et al., 2001, Figure
2, for more details). If this occurs, the packet is retained as good and is passed to the application layer for action; if not, the packet is dropped and TCP waits for a prearranged retransmission of the packet by the sending computer. As Freeh (2002) indicates, the TCP checksum function performed by the receiving computer is, in essence, a fundamental “add-and-compare” procedure that forms the basis for any other Boolean or arithmetic operation. As a consequence, TCP can be exploited to perform computations without “invading” (i.e., hacking or cracking into) those systems induced to participate (Barabasi et al., 2001; Freeh, 2002). In this sense, then, parasitic computing is a “non-invasive” form of covert exploitation that does not penetrate beyond the TCP/IP layers of the host. This differentiates parasitic computing from the other methods described above for capitalizing on IP-related vulnerabilities.
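The add-and-compare procedure is compact enough to state in code. The following is a minimal sketch of 16-bit one's complement checksumming as described above; it assumes the packet has already been split into 16-bit words and ignores real-world details such as odd-length padding and the TCP pseudo-header.

```python
def ones_complement_sum16(words):
    """16-bit one's complement sum with end-around carry."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return total

def make_checksum(words):
    """Sender: store the bit-wise complement of the one's complement sum."""
    return ~ones_complement_sum16(words) & 0xFFFF

def verify(words, stored_complement):
    """Receiver: data plus stored complement must sum to all ones (0xFFFF)."""
    return ones_complement_sum16(words + [stored_complement]) == 0xFFFF

data = [0x4500, 0x0030, 0xABCD]           # arbitrary 16-bit words
assert verify(data, make_checksum(data))  # intact packet passes
assert not verify([0x4500, 0x0031, 0xABCD], make_checksum(data))  # corrupt fails
```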
NP-Complete Satisfiability Problem

To demonstrate how this exploitation of the TCP checksum function was possible, Barabasi et al. (2001) elected to solve an NP-complete satisfiability (SAT) problem via distributed computing. As described by these authors, the specific version of the problem was a 2-SAT variant involving a Boolean equation with 16 binary variables related by AND or XOR operators (see Barabasi et al., 2001, Figure 3, for more details). The method used to solve this problem involved parallel evaluations of each of the 2^16 possible solutions. To accomplish these parallel evaluations, a TCP/IP Checksum Computer (TICC) was devised (see Freeh, 2002) that could construct messages containing candidate solutions to the problem that were then sent, along with a template for determining the correct solution, over the Internet to a number of target Web servers in North America, Europe, and Asia. Similar to the behavior of a biological “parasite,” the TICC acted to take advantage of the targeted “hosts” by inducing them to evaluate
the candidate solutions they received against the correct solution template and return for each one a binary, “yes/no” decision to the TICC. Inducement without security compromise (i.e., invasion) was achieved by exploiting the TCP checksum function on each targeted host. The parasitic TICC constructed special message packets for each host and injected them directly into the network at the IP layer. These messages contained one of the possible 2^16 candidate solutions encoded as packet data in two sequential 16-bit words, along with a 16-bit version of the correct solution (in complemented form) substituted in place of the normal TCP checksum value. When the packet was received by the host computer, the two 16-bit words containing the candidate solution, which were presumed by the host to be packet data, were added together with the complemented checksum (i.e., the correct solution) according to the usual operation of the TCP checksum function described above (see Barabasi et al., 2001, Figure 3, for more details).

If the enclosed candidate solution was a correct one, then its one’s complement binary sum would combine with the complemented correct solution, masquerading as the TCP checksum, to form a 16-bit unitary value, just as would occur if normal packet data had been transmitted without error. In response to a unitary sum, the host’s TCP layer passed the packet up to the HTTP application layer, acting as if the packet were not “corrupted.” However, because the packet’s message was artificial and thus unintelligible to the host, it was prompted to send a response back to the parasitic TICC saying it did not understand the message. This returned response was an indication to the parasite that the candidate solution was, in fact, a correct one, a decision made automatically, but unwittingly, by the host in response to the “artificial” packet. Messages containing an incorrect solution failed the checksum test on the host and were presumed to be corrupt; therefore, no response was sent back to the parasite as per the standard behavior of TCP.
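A local simulation of this decision logic, under our own simplified encoding, looks as follows. Here the "correct solution" is a single 16-bit target word, the second data word is held at zero, and the host's accept/reject behavior is reduced to the checksum test itself; the real demonstration encoded the 2-SAT solutions differently and, of course, ran against remote servers.

```python
def ones_sum16(words):
    """16-bit one's complement sum with end-around carry."""
    t = 0
    for w in words:
        t += w
        t = (t & 0xFFFF) + (t >> 16)
    return t

def host_accepts(word1, word2, stored_complement):
    """What the host's TCP layer does: an all-ones sum means 'packet good',
    so the host responds; anything else is dropped silently."""
    return ones_sum16([word1, word2, stored_complement]) == 0xFFFF

def parasite_search(target):
    """Send every 16-bit candidate with the complemented target posing as
    the checksum; only matching candidates provoke a host response."""
    complement = ~target & 0xFFFF
    return [c for c in range(2**16) if host_accepts(c, 0x0000, complement)]

print(parasite_search(0x1234))  # [4660], i.e., only candidate 0x1234 replies
```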
Barabasi et al. (2001) acknowledge the possibility that “false negatives” could have complicated their demonstration of parasitic computing. This complication arises from the fact that incorrect solutions were signified by no return response from the host. However, a lack of returned responses also could be caused by other means such as artificial packets that never made it to their intended hosts for evaluation, or by host responses that got lost on the way back to the parasite. So, this means that correct solutions could be erroneously categorized as incorrect if a false negative occurred. Reliability tests by the authors performed by repeatedly sending correct solutions to hosts showed that false negatives occurred no more than 1% of the time and sometimes less than one in 17,000 cases (Barabasi et al., 2001, pp. 896-897).
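The reported reliability figures also suggest a simple mitigation, sketched below: resending each candidate independently drives the chance that a correct solution is missed every time down geometrically. The independence assumption is ours; correlated packet loss would weaken it.

```python
p_fn = 0.01  # worst-case false-negative rate reported by Barabasi et al.
for resends in (1, 2, 3):
    print(resends, p_fn ** resends)  # roughly 1e-2, 1e-4, 1e-6
```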
Ethical Issues Raised by Parasitic Computing

As the first author of this chapter has noted elsewhere (Barger, 2001a), most ethical problems considered under the rubric of Internet Ethics are basically variants of older ethical issues (e.g., theft, copyright infringement, invasion of privacy) disguised in modern-day (i.e., electronic or digital) clothing. Parasitic computing may be unique, however, in that, on the surface, it seems to present a truly original conundrum: namely, whether or not it is ethical for a parasitic computer to generate Internet-based communications in a manner that induces remote “host” computers, freely attached to the Internet by their owners, to perform computations of interest to the parasite without the knowledge or permission of host owners. The ethical “gray area” here arises from the fact that the specific host resources targeted by the parasite already were part of the “public domain” by virtue of being attached to the Internet. Moreover, these resources were not instigated to do anything malicious or even out of the ordinary.
However, the uses to which the host resources were put by the parasite clearly were not sanctioned in any explicit way by the host owners.

It is perhaps a truism to say that ethical behavior must be assessed against standards inherent within an accepted moral code. Is there an accepted “moral code” for computer ethics? In a white paper published by the Computer Ethics Institute, Barquin (1992) presented what he called the “Ten Commandments of Computer Ethics,” which amounts to a list of moral imperatives to guide ethical behavior related to the use of computing and information technology resources. These guidelines have become fairly well known and have been endorsed by other professional societies (e.g., Computer Professionals for Social Responsibility, 2001). Barquin’s “commandments” overlap with similar strictures contained in a statement published by the Association for Computing Machinery (ACM) entitled the “ACM Code of Ethics and Professional Conduct” (Association for Computing Machinery, 1992). For purposes of the present discussion, certain of Barquin’s “commandments” appear directly relevant to the ethics of parasitic computing.
Thou Shalt Not Use a Computer to Harm Others or Interfere with Their Computer Work

These imperatives, abstracted from Commandments 1 and 2, clearly position as unethical any form of “malware” or other type of covert exploitation of computer resources with harmful purpose or consequences. Benign forms of exploitation without mal-intent, like the Barabasi et al. (2001) demonstration of parasitic computing, would seem under this mandate to be an instance of “no harm, no foul.” One difficulty here, however, lies with the assessment of harm. Directly harmful effects to a user as a result of someone else’s covert exploitation are one thing, but indirect consequences may be quite another. How does
one know for sure what might have happened had the covert exploitation not been ongoing, or what might have been precluded by its presence? In an interview with one of the parasitic project authors, reported in the September 6, 2001, issue of Security Wire Digest, Vincent Freeh said this about that: “It’s sort of like meeting a friend and leaving one car in a shopping center parking lot. You’ve used the facilities in an unintended way that doesn’t benefit the provider of the parking lot — most people wouldn’t consider that unethical. But if you bring in a fleet of cars, impacting people who come to do business at the store, I think that’s unethical” (McAlearney, 2001).

The second difficulty, alluded to in the above comment by Freeh, arises from the potential for future harm given an escalation of the exploitation. As the authors of an online dictionary, The Word Spy, put it in comments under their entry for “parasitic computing”: “The bad news is that, although messing with a few checksums won’t cause a perceptible drop in the performance of the server, hijacking millions or billions of checksum calculations would bring the machine to its digital knees. It’s just one more thing to keep Web site administrators chewing their fingernails” (The Word Spy, 2001). The authors themselves acknowledge this possibility in stating that (a presumably escalated form of) parasitic computing “could delay the services the target computer normally performs, which would be similar to a denial-of-service attack, disrupting Internet service” (Barabasi et al., 2001, p. 897). But, as Freeh (2002) points out, the TICC implementation employed by these authors is not likely an effective platform for the launch of denial-of-service attacks.

In the final analysis, then, we are not convinced that the demonstration of parasitic computing reported by Barabasi et al. (2001) was harmful in any sense. However, the potential may well exist for harmful effects stemming from an elaborated or escalated form of parasitic computing.
Thou Shalt Not Use Others’ Computing Resources without Authorization or Snoop in Their Files

These imperatives, adapted from Barquin’s Commandments 3 and 7, potentially pose the most serious ethical challenges for parasitic computing and also highlight the ethical “gray area” associated with this matter. Parasitic computing does not appear to us in any way to be a form of invasive “hacking.” We thus agree with the authors’ contention that “unlike ‘cracking’ (breaking into a computer) or computer viruses, however, parasitic computing does not compromise the security of the targeted servers…” (Barabasi et al., 2001, p. 895). Clearly, there was no “snooping,” data corruption, or even residue (back doors, etc.) as a result of their implementation of parasitic computing. Moreover, it is important to emphasize that lack of “awareness” is not necessarily the same thing as lack of authorization. As the authors note, their version of parasitic computing operated “without the knowledge of the participating servers” (Barabasi et al., 2001, p. 895), but does this mean it was unauthorized?

To answer this question we must pose two related queries. One is: To what extent is this TICC version of parasitic computing just a form of normal Internet communication? Barabasi et al. (2001) contended that “parasitic computing moves computation onto what is logically the communication infrastructure of the Internet, blurring the distinction between computing and communication” (p. 897). If this is true, then parasitic computing as implemented by these authors involves nothing more than mere Internet traffic. From the viewpoint of the receiving “host” computers, this certainly was true. As noted above, the specially prepared packets sent by the parasite were reacted to in exactly the same way as all other Internet traffic and were indistinguishable from “normal” packets in terms of their consequences for the host, at least at the level of TCP. That the special packets sent by the parasite were not “normal”
does not automatically imply that they exercised a form of unauthorized access.

Therefore, a second important question is: To what extent does the owner of a Web server consent implicitly to the exercise of TCP/IP functions on that machine, which happen automatically because of network traffic anyway, by virtue of the fact that it was connected to the Internet in the first place? There are different viewpoints here. On the parasitic computing Web site at Notre Dame, under a FAQ section, the authors both ask and answer the following question: “How do I reliably stop parasitic computing from occurring on my Web server? [Answer] Unplug it from the net” (Parasitic Computing, 2001). This answer certainly suggests that any computer connected to the Internet has made an implicit acknowledgment that TCP/IP functions may be exercised on that machine. Barabasi et al. (2001) state their position on this matter even more clearly by saying that parasitic computing “accesses only those parts of the servers that have been made explicitly available for Internet communication” (p. 895).

But, is connecting a computer to the Internet the same thing as giving permission for any conceivable use of TCP/IP? Server owners could argue that, even if one grants that those parts of a computer connected to the Internet are in the public domain, there is still a reasonable expectation that Internet traffic will be constituted only in conventional ways. When it is not so constituted, this could be construed as a form of protocol abuse. However, what exactly is abusive about the exercise of TCP/IP functions on a Web server by non-standard Internet traffic? An analogous question could be posed about “non-standard” uses of any other physical sensors (e.g., motion sensors, electric eyes, etc.) located in the public domain. The most obvious answer here would focus on categorizing any exercise of public domain resources for purposes other than they were intended as being forms of abuse. By this view, electing to sleep on a picnic table in a public park
or picnic on a children’s merry-go-round would constitute abusive behavior, just as would asking a remote computer to help solve a problem that is of no interest to its owner.

Again, however, there is not unanimous accord on this matter. Two of the parasitic computing researchers expressed a different view, as quoted in a Nature Science Update about parasitic computing by Whitfield (2001). In that article, Jay Brockman was quoted as saying: “The Web is a source of information. If someone can contrive a question so as to get an answer other than the one intended, I view that as clever, rather than subversive.” Along the same lines, Freeh reportedly indicated, “If you have a public service, I have the ability to use it for a purpose that you didn’t intend” (Whitfield, 2001). Yet, surely, there must be some limits on unintended uses of public or private resources, some line of demarcation between acceptable and abusive. Of course, this is the very problem with “gray areas.” In them, it is often very difficult to discern sought-after boundary lines. As a result, some like Mark Rash, an attorney who used to prosecute hackers, will say that parasitic computing probably qualifies as trespassing, although it was done without malice (National Public Radio, 2001). Others, like Freeh, will see it as being more like parking one’s car in a shopping center’s lot without going in to shop: no harm, no foul. In the end, with respect to the use of public resources, we are persuaded that harm should be the arbiter of abuse.

The principal reason for our “harm being the arbiter of abuse” position in relation to parasitic computing lies in the shared nature of the Internet as a public resource. Traditional views of ownership rights involving the idea that an owner has sole control over use and profit from personal property (Calabresi & Melamed, 1972) do not apply strictly, in our opinion, when personal and public property are juxtaposed in a dependency relationship as they are in the context of computers being connected to the Internet. Because the act
of connecting to the Internet in order to receive communication services is optional and elective, such action necessarily involves an implicit consent to a form of property sharing that trumps sole propriety. This act is analogous to placing a mailbox on the public easement in front of one’s house in order to receive postal service. In these instances, personal property (mailboxes or TCP/IP layers) is made explicitly available to the public so that valued communication services thereby can be obtained. The dependency of the private upon the public arises here because one’s personal communications in these instances must pass through a public handling system. Of necessity, this means that one’s mailbox or TCP/IP protocol is subject to public interrogation, which amounts to a kind of shared relationship between owner and public.

Of this shared relationship, at least two things can be said. First, as Barabasi et al. (2001) have noted, such actions are undertaken by an owner with a sense of trust that no harm will befall that which is made publicly accessible. In the case of a mailbox this means a trust that it will not be smashed, painted shut, locked, or stuffed so full of irrelevant material that intended mail will no longer fit. Any such circumstances would constitute a violation of this trust and therefore would harm the owner. Similarly, in making an Internet connection, a computer owner trusts that no one will attempt to invade or gain access to the computer, affect its inner workings, or attempt to deny or circumvent its access to the public communications system by means of the shared TCP/IP layer made available to the public.

Second, the nature of the implicit consent involved is limited. In the case of the mailbox, the limitation restricts public access only to the mail receptacle and does not extend it to the yard or the house. For the computer owner, consent to public access is limited to the TCP/IP layer and is not extended generally to the RAM, processor, hard drive, or other peripheral resources connected to the computer.
Philosophic Perspectives

We now briefly examine the two basic opposite world views, Idealism and Pragmatism, along with the ethical viewpoints to which these positions give rise as they relate to parasitic computing. The Idealist derives greater meaning from ideas than things. Ideas do not change, so reality is basically static and absolute. The Idealist philosopher Immanuel Kant used what he called the “Categorical Imperative” to assess the ethics of any action. The first form of his Categorical Imperative states: “Act only on that maxim by which you can at the same time will that it should become a universal law” (The Internet Encyclopedia of Philosophy, 2001). In other words, if you wish to establish (or adhere to) a particular moral or ethical standard, you must be willing to agree that it would also be right for anyone else to follow that standard. It seems, then, from an Idealist perspective, that the originally intended purposes of things (i.e., the ideas upon which they were based) would weigh heavily in ethical judgments regarding their use or abuse. Thus, using anything, even an Internet protocol, for any purpose other than its intended one would be unethical on this view.

The Pragmatist finds meaning neither in ideas nor things. Rather, it is found in experience or change. Pragmatism might therefore be regarded as a more expedient approach than Idealism. Since this philosophical view holds that everything can change, there is no permanent essence or identity. In terms of ethical behavior, this outlook implies that all moral values must be tested and proven in practice since nothing is intrinsically good or bad. If certain actions work to achieve a socially desirable end, then they are ethical and good (Barger, 2001b).

In contrast to a Kantian Idealist perspective, Barabasi et al. (2001) seem to take, if anything, a Pragmatist approach to the ethics of parasitic computing. The Pragmatist prefers to look at intent or consequences to see how these could affect the morality of an action, whereas the Idealist
would concentrate on the intrinsic character of the act itself, something the Idealist believes is unaffected by intent or consequences. If a complex problem can be solved more effectively by distributing the computational load among many remote computers that are induced to perform calculations unwittingly, then such actions are acceptable, provided they represent some form of “greater good.”

We must acknowledge, however, that these authors depart from full-blown Pragmatism when they note that their demonstration of parasitic computing currently is a very inefficient way to implement distributed computing. To this effect, they state: “To make the model viable, the computation-to-communication ratio must increase until the computation exported by the parasitic node is larger than the amount of cycles required by the [host] node to solve the problem itself instead of sending it to the target. However, we emphasize that these are drawbacks of the presented implementation and do not represent fundamental obstacles for parasitic computing. It remains to be seen, however, whether a high-level implementation of a parasitic computer, perhaps exploiting HTTP or encryption/decryption, could execute in an efficient manner” (Barabasi et al., 2001, p. 897). Obviously, this statement openly questions the effectiveness (i.e., the “greater good”) of their method. It also illuminates the need for further research on this topic.

Also, these authors are less than fully pragmatic when they express concern about the potential harm that could be caused by parasitic computing were it to be conducted with malicious intent. As we have mentioned above, Barabasi et al. (2001) explicitly acknowledge the possibility of denial-of-service-like attacks using their general approach, however inefficient such attacks might be. Such openly stated concerns by these authors once again challenge the possibility that parasitic computing may conform to the pragmatist’s emphasis upon the “greater good.”

Regardless of one’s position on the ethics of parasitic computing, the motivation behind
this line of research is laudable. As stated on the parasitic computing Web page, these researchers say: “By publishing our work we wish to bring the Internet’s various existing vulnerabilities to the attention of both the scientific community and the society at large, so that the ethical, legal, and scientific ramifications raised by it can be resolved” (Parasitic Computing, 2001).
Conclusion and Future Trends

The demonstration of parasitic computing by Barabasi et al. (2001) has provided us with an interesting and provocative example of how many computers can be recruited to work together in pursuit of solutions to complex problems even when the respective participants are unaware of their involvement. How is one to evaluate the ethics of parasitic computing? Even the researchers involved seem to differ among themselves on this question. Judging by their published remarks, Vincent Freeh and Jay Brockman do not find ethical problems with the covert exploitation involved in their demonstration of parasitic computing. In contrast, when asked the question “Was it ethical?” in a National Public Radio interview, Albert-Laszlo Barabasi replied: “That’s a very good question. My thinking [is] that it is not. And that was one of the reasons actually why we haven’t really pushed much further” (National Public Radio, 2001). Such disagreement among principals is vivid testimony to the above-noted ethical “gray area” associated with parasitic computing; an area that we think calls for ongoing examination and debate.

The question of where research on parasitic computing is headed is an interesting one from several perspectives. From an efficiency standpoint, as noted above, some attention could be focused on how the overall paradigm can be improved along the lines noted by Barabasi et al. (2001): “To make the model viable, the
computation-to-communication ratio must increase until the computation exported by the parasitic node is larger than the amount of cycles required by the node to solve the problem itself instead of sending it to the target” (p. 897). One way to do this, as noted by Freeh (2002), is to increase the number of packets sent by a parasite to a single host. Such an approach also would increase the importance of dealing with the reliability problems associated with parasitic computing (i.e., false negatives and false positives).

In terms of the TCP/IP vulnerability exploited by parasitic computing, Freeh (2002) has noted that more work is needed to better define and understand the viability and level of threat associated with various forms of exploitation inherent in Internet protocols.

Finally, with respect to a justification for parasitic computing, Vinton Cerf, the co-inventor of the TCP/IP protocol suite, has noted that “one should also consider that compute cycles are highly perishable — if you don’t use them, they evaporate. So, some justification [for parasitic computing] might be found if the arriving packets had lowest possible priority for use of the computing cycles.” He would not, however, condone the use of parasitic computing on machines whose owners had not authorized such use (Personal communications, September 12 & 21, 2003). This observation suggests that a form of parasitic computing that took advantage of “cycle priority” and/or a “cycle donor pool” might be a less ethically challenging alternative.
References

Association for Computing Machinery. (1992). ACM code of ethics and professional conduct. Retrieved September 9, 2003, from http://www.acm.org/constitution/code.html

Barabasi, A.-L., Freeh, V. W., Jeong, H., & Brockman, J. B. (2001). Parasitic computing. Nature, 412, 894-897.

Barger, R. N. (2001a). Is computer ethics unique in relation to other fields of ethics? Retrieved September 9, 2003, from http://www.nd.edu/~rbarger/ce-unique.html

Barger, R. N. (2001b). Philosophical belief systems. Retrieved September 9, 2003, from http://www.nd.edu/~rbarger/philblfs.html

Barquin, R. C. (1992). In pursuit of a ‘ten commandments’ for computer ethics. Computer Ethics Institute. Retrieved September 9, 2003, from http://www.brook.edu/its/cei/papers/Barquin_Pursuit_1992.htm

Bauer, M. (2001). Villain-to-victim computing and applications: Abuse of protocol features. Retrieved September 9, 2003, from http://www1.informatik.uni-erlangen.de/~bauer/new/v2v.html

Bellovin, S. M. (1989). Security problems in the TCP/IP protocol suite. ACM Computer Communications Review, 19(2), 32-48.

Calabresi, G., & Melamed, D. A. (1972). Property rules, liability rules and inalienability: One view of the cathedral. Harvard Law Review, 85, 1089-1128.

Cohen, F. (1992). A formal definition of computer worms and some related results. IFIP-TC11 Computers and Security, 11(7), 641-652.

Computer Professionals for Social Responsibility. (2001). The ten commandments of computer ethics. Retrieved September 9, 2003, from http://www.cpsr.org/program/ethics/cei.html

Freeh, V. W. (2002, January). Anatomy of a parasitic computer. Dr. Dobb’s Journal, 63-67.

Internet Encyclopedia of Philosophy, The. (2001). Immanuel Kant (1724-1804): Metaphysics. Retrieved September 9, 2003, from http://www.utm.edu/research/iep/k/kantmeta.htm

McAlearney, S. (2001). Parasitic computing relatively benign. Security Wire Digest, 3(68). Retrieved September 9, 2003, from http://infosecuritymag.techtarget.com/2001/sep/digest06.shtml

National Public Radio. (2001, August 29). All things considered. Retrieved September 9, 2003, from http://www.npr.org/ramfiles/atc/20010829.atc.14.ram

Parasitic Computing. (2001). Retrieved September 9, 2003, from http://www.nd.edu/~parasite/

Savage, S., Wetherall, D., Karlin, A., & Anderson, T. (2000, August). Practical network support for IP traceback. Proceedings of the 2000 ACM SIGCOMM Conference (pp. 295-306).

SETI@home. (2003). Retrieved September 9, 2003, from http://setiathome.ssl.berkeley.edu/

Smith, J. C. (2000). Covert shells. Retrieved September 9, 2003, from http://gray-world.net/papers/covertshells.txt

Stevens, W. R. (1994). TCP/IP illustrated, Volume 1. Reading, MA: Addison-Wesley.

Velasco, V. (2000). Introduction to IP spoofing. Retrieved September 9, 2003, from http://www.giac.org/practical/gsec/Victor_Velasco_GSEC.pdf

Whitfield, J. (2001, August 30). Parasite corrals computer power. Nature Science Update. Retrieved September 9, 2003, from http://www.nature.com/nsu/010830/010830-8.html

Wiggins, G. (2001). Living with malware. SANS Institute. Retrieved September 9, 2003, from http://www.sans.org/rr/paper.php?id=48

The Word Spy. (2001). Entry for “parasitic computing” (posted Dec. 6, 2001). Retrieved September 9, 2003, from http://www.wordspy.com/words/parasiticcomputing.asp
This work was previously published in Information Ethics: Privacy and Intellectual Property, edited by L. Freeman & A.G. Peace, pp. 143-162, copyright 2005 by Information Science Publishing (an imprint of IGI Global).
Chapter LII
Simulating Complexity-Based Ethics for Crucial Decision Making in Counter Terrorism

Cecilia Andrews, University of New South Wales, Australia
Edward Lewis, University of New South Wales, Australia
Abstract

“Counter-terrorism refers to the practices, tactics and strategies that governments, militaries and other groups adopt in order to fight terrorism.” Counter Terrorism (CT) is a complex system driven by political, stress and time pressures that contribute to the enormous difficulty that the people involved face in making sustainable ethical decisions. This chapter proposes a systems planning approach for enhancing the sustainability of crucial ethical decisions in CT. First, we describe the need for enhancing crucial ethical decision-making using some recent cases. Next, we evaluate the relevance and utility of a systems planning approach in providing such enhancements for CT. We develop the “ideal state” for tools and techniques to be used for crucial ethical decision-making in CT. We propose the POWER systems planning framework as a model for advancing towards this ideal state. Finally, we consider how games and simulation could be used to envision and inform, aid synthesis of and support evaluation of decision-making through the POWER model.
Introduction

Ethics and values form the basis for the evolution of systems in society, such as military, information, political, control, economic and cultural. Values,
along with moral strategies and agents (people), form Belief Systems. The conflict between different Belief Systems is the real battlefield of terrorism, and if we can understand this conflict, then we can counter terrorism more effectively.
This chapter considers CT as risk management within a complex, adaptive systems model. CT is about determining terrorist risk and evaluating options for the mitigation of that risk. However, CT approaches commonly focus on the consequence of the risk to particular assets — those things that could be targeted by terrorists — rather than what conditions fertilize the growth of that risk and the motive for identification of those targets. If we understand these conditions influencing the risk, then we might be more successful in countering terrorism.

The potential for risk can emerge from a combination of Belief Systems, involving factors like individual disenfranchisement and group compliance. Social psychological literature provides tools and insights into risk potential, both at the individual and group level (Pynchon & Borum, 1999). Group attitudes and opinions, group decision-making, motivations to group action and diffusion of individual responsibility in a group context all contribute to the development of a Belief System. Examples of the formation of Belief Systems include the unity of purpose in the faithful that can help overcome uncertainties in their environment that threaten individual existence (Bloom, 1999). The Belief Systems of closed groups can enable these groups to be led into violence.

How can we come to understand the very embedded and complex nature of belief in societies? Complex systems may give us the tools we need. Complex systems can use systems dynamics and other systems modeling techniques to develop a picture of the influences and pressures on individuals within groups to inform intelligence on key agents and drivers in operations. An example of the use of complex systems includes the Social Network Analysis (SNA) of the 9-11 Terrorist Network undertaken by Krebs (2004) from public domain information. These kinds of analyses are interesting to develop a picture of who, what and where terrorist cells are developing, but they do
not provide information on why. An understanding of Belief Systems might give us the insight into “why” that we can use for the effective risk management of terrorism. If we can develop models that help to identify those pervasive and persistent patterns of Belief Systems that evolve into terrorist motives, we can provide counter measures well before risk potential develops into risk reality. As well, such models can help us to develop war games that can be used to understand or train for the complex interactions within the terrorism problem (Smith, 2002).
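For readers unfamiliar with SNA, the kind of analysis Krebs performed can be sketched with standard graph tooling. The edge list below is invented for illustration; Krebs built his from public-domain reporting. Note how the output speaks only to "who" and "where", not "why".

```python
import networkx as nx

# Invented toy network of association ties (not the 9-11 data).
ties = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
        ("D", "E"), ("E", "F"), ("D", "F")]
g = nx.Graph(ties)

# Centrality scores flag likely coordinators and brokers in the structure,
# but say nothing about motive or belief.
print(nx.degree_centrality(g))
print(nx.betweenness_centrality(g))
```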
Risk Management in Counter Terrorism

A number of components recur across models of risk analysis for terrorist threats. The higher order components are Intent and Capability. Intent comprises motive or desire, objectives (purpose) and expectance. Capability comprises technical factors, knowledge, methods, social factors (such as group and organization), resources and skills (Holliss, 2002). Risk analyses are made at both the strategic and tactical level as to the likelihood and impact of a threat being realized, given intelligence factors derived from Intent and Capability.

When it comes to terrorism, Intent is the key factor in deciding the nature of the threat. Intent is embedded in the very definitions of terrorism (ASIO Act Amendment Bill, 2002; Hocking, 2003; Wikipedia, 2004), whatever the point of view. Terrorism is not something defined by its process, or even its agents and their knowledge or resources, but it is violence defined by its purpose. It is a conflict rooted in belief — whether political, religious, economic or social.

The success of terrorism, and of any model that purports to simulate terrorism, should be measured in terms of social outrage. High-impact, high social-outrage events are “successful” terrorist
events. The terrorists’ motive is not personal gain, but intent to cause social outrage through a crisis of fear, leading to a weakening of the target society and change in social order. The model should include the real targets of terrorists — those groups or societies that hold or practice belief systems that conflict with those held by the terrorist group. Terrorists create the means for the disruption, and perhaps subsequent destruction, of society through intimidation and fear: through terror inspired by the undermining of beliefs, and their icons, that the targeted society takes to be facts or truths (pervasive and persistent beliefs).

The motivations for becoming a terrorist are diverse, and they depend not just upon applied political psychology (Robins & Post, 1997), but also upon the socio-historical context of the time and place (Crenshaw, 2001). Becoming a terrorist involves exposure to like-minded others — it emerges from communities of belief, not from individual belief systems, a factor that any simulation needs to account for, and it evolves over time and place.

“We have seen a new dimension in unconventional tactics against targets that we have failed to recognize as targets” (Senglaub, Harris & Raybourn, 2001). How can we understand what is at risk if we do not understand why it would be at risk? To identify targets successfully, the terrorist must have been immersed in the belief system of the target society long enough to develop a knowledge of the symbology, objectives and icons of that society’s belief system(s), as “Humans have a continuous tendency to imbue an adversary with behavior and constraints that mirror their own” (Senglaub et al., 2001). We must do the same to counter the threat.

Terrorists look at the pillars of society and then attack vulnerable icons of those pillars. Although the attacks on the World Trade Center, the Pentagon, United States (U.S.) Embassies and Balinese nightclubs were aimed at buildings, it is the attacks on these symbols of society that cause
the greatest social outrage. Terrorism follows its violent belief networks in targeting symbols of the society, with the Intent to provoke a reaction of that society to close in and develop phalanx responses, repressing civil liberties in the face of a common threat and creating reactions like broad powers for intelligence and police agencies that could dilute and threaten the strategic diversity (and therefore sustainability) of society.
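One way to make the Intent/Capability decomposition above operational is a simple scoring model. The structure below is our illustrative assumption, not a model proposed by the chapter or by Holliss (2002): component scores on a 0-to-1 scale, combined so that zero Intent yields zero threat, echoing this section's claim that removing Intent removes the threat.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    motive: float      # 0..1: strength of desire
    purpose: float     # 0..1: clarity of objectives
    expectance: float  # 0..1: perceived chance of success

@dataclass
class Capability:
    technical: float
    resources: float
    skills: float

def threat_score(i: Intent, c: Capability) -> float:
    """Illustrative only: multiplicative Intent term, so any zero Intent
    component zeroes the threat; Capability merely scales it."""
    intent = (i.motive * i.purpose * i.expectance) ** (1 / 3)
    capability = (c.technical + c.resources + c.skills) / 3
    return intent * capability

print(threat_score(Intent(0.9, 0.8, 0.6), Capability(0.4, 0.5, 0.3)))  # ~0.30
```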
The Need to Enhance Crucial Ethical Decision-Making in CT

Crucial ethical decisions have long-term ramifications for the quality of life and liberty of generations. If we can build immunity to the intended outrage or deflect the Intent, then we might overcome the strategic threat of terrorism. This immunity does not mean protection of society as we define it now; it may mean a change to our society, our beliefs and so our icons towards a more sustainable future. Accordingly, these decisions are crucial, because they go to the very being of our society.

Crucial decisions are made by the people with the greatest influence upon society: those working at the political executive level for the nation. These decision makers prepare strategic plans, develop policy and analysis for intelligence, and coordinate prediction and response teams. It is these decision makers who are in a position to develop strategic programs, who have the ability to intervene higher in the risk chain and who are interested in “what-if” scenario planning of Intent rather than assessing tactical responses to Capability. Their decisions concern ethical trade-offs between options concerning risk to life and limb and to liberty itself.

It is these decision makers that we need to help to understand Intent through modeling belief networks and strategies. If we can achieve this understanding, then we can more clearly identify what resources — physical, legislative, social and
conceptual — need immunity to terrorist threat in order to have an immediate effect upon the expectancy of success within the terrorist belief network. We can use these soft factors in risk analyses to develop a more targeted, anticipatory alertness and immunity to the threat rather than create alarm and develop a lagging reaction against the Capability of the threat. If you remove Intent, you have no threat. It might be a more effective and sustainable strategy to undermine the terrorist belief network by undermining their belief pillars (use their strategy against them) than to deny them a Capability. If we can develop a simulation to identify where the critical beliefs are and where the critical paths of the network of belief go, then we might identify strategies to target those beliefs and invalidate the conflict between our beliefs and the terrorists’ beliefs. Terrorism is Belief Systems attacking Belief Systems. We need to understand Belief Systems so we can defend against this attack.
The Belief Systems Model for Complexity-Based Ethics

Without changing our pattern of thought, we will not be able to solve the problems we created with our current patterns of thought.
- Albert Einstein (Allio, 2002)

Human social systems are richly and diversely complex, adaptive systems of Agents (Axelrod, 1997). Values and moral strategies participate in complex, adaptive Belief Systems that, along with Identity (or Cultural) Systems, form the basis for human social systems.

Complex Science in Artificial Intelligence (Liu & Williams, 1999) examines belief in Belief Revision (BR). BR has examined the role of Multi-Agent approaches in MABR (Multi-Agent Belief Revision), DBR (Distributed Belief Revision) and MSBR (Multi-Source Belief Revision), with
a particular focus on the schematic level, which is the syntactic process level of updates in belief states. This important work provides options for the algorithmic determination of revision schemes towards understanding the mechanism of change in agents’ belief. However, it is proposed that understanding the schematic mechanism of belief change alone is insufficient to solve ethical dilemmas. MABR does not tell us much about the knowledge domain of ethics and the persistence of connection, or how ethics relates to social systems. It also does not provide any statements about how the “application layer” might be applied to solve a problem. Although MSBR examines the importance of credibility and reliability of information and source in selection and scope of BR (Liu & Williams, 1999), it does not consider how the credibility and reliability measures come about, who is involved in the determination of these measures, and why they end up with particular values and not others. It is very important to recognize bias in a system, but the nature of bias in belief is not something that we can observe to be linearly determined; it is something that emerges from the system of belief itself. Furthermore, what is contained in the “message” interacts with other systems and evolves from its relationship with changes at the component level in co-evolution.

Further contributions to our understanding of ethics or belief in the real world may be gained from examining more than a set of algorithms for changing discrete and individual beliefs. We need a system that incorporates a historical process of value change through non-linear interactions between multiple agents. As well, the system must incorporate change in application of those values and in moral strategies through non-linear interactions between multiple agents towards an emergent social purpose. In taking this perspective, we can identify other factors that might, through less immediate means, influence the choice and evolution of belief in agents other than through measures of credibility and
Finally, we need to be able to apply this model back to the real world to help solve ethical dilemmas from value through to action. A useful model of Belief Systems should show how belief emerges from the self-organizing interaction between agents, moral strategies, values and external co-evolving systems. It needs to show how Belief Systems behave as a complex adaptive system because of the strong interactions and learning among elements, so that current events heavily influence the probabilities of many kinds of later events (Axelrod & Cohen, 2000). The connections between elements of Belief and the conflicts that occur are messy, coupled tightly to other problems. Interventions in the system at one place can, because of dynamic, non-linear links, have unintended consequences elsewhere (Ackoff, 1999). According to Di Paolo (2001) and Liu and Williams (1999), we need systematization and ontology to address the simulation of life and belief. The following Belief Systems Model (BSM) provides a preliminary framework or informal ontology that goes towards meeting these needs. This BSM illustrates how belief leading to action through ethical decision-making can emerge from the self-organizing interaction between co-evolving components of systems. The model regards Belief Systems and social systems as co-immersed entities in human organization and illustrates the dynamic relationship between Decision Systems, Belief Systems and other human systems. Belief Systems are strategic, complex and adaptive systems comprising:

• Social values as attributes
• Moral strategies as processes
• Stakeholders and role models as agents
Moral strategies are the ways agents enact values to try to achieve the ideal state. Some examples could be “Utilitarianism” or “Do unto others as you would wish them do to you” (The Golden Rule) (Rachels, 1998).
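To make these components concrete, they can be rendered as a minimal data model. The sketch below is our own Python illustration, not part of the published BSM; all class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Value:
    """A social value; compound values carry sub-values in `parts`."""
    name: str
    parts: tuple = ()

@dataclass
class MoralStrategy:
    """A process by which agents enact values to pursue an ideal state."""
    name: str
    values: list       # the Values the strategy enacts
    action: str        # the action statement, e.g., a Golden Rule phrasing

@dataclass
class Agent:
    """A stakeholder or role model; both persons and groups are agents."""
    name: str
    strategies: list = field(default_factory=list)
    rejected_values: set = field(default_factory=set)
```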
Agents in Belief Systems act (consciously or unconsciously) to optimize many and sometimes conflicting objectives, both locally in terms of their own values and globally in terms of their relationship to other agents. They try to optimize values through moral strategies as applied in ethical decisions and their subsequent actions. The uncertain consequences of these ethical decisions and the conflict and volatility in the evolution of Belief Systems can create great difficulty for the agent in this optimization. The Belief System encompasses both historical processes (Di Paolo, 2001) and hierarchical and self-referring encapsulation in the components of the Belief Systems, between Belief Systems and across other coupled external systems. The purpose of Belief Systems is to control and generate power or will (intent) for human social systems (from individuals to global human social systems) in their interactions within and across other human systems and with their environment, to envision an ideal state and to act to close the gap between the current state and the ideal. The actions to close the gap are acts of ethical decision making that instantiate Belief Systems. Ethical decision making involves trading off between options whose consequences may provoke moral outrage, balancing the wish to conform with societal mores against personal satisfaction; the actions resulting from ethical decision-making instantiate Belief Systems. The nature of Belief Systems is strategic, because they involve setting visions and planning or acting to achieve those visions over a significant period of time. Even for individuals, belief involves long-term thinking and "striving for mountaintops." Belief Systems deal with that subset of values that focus on the acceptability criteria in systems thinking terms. Acceptability measures decision performance in terms of how well accepted the decision, its rationalization and its outcomes are among those affected by it. The BSM assumes that acceptability means maintaining acceptance across
a diverse variety of viewpoints over space and time. This form of acceptability — Sustainable Acceptability — forms the crux of our approach to improving crucial ethical decision making and provides a measure of the performance of Belief Systems. Belief Systems form the pre-condition state for the emergence of Identity at individual and group levels. These systems (Belief and Identity) are dynamic co-evolving systems that form the substrate for social and antisocial behaviors and are the critical elements of transformational processes. Belief, Identity, Resource, Control, Environment and Decision systems co-evolve
asymmetrically, and human social systems are the emergent result. Systems theory has established the interdependency of systems through hierarchies (Van Gigch, 1974). The purpose of subsystems cannot generally be found within the system, but within higher systems in the hierarchy; so too with Belief Systems. Belief Systems are defined as controlling and generating intention for human systems and their interactions with the environment. As such, we must look to higher-order human social systems to find the purpose of Belief Systems: individual needs have to be reconciled with the possible conflicting demands of the different groups to which the individuals belong. Human Social Systems are natural systems that use competitive and cooperative strategies to sustain human life by balancing individual and community needs. As Belief Systems are coupled with other human systems to form the emergence of human social systems, so Belief Systems must also contribute towards achieving a dynamic balance in sustaining life. That is, Belief Systems provide the basis for coupling and decoupling in Human Social Systems in contribution towards that purpose.

Figure 1. Belief Systems model

In Figure 1, a person or group is an agent; values are assigned an alpha character and moral strategies a numeric character to distinguish them. If we start with Person P1, we can see that P1 employs a Moral Strategy 2 (MS2). MS2 is comprised of both Values Y and Z and some action statement. An example of this relationship could be that P1 employs MS2 ("act with integrity" and "always respect women"), where Value Z is "integrity" and Value Y is "respect women." P1 also belongs to a Group G1, but G1 rejects the Value Y that P1 is using through MS2. This is an instance where conflict can be shown through the model. Because the Model is a snapshot of a dynamic system at time 1, we would expect to see some kind of change in the relationships between P1, G1, MS2 and Value Y. If we did not see change, it would indicate some other factor involved in the resolution of the dissonance emerging from these conflicting interests, and the model would need to be extended for this analysis. We can see that Value Z is a compound value. This establishes that values in themselves can demonstrate coupling and decoupling, through which new compound values and even new belief systems emerge. The compound nature of Value Z can be explained using the "integrity" example. In this system, it may be that "integrity" is comprised of "honesty" (Za) and "courage" (Zb).
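Continuing the illustrative sketch above, the P1/G1 situation from Figure 1 can be encoded directly; the conflict check is a naive scan over strategies, meant only to show how such dissonance could be detected in a simulation.

```python
honesty, courage = Value("honesty"), Value("courage")        # Za, Zb
integrity = Value("integrity", parts=(honesty, courage))     # compound Value Z
respect_women = Value("respect women")                       # Value Y

ms2 = MoralStrategy("MS2", values=[integrity, respect_women],
                    action="act with integrity and always respect women")

p1 = Agent("P1", strategies=[ms2])
g1 = Agent("G1", rejected_values={"respect women"})          # G1 rejects Value Y

def conflicts(member, group):
    """List the values a member enacts that the group rejects."""
    return [v.name for s in member.strategies for v in s.values
            if v.name in group.rejected_values]

print(conflicts(p1, g1))   # ['respect women']: pressure for change at time 2
```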
Belief System B1 is immersed in Belief System B. This representation describes how Belief Systems can be part of other Belief Systems, either in whole or in part (as is Belief System B2). Belief System B2 is a child system of B, yet it has evolved so that a proportion of its values, moral strategies and actors do not reside within B1 or even B, although there is still some overlap at this time. A broad example of such intersecting sets of beliefs is the various Christian churches, with their resultant schisms in creeds. Another example of how conflict can arise through the dynamic relationships between Agents, Values and Moral Strategies is in Belief System B2. Here Person P7 has influence (of some sort, either positive or negative) over Group G3. P7 employs MS4 and has a Value R, which is used in MS4. G3, however, rejects MS4. If P7 were a new CEO of an organization represented in this Belief System Model who had influence over a group of middle managers, and tried to start implementing MS4 because of either previous experience with this strategy or because of the Value R it embodies, conflict would emerge that might be exhibited in decisions taken both by the CEO as an individual and by the Group of middle managers G3. We would expect some change in G3, P7 or their relationship, with the B2 system moving to equilibrium through self-organization. If the conflict could not find equilibrium with the resources both within and outside the Belief System, the organization may pay a high cost for the ideological conflict. Other elements in the model include the "external" co-evolving systems. Examples of Resource Systems include economic systems or natural resources. Examples of Environment Systems include the physical or natural environment or the virtual environment that the system is embodied within (this could include a single individual's body). Regulatory and Control systems could include legal or convention systems (such as professional codes of conduct).
Decision Knowledge Systems are artificial constructs used to represent the reasoning systems of the group or individual agents, forming the conduit for cognition. The Belief System is integrated within the Decision system of a human. The BSM shows the factors at play that influence the different aspects of the decision process resulting in Action, where the actual physical ethical action is taken. In the next iteration of the model (with perfect refresh), there will be feedback about the consequences of the Action, through cognition, into the Belief System. All of these “external” systems are coupled with each other (as depicted by the outer line on the model). The systems act both individually and in concert with each other to constrain or liberate the Belief System in its evolution. The final element of the Belief System Model depicted is the Social System emerging from Groups G1 and G2, and their interactions with moral strategies and belief, and with their members. What the model does not show (for clarity’s sake) is the emergence of the Identity System of these groups and their members as their Belief System evolves. The conditions for the emergence of Social System 1 are formed through the co-immersion of the group in its Identity System; that is, how they define themselves and their social narrative (their culture) and their Belief System, along with the constraints of the “external” systems.
USING THE BSM

Models such as the BSM run the danger of being oversimplifications of reality, with assumptions and reductions made to enable them to be implemented in some program. Models can still be useful, even if limited to the description and illumination of possibilities. A simulation constructed on such a model can develop an understanding of the consequences of scenarios, which may provide insight into heretofore-unacknowledged factors in social systems and, hence, ethical decisions.
So what benefits could we expect from using the BSM to develop technology aids that help ethical decision makers understand ethics or belief? The benefits include increased understanding of how the components of Belief Systems could relate to each other, in order to anticipate possible effects of changes in the components when we need to compare and test alternative plans in complex situations (Van Gigch, 1974). Good systems are judged by how well they achieve their purpose, measured in our case as Sustainable Acceptability: maintaining acceptance across a diverse variety of agents over space and time. Selecting or acquiring strategies that promote variation and interaction is a successful biological approach to growing sustainable natural systems; that is, planning to create choices rather than limiting choices. Using a bio-mimicry approach to provide guidance in moral strategy selection could offer the same benefits in performance at individual agent and system design levels. Another approach is to develop design or planning methodologies incorporating reconciliation and emancipation strategies focused on identifying the commonalities and differences in and across Belief Systems. This approach has been used with great success by one of the authors in a local primary school in developing a curriculum for infant and primary school children on Comparative Belief Studies. The overt development and evolution of a BSM for an organization can be used to build awareness among agents of the role, nature and criticality of their beliefs and strategies in their decision-making, and of subsequent possible consequences. This approach could be a more effective and acceptable method of education, given the ownership and dynamic nature inherently reflected in the model, than traditional linear, static methods, such as codes of conduct. Finally, the model could be used as a blueprint for the development of adaptive techniques for
handling messy problems, assisting in teaching about thinking through a common language and structure, and enabling insights into thinking about complex issues, through encouraging a planning process that is broader and deeper than those usually proposed in existing ethical frameworks (Maner, 2002). The BSM goes beyond the limitations of the approaches toward ethical decision-making typically taken in other disciplines, such as:

• Ethical Theory, with its examination of the elements of ethics systems in isolation and reduction rather than in synthesis within the system and with the hierarchy of systems coupled to it, leading to difficulties in encompassing aspects such as moral strategies in any unified theory (Rachels, 1998).
• Cognitive Science (Goldstein, Hogarth, Arkes, Lopes & Baron, 1997), with its consideration of the agent and the human system without regard to the coupling between the human system and the systems that control and are influenced by it.
• Political Science investigations of values science (Hocking, 2003), and even "crucial decisions" (Janis, 1989), with their focus on the outcomes or the various purposes of the human systems without sufficient consideration of attributes and moral strategies.
In fact, all of these disciplines investigate certain elements of the Ethics System, but in isolation from the nature of the other elements and from the relationships between them and with the systems' hierarchy. Each of these approaches uses a step-by-step procedure for resolving problems instead of considering ethics as dynamic, complex, non-predictable and evolutionary; that is, as a system. Not recognizing the systemic nature of ethics is a risk that may lead to a sub-optimal solution, and any subsequent extrapolation will not assist the agent in resolving critical ethical dilemmas.
Using the BSM Model for Crucial Ethical Decision Making

Future research can use the model to create simulation technology to assist in planning for the development of belief systems or in the resolution of crucial ideological conflicts, such as in Counter Terrorism, in a justifiable and sensible way. The use of this model creates opportunities to develop communication among conflicting parties by identifying where commonalities exist that can defuse conflict. Creating a language and symbology for understanding and communicating belief that is not immersed in the belief itself can remove the inherent misunderstandings caused by underlying value or moral strategy conflict. It creates options for communication rather than reducing them through barriers of belief. A key application is in the resolution of ideologically based Terrorism and Imperialism. Both of these strategies are underpinned by Belief Systems that are seemingly incapable of negotiating conflict resolution through social language when confronted with seemingly opposing Belief Systems. Using the BSM, we can model the historical processes and evolve the system to merge interests toward a common objective without resorting to uncontained violence. We can also use the model to develop an understanding of the key beliefs and their icons as a way to contain the contagion of those belief networks while negotiation continues. It can also help us identify which icons of our own beliefs are vulnerable to threat. For example, if terrorism is about the disruption, rather than the elimination, of society, then to a terrorist using the BSM the minor races run on the weekend before a major horse race would be of higher value as a target, because the loss of the horses or their jockeys would stop the major race. Under other models of capability or infrastructure threat analysis, these races would not have as high a value as the major race. Attacking them would offer a much higher chance of generating social outrage
and disruption than the unlikely prospect of disrupting the major race itself.
Applications of the Model

The BSM can help manage the risk of terrorism by suggesting the path through a planning framework that can be used by those responsible for crucial decision-making, and by indicating the parameters that could be used in a simulation language to explore the possible effects of their decisions. These applications are described in the following pages.

The POWER General Planning Framework

We have designed the General Planning Framework shown in Figure 2 to cover all of the different approaches to decision making needed in CT, from strategic plans to tactical resource allocation. This framework groups the basic components of planning under the acronym POWER: P(urpose), O(ptions), W(hich option), E(xecution) and R(esources). Although described in linear terms below, this process is dynamic (Corner, Buchanan & Henig, 2002), with many feedback loops.

Figure 2. The general planning framework (Purpose: pressures, points of view, values, measures, cost; Options: recognition, creation, alternatives/options; Which option: evaluation, choice; Execution and Resources: approval, action plan, audit plan; with "plan the plan" and reaction feedback links)

Plan the Plan

The "Plan the Plan" sub-system can be used to choose the pathways through the process and the techniques used at each step, balancing effort with the necessary precision (Payne et al., 1993), allowing for the double-loop learning (Argyris, 1988) about planning that is necessary for reducing the repetition of disastrous thinking (McLucas, 2003), and adjusting for demands upon the planning process (e.g., Gigerenzer, 2001) or the skills of the planners. It is a recursive planning process in its own right. This sub-system links to the planning process through:
• Drives: The reasons for taking planning action
• Plan: The implementation of the chosen planning approach, such as the Kepner-Tregoe practice (Kepner-Tregoe, 2003) or the Military Appreciation Process (Australian Defense Force, 1999)
• Reaction: The feedback about the "success" of the planning, including comments from stakeholders concerning the results and approach taken in the planning process
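Before turning to each phase in detail, the POWER structure can be skeletonized in code. The representation below is our own paraphrase of the framework, with invented names and types; it is not an implementation from the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class Purpose:
    pressures: list = field(default_factory=list)       # external/internal drivers
    points_of_view: list = field(default_factory=list)  # stakeholders and their views
    values: list = field(default_factory=list)          # desired performance
    measures: dict = field(default_factory=dict)        # value -> precise measures

@dataclass
class PowerPlan:
    purpose: Purpose                                     # P
    options: list = field(default_factory=list)          # O: recognized or created
    which: object = None                                 # W: the evaluated choice
    execution: dict = field(default_factory=dict)        # E: approval, action, audit plans
    resources: dict = field(default_factory=dict)        # R: assets committed
```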
Purpose

The purpose is the set of values that form the objective(s) of a human system (individual, group, organization, society or nation), giving direction for action and the budgetary or procedural constraints upon the choice of options for action. It is derived from a study of pressures that are of concern to key points of view.

Pressures sub-system: Pressures are external circumstances or internal conditions that can have an effect upon the actions of a human system. The analysis of these pressures should find the "risk drivers" of most concern to influential stakeholders. This sub-system is essential to any longer-term decision making. It is advocated in many different planning processes, such as SWOT Analysis, which is so often used in strategic planning (Martino, 1972), with some reservations these days (Haberberg, 2000); Situational Awareness, which is so important in models of naturalistic decision making (Klein, 2000; Klein, Orasanu, Calderwood & Zsambok, 1993) and its prescriptive derivatives (Australian Defense Force, 1999); "technological forecasting" (Martino, 1972) or "future studies" (Foresight International, 2003; World Futures Studies Federation, 2003); scenario-based planning, where stakeholders are provoked into considering possible futures (Makridakis, 1990; Schwartz, 1991); and, noting the contrary points raised by Mintzberg (2000),
strategic planning (Boar, 2001; Cusumano & Markides, 2001). These pressures entwine with each other, forming potential futures, uncertain in effect. Their interaction is dynamic, even chaotic. Accordingly, the likelihood of pressures, and the extent of their influence upon the system, can be hard to predict. However, the analysis of the pressures can help form the values of the points of view. Points of View sub-system: Stakeholders (those people or organizations who may affect, be affected by, or perceive themselves to be affected by, a decision or activity) react to pressures, seeking capabilities from the systems that will help to take up the opportunities or avoid the threats. The nature of this reaction can be complicated by stakeholders’ differing points of view and the interaction of influences of stakeholders upon each other. Stakeholder analysis has been widely used for many years in strategic planning (Freeman, 1984), the evaluation of social programs (International Institute for Environment and Development, 2003), cost-benefit analysis (Government of Alberta, 2003), the use of metagame analysis or drama theory (Howard, 2001) and systems analysis (Yourdon, 2001). The consideration of the different perspectives of a problem — technical, organizational and personal — also has a long history (Linstone, 1984). The types of stakeholders are of special concern in Soft Systems Methodology (e.g., Checkland, 1981). Differences in the views of stakeholders lead to considerable political dispute, resulting in the delay or disruption of systems planning (Carver, 2002; Friend, 2001; Hart, 1997, 1999; Warne, 1997) or defensive “groupthink” (Janis, 1989). There can be feedback from points of view to pressures. The desires of senior officers, outside the control of the planner, can place demands upon the plan that form pressures.
Values sub-system: Values are statements of performance desired by a point of view, as "… preferences for states or things" (Keeney, 1988, p. 465). Values are also known as utilities (von Neumann & Morgenstern, 1947; Johnson & Huber, 1977), dimensions (Edwards, 1977), governing variables (Argyris, 1988), objectives (Keeney & Raiffa, 1993), worth (Kneppreth, Gustafson, Leifer & Johnson, 1974), "defining characteristics" (Hart, 1999), attributes (Payne & Bettman, 2001), even "emergent properties" (O'Connor & Wong, 2002) or requirements (any systems analysis text). At the strategic level, the values can be known as "performance indicators"; at the more tactical level, the values are "benefits" or "constraints"; and at the operational level, the values could be represented as selection or design "criteria." The intent of a required value is to describe the ideal — what is wanted from the best system. Values form the objectives of the system and the constraints, determining how the system will be planned. There can be one or many values. Different planning approaches have been established for the three different outcomes of values: achieving effectiveness of the plan, meeting wishes of those influenced by the plan and being efficient in the use of resources. Classic decision analysis (von Winterfeldt & Edwards, 1986) considers effectiveness and efficiency. On the other hand, the ethical decision-making frameworks (see Maner, 2002 for a comprehensive list) consider the "wishes" or "acceptability" values such as psychological needs (challenge, responsibility or esteem), physical needs (safety, security or comfort) or compliance with formal rules or informal practices. The values can be organized into means-ends trees (Martino, 1972), objectives trees (Keeney & Raiffa, 1993) or even causal loops (McLucas, 2003) to derive detailed sub-values and to form a full set of values.

Measures sub-system: Measures are descriptions of the required performance given in
such detail that they can be assessed precisely, accurately and reliably for specified conditions. Corner, Buchanan and Henig (2002) call them "attributes"; others use "criteria." They are more precise descriptions of values. There can be one or many measures for each value. Cost can be regarded as one, albeit the most important, type of measure. It can be determined by careful cost accounting techniques (Rayburn, 1996) using detailed price models, or by a general judgment. Cost can form the basis for decision making by itself or in trade-offs in various ways (see the Choice section).

Options

Options are the people, products, processes or practices that could be chosen for incorporation into the planned action because they meet the values. They can be logical solutions; discrete items to be selected, as in tenders or houses for sale; designs, involving combinations of alternatives for components of the system, as in house design; or a mix, as in the allocation of resources, such as combinations of trucks with different carrying capacities. There can be one or many options. The existing state, or no action, is an option. Options can be obtained by recognizing possible solutions based on experience or by using creative techniques.

Recognition sub-system: Recognition is the store of experiences or lessons from previous plans that can suggest alternatives for future plans. This sub-system is fundamental to recognition-based decision-making (Klein, 2000; Klein et al., 1993). It can form the basis for the "theory in use" that underlies usual planning (Argyris, 1988). It can be the basis for the patterns used by experts in their usually diagnostic decision making (Dreyfus & Dreyfus, 1986; Richman, Gobet, Staszewski & Simon, 1996). Recognition can involve insight — the sudden realization of either the form of a solution to a problem or the nature of the problem itself
— or intuition, which is the discovery of an option that appears to be satisfactory from unstructured thoughts based upon experience or expertise, rather than any deliberate use of a method for generating options (Goldberg, 1983; Klein, 2002; Rowan, 1986).

Creativity sub-system: Creativity is the process of generating ideas, be they new products, scientific hypotheses or artistic representations. It can draw upon previous experience, captured in the Recognition process, or use tools to produce novel ideas. Many creativity techniques can be used to design options, captured by authors such as de Bono (1992), Creativity Unleashed (2003), Michalko (1991), Parnes (International Centre for Studies in Creativity, 2003), Rickards (1990) and van Gundy (1992).

Alternatives/options sub-system: Alternatives are different ways of providing components of processes or products. They can be combined to form different mix or design options. Although the literature uses alternative and option interchangeably in effect (e.g., Friend, 2001), some planning approaches explicitly consider the combination of alternatives to form options (Keeney, 1992), especially morphological analysis (Zwicky, 1969). The act of designing options can trigger ideas about values, thus forming a loop back up the planning process.

Which Option

The core of planning is determining which of the options is the best to put into practice. This activity involves evaluating the options against one or more of the measures or values and then choosing the one that best performs over this set of values.

Evaluation sub-system: Evaluation is the hub of decision making or decision analysis. It involves judgments or estimates of the performance of an
option on a measure. Evaluation can proceed from the measures to the options or from the options to the measures. The literature in this area is extremely large, starting in effect with von Neumann and Morgenstern (1947) and moving through thousands of books and journals, such as Decision Support Systems, Decision Sciences, Information and Decision Technologies and many econometric journals. Many ways exist for gathering judgments, evaluations or assessments. They can be made by individuals or in groups (with the whole Group Decision Support System movement at work here). In the normative literature, evaluations can be complex trade-offs (Keeney & Raiffa, 1993; von Winterfeldt & Edwards, 1986) or draw upon the very many optimization routines from Management Science, including evolutionary algorithms (Sarker, Abbass & Newton, 2002). In the descriptive literature, the evaluations can be simple "yes/no" assessments of the satisfaction of the judge with an option on a single measure, such as the "Take the Best" heuristic (Gigerenzer, 2001). The actions can be intended to be rational and algorithmic, but are undertaken by people with cognitive limits seeking to find a satisfactory answer (Simon, 1955, 1956, 1988).

Choice sub-system: Choice involves combining evaluations over more than one measure into an overall view of the performance of the option(s), balancing evaluation on one measure with evaluation on another. It can involve two steps: assigning a weight or priority to the value(s) already listed in the Values and/or Measures sub-system to enable trade-off or compensatory balancing, then combining evaluations according to this priority. Many variations exist for the first step of this sub-system. The "traditional" decision analysis approach is the gathering of utility functions from decision makers (e.g., Keeney & Raiffa, 1993) or directly through paired comparisons (Saaty & Vargas, 1982)
or judgments of weight using rating scales, as in SMART (Edwards, 1977). Similarly, the second step also has many variations. The most common is a form of weighted scores. Price can be treated as a value and scored, or it can be treated separately and used as a divisor in a value-for-money ratio. Alternatively, shortfalls in performance can be costed and added to the price (Lewis, 1999). In the descriptive approaches, such as those described by Payne and Bettman (2001), the choice processes include simple screening or filtering. There is little, if any, comparison of a set of options against a set of values. One option and/or one value seems to be the usual model. The whole emphasis of the heuristic/descriptive movement is away from "optimization" and towards simple steps capable of being carried out by experts (albeit ill-trained in decision-making) working under stress.

Execution and Resources

The Execution and Resources phase involves describing the recommended tasks and associated resources for putting the chosen option into use. This phase involves gaining approvals, action planning and audit planning.

Approval sub-system: In larger-scale planning, it can be necessary that approval is given before resources are committed to put a chosen option into use, but this sub-system might not be needed for plans at the individual or small-group level.

Action planning sub-system: Action planning involves the preparation of detailed sequences of tasks, with associated responsibilities, timings and assets. It is the precursor to project management.

Audit planning sub-system: Audit planning involves establishing the measures that will be used to assess the success of the plan and the mechanisms for making these measurements.
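At its simplest, the weighted-score variant of the Evaluation and Choice sub-systems described above reduces to a compensatory sum. The sketch below is a generic illustration with invented weights and scores, not a prescription from the chapter; it assumes cost has already been converted to a benefit-style score (cheaper scores higher).

```python
def weighted_score(evaluations, weights):
    """Combine per-measure evaluations (0-10) into one compensatory score."""
    return sum(evaluations[m] * w for m, w in weights.items())

weights = {"effectiveness": 0.5, "acceptability": 0.3, "cost": 0.2}
options = {
    "option_a": {"effectiveness": 8, "acceptability": 5, "cost": 6},
    "option_b": {"effectiveness": 6, "acceptability": 9, "cost": 7},
}
best = max(options, key=lambda o: weighted_score(options[o], weights))
print(best)  # option_b (7.1 vs 6.7): the acceptability weighting decides the trade-off
```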
Use of the POWER Planning Framework for Crucial Ethical Decision Making

The POWER Planning Framework supports the systems thinking of ethical decision makers. It brings together disparate theories of decision making into a coherent framework to extend their use. It can serve as the blueprint for the use of adaptive techniques for handling messy problems; assist in teaching about thinking through a common language and structure; enable insights into thinking about complex issues through encouraging a planning process that is broader and deeper than those usually proposed in the existing ethical frameworks, such as those listed by Maner (2002); and lead to the design of more complete techniques. Most ethical decision-making processes suggest only one pathway through all of the sub-systems of this framework. They usually involve the use of a small, particular set of acceptability values, derived from some concern about some stakeholders and, perhaps, the pressures they are under. There is little time or inclination for determining measures. The consideration of options could be based upon the values at a broad level. Such a process concentrates upon the evaluation and choice of a small set of options, drawn mainly from experience rather than from a formal or extensive creativity process. As an example of the use of the Framework to show how techniques can be developed for ethical decision-making, let us consider social dilemmas. Social dilemmas are present in "collective action" (Felkins, 2003) involving cooperation between people. The classic example is the Prisoners' Dilemma. Such examples can illustrate ethical issues at a larger scale, as in resource allocation of the sort represented in the Tragedy of the Commons. In these situations, the issue is determining an
option that pleases everybody, perhaps in some equitable sharing of resources. This sharing requires knowledge of the consequences of an option for each of the stakeholders. Decision analysts regard the cooperative strategy as problematic because they assume the decision maker — also a stakeholder — takes a selfish view. There is no dilemma for an ethical decision maker who understands the need for considering all stakeholders systematically. It should be no surprise, for those who understand POWER, that studies of the evolution of cooperation find that a successful tactic for a series of linked games is "Tit for Tat", starting with an assumption that the other prisoner will cooperate. POWER can be used to extend ethical decision-making. The BSM suggests that the pathway through POWER should be one that emphasizes the analysis of the points of view and the values of concern to them. The entwining between pressures, points of view, and values that is represented in the BSM can be taken into account by the Risk-Remedy technique (Lewis, 1999), which is one of the few approaches to decision-making that makes full use of POWER.
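The "Tit for Tat" result mentioned above is easy to reproduce in miniature. The following sketch uses the conventional Prisoner's Dilemma payoffs, which are our choice for illustration rather than values given in the chapter.

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b, totals = [], [], [0, 0]
    for _ in range(rounds):
        ma, mb = strategy_a(moves_b), strategy_b(moves_a)
        pa, pb = PAYOFF[(ma, mb)]
        totals[0] += pa
        totals[1] += pb
        moves_a.append(ma)
        moves_b.append(mb)
    return totals

print(play(tit_for_tat, always_defect))  # [9, 14]: exploited once, then mutual defection
```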
AcesCT Modeling Tool

The Agent-based Crucial Ethical Simulations for Counter Terrorism (AcesCT) modeling tool has been developed by a team based in the School of Information Technology and Electrical Engineering at University College, University of New South Wales at the Australian Defense Force Academy in Canberra. The purpose of this team is to develop multi-agent-based simulation software based on the BSM approach, explicitly to explore the influence and behavior of Belief Systems in crucial ethical decision making for CT. The Tool will use Complexity Science to aid decision makers in planning for the prevention and containment of Terrorist Belief Systems and behaviors. It aims to establish whether an improved appreciation of the role, criticality, location and
nature of Terrorist Belief Systems can lead to sustainable plans for CT. AcesCT, ultimately, will be used to build a wide social synthesis of spheres of influence in terms of the coupling between agents, strategies, values and conjoined “external” systems over time (historically and in the future) and across social strata from the individual to the global. By manipulating parameters in the Tool, we will be able to see possible effects upon Belief Systems and their conflicts. AcesCT will become one of the tools within the POWER planning framework that can be used by those responsible for crucial decision making for CT. It will be of help in analyzing the pressures upon the society under Terrorism threats and in identifying agents (“points of view”) concerned about these pressures. It will be able to “war game” options as part of their evaluation. It will be able to examine the effects of the plans, as part of the audit. The AcesCT tool will also help in the training of staff in ethical decision-making. It will be able to provide exemplars for case studies and give trainees the opportunity to try out plans to see what works — or what does not.
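AcesCT itself is not publicly documented here, so the following is only a toy sketch of the kind of agent-based loop the chapter describes: agents repeatedly adjust the strength of a belief toward the mean belief of the agents who influence them. Every name and parameter is an assumption.

```python
def step(beliefs, influence, rate=0.1):
    """One simulation tick: each agent drifts toward the mean belief of its influencers."""
    updated = {}
    for agent, strength in beliefs.items():
        peers = influence.get(agent, [])
        if peers:
            target = sum(beliefs[p] for p in peers) / len(peers)
            strength += rate * (target - strength)
        updated[agent] = strength
    return updated

# Belief strength in some value on a 0..1 scale; edges read "is influenced by".
beliefs = {"P1": 0.9, "P7": 0.2, "G3": 0.7}
influence = {"P7": ["P1"], "G3": ["P7"]}

for _ in range(50):
    beliefs = step(beliefs, influence)
print({k: round(v, 2) for k, v in beliefs.items()})  # P7 and G3 drawn toward P1's belief
```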
CONCLUSION

Terrorism is a battlefield of belief. If we can understand the drivers for those taking part in this battle, then we can develop effective strategies for winning it. The development of the Belief System Model can provide insight and convincing arguments for the resolution of ideological conflict. Using multi-agent, complex systems thinking and the subsequent application of the BSM to illuminate the nature of belief will assist in the appreciation of CT in a way that existing decision-making models do not achieve easily. The BSM — through the POWER planning framework supported by the AcesCT Tool — can
provide a useful part of the CT toolkit needed to help free society from ideologically driven violence.
REFERENCES

Ackoff, R. (1979). The future of Operations Research is past. Journal of the Operational Research Society, 30, 93-104.

Allio, R. (2000). Russell L. Ackoff, iconoclastic management authority, advocates a "systemic" approach to innovation. Strategy and Leadership, 31(3).

Andrews, C., & Lewis, E.J.E. (2004). A systems approach to critical ethical decision making. 3rd International Conference on Systems Thinking in Management.

Argyris, C. (1988). Problems in producing usable knowledge for implementing liberating alternatives. In Bell, Raiffa & Tversky (Eds.), Decision making: Descriptive, normative and prescriptive interactions (pp. 540-561).

AS/NZS 4360. (2004). AS/NZS 4360:2004 Risk management. Standards Australia, Sydney, Australia. Retrieved from www.standards.com.au

Australian Defence Force. (1999). Joint publication 9 – Joint planning. Canberra: Commonwealth of Australia.

Axelrod, R. (1997). The complexity of cooperation. NJ: Princeton University Press.

Axelrod, R., & Cohen, M.D. (2000). Harnessing complexity. US: Free Press.

Bell, D., Raiffa, H., & Tversky, A. (Eds.). (1988). Decision making: Descriptive, normative, and prescriptive interactions. Cambridge: Cambridge University Press.

Bloom, H. (1999). The kidnap of mass mind – Fundamentalism, Spartanism and the games
subcultures play. History of the Global Brain XVIII, online forum. Heise Zeitschriften Verlag.

Boar, B. (2001). Art of strategic planning for information technology (second edition). New York: Wiley.

Carver, L. (2002). MESS Model for more effective management of information systems under conditions of centralisation/decentralisation. Unpublished PhD thesis. Canberra: UNSW, Australian Defence Force Academy.

Corner, J., Buchanan, J., & Henig, M. (2002). A dynamic model for structuring decision problems. Retrieved January 3, 2004, from www.mngt.wakato.ac.nz/depts/mnss/JIM/ResearchArticles.htm

Creativity Unleashed Ltd. (2003). Home page. Retrieved December 30, 2003, from www.cul.co.uk

de Bono, E. (1992). Serious creativity. London: HarperCollins.

Di Paolo, E.A. (2001). Artificial life and historical processes. In J. Kelemen & P. Sosik (Eds.), Advances in Artificial Life, Proceedings of ECAL 2001 (pp. 649-658). Berlin Heidelberg: Springer-Verlag.

Dreyfus, H., & Dreyfus, S. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. Oxford: Basil Blackwell.

Edwards, W. (1977). How to use multiattribute utility measurement for social decisionmaking. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7(5), 326-340.

Ericsson, K. (1996). The road to excellence: The acquisition of expert performance in the arts and sciences, sports and games. Mahwah: Erlbaum.

Felkins, L. (2003). Social dilemmas. Retrieved January 3, 2004, from http://perspicuity.net/sd/sd.html
Foresight International. (2003). Home page. Retrieved January 3, 2004, from www.foresightinternational.com.au
Freeman, R. (1984). Strategic management: A stakeholder approach. Boston: Pitman.

Friend, J. (2001). The strategic choice approach. In Rosenhead and Mingers, pp. 121-158.

Gigerenzer, G., & Selten, R. (Eds.). (2001). Bounded rationality: The adaptive toolbox. Proceedings of the 84th Dahlem Workshop on Bounded Rationality. Cambridge, MA: MIT Press.

Goldberg, P. (1983). The intuitive edge. Wellingborough: Aquarian Press (Thorsons).

Goldstein, W., Hogarth, R., Arkes, H., Lopes, L., & Baron, J. (Eds.). (1997). Research on judgment and decision making: Currents, connections, and controversies. Cambridge: Cambridge University Press.

Government of Alberta. (2003). Stakeholder analysis. Retrieved December 30, 2003, from www3.gov.ab.ca/cio/costbenefit/stake_tsk.htm

Haberberg, A. (2000). Swatting SWOT. Strategy Magazine Archives, Strategic Planning Society. Retrieved December 30, 2003, from www.sps.org.uk/d8.htm

Hart, D. (1997). Modeling the political aspects of information systems projects using "information wards." Failures and Lessons Learned in Information Technology Management, 1(1), 49-56.

Hart, D. (1999). Ownership, organizational politics and information systems. Unpublished PhD thesis. Canberra: UNSW, Australian Defence Force Academy.

Hocking, J. (2003). Counter-terrorism and the criminalisation of politics: Australia's new security powers of detention, proscription and control. Australian Journal of Politics and History, 49(3), 355-371.

Holliss, E. (2002). The application of Threat Assessment Models against small non-state groups in an information operations environment. Thesis. Canberra: School of Computer Science, University of New South Wales at the Australian Defence Force Academy.
Howard, N. (2001). The manager as politician and general: The metagame approach to analyzing cooperation and conflict. In Rosenhead and Mingers, pp. 239-261.

International Centre for Studies in Creativity. (2003). Home page. Retrieved from www.buffalostate.edu/centers/creativity/

International Institute for Environment and Development. (2003). Stakeholder power analysis. Retrieved from www.iied.org/forestry/tools/stakeholder.html

Janis, I. (1989). Crucial decisions. Free Press.

Johnson, E., & Huber, G. (1977). The technology of utility assessment. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7(5), 311-325.

Keeney, R. (1988). Value-focused thinking and the study of values. Ch 21 in Bell et al., pp. 465-494.

Keeney, R. (1992). Value-focused thinking: A path to creative decisionmaking. Cambridge: Harvard University Press.

Keeney, R., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and tradeoffs. Cambridge: Cambridge University Press.

Kepner-Tregoe. (2003). Home page. Retrieved January 3, 2004, from www.kepner-tregoe.com

Klein, G. (1989). Strategies for decision making. Military Review, May, 56-64.
Klein, G. (2000). Sources of power (sixth printing). Boston: MIT Press.

Klein, G. (2002). Intuition at work. New York: Doubleday.

Klein, G., Orasanu, J., Calderwood, R., & Zsambok, C. (1993). Decision making in action: Models and methods. Norwood: Ablex.

Kneppreth, N., Gustafson, D., Leifer, R., & Johnson, E. (1974). Techniques for the assessment of worth (TR 254, AD 784629). Arlington: U.S. Army Research Institute.

Krebs, V. (2004). Social network analysis of the 9-11 terrorist network. Retrieved July 16, 2004, from Orgnet.com

Lewis, E. (1999). Using the risk-remedy method to evaluate outsourcing tenders. Journal of Information Technology, 14(2), 203-211.

Linstone, H. (1984). Multiple perspectives for decision making: Bridging the gap between analysis and action. New York: North-Holland.

Liu, W., & Williams, M. (1999). A framework for multi-agent belief revision, part I: The role of ontology. In N. Foo (Ed.), AI'99, LNAI 1747 (pp. 168-179). Berlin Heidelberg: Springer Verlag.

Makridakis, S. (1990). Forecasting, planning, and strategy for the 21st century. London: Free Press.

Maner, W. (2002). Procedural ethics. Retrieved January 3, 2004, from http://csweb.cs.bgsu.edu/maner/heuristics/toc.htm

Martino, J. (1972). Technological forecasting for decision-making. Elsevier.

McLucas, A. (2003). Decision making: Risk management, systems thinking and situation awareness. Canberra: Argos.

Michalko, M. (1991). Thinkertoys. Berkeley: Ten Speed.

Mintzberg, H. (2000). The rise and fall of strategic planning. London: Prentice-Hall (Pearson).

O'Connor, T., & Wong, H. (2002). Emergent properties. The Stanford Encyclopedia of Philosophy (Winter). Retrieved December 30, 2003, from http://plato.stanford.edu/archives/win2002/entries/properties-emergent/

Payne, J., & Bettman, J. (2001). Preferential choice and adaptive strategy use. Ch 9 in Gigerenzer and Selten, 124-145.

Rachels, J. (Ed.). (1998). Introduction in ethical theory. New York: Oxford University Press.

Rayburn, L.G. (1996). Cost accounting (sixth edition). Chicago: Irwin.

Reddy Pynchon, M., & Borum, R. (1999). Assessing threats of targeted group violence: Contributions from social psychology. Journal of Behavioural Sciences and the Law, 17, 339-355.

Richman, H., Gobet, F., Staszewski, J., & Simon, H. (1996). Perceptual and memory processes in the acquisition of expert performance. In Ericsson, pp. 167-187.

Rickards, T. (1990). Creativity and problem-solving at work. Aldershot: Gower.

Rosenhead, J., & Mingers, J. (Eds.). (2001). Rational analysis for a problematic world revisited. Chichester: Wiley.

Rowan, R. (1986). The intuitive manager. Aldershot: Wildwood House.

Saaty, T., & Vargas, L. (1982). The logic of priorities: Applications in business, energy, health, and transportation. Boston: Kluwer-Nijhoff.

Sarker, R., Abbass, H., & Newton, C. (2002). Heuristics and optimization for knowledge discovery. Hershey: Idea Group.

Schwartz, P. (1991). The art of the long view: Planning for the future in an uncertain world. Doubleday.
Senglaub, M., Harris, D., & Raybourn, E. (2001). Foundations for reasoning in cognition-based computational representations of human decision making. SANDIA Report, Sandia National Laboratories. New Mexico.

Simon, H.A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99-118.

Simon, H.A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129-138.

Simon, H.A. (1988). Rationality as process and as product of thought. Ch 3 in Bell, Raiffa, and Tversky, pp. 58-77.

Smart, J.K. (2003). Real problem solving. Harlow: Prentice Hall.

Smith, R. (2002). Counter terrorism simulation: A new breed of federation. Simulation Interoperability Workshop Spring.

Van Gigch, J. (1974). Applied general systems theory. NY: Harper and Row.

van Gundy, A. (1992). Idea power. NY: AMACOM.

von Neumann, J., & Morgenstern, O. (1947). Theory of games and economic behaviour (second edition). Princeton University Press.

von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. Cambridge: Cambridge University Press.

Warne, L. (1997). Organizational politics and project failure: A case study of a large public sector project. Failures and Lessons Learned in Information Technology Management, 1(1), 57-65.

Wikipedia. (2004). Definition of terrorism. Retrieved July 26, 2004, from http://en.wikipedia.org/wiki/Definition_of_terrorism

World Futures Studies Federation. (2003). Home page. Retrieved December 30, 2003, from www.wfsf.org

Yourdon, E. (2001). Just enough structured analysis. Retrieved December 30, 2003, from www.yourdon.com/books/msa2e/

Zwicky, F. (1969). Discovery, invention, research – through the morphological approach. Toronto: Macmillan.
This work was previously published in Applications of Information Systems to Homeland Security and Defense, edited by H. A. Abbass & D. Essam, pp. 221-249, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LIII
Legal and Ethical Implications of Employee Location Monitoring

Gundars Kaupins, Boise State University, USA
Robert Minch, Boise State University, USA
ABSTRACT

This article summarizes the legal and ethical implications associated with employee location monitoring. It states that few international laws and no American laws directly address location monitoring. International privacy laws and directives, the Electronic Communications Privacy Act, the USA Patriot Act, and other laws and directives involving Internet and e-mail monitoring provide the pattern for future location monitoring laws. It also states that ethical considerations such as productivity, security, goodwill, privacy, accuracy, and discipline fairness will affect future laws. Furthermore, the authors hope that an understanding of existing laws and of the ethical considerations associated with electronic monitoring can lead to practical and reasonable location monitoring policies. Employer and employee interests must be balanced. Location monitoring policies should include a legitimate business purpose, ensure that employees are notified that they are being monitored, provide for adequate storage and dissemination of monitoring data, and provide for consistent evaluation of monitoring effectiveness.
INTRODUCTION

Emerging technologies are making it possible for an organization to monitor the location of its employees in real time virtually anywhere. These
technologies range in scale from the global positioning system (GPS), able to determine location outdoors worldwide, to sensor networks, able to determine location inside buildings (Minch, 2004). Individuals are generally locatable because
they can be associated with location-aware (also called location-enabled) devices, such as riding in a GPS-equipped vehicle or carrying a cell phone with built-in location technology. Rationales for monitoring employee locations and movements are many. Collection and delivery businesses have used the technologies to find the nearest driver to customer pickup locations in real time, thus improving customer service (Salheim, 2005). Other companies have monitored vehicle speeds in efforts to control fuel costs and enhance safety (Applegate, 2001). Benefits also include more efficient supply chain management and better tracking of assets (Chen, 2004). In addition to the relatively raw use of location data, location information may be processed and combined with other information to allow a great number of inferences that concern much more than mere location itself. Noting locations at multiple points in time may allow companies to infer the amount of time employees are spending at lunch or on breaks, and their route data may indicate whether they are taking the most efficient routes or combining personal travel with business, for example. Comparing location records for two employees can be used to infer whether or not they had the opportunity to exchange company property. We will address many more examples of these inferencing issues later. Location-aware devices are becoming pervasive because of lower costs, government mandates, and marketplace factors. The cost to make a device location-aware ranges from nothing in devices already inherently locatable to tens or at most hundreds of dollars when GPS or other location technology must be added. To allow better response in emergencies, agencies such as the U.S. Federal Communications Commission (FCC) are phasing in requirements that cell phones be locatable (FCC, 2005). Businesses and consumers are beginning to demand location-aware technologies in the marketplace—for example, it is estimated that up to 80 percent of new vehicles will come equipped with location-aware technology by 2006
(Teicher, 2003). Estimates of the size of the global location-based services market are 20 billion U.S. dollars by 2005-2006, with 31 percent of this market in Europe and 22 percent in the U.S. (Mobileinfo.com, 2002). There is every reason to believe that location-aware technologies and their use for employee monitoring will increase dramatically in the future. Major telecommunications companies such as Sprint (Salheim, 2005) and AT&T Wireless (Chen, 2004) have entered the marketplace, and the largest software vendors such as Microsoft have developed "location-based service" tool sets (Microsoft, 2005). These services and software tools are already being used to develop sophisticated applications that monitor employee locations (Armonaitis, 2004), with many more sure to follow. The low cost and pervasiveness of location-aware technologies not only mean that employers can easily provide them to their employees, but also that the workers may already be locatable through their own personal (i.e., not work-related) devices, including phones, PDAs, laptop computers, automobiles, and so forth. Even if a device is not designed to be location-aware, it may be locatable. Wireless local area network (WLAN) technologies using fixed access points with a range of only 50 to 100 meters make all users of the WLAN locatable by virtue of their association with the access point, and shorter-range technologies such as Bluetooth allow positioning within approximately 10 meters (Haartsen et al., 1998). Thus it is reasonable to assume that, in one form or another, the technology necessary to locate employees is virtually certain to be available in the future and nearly as certain to be used by business.
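As a small illustration of the inferencing concern raised earlier (for example, whether two employees had the opportunity to exchange company property), the sketch below flags timestamps at which two location traces fall within a chosen radius. The data, the threshold, and the flat local-grid distance approximation are all our own assumptions.

```python
from math import hypot

def co_located(trace_a, trace_b, radius_m=25.0):
    """Timestamps at which both traces are within radius_m metres.

    Traces map a shared timestamp to an (x, y) position in a local metric grid.
    """
    hits = []
    for t, (xa, ya) in sorted(trace_a.items()):
        if t in trace_b:
            xb, yb = trace_b[t]
            if hypot(xa - xb, ya - yb) <= radius_m:
                hits.append(t)
    return hits

# Hypothetical one-minute samples (seconds since shift start).
alice = {900: (0, 0), 960: (40, 10), 1020: (80, 20)}
bob = {900: (500, 500), 960: (55, 5), 1020: (300, 0)}
print(co_located(alice, bob))  # [960]: the only sample within 25 m
```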
PURPOSE

We will examine several important legal and ethical implications of employee location monitoring,
and discuss those that both encourage and limit such monitoring. The technologies and issues are global in scope, and we will therefore explore international dimensions. However, most of the related legislation that we discuss is focused on the United States and the European Union. The unique contribution of this paper is that it combines the legal and ethical implications of location monitoring in order to provide management recommendations on location monitoring policies. Given the fluid nature of location monitoring technologies and laws, we also provide recommendations for future research on location monitoring.
EXISTING LEGISLATION

Location monitoring rights and restrictions are slowly being established. An attempt to codify location monitoring rights by the U.S. Congress, the Location Privacy Protection Act of 2001 (U.S. Congress, 2004), was proposed but not passed into law. Through a European Union (EU) directive, it is illegal for anyone other than police or emergency services to locate people from their satellite-linked cell phones (Society for Human Resource Management Research Department, 2004). The Norwegian Personal Data Act (Norwegian Parliament, 2000) requires consent for processing sensitive data said to include location data (Snekkenes, 2001), although the English translation of the act does not contain the word "location." The Finnish Personal Information Law and Law about Privacy and Security of Telecommunications are said to be applicable to location privacy, although there are no laws in Finland that explicitly concern location information (Levijoki, 2004). One way to analyze the appropriateness of employee location monitoring is to investigate parallel employer behaviors associated with employer monitoring of the Internet, e-mail, and typical work behavior. Many of the location monitoring laws in the future may be extensions of existing
laws associated with computer, video, and audio monitoring of employees. Sections below will first consider laws that tend to encourage the use of employee location monitoring, followed by discussion of laws that tend to discourage it.
LEGISLATION ENCOURAGING THE USE OF EMPLOYEE LOCATION MONITORING TECHNOLOGIES

Privacy

According to Miller and Weckert (2000) and Weckert (2005), privacy is a power or moral right that people have regarding the possession of information about themselves relative to others. Privacy is personal, and such information should not be inferred by others such as employers. A survey by Privacy International and freedominfo.org found that 57 countries, mostly from Europe and North America, have passed privacy and freedom of information laws. Thirty-seven countries, mostly from Africa and South America, have pending efforts. Though this legislation focuses on making governmental information more available across national and international boundaries, there is considerable attention to defining personal data privacy. Personal data is defined as any information relating to an identifiable individual (Banisar, 2004). The Organization for Economic Cooperation and Development (OECD) developed influential personal privacy rules under the 1980 OECD Privacy Guidelines, the 1985 Declaration on Transborder Data Flows, and the 1998 Ministerial Declaration on the Protection of Privacy on Global Networks. The OECD guidelines encourage the transfer of personal data across countries in order to enhance business and economic relationships (Organization for Economic Cooperation and Development, 2004). The European Union's Directive 95/46/EC, the Australian Privacy Act of 1988, and the Canadian Personal Information Protection and Electronic Documents Act of 2001 embodied many of the principles of the OECD guidelines and provide governments and companies wide discretion in the types of personal data that may be collected. Though the OECD guidelines prohibit the secret collection and use of personal data, Canadian privacy law permits such collection if it is reasonable for investigating a breach of agreement or law, the information is publicly available, a life-threatening emergency occurs, it is collected in the interests of the individual, or it is used for scholarly research (Corry & Nutz, 2003).
Civil Rights

Location monitoring may provide evidence of civil rights violations such as sexual harassment. Based on the U.S. Civil Rights Act of 1964 and interpretations from the Equal Employment Opportunity Commission (EEOC), sexual harassment involves "actions that are sexually directed, are unwanted, and subject the worker to adverse employment conditions or create a hostile work environment" (Mathis & Jackson, 2003, p. 178). According to the EEOC, employers are legally liable for sexual harassment in the workplace if they should have known about it and did nothing. Directives from the EU (Council Directive 76/207/EEC) and discrimination laws from Canada (Employment Equity Act), Australia (Human Rights and Equal Opportunity Commission Act of 1986), and Great Britain (Sex Discrimination Act of 1975) strike a similar tone (Overman, 2003; Equal Opportunities Commission, 2004; Parliament of Australia Parliamentary Library, 2004; Department of Justice Canada, 2004). An employee, for example, could be accused of sexually harassing other employees at a particular business location not associated with his or her normal work location. The employee could be given oral and written warnings about sexual harassment in that location. If another incident occurs and monitoring technology places the employee in the unauthorized location, this may be further evidence of sexual harassment.
Labor Relations

Interpretations of the U.S. National Labor Relations Act (NLRA) from the National Labor Relations Board (NLRB) state that employees may be prevented from distributing literature in working areas at any time. They may not be prevented, however, from making distributions in nonworking areas on nonworking time. Solicitation restrictions may remain in force as long as they are applied without discrimination, meaning that unions should not be singled out as the only group with restricted solicitation (Camardella, 2003). Location monitoring could provide some evidence that employees went throughout the company to distribute union materials at an improper time. Any grievance committee, however, would have to distinguish between coincidence and cause concerning material distribution.
Occupational Safety and Health

The U.S. Occupational Safety and Health Act (OSHA) and the EU Directive 89/391 encourage employers to monitor safety practices in the workplace. Monitoring of employee location could help companies enforce this legislation. For example, a paint room door could be opened and an employee could inhale toxic fumes. He or she could try to quietly leave the room without the company knowing about it. With a location monitoring system, the company could identify the employee and address the safety concern.
Criminal Conduct

Location monitoring is already being used to prosecute employees under criminal law. Four New Jersey police officers pleaded guilty to filing false records after GPS tracking devices were installed on their patrol cars in 2001 and used to provide evidence that the officers were not conducting patrols as they reported (Forelle, 2004). Employee location monitoring records could be subject to subpoena in criminal cases, and could also be used to prove innocence instead of guilt. If a victim accused an employee of assault, for example, the time-stamped location records of the employee could provide exonerating evidence if both parties were never in the same location at the same time.
National and Company Security

Concerning national security, the USA Patriot Act makes it easier for the federal government to gain access to company-held records of location monitoring. The government must merely show that any information requests are related to terrorism (American Civil Liberties Union, 2004). Concerning corporate security, the U.S. Economic Espionage Act of 1996 enables the federal government to prosecute individuals who convert trade secrets for their own or others' benefit with the knowledge or intention to cause injury to the trade secret owner (Chan, 2003). Confidential business information is treated as a property right, and location-based evidence of a company employee meeting with a competitor without authorization might be used as evidence of a crime (Standler, 1997).
LEGISLATION LIMITING EMPLOYEE LOCATION MONITORING

Privacy

The 1980 OECD Guidelines, inspiration for European Union, Canadian, Australian, and other international laws, include principles for privacy associated with personal data collection. Privacy rights also exist under the Fourth Amendment in the U.S., particularly when a person has a subjective expectation of privacy and society accepts that expectation as reasonable. The following OECD principles potentially could be applied to location monitoring:

1. Collection Limitation Principle: Data should be collected by lawful and fair means with the knowledge of the individual.
2. Data Quality Principle: Relevant data should be accurate, complete, and up-to-date.
3. Purpose Specification Principle: The purposes of data collection should be specified.
4. Use Limitation Principle: Data should be disseminated only based on an individual's consent and for legal purposes.
5. Security Safeguards Principle: Data should be protected from loss, misuse, or modification.
6. Openness Principle: There should be general openness in the collection and use of the data.
7. Individual Participation Principle: Individuals should have a right to know how personal data is collected and by what means.
8. Accountability Principle: Data collectors should be accountable for their data sets (Organization for Economic Cooperation and Development, 2004).
The European Union (EU) data protection directive 95/46 is more far-reaching than any privacy legislation in the United States. Unlike in the United States, it is illegal in all EU member countries for anyone other than police or emergency services to locate people from their satellite-linked mobile phones. Furthermore, monitoring employees on the job with software designed to track computer usage is illegal. Employees must consent to having sensitive data such as health data and the results of drug and genetic testing transmitted to employers (Society for Human Resource Management Research Department, 2004; Carlin, 1996; Fleischmann, 1995; Weckert, 2005).

The U.S. Electronic Communications Privacy Act of 1986 (ECPA) potentially could place limits on location monitoring through its exceptions. If a company does not inform employees (consent exception), does not use its own equipment (provider exception), and does not monitor for business purposes (business purpose exception), then there may be a case against computer-based location monitoring. Court decisions related to these laws have generally ruled that the employer's business interests outweigh an employee's privacy interest; courts have upheld claims of invasion of privacy only where the employer's monitoring was physically invasive and had no legitimate business purpose (National Workrights Institute, 2004).
Civil Rights

Civil rights laws in the EU, North America, and other regions do not directly address location monitoring. However, location monitoring may potentially be used to discriminate against employees. Employers might know that some employees have cancer, are getting divorces, or are HIV-positive (Hartman, 1998). Some of this information can potentially be inferred by knowing the location of the employee. For example, an employee might be going to a breast cancer ward in the hospital every week. Such trips may or may not be an indication that the employee has breast cancer. The person could be visiting a friend, doing volunteer work, or eating in a cafeteria that is near the ward. Acting on such an inference could be a violation of the Americans with Disabilities Act. A "secretly" pregnant woman could be discovered going to a pregnancy clinic. The employer might conclude that the woman is pregnant based on trips to that clinic and accordingly alter employment decisions based on that secret information. This could be a violation of the gender discrimination and pregnancy protections in the EU, U.S., and other places. If there is potential discrimination because of illegal location monitoring practices, the burden of proof would be on the employer in the EU, Canada, and Australia. In the United States, most of the burden of proof lies with the employee (Society for Human Resource Management Research Department, 2004). The employee would have to prove that he or she is a member of a discriminated group and was unfairly treated based on gender, race, disability, or other protected categories (Mathis & Jackson, 2003).
Labor Relations

The American National Labor Relations Act (NLRA) and interpretations from the National Labor Relations Board (NLRB) set limits on company monitoring of the union activities of union and potential union members. According to the NLRB (Rossmore House, 1984), an employer who accidentally and casually observes a union meeting might not violate the NLRA. However, if the employer observes who is at the meeting, asks subordinates specific questions about the conclusions of the meeting, and follows the meeting with mandatory questions about it, there probably would be a violation of law (Feldacker, 1990). Location monitoring could potentially violate the NLRA because employers could know exactly who attended union functions and at what times. Location monitoring has already been the subject of labor contract negotiations. Unionized United Parcel Service (UPS) Teamsters workers successfully included a contract provision in 2003 prohibiting the company from using GPS data in employee evaluations, and snowplow drivers in Massachusetts have protested a requirement that they carry GPS-equipped cell phones on their routes (Teicher, 2003). The city of Chicago allows workers to shut down the location-tracking features of their cell phones during lunch time and after hours, due to union negotiations (National Workrights Institute, 2004).
ETHICAL ISSUES

Legal principles and ethical principles are often closely aligned, but they are not the same and they have different objectives. Laws are a system of rules that stabilize social institutions; they have the function of deciding when to bring social sanction on individual citizens and their specific acts. Ethics involve why and how one ought to act, and are more concerned than laws with promoting social ideals. Ethical principles also may be viewed as the standards of conduct that individuals have constructed for themselves (Candilis, 2002). Many professional codes of ethics prescribe principles that are perfectly legal to ignore. It is, for example, not a criminal offense for a professional engineer to be undignified. Relying on the law to resolve an ethical dilemma will fail to take into account many of the obligations and duties that our society expects of its members (Sims, 2003). Ethical concerns provide another way to analyze how employee location monitoring is appropriate or inappropriate. Many of these concerns overlap with similar concerns associated with the Internet, e-mail, and regular work behavior. They include security, productivity monitoring, goodwill, privacy, accuracy, discipline fairness, and dignity. Though many location monitoring issues such as security, privacy, and accuracy are directly linked with the fair collection of information, we go beyond fair collection of information issues by also focusing on productivity, discipline, and dignity issues. These are separate from fair collection of information issues because productivity focuses more on good business practices, discipline focuses on procedural fairness, and dignity focuses on the state of being honored and esteemed.

ETHICAL CONSIDERATIONS ENCOURAGING EMPLOYEE LOCATION MONITORING

Security

Companies often experience questionable employee activity. For example, according to one survey, about 26 percent of employees share confidential business information via e-mail with other companies. The same poll found that nearly three quarters of respondents sent or received adult-oriented e-mail at work (Boehle, 2000). Just as company e-mail is commonly monitored, the location of employees might be monitored to discourage and detect possible unauthorized disclosure of confidential information to competitors. Another security concern is that employees might be in parts of the company where they are clearly unauthorized. Parallel company restrictions include issuing keys to doors and files and providing cascading passwords for access to various computer files (Alder, 1998). Unauthorized locations could include the company's bank vault, the employee records room, and bathrooms (e.g., men in women's bathrooms).
Productivity Monitoring

Businesses historically have had a right to improve employee performance. An aspect of employee performance is being at the right place at the right time. For package delivery firms, monitoring the locations of trucks and delivery personnel can help dynamically adjust routes and otherwise improve customer service. Businesses also historically have had the right to monitor employee efficiency. They are concerned with determining the length of time employees work on certain projects to assess project costs and reduce wasted time (McCarthy, 2000). Organizations are concerned about Internet and e-mail use mostly to protect their investments, assure a safe and hospitable working environment, and provide quality services to customers (Doherty, 2001; Attaway, 2001). Location monitoring technologies may be seen as just another means of improving employee performance and efficiency. Vendors of systems that allow such monitoring are using this as a selling point (James, 2004), promising reduced overtime, downtime, time spent in unauthorized locations, and employee fraud.
Goodwill

According to the ePolicy Institute, employers wish to enhance goodwill, maintain their professional reputation, and reduce liabilities associated with third parties such as customers, shareholders, suppliers, creditors, workplace neighbors, relatives of workers, and competitors (Porter & Griffaton, 2003). Employers may not want employees wearing company logos to go to casinos, bars, or other places where the employer may be embarrassed or face potential lawsuits.
ETHICAL CONSIDERATIONS LIMITING EMPLOYEE LOCATION MONITORING

Privacy

According to the European Convention on Human Rights (2005), privacy is also a human right, constrained only by national security, public safety, the economic well-being of the country, the prevention of disorder or crime, the protection of health, and the protection of the rights and freedoms of others. Employee privacy rights and reasonable employer rights may need to be balanced on a case-by-case basis (Loch, Conger, & Oz, 1998; Moore, 2000; Miller & Weckert, 2000). During the course of a day, an employee may go to business-related places and non-business-related places. A trip to a bank to deposit coins might be of legitimate interest for employers monitoring employee locations. However, monitoring a personal trip during a lunch break might be an unreasonable intrusion on employee privacy. According to Candice Johnson, assistant director for the Communication Workers of America, top management might not be able to resist using location monitoring to create oppressive work environments. Companies that limit restroom time to 15 minutes might now be able to check how long employees were in the restroom (James, 2004). Such intrusions on employee privacy can lead to charges that society might become a giant panopticon, in which employees would be aware of constant employer surveillance. They may feel punishment awaits them at any moment if they perform inappropriate behaviors. This Orwellian experience reflects the increased power that management has over its employees (Bain & Taylor, 2000).
Accuracy

Location-aware devices will never provide perfect information about employee location. For example, GPS-based systems are usually limited to outdoor use, have inherent accuracy limitations, may suffer from signal loss interrupting operation, may be subject to incorrect configuration by operators, and may of course simply malfunction. Inaccuracies of even a few feet could make the difference between an employee being accused of wrongdoing or exonerated.

Monitoring of employee location depends upon a location-aware device being associated with that employee. This association may be intentionally subverted by a dishonest employee. For example, to hide a trip to an unauthorized location, an employee could secretly give the location-aware device to another employee who would complete an authorized route. Even unintentional misplacement of location-aware devices could cause concern. Devices not carefully safeguarded could be stolen and used for fraudulent purposes.

Even if a location-aware device is properly associated with an employee and establishes that employee to be at a certain location at a certain time, care must be taken to avoid assumptions of improper behavior based on circumstantial evidence alone. An employee may have traveled to a competitor merely to talk to a friend. An employee may have stopped his or her car near a strip bar because the car malfunctioned, not because he or she was visiting the bar. Employers might be held liable for firing employees based on false rumors they illicitly received.
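To make the accuracy concern concrete, consider the following illustrative sketch (ours, with hypothetical coordinates and error figures): a GPS fix is treated as conclusive only when its entire error circle lies wholly inside, or wholly outside, a restricted zone; anything else is flagged as inconclusive rather than treated as evidence of wrongdoing.

```python
import math

# Illustrative sketch: classify a GPS fix against a circular restricted zone,
# honoring the fix's reported error radius. Planar (x, y) meters are used for
# simplicity; a real system would work with latitude/longitude.

def classify_fix(fix_x, fix_y, error_m, zone_x, zone_y, zone_radius_m):
    """Return 'inside', 'outside', or 'inconclusive' for a single fix.
    A fix counts as inside or outside only if its whole error circle does."""
    d = math.hypot(fix_x - zone_x, fix_y - zone_y)
    if d + error_m <= zone_radius_m:
        return "inside"        # worst-case error still places the fix in-zone
    if d - error_m >= zone_radius_m:
        return "outside"       # worst-case error still places the fix out
    return "inconclusive"      # the error margin straddles the boundary

# A fix 48 m from the zone center, with a 10 m error, against a 50 m zone:
print(classify_fix(48, 0, 10, 0, 0, 50))  # -> 'inconclusive'
```

A policy that acts only on conclusive fixes, and treats borderline ones as the circumstantial evidence they are, follows directly from the accuracy concerns above.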
Discipline Fairness

The hot stove rule is a major discipline principle used in many management and human resources textbooks as a guide to enhance fairness in discipline procedures. According to the rule, discipline should be consistent, immediate, impartial, and preceded by a warning, just as a hot stove warns and penalizes people for touching it (Byars & Rue, 2004). Companies will often discipline employees who engage in one prohibited activity (e.g., accessing pornographic Web sites) while not enforcing the same discipline for other prohibited activities (e.g., illegal gambling and playing games) (Nolan, 2003). Whether these violations are equivalent is subject to interpretation. Location monitoring may play a part in the inconsistency issue, since any prohibited locations for employees could at least be uniformly defined and infractions consistently detected. Any infractions should also be immediately enforced without regard to the person who is violating the rules. Consistency can be enhanced when location monitoring data are revealed to employees to verify that they have completed their trucking or other business routes. Other employee purposes for examining location records may include confirming the location of employees during alleged crimes, revealing management misbehavior in using location information, and understanding the best routes and schedules. Providing a warning to employees that location monitoring is occurring helps employees know what locations outside of the company are off limits during working hours while they wear company uniforms. Places such as bars, strip joints, competitors, and related places would be off limits. Employees also should know the times when they are not being monitored, in order to complete personal business of a sensitive nature such as visits to pregnancy clinics, cancer wards, penal institutions, and others.
Dignity

Dignity is the "quality or state of being worthy, honored, or esteemed" (Mish, 1987, p. 354). If an employer can discover personal details of an employee's life, mutual respect might be reduced. Such surveillance is reminiscent of servitude. Soon the worker might not go to a synagogue or visit a friend because he or she knows the boss is watching. Even in the workplace, location monitoring might lead employees to think they are robots in a highly managed system that removes decision making from the job. The pressure to increase productivity might lead to micromanaging (National Workrights Institute, 2005).
POLICY RECOMMENDATIONS FOR ORGANIZATIONS

We believe that existing laws and ethical considerations, especially those associated with electronic monitoring, have major implications for location monitoring policies. Employee handbook experts (e.g., Bureau of National Affairs, 2005), ethics code developers (e.g., Goodwin, 2003), legal researchers (e.g., Corry & Nutz, 2003), international organizations (e.g., OECD, 2004), and governments (e.g., State of Idaho Office of the Governor, 1998) have provided a variety of ways to look at location monitoring based on their recommendations for computer monitoring in general. The employer's business interests must be balanced with an employee's privacy interests. From these recommendations, legal monitoring policies tend to be associated with several dimensions: how monitoring is configured, how monitoring is communicated, how discipline is applied, and how the impact of monitoring is evaluated. Each dimension can range from no activity to significant action.
Configuration

The first dimension refers to who shall be monitored, by what means people are monitored, what is allowed or restricted, and when and where monitoring will take place. Kevin Conlon, district counsel for the Communication Workers of America, asserts that monitoring should be limited to the workplace. Only information relevant to the job should be collected. Monitoring should result in the attainment of some business interest (Hartman, 1998).
Communication

The second dimension refers to communication of the policies to employees. Though monitoring is on the rise through available technology and with some legal support, many employees are kept in the dark about how and when they are monitored. Four out of every ten employees do not know their company's monitoring policies (Swanson, 2001). Eric Schmidt, chief information officer at Bricker & Eckler, suggests that monitoring policies be clearly defined and distributed to all employees through a wide variety of communication channels. The channels can include letters, phone, fax, e-mail, the Internet, an intranet, and a host of other media. The timing of the communication can be important. Recruitment, training, and orientation programs should mention the monitoring policy. Face-to-face meetings between managers and staff could help clarify the seriousness of the policy and allow questions to be asked and answered. These face-to-face meetings also could include illustrations of what would constitute clear misuse of the standards (Boehle, 2000). The same can be said for location monitoring.
Discipline

The third dimension, discipline, focuses on the need for employees to receive warnings for infractions of company policy (Leap & Crino, 1998; Zack, 1989). Warnings are a part of progressive discipline, which is widely used in corporations and supported by discipline research and texts (Leap & Crino, 1998; Zack, 1989; Holley, Jennings, & Wolters, 2001). Warnings are part of the hot stove rule, which suggests employers clearly communicate the dangers of violating rules just as hot stoves warn cooks of danger by glowing or emitting heat (Subramanian, 2004). The literature also appears to support those who recommend that clear warnings be given to employees about surveillance activities (ePolicy Institute, 2004; Boehle, 2000; AllBusiness, 2001; Mason, Mason, & Culnan, 1995; Venable & Venable, 1997; Bureau of National Affairs, 2004). Part of progressive discipline involves increasing penalties for continued infractions associated with being at the wrong place at the wrong time. Discipline may begin with an oral warning, followed by a written warning, suspension, and discharge. If the infractions are severe enough, immediate discharge may be appropriate. In the United States, the concept of employment-at-will makes it easier for employers to discharge an employee for good or bad reasons. The main restrictions on the concept include laws such as anti-discrimination statutes, public policy principles such as jury duty rights, implied contract principles that support contractual commitments, and good faith and fair dealing principles that focus on employment security based on mutual trust (Mathis & Jackson, 2003). European countries do not follow such a doctrine. Employees, through their labor unions or works councils, may negotiate discipline short of discharge for inappropriate behavior.
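The escalation sequence just described can be summarized in a short sketch (ours; the step order follows the text, while the function and data shapes are purely illustrative):

```python
# Illustrative sketch of progressive discipline for repeated location
# infractions. The sequence follows the text: oral warning, written warning,
# suspension, discharge; severe infractions may warrant immediate discharge.

STEPS = ["oral warning", "written warning", "suspension", "discharge"]

def next_discipline(prior_infractions: int, severe: bool = False) -> str:
    """Return the discipline for a new infraction, escalating one step per
    prior infraction and capping at discharge."""
    if severe:
        return "discharge"
    return STEPS[min(prior_infractions, len(STEPS) - 1)]

print(next_discipline(0))               # first offense  -> 'oral warning'
print(next_discipline(1))               # second offense -> 'written warning'
print(next_discipline(5))               # capped         -> 'discharge'
print(next_discipline(0, severe=True))  # severe         -> 'discharge'
```

Whether such a sequence may be shortened, or must be followed step by step, depends on the employment-at-will doctrine, union agreements, and the legal restrictions discussed above.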
Evaluation

Typically, top management is responsible for analyzing human resource-related policies (Mathis & Jackson, 2003). Top management can evaluate location monitoring policies using evaluation methods reminiscent of Kirkpatrick's (1976) four levels of evaluation: reaction, learning, behavior, and results. Top management should analyze employee reactions to location monitoring. Reactions could be measured in terms of job satisfaction, location monitoring satisfaction, employee trust, perceived communication levels, and so on. Top management also can learn about typical employee behavior through location monitoring, identify how employee and management behavior has changed as a result of location monitoring, and discover how location monitoring affects major organizational results such as the bottom line and other organizational measures. Kirkpatrick (1976) suggests that evaluating the organizational results of policies is the most difficult information to collect but potentially the most valuable.
POLICY RECOMMENDATION SUMMARY

Table 1 provides a summary of policy recommendations based on the four dimensions just discussed. Each dimension contains a list of major policy questions. Solutions are based on recommendations from employee handbook experts, ethics code developers, legal researchers, international organizations, and government directives and laws. In cases such as "who will do the monitoring," several choices are provided to organizations. To apply these policy recommendations, we suggest that companies establish clear location monitoring policies that can be published in their corporate employee handbooks, disseminated on the intranet and Internet, or distributed via letter or e-mail to employees. Employees should acknowledge that they have read the location monitoring policy by signing an acknowledgment form. Unfortunately, employee handbooks and letters are often not carefully read by employees even though they sign acknowledgment forms. Management may have to remind employees of the policies with additional e-mails and be familiar with the policies themselves in case of policy disputes (Mathis & Jackson, 2003; Baskin, 1998). Stockholders and boards of directors should be made aware that location monitoring is being done, through discussion of such policies at annual meetings or in quarterly and annual reports. Stockholders and boards of directors can provide top management advice on the legal and ethical ramifications. Customers might need to be made aware that employees are being location monitored, to protect the customers' own privacy. They might not want to be discovered being with an employee if they are a competitor, an illicit lover, or any other person whose presence could cause embarrassment.
SAMPLE POLICIES

Figure 1 shows a sample employee handbook policy based on the research summarized in Table 1. It incorporates the configuration, communication, and discipline dimensions. The statement allows management flexibility in implementation. Employee handbook policies must be clear but also allow management room to make adjustments (Mathis & Jackson, 2003).
Table 1. Suggestions for location monitoring policies

Configuration
• Who will do the monitoring? Supervisors, top management, IT director (Bureau of National Affairs, 2005).
• What equipment will be used? GPS and RFID technologies (James, 2004).
• What/who will be monitored? Collect information on an equal basis across all employees; ban the collection of data unrelated to work performance (American Civil Liberties Union, 2004; Nolan, 2004).
• When will monitoring take place? On company time (AllBusiness, 2001).
• Where will monitoring be allowed? Monitoring should be limited to the workplace (Hartman, 1998; National Workrights Institute, 2004); monitor what is relevant (James, 2004).
• What specific behavior is allowed? Communications and information exchanges directly relating to the mission, charter, and work tasks of the organization (State of Idaho Office of the Governor, 1998).
• What specific behavior is not allowed? Giving information to competitors (Prince, 2001; Nolan, 2003).
• How are policies coordinated? Integrate e-mail, location monitoring, and other technologies into one policy (American Civil Liberties Union, 2004; Nolan, 2003).

Communication
• Who will be warned of monitoring? Use covert monitoring only when there is evidence that a crime has been committed (Goodwin, 2003); avoid any covert monitoring (Kaupins, 2004; National Workrights Institute, 2004).
• By what means will monitoring be announced? Employee handbooks, letters of understanding, e-mails (Boehle, 2000).
• When will monitoring be announced? A reasonable time before monitoring begins (Organization for Economic Cooperation and Development, 2000).
• What reasons will be given for monitoring? Major reasons may include productivity and security (James, 2004); sexual harassment (Camardella, 2003).

Discipline
• Who will discipline workers for going to incorrect locations? The supervisor (Bureau of National Affairs, 2004; Attaway, 2001; Hawk, 1994).
• What are the different types of discipline associated with location? Apply progressive discipline (Bureau of National Affairs, 2004).
• What can employees do to appeal their discipline? Give employees the right to dispute electronic monitoring data (American Civil Liberties Union, 2004).
• What about retaliation from any party? Provide a non-retaliation policy (American Civil Liberties Union, 2004).

Evaluation
• Who will monitor the monitors? Top management or data collection experts (Organization for Economic Cooperation and Development, 2004).
• By what means will location monitoring be evaluated? Analyze the impact of monitoring (Goodwin, 2003); develop a comprehensive records retention policy (Nolan, 2003).
• How frequently will location monitoring policies be evaluated? Periodic but negotiated evaluation of policies in general is recommended (Noe, Hollenbeck, Gerhart, & Wright, 2004).
• How will monitoring success be measured? Monitor the reaction of employees and managers to the policy, what management has learned about employee behavior, how employee and management behavior has changed, and how policies affect the bottom line and other organizational measures (Kirkpatrick, 1976).
Figure 1. Sample location monitoring policies

Sample Employee Handbook Policy

The company reserves the right to monitor the location of employees on company time for business purposes only. Business purposes may include productivity, safety, and security issues related to the mission and objectives of the company. Employees will be notified by their supervisor (or human resources, top management) that their location will be monitored before their job or assignment begins. Employees will be provided with cell phones, radio frequency identification tags, or other devices that can help monitor their location. Employees will follow the directions associated with these devices. Supervisors (or human resources, top management) are responsible for the storage and dissemination of location monitoring data. Employees have a right to dispute location monitoring data and discipline related to that data by contacting their supervisor (or human resources, top management) and following the standard discipline appeal procedures of the company.

Sample Location Monitoring Evaluation Policy

Top management will periodically review its location monitoring policies and procedures as needed to respond to internal company strengths and weaknesses and external threats and opportunities. The review process includes monitoring the reaction of employees and managers to the policy, what management has learned about employees' behavior, how employee and management behaviors have changed, and how policies affect the bottom line and other organizational measures.

Alternative Location Monitoring Policy

As an alternative to the policies shown above, location monitoring may be subject to negotiation between employees and employers. All location monitoring could be banned unless managers and employees mutually agree to specific monitoring. Top management and employee representatives could periodically review location policies and procedures as needed.
Alternative handbook policies can be more employee-oriented. Instead of allowing management wide discretion to monitor employees for business purposes only, all location monitoring could be banned unless managers and employees mutually agree to specific monitoring. Agreements can be negotiated through labor union talks or personal contract discussions. Such policies might be more appropriate in countries that have stronger labor unions, higher concerns for privacy in the workplace, and national legal frameworks that make location monitoring more difficult.

Figure 1 also includes a sample location monitoring evaluation policy incorporating the evaluation dimension. This policy is primarily for top management, the information technology department, and the human resources department. Periodic changes in the policy may be needed to adapt to internal strengths and weaknesses associated with company reactions to the policy and external opportunities and threats associated with legal, ethical, economic, social, or other considerations. Again, an alternative to such a policy is to have both management and employees mutually evaluate and revise location monitoring policies.
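Purely as an illustration going beyond the chapter's text, a handbook policy such as the one in Figure 1 could also be kept in machine-readable form so that its configuration, communication, and discipline dimensions can be audited for completeness; every field name below is hypothetical rather than an established schema.

```python
# Hypothetical machine-readable rendering of the Figure 1 handbook policy.
# Field names and values are illustrative only, not an established schema.

location_monitoring_policy = {
    "configuration": {
        "monitored_on": "company time only",
        "purposes": ["productivity", "safety", "security"],
        "devices": ["cell phone", "RFID tag"],
    },
    "communication": {
        "notified_by": "supervisor",  # or human resources / top management
        "notice_timing": "before the job or assignment begins",
    },
    "discipline": {
        "data_custodian": "supervisor",
        "appeal_right": True,
        "appeal_channel": "standard discipline appeal procedures",
    },
}

def policy_is_complete(policy: dict) -> bool:
    """Check that every required policy dimension is present."""
    return all(k in policy for k in ("configuration", "communication", "discipline"))

assert policy_is_complete(location_monitoring_policy)
```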
SUGGESTIONS FOR FUTURE RESEARCH

Several new avenues of research should be undertaken to enhance understanding of employee location monitoring and its legal, ethical, and employment policy implications. The present research surveyed almost exclusively English-language literature and used examples primarily from the U.S. and the EU. While there is some justification for this based on the relative maturity of the legal and business environments involved, the context of investigation obviously should be expanded. Other countries may have significant non-English texts covering location monitoring court cases relating to privacy, civil rights, and other laws.

With respect to legal aspects of employee location privacy, more detailed case analyses can be done in multiple venues, including American courts. The National Workrights Institute (2004) has summarized numerous court cases associated with the invasion of privacy and is collecting additional Electronic Communications Privacy Act cases. Some of its conclusions were reported earlier, but the number of relevant cases is growing rapidly and it will be a significant challenge to monitor not only emerging legislation but also evolving case law.

Empirical and survey research is needed to help analyze management and employee attitudes toward the need for and ethics of location monitoring. For example, in e-mail monitoring research, 68 percent of employers that monitor employees cite legal liability as their primary reason (Porter & Griffaton, 2003). Perhaps legal liability will also be a primary motivation to monitor the location of employees; only further research will shed light on these issues.

Future research will need to keep abreast of both employee behaviors and technological advances that may allow employees to resist or defeat location monitoring. We have noted that in some cases employees may turn off the location-aware features of devices or fail to carry them as directed. Future developments may include technologies capable of corrupting or blocking transmission of location information from devices, as has been proposed by privacy advocates and incorporated into separate devices that jam RFID readers or erase the contents of RFID tags. It is not unreasonable to anticipate even the possibility that devices capable of generating counterfeit location information might become available.
Survey research also can help analyze what types of organizations will be most likely to use location monitoring and what location monitoring policies will tend to be the most important and most commonly used in practice. Key questions include whether employees should be notified about location monitoring in all or particular instances, and whether they should be or will be allowed to turn monitoring features off whenever they desire or at particular times. Various companies may have conflicting policies that reflect the conflicting recommendations shown in the literature. For example, Goodwin (2003) recommends using covert monitoring only when there is evidence that a crime has been committed. The National Workrights Institute (2004) recommends avoiding any covert monitoring. It is clear that due to the increasing sophistication of technology and declining costs of implementation, location monitoring will only become more common in the future. What is less clear is whether a technical, legal, ethical, and business policy environment can be crafted that will effectively respect both the business needs of employers and the privacy rights of employees. By investigating important issues before widespread implementation of these technologies occurs, we may be able to avoid some of the problems that would otherwise be encountered, and encourage a positive environment for the incorporation of location-aware technologies in the workplace.
CONCLUSION

We believe that existing laws and ethical considerations, especially those associated with electronic monitoring, have major implications for location monitoring policy recommendations related to configuration, communication, discipline, and evaluation issues. Only information relevant to the job should be collected, and collection should result in the attainment of some business interest. Monitoring policies should be clearly defined and distributed to all employees through a wide variety of communication methods. Employees should receive warnings for infractions of company policy associated with being in incorrect locations. Employers should provide for consistent evaluations of monitoring effectiveness. Future research should analyze what types of organizations will be most likely to use location monitoring and what location monitoring policies will tend to be the most important and most commonly used in practice.
REFERENCES

Alder, G. (1998). Ethical issues in electronic performance monitoring: A consideration of deontological and teleological perspectives. Journal of Business Ethics, 17, 729-744.
AllBusiness. (2001). Employee records. Retrieved January 5, 2004, from http://www.allbusiness.com

American Civil Liberties Union. (2004). Surveillance under the USA Patriot Act. Retrieved April 26, 2004, from http://www.aclu.org/SafeandFree/SafeandFree.cfm?ID=12263&c=206

Applegate, J. (2001). Are your employees costing you? Retrieved September 6, 2005, from http://www.entrepreneur.com/article/0,4621,289593,00.html

Armonaitis, K. (2004, July 23). Microsoft location server integration: The good, the bad, and the ugly. Retrieved September 6, 2005, from http://www.devx.com/DevX/Article/21583

Attaway, M. (2001). Privacy in the workplace on the Web. Internal Auditor, 58, 30-35.

Bain, P., & Taylor, P. (2000). Entrapped by the electronic panopticon? Worker resistance in the call centre. New Technology, Work and Employment, 15(1), 2-18.

Banisar, D. (2004). The freedominfo.org global survey: Freedom of information and access to government record laws around the world. Retrieved October 12, 2004, from http://www.freedominfo.org/survey.htm

Baskin, M. (1998, Winter). Is it time to revise your employee handbook? (Legal Report). Alexandria, VA: Society for Human Resource Management.

Boehle, S. (2000). They're watching you: Workplace privacy is going, going…. Training, 37, 50-60.

Bureau of National Affairs. (2005). BNA employment guide. Washington, DC: Bureau of National Affairs.

Byars, L. L., & Rue, L. W. (2004). Human resource management (7th ed.). Boston: McGraw-Hill Irwin.

Camardella, M. (2003). Electronic monitoring in the workplace. Employee Relations Today, 30, 91-100.

Candilis, P. (2002). Distinguishing law and ethics: A challenge for the modern practitioner. Psychiatric Times, 19(12). Retrieved September 8, 2005, from http://www.freedominfo.org/survey.htm

Carlin, F. (1996). The data protection directive: The introduction of common privacy standards. European Law Review, 21, 65-70.

Chan, M. (2003). Corporate espionage and workplace trust/distrust. Journal of Business Ethics, 42, 43-58.

Chen, A. (2004, July 12). After slow start, location-based services are on the map. Retrieved September 6, 2005, from http://www.eweek.com/article2/0,1895,1621409,00.asp

Corry, D., & Nutz, D. (2003). Employee e-mail and Internet use: Canadian legal issues. Journal of Labor Research, 24, 233-257.
Department of Justice Canada. (2004). Employment Equity Act. Retrieved October 14, 2004, from http://laws.justice.gc.ca/en/E-5.401/50057.html

Doherty, S. (2001). Monitoring and privacy: Is your head still in the sand? Retrieved June 3, 2002, from http://www.nwc.com/1213/1213fl.html

ePolicy Institute. (2004). ePolicy handbook. Retrieved September 13, 2004, from http://www.epolicyinstitute.com

Equal Opportunities Commission. (2004). Relevant legislation. Retrieved October 14, 2004, from http://www.epolicyinstitute.com

European Convention on Human Rights. (2005). The European Convention on Human Rights and its five protocols. Retrieved August 19, 2005, from http://www.epolicyinstitute.com

Federal Communications Commission. (2005). Enhanced 911 — Wireless services. Retrieved September 6, 2005, from http://www.fcc.gov/911/enhanced/

Feldacker, B. (1990). Labor guide to labor law. Englewood Cliffs, NJ: Prentice-Hall.

Fleischmann, A. (1995, October). Personal data security: Divergent standards in the European Union and the United States. Fordham International Law Journal, 19, 143-180.

Forelle, C. (2004, May 14). On the road again, but now the boss is sitting beside you. The Wall Street Journal (Eastern Edition), p. A1.

Goodwin, B. (2003, June 17). Tell staff about e-mail snooping or face court, new code warns. Computer Weekly, 38, 5.

Haartsen et al. (1998, October). Bluetooth: Vision, goals, and architecture. ACM Mobile Computing and Communications Review, 2(4), 38-45.

Hartman, L. (1998). The rights and wrongs of workplace snooping. Journal of Business Strategy, 19, 16-20.

Hawk, S. (1994). The effects of computerized performance monitoring: An ethical perspective. Journal of Business Ethics, 13, 949-958.

Holley, W., Jennings, K., & Wolters, R. (2001). The labor relations process (7th ed.). Fort Worth: Dryden Press.

James, G. (2004, March 1). Can't hide your prying eyes. Computerworld, 38, 35-36.

Kaupins, G. (2004). Ethical perceptions of corporate policies associated with employee computer humor. Ethics and Critical Thinking Quarterly Review, 2004(1), 16-35.

Kirkpatrick, D. (1976). Evaluation of training. In R. L. Craig (Ed.), Training and development handbook (2nd ed., chap. 18). New York: McGraw-Hill.

Leap, T., & Crino, M. (1998). How serious is serious. HR Magazine, 44, 43-48.
Levijoki, S. (2004). Privacy vs location awareness. Retrieved October 12, 2004, from http://www.hut.fi/~slevijok/privacy_vs_locationawareness.htm

Loch, K., Conger, S., & Oz, E. (1998). Ownership, privacy, and monitoring in the workplace: A debate on technology and ethics. Journal of Business Ethics, 17, 653-654.

Mason, R., Mason, F., & Culnan, M. (1995). Ethics of information management. Thousand Oaks, CA: Sage Publications.

Mathis, R., & Jackson, J. (2003). Human resource management (10th ed.). Mason, OH: Southwestern.

McCarthy, M. (2000). Keystroke cops: New software raises troubling questions on worker privacy. Retrieved March 7, 2000, from http://www.msnbs.com/news/3/8/00.asp

Microsoft. (2005). Microsoft MapPoint Location Server. Retrieved September 6, 2005, from http://www.microsoft.com/mappoint/products/locationserver/default.mspx

Miller, S., & Weckert, J. (2000). Privacy, the workplace, and the Internet. Journal of Business Ethics, 28, 255-266.

Minch, R. (2004). Privacy issues in location-aware mobile devices. In HICSS-37 Proceedings. IEEE Press.

Mish, F. (Ed.). (1987). Webster's ninth new collegiate dictionary. Springfield, MA: Merriam-Webster.

Mobileinfo.com. (2002). Location-based services. Retrieved October 12, 2004, from http://www.mobileinfo.com/locationbasedservices/market_outlook.htm

Moore, A. (2000). Employee monitoring and computer technology: Evaluative surveillance v. privacy. Business Ethics Quarterly, 10, 697-710.

National Workrights Institute. (2004). Electronic monitoring in the workplace: Common law and federal statutory protection. Retrieved October 12, 2004, from http://www.workrights.org/issue_electronic/em_common_law.html

National Workrights Institute. (2005). On your tracks: GPS tracking in the workplace. Retrieved October 10, 2005, from http://www.workrights.org/issue_electronic/NWI_GPS_Report.pdf

Noe, R., Hollenbeck, J., Gerhart, B., & Wright, P. (2004). Fundamentals of human resource management. New York: McGraw-Hill Irwin.

Nolan, D. (2003). Privacy and profitability in the technological workplace. Journal of Labor Research, 24, 207-232.

Norwegian Parliament. (2000). Act of 14 April 2000 No. 31, relating to the processing of personal data (Personal Data Act). Retrieved October 12, 2004, from http://www.personvern.uio.no/regler/peol_engelsk.pdf

Organization for Economic Cooperation and Development. (2000). OECD guidelines on the protection of privacy and transborder flows of personal data. Paris: OECD Publication Service. Retrieved October 12, 2004, from http://www1.oecd.org/publications/e-book/9302011E.pdf

Overman, S. (2003, October 28). EU directives drive changes in UK employment law. HR News, Society for Human Resource Management.

Parliament of Australia Parliamentary Library. (2004). Civil and human rights. Retrieved October 14, 2004, from http://www.aph.gov.au/library/intguide/law/civlaw.htm

Porter, W., & Griffaton, M. (2003). Between the devil and the deep blue sea: Monitoring the electronic workplace. Defense Counsel Journal, 65-77.

Prince, M. (2001). Employers should establish clear rules on e-mail. Business Insurance, 35, 25-28.

Rossmore House. (1984). 269 NLRB 1176.

Salheim, S. (2005, April 16). Sprint launches tracking service. Retrieved September 6, 2005, from http://www.eweek.com/article2/0,1895,1815843,00.asp

Sims, R. (2003). Ethics and corporate social responsibility: Why giants fall. Westport, CT: Praeger.

Snekkenes, E. (2001, October). Concepts for personal location privacy policies. In Proceedings of the 3rd ACM Conference on Electronic Commerce, Tampa, Florida, October 14-17, 2001 (pp. 48-57). ACM.

Society for Human Resource Management Research Department. (2004). Does Europe matter? Workplace Visions, (1), 2-7.

Standler, R. (1997). Privacy law in the United States. Retrieved April 26, 2004, from http://www.rbs2.com/privacy.htm#anchor444444
State of Idaho Office of the Governor. (1998). Executive Order 98-05: Establishing statewide policies on computer, the Internet, and electronic mail usage by state employees. Retrieved October 12, 2004, from http://ww2.state.id.us/gov/execord/EO98-05.htm

Subramanian, S. (2004). Positive discipline. Retrieved October 6, 2004, from http://www.aapssindia.org/articles/vp2/vp2k.html

Swanson, S. (2001, August 20). Beware: Employee monitoring is on the rise. Informationweek, (851), 57-58.

Teicher, S. (2003, December 22). It's 2 a.m. Do you know where your workers are? The Christian Science Monitor, p. 14.

U.S. Congress. (2004). Location Privacy Protection Act of 2001. Retrieved October 12, 2004, through title search from http://thomas.loc.gov

Venable, B., & Venable, H. (1997, July). Workplace labor update. New York: Venable.

Weckert, J. (Ed.). (2005). Electronic monitoring in the workplace: Controversies and solutions. Hershey, PA: Idea Group Publishing.

Zack, A. (1989). Grievance arbitration. New York: American Arbitration Association.
This work was previously published in the International Journal of Technology and Human Interaction, Vol. 2, Issue 3, edited by B. C. Stahl, pp. 16-35, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter LIV
New Ethics for E-Business Offshore Outsourcing

Fjodor Ruzic
Institute for Informatics, Croatia
ABSTRACT

In today's dynamic e-business environment, where fast time to market is imperative, where information and telecommunications technology is costly and changing rapidly, and where skilled technical resources are scarce, e-businesses need reliable, high-end outsourcing infrastructure and resources. E-business companies should consider corporate social responsibility: they must work on reducing the pain and stress of disruption in home countries while increasing the socio-economic benefits of these jobs in the receiving country. E-businesses should develop an offshore outsourcing ethics program. Such a program rests on analysis of the nature and social impact of information technology and the corresponding formulation and justification of policies for the ethical use of technology. These notions are considered in efforts to produce a globally acceptable ethics program that would articulate both individual company and global interests in an appropriate way. This chapter examines e-business ethics development and new standardization efforts toward a unified, globally acceptable Code of Ethics. It covers the analysis and discussion of the need for such an instrument and findings on how to reach a unified global solution.
INTRODUCTION

Currently, there is considerable confidence that we are on the edge of an important deepening of the information society. This is the era of ubiquitous computing, in which computer-based devices become so cheap, seamlessly interoperable, and easy to use that they will find application across a broad field of everyday activities. The implications of these changes for policies on technology, employment, and competitiveness will be profound. The issue of ubiquitous computing also directly raises a series of links to issues of competitiveness and employment. Chips are already embedded into many everyday devices (particularly automobiles and domestic appliances, and increasingly locks,
alarms, payment machines, vending machines, cash machines, street furniture, hand tools, and smart cards for identification, financial transactions, and electronic wallets). At the same time, production, services, and trade are becoming increasingly dispersed over a wider area, creating new goods and labor markets. In addition, new forms of e-business are aimed toward sharing resources in development, production, and sales processes. At the turn of the millennium, e-business became a new environment in which new business models are practiced. Web sites are for the most part a reflection of businesses on the Internet. The three principal keys to doing e-business (honesty, integrity, and trustworthiness) carry over directly to the Web site framework and the Internet. This is especially true for online service providers. Professional service providers have the funds and staffs of programmers to do global Web sites right; after all, that is their principal business specialty on a very large scale. Sad to say, however, some of these big online providers do not always do Web sites right. In fact, many regulatory, ethical, copyright, and global electronic commerce legal issues are at prominence in e-business. The information society, with an e-business environment in use, could form a more sustainable society (Cornford, Gillespie, & Richardson, 1999). This is possible due to four key factors:
•	There are potential environmental gains from organizational re-engineering, since information technology can be used to lower waste, for instance by reducing material usage or more accurately matching production and delivery levels to realized demand.
•	Information technology is in its essence a dematerializing technology, and it works through three dimensions: the replacement of traditional control technologies by information technologies; the increasing informational content of physical goods; and the shift of business from trading physical goods to immaterial services.
•	Most of the fastest areas of growth in our economies are in informational services such as software, design, new media, and telematic services, all areas that directly depend upon information and communications technologies.
•	Information technology produces direct substitution effects, especially in transport, with further substitution possibilities including teleworking and teleshopping. Teleworking, in particular, holds out a promise of a more efficient use of people's time and the world's resources. Information technology also provides decision tools for monitoring and controlling the use of the environment and resources over distance, thus making e-business offshoring and outsourcing more reliable and useful.
Information technology based business, and especially business done over the Internet, is characterized by openness. The result is that a rich and chaotic wealth of information, images and ideas is being made available on a worldwide basis to anyone who has the tools and interest to work online. Because of its size, global scale, complexity, openness and dynamism, this chaotic network effectively renders obsolete the traditional mechanisms for structuring and controlling informational content. Strains are apparent across a broad range of legal domains: decency laws, privacy protection, security services, intellectual property rules and authentication systems. With extensions of the Web to include more and more people, and with ubiquitous computing connecting more and more devices, a large credibility gap could open between the scale of information flows and the available regulatory tools. This scenario could lead to a loss of trust in the network society. The Internet economic phenomenon is enormous, and it is currently generating a lot of attention as the fastest growing area of business. We have
already seen that it raises serious challenges to existing rules on how to conduct trade, how to protect consumer rights, how to combat tax avoidance, and so on. The intrinsically global nature of the Internet also means that new international conventions on these issues will have to be established, and considerable effort is being expended by national and international bodies to adjust legal frameworks to create a coherent system of liabilities and trust. These issues are a stimulus for strengthening efforts to develop new forms of business ethics that can deal with the new challenges. Information technology systems are important for doing e-business, but they will not by themselves be enough to provide an appropriate system of governance for the new era of computing. The fact is that e-business is different now: it is played faster, and it extends deeper into private lives. Consequently, new rules are needed that will provide a mature framework of governance for the new phase of the information society. There are tremendous challenges involved, such as globalization and communications convergence. However, the confidence of people and enterprises will also be needed. This calls for a clear statement of rights and responsibilities in the information age that could build confidence in global networking. As technology in general, and the Internet in particular, becomes a more important part of how virtually all companies do business, many are finding themselves faced with new ethical dilemmas. These notions urgently call for a new agenda for business ethics. Business ethics is not new. Efforts to apply ethical theories and values to commerce can be found throughout history and in the works of the great philosophers. Until recently, business ethics was largely an academic affair. However, allegations of waste and fraud in business are increasing, and there is a need for a solution to the growing problem. The answer lies in issuing and enforcing codes of ethics, with adequate controls to monitor these codes and the sensitive aspects of business compliance. The particular insight here is that e-business outsourcing, including offshore outsourcing, is the key to successful and sustained development in all countries and regions where e-business firms work under globally accepted and harmonized rules and a universal code of ethics. The idea of this work is to express why and how the global ethics environment is important to e-business, and what is being done to create a harmonized and balanced platform for the sustained development of e-business offshore outsourcing in all regions around the globe.
Background on E-Business Outsourcing and Offshoring

In searching for a definition of outsourcing, it is important to be clear about what is meant by the term. Outsourcing has become a commonly misused word, and much of the apparent disagreement over the merits and risks of outsourcing can be traced to differing definitions. Outsourcing is essentially a how term rather than a what term: it describes how technology and services are obtained, not what the services are. Outsourcing can also be defined as a contractual relationship in which an external organization takes responsibility for performing all or part of a company's functions; this can involve a partial or complete transfer of staff and/or resources. In broad terms, outsourcing is a transfer of some business functions, or components of business processes, to an outside contractor. To remain competitive, many companies outsource as a way to reduce costs, increase efficiencies, and refocus critical resources. Outsourcing is done for various reasons. The driver can be generating cost reductions and downsizing; these have been the traditional reasons for outsourcing. But there are other factors, such as gaining capabilities that are not available internally, implementing lean programs, streamlining operations, strategically positioning the company, or improving capabilities to gain
competitive advantage. Regardless of the reason, outsourcing succeeds when it is well thought out and done properly. All parties involved in outsourcing should consider several key activities during the outsourcing project cycle:
• Defining the reasons for outsourcing
• Evaluating outsourcing business processes versus functions
• Recognizing seller and buyer roles
• Preparing for the risks
• Planning the change
Information Technology Outsourcing

In a sense, outsourcing is just a new name for old practices. Services such as bureau services, contract programming and project management have long been outsourced. In its present usage, however, outsourcing implies a greater level of handing over ownership and/or managerial control than has hitherto been the case. There are differences between outsourcing and contracting out. Outsourcing is distinguished from the in-house provision of services; in-house provision means that the people who deliver the information technology goods or services are normally employees of the organization. Many organizations find keeping up with technical developments in computing and communications very difficult. Operational decisions regarding computing and communications are often adversely affected by a lack of in-depth knowledge of the full range of technical options available. This can lead to using inappropriate technology and/or using technology inappropriately. Many e-business organizations have their own information technology departments catering to their software and other information and communications technologies (ICT) enabled services needs, while others opt for information technology outsourcing. It is forecast that the business process outsourcing segment of total information technology
outsourcing will lead the way, with a 30% annual growth rate in the first decade of the 21st century. The Internet and intranet management portion of this grows at over 80% annually through 2004 and 2005, largely due to e-business and e-commerce spending. The impending, large-scale adoption of Internet-enabled wireless devices of all types will boost demand for networking upgrades, customization of new services, and expanded desktop services outsourcing that includes this new technology and, eventually, ICT outsourcing. Because Internet-era rates of change pervade the ICT industry as a whole, clients are increasingly reluctant to undertake independent, in-house migrations to upgraded architectures and applications. Information technology outsourcing is typically the catalyst driving corporate re-engineering processes to reduce costs and focus on business strengths. Besides banking, discrete manufacturing and insurance, which have traditionally led information technology outsourcing, the fastest-growing sectors in the next few years will be the telecom and utilities industries. Hence the future prospects for information technology outsourcing are very bright.
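To put this forecast in perspective, a sustained 30% annual growth rate compounds to a very large multiple over a decade. The short sketch below is an illustrative calculation only, using the rate quoted above rather than any measured data.

```python
# Illustrative arithmetic: what a sustained 30% annual growth rate implies.
# The 30%/10-year figures come from the forecast above; nothing else is data.

def compound_multiple(annual_rate: float, years: int) -> float:
    """Return the overall growth multiple after compounding annually."""
    return (1.0 + annual_rate) ** years

if __name__ == "__main__":
    multiple = compound_multiple(0.30, 10)
    print(f"30% per year for 10 years -> {multiple:.1f}x the starting volume")
    # Prints: 30% per year for 10 years -> 13.8x the starting volume
```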
E-Business Outsourcing Issues

Outsourcing e-business processes is an obvious choice for most companies in national and international markets. Even for large enterprises, it is not possible to keep up with all the changes influencing e-business management. The lack of trained resources and the pressure of faster time to market accelerate this evolution. Cost reductions and higher quality results are free bonuses granted by an outsourcing strategy. Consequently, outsourcing technical jobs to other countries has become a widely used solution. Business process outsourcing often starts with technical information technology outsourcing. The growing complexity of information technology and the difficulty of finding and retaining
trained computer and telecommunications staff are the key elements in the priority given to technical information and telecommunications technology outsourcing. Technical ICT outsourcing remains a very important decision with many complex aspects; without an independent, trusted ICT consultant with experience in both business management and technology, some outsourcing experiences have ended in disaster. At the same time, information technology management in most e-business firms is not the core business function. Successful information technology management requires a fundamental understanding of both business and technology, and long experience. Thus, in most cases, information technology management outsourcing is a much more flexible and cost-effective solution for companies doing business electronically and over the network, be it on a regional, international or global level. E-business outsourcing services dealing with ICT services depend on Web-based processes that include several key activities:
• Web design: Creating a Web site is more than putting up a simple online business card or catalogue. It is a real e-business tool, contributing to the corporation's profit and building its brand image and position in the e-marketplace.
• Web site marketing and Web promotion: Without e-marketing and Internet marketing, no e-business strategy can be successful. Creating a Web site presence is only the first step in a succession of actions; Web site marketing and Web site promotion should be essential parts of the e-business management strategy. The worldwide scale of Internet access and its full-time availability are two of the most powerful strengths of a Web site.
• Search engine optimization: Many words generate more than 30 million searches per month. However, search engine optimization is complex, which is why it is best executed by a professional team.
• Viral marketing and e-mail marketing: The Internet reacts fast, and viral marketing techniques can deliver very effective results, carrying the business message to the market in both a massive and a quick way. E-mail marketing is another very effective tool.
• Web services: As e-business activity grows, Web services will become an easy and cost-effective way to interact with business partners and customers (a usage sketch follows this list). Since Web services are a technically oriented business tool, they should be professionally set up.
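As a small illustration of the last item, the sketch below shows the basic interaction pattern of a client consuming a business partner's Web service over HTTP. The endpoint URL and the response fields are hypothetical, invented purely to make the pattern concrete; they are not drawn from the chapter.

```python
# Hypothetical example of consuming a business partner's Web service.
# The endpoint URL and the JSON fields are invented for illustration.
import json
import urllib.request

PARTNER_ENDPOINT = "https://partner.example.com/api/v1/orders"  # hypothetical

def fetch_order_status(order_id: str) -> dict:
    """Query the partner's (hypothetical) order-status service."""
    url = f"{PARTNER_ENDPOINT}/{order_id}/status"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

# Usage, assuming the hypothetical service existed:
#   status = fetch_order_status("PO-12345")
#   print(status["state"], status["expected_delivery"])
```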
When e-business processes are planned to be outsourced, it is obvious that ICT outsourcing is involved. An e-business firm should consider several key outsourcing categories:

• Insurance: The outsourcer should have adequate public liability insurance against loss or liability through injury or damage.
• Third party suppliers: The arrangements setting out which party will hold, and which party will administer, the terms of any agreements currently in place between the purchaser and other third party suppliers.
• Information technology licenses: Where third parties supply goods and services used to provide outsourced services, the appropriate licenses must be obtained. Any licenses currently held by the purchaser that relate to the services being provided may need to be extended to cover the activities of the outsourcer.
• Ownership of information: The ownership of data and information needs to be agreed.
• Contract duration and commencement: The commencement date of the contract should be decided as early as possible to minimize transition difficulties when service provision is handed from the purchaser to the outsourcer. Given the complex problems that can arise during the handover of information technology services, it may be prudent to include a defined transition period as part of the term of the contract.
• Service level agreements: Service level agreements are put in place to define the minimum level of service that must be provided. They are, therefore, the basis for measuring the outsourcer's performance (see the sketch after this list).
• System access and security: Access to the purchaser's systems by the outsourcer needs to be considered; an outsourcer may only require system access at certain levels to enable them to perform their service. The level of security measures required to protect the purchaser's system and information from unauthorized access will continue to require rigorous planning, implementation and management. Outsourcing services will bring additional issues of protection, confidentiality and ethics, and the parties will need to ensure that their responsibilities and obligations in these respects are documented and agreed.
• Personnel/staff: Although the issue of personnel is often crucial, it is sometimes overlooked by those involved in outsourcing. People are fundamental to a business and are required to maintain business continuity during the transition period. The arrangements for the retention, redeployment or other options for existing staff must be negotiated. This issue is critical, as the outsourcer will require the institutional knowledge of the purchaser's staff. Business continuity must be maintained during the transition, which requires that the purchaser's staff be kept fully informed where appropriate. Staff may need to be transferred from the payroll of the purchaser to the payroll of the outsourcer, and a transition plan should be used to minimize the risk of service disruption and employment-related legal claims. The employment contracts and/or collective agreements under which the purchaser's staff is employed may require negotiations to be held with the relevant staff or their representatives. The early involvement of professional human resource managers and employment law specialists to advise and assist with contractual and privacy issues is critical to any transition to outsourcing. The purchaser may also specify that the outsourcer should hire a certain number of staff, and the outsourcer may require a certain number of people for the purpose of acquiring system and corporate knowledge. In these cases, there should be an agreed process for the outsourcer to select, assess and engage the appropriately skilled staff from the purchaser.
• Intellectual property indemnity: Each party should generally indemnify the other against claims of intellectual property rights infringement arising from the use of facilities and resources that they supply to the other as part of the outsourcing arrangements.
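Because service level agreements define a measurable minimum, monitoring them lends itself to simple automation. The following is a minimal sketch, assuming an invented 99.5% availability floor and invented monthly measurements, of how a purchaser might flag SLA breaches; real agreements would track many more metrics.

```python
# Minimal SLA-monitoring sketch. The 99.5% availability floor and the
# sample measurements below are invented for illustration.
AGREED_AVAILABILITY = 0.995  # hypothetical minimum from the SLA

monthly_uptime = {  # fraction of each month the outsourced service was up
    "January": 0.9991,
    "February": 0.9942,  # below the agreed floor
    "March": 0.9987,
}

def sla_breaches(uptime: dict[str, float], floor: float) -> list[str]:
    """Return the months in which measured uptime fell below the SLA floor."""
    return [month for month, value in uptime.items() if value < floor]

for month in sla_breaches(monthly_uptime, AGREED_AVAILABILITY):
    print(f"SLA breach in {month}: uptime {monthly_uptime[month]:.2%} "
          f"is below the agreed {AGREED_AVAILABILITY:.2%}")
```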
E-Business Offshore Outsourcing

An e-business topic that has become a key issue in the first decade of the 21st century is offshore outsourcing. Many e-business companies that have shipped work to countries like Ukraine, India and Russia have claimed that the cost savings and the quality of the output have increased efficiency, productivity and return on investment. However, information technology professionals have noticed that the recent economic upturn does not include an increase in jobs. Some are pondering whether a high-tech union should be formed to stop the massive exodus of jobs across the Pacific Ocean. Offshore development in the information and telecommunications technology (ICT) sector is the term most often used to describe the business of outsourcing software programming and engineering services beyond national boundaries. The three most common justifications for offshore development are cost reduction, internationalization and inadequate supplies of domestic resources. Cost reduction is the traditional reason for offshore development. Low-level tasks such as coding and software testing can be performed in less-developed countries at costs as low as 10% of domestic ones. Software manufacturers seeking international markets and needing to localize their products to specific platforms, languages and cultural requirements often find it most efficient to use offshore development resources in, or near, their target markets. Ireland, Australia and Finland are in the highest demand for this type of outsourcing today. Cost savings from these more-developed countries remain available, but are less dramatic. The most attractive sources for offshore development are countries with well developed information technology support infrastructures, favorable demographics and labor costs, competent technical education facilities in national university systems, a well established presence of leading hardware and software platform manufacturers, and a favorable government regulatory environment supporting offshore development activities. However, a number of fundamental drawbacks remain. The largest is a lack of experienced project managers at all levels of the process, from top management to junior project managers, resulting in very few domestic companies whose project management and quality control processes are certified according to international standards. Language is another concern, as many major offshore software development locales are not primarily English speaking. The growing shortage of information technology (especially computer) professionals, particularly in the most developed industrial countries, is rapidly becoming the most important reason for offshore development. Business needs are overcoming the biggest barrier against true offshore development, the lack of trust and perception of risk among clients, and now most offshore development is actually taking place offshore. In order to reduce the risk, many clients choose to retain as much control as they can over
production. One major method is mandating that at least the project manager for each project remains onsite. This type of onsite/offsite combination of offshore outsourcing is often referred to as the fourth generation outsourcing model. It is offered only by the more progressive offshore companies, who have established strong, demonstrable communications infrastructures for this type of project management. Offshore outsourcing developed through three initial generations that formed the basis for the evolution into the new, fourth generation. The first generation in the evolution of offshore outsourcing is recognized as onsite staff augmentation, where offshore services originated as a tactical solution for importing high-quality but low-cost services in a staff augmentation mode. Offshore professionals were brought onsite and were paid at lower rates. The potential for labor saving was limited because of the need to import highly skilled professionals. The second generation of offshore outsourcing is known as offshore production, with small regional marketing offices near major customers while work is sent offshore. It was more cost effective than the first-generation model, but limited to less complex engagements where e-business functions did not require extensive project management. The third generation in the evolution of offshore outsourcing is based on the emerging onsite and offshore models, adding local project management to improve daily coordination and problem resolution on projects sent offshore. Complexity of management is higher than in the previous two generations, and this model is in use for high-complexity systems that require frequent change. The fourth generation of offshore outsourcing combines the traditional benefits of offshore development with sophisticated program management and an in-depth local consulting presence. By contrast, traditional high-end consulting firms emphasize business strategy and reengineering.
While many of them also provide systems integration, outsourcing, and in some cases offshore development services, their cost structures are significantly higher. Consequently, fourth generation providers deliver the economies of offshore development and maintenance, plus the ability to manage multiple highly complex projects, while drawing on local or regional talent to help clients refine their technology strategies and execute them. Although contractual relationships and roles will vary by client and engagement, fourth generation outsourcing providers focus on technology execution. Fourth generation offshore outsourcers have the processes and expertise to advise their clients on how to achieve business objectives through technology, and on how to execute their technology strategies. Given today's budgetary realities, most internal information technology units cannot afford to retain such expertise internally. Fourth generation providers maintain core competencies in a wide range of execution-related areas, ranging from program management and portfolio analysis to legacy platform consolidation, e-business integration, and emerging technologies such as Java, .NET, and XML Web services. These capabilities are critical for organizations consolidating multiple overlapping systems in the wake of a merger or years of turbulent growth, or for enterprises focused on e-business integration. Complementing the internal information technology unit, fourth generation providers furnish the program management competence, technology expertise, and technical resources that the internal information technology unit of an e-business firm can no longer afford. Traditional outsourcing models provide e-business firms with a common communications platform, and usually include a personal visit to the outsourcing partner's destination. In the scope of offshore outsourcing, the global delivery model gives the benefit of resolving issues onshore. As required, the e-business firm can always verify the necessary resources at the onshore office of the outsourcing partner. The involved parties can document issues collaboratively, and both parties can easily and flexibly implement necessary alterations in business processes, in real time, upon mutual agreement. This might not be possible with a traditional offshore model, where the e-business company does not have any point of contact onshore. Pioneered by Infosys in the 1980s, the global delivery (and service) model refers to the philosophy of breaking pieces of work into logical components and distributing these components geo-locationally, performing each where it creates the maximum value. The global delivery model is a great value multiplier, driven by some of the highest process and quality standards currently in the world. Thus, it provides assurance of the best product quality, which cuts down the costs of fixing defects and of maintenance. Advantages coming out of the continuous improvements keep improving all the key parameters. At the same time, it also gives e-business access to the best global talent, increasing the degree of innovation. The global delivery model gives an e-business engaged in offshore outsourcing extremely high value in the following areas (a toy allocation sketch follows the list):
• Accountability: An e-business firm contracting offshore outsourcing under the global delivery model has a single point of accountability for all project engagements, and all contracted projects are managed through local, onsite personnel, typically through a project management office in a near-site office.
• Flexibility: The outsourcing contractor renders service to the e-business firm by allocating work across delivery options to support changing e-business requirements.
• Responsiveness: The outsourcing contractor supports various service levels that can be quickly tailored to any requirement; resources are available when and where needed.
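The allocation idea at the heart of the global delivery model can be illustrated with a toy routine that assigns each work component to the delivery location offering the best effective value. All location names, cost rates, coordination scores and components below are invented for this sketch, and the coordination penalty is an arbitrary modeling choice, not part of the model as the chapter describes it.

```python
# Toy illustration of the global delivery model: assign each work component
# to the location where it creates the most value. All figures are invented.

LOCATIONS = {
    "onsite":   {"hourly_cost": 120, "coordination": 1.0},  # client premises
    "nearsite": {"hourly_cost": 70,  "coordination": 0.8},
    "offshore": {"hourly_cost": 25,  "coordination": 0.5},
}

COMPONENTS = [
    # (name, estimated hours, how much the task needs close coordination 0..1)
    ("requirements workshops", 80,  0.9),
    ("architecture review",    40,  0.7),
    ("coding and unit tests",  600, 0.2),
    ("regression testing",     300, 0.1),
]

def best_location(hours: float, coordination_need: float) -> str:
    """Pick the location minimizing cost after penalizing coordination gaps."""
    def effective_cost(loc: dict) -> float:
        gap = max(0.0, coordination_need - loc["coordination"])
        return hours * loc["hourly_cost"] * (1.0 + 10.0 * gap)  # 10x: arbitrary
    return min(LOCATIONS, key=lambda name: effective_cost(LOCATIONS[name]))

for name, hours, need in COMPONENTS:
    print(f"{name} -> {best_location(hours, need)}")
# For these invented numbers: requirements workshops stay onsite, the
# architecture review goes nearsite, and the bulk coding/testing go offshore.
```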
Dealing with the information and telecommunications technology in use, offshore outsourcing companies generally do not allow their employees to have paper and pens at their desks. Employees work at terminals that are unable to download information or new software, and any intrusion or infraction is treated as a criminal offense. In that way, these employees are functionally the same as domestic employees of an e-business company, because they sign on to the same server and do all their work in that virtual space. No company data is transmitted or downloaded, minimizing the risk of a security breach. The security an e-business has in place for its own firm is the same as for a staff member offshore, and offshore staff are bound by the original e-business company's rules and regulations. One of the most critical elements of offshore outsourcing is the vendor evaluation and selection process (Rice, 2002). Loss of control (over quality, timelines, etc.) is inherent to offshore outsourcing, and there are additional concerns about compatibility, business culture and ethics. Clarity is necessary between the outsourcer and the offshore vendor on the following issues (a small checklist sketch follows the list):
• Ownership of information
• Security
• Business continuity agreement
• Reporting
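One way to operationalize these four points is as a simple pre-contract gate. The sketch below is illustrative only; the field names mirror the list above and are not drawn from any real contract template.

```python
# Illustrative pre-contract checklist for the four clarity issues above.
from dataclasses import dataclass, fields

@dataclass
class VendorClarityChecklist:
    ownership_of_information: bool = False
    security: bool = False
    business_continuity_agreement: bool = False
    reporting: bool = False

    def unresolved(self) -> list[str]:
        """List the issues not yet agreed between outsourcer and vendor."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = VendorClarityChecklist(ownership_of_information=True, security=True)
if checklist.unresolved():
    print("Do not sign yet; unresolved:", ", ".join(checklist.unresolved()))
```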
Outsourcing contracts are designed to enable customers to receive services from independent contractors. Independent contractors are governed by a variety of laws and legal principles, including taxation, employment, and contract law. However, one of the darker areas is the law of conflicts of interest, fiduciary duty and ethics. Both customers and service providers should seek professional advice on such issues during the design, negotiation, delivery and termination of any outsourcing service contract. Conflicts of interest are on the scene, too. A conflict of interest arises when a person owes loyalty to two persons who have conflicting legal or economic interests. Parties entering into services agreements should define the limits of any duties so that unwanted conflicts can be avoided. This extends to the concept of fiduciary duty. A fiduciary duty is a kind of loyalty: a fiduciary must put its own personal interests behind, or secondary to, the interests and welfare of its beneficiary. The concept of fiduciary duty arose out of the concept of a trust under English common law, and under civil law in continental Europe and other countries. When we consider these notions, it is obvious that we are considering business ethics. Ethics principally relates to the morality of what well-behaved people do, and sometimes ethical rules are binding, as on regulated professionals such as lawyers and accountants. Hence, ethical rules are generally not incorporated into contracts, although e-business outsourcing and offshoring urgently need them.
Ethics Issues in E-Business Offshore Outsourcing

Background on Ethics

The field of ethics, also called moral philosophy, involves systematizing, defending, and recommending concepts of right and wrong behavior. Philosophers today usually divide ethical theories into three general subject areas: metaethics, normative ethics, and applied ethics. Metaethics investigates where our ethical principles come from and what they mean. Normative ethics takes on a more practical task, which is to arrive at moral standards that regulate right and wrong conduct; it is a search for an ideal litmus test of proper behavior. Applied ethics involves examining specific controversial issues, such as abortion, infanticide, animal rights, environmental concerns, homosexuality, capital punishment, or nuclear war. The lines of distinction between metaethics, normative ethics, and applied ethics are often
blurry. In recent years, applied ethical issues have been subdivided into convenient groups such as medical ethics, business ethics, environmental ethics, and sexual ethics. Generally speaking, two features are necessary for an issue to be considered an applied ethical issue. First, the issue needs to be controversial in the sense that there are significant groups of people both for and against the issue at hand. The second requirement for an issue to be an applied ethical issue is that it must be a distinctly moral issue. On any given day, the media presents us with an array of sensitive issues such as affirmative action policies, gays in the military, involuntary commitment of the mentally impaired, capitalistic versus socialistic business practices, public versus private health care systems, or energy conservation. Although all of these issues are controversial and have an important impact on society, they are not all moral issues. Some are only issues of social policy. In theory, resolving particular applied ethical issues should be easy. With the issue of abortion, for example, we would simply determine its morality by consulting our normative principle of choice, such as act-utilitarianism. If a given abortion produces greater benefit than disbenefit, then, according to act-utilitarianism, it would be morally acceptable to have the abortion. Unfortunately, there are perhaps hundreds of rival normative principles from which to choose, many of which yield opposite conclusions. Thus, the stalemate in normative ethics between conflicting theories prevents us from using a single decisive procedure for determining the morality of a specific issue. The usual solution today to this stalemate is to consult several representative normative principles on a given issue and see where the weight of the evidence lies.
Normative Principles in Applied Ethics

Arriving at a short list of representative normative principles is itself a challenging task. The principles selected must not be too narrowly focused,
such as a version of act-egoism that might focus only on an action’s short-term benefit. People must also see the principles as having merit on both sides of an applied ethical issue. The following principles are the ones most commonly appealed to in applied ethical discussions:
• Personal benefit: acknowledge the extent to which an action produces beneficial consequences for the individual in question
• Social benefit: acknowledge the extent to which an action produces beneficial consequences for society
• Principle of benevolence: help those in need
• Principle of paternalism: assist others in pursuing their best interests when they cannot do so themselves
• Principle of harm: do not harm others
• Principle of honesty: do not deceive others
• Principle of lawfulness: do not violate the law
• Principle of autonomy: acknowledge a person's freedom over his/her actions or physical body
• Principle of justice: acknowledge a person's right to due process, fair compensation for harm done, and fair distribution of benefits
• Principle of rights: acknowledge a person's rights to life, information, privacy, free expression, and safety
The above principles represent a spectrum of traditional normative principles and are derived from both consequentialist and duty-based approaches. The first two principles, personal benefit and social benefit, are consequentialist since they appeal to the consequences of an action as it affects the individual or society. The remaining principles are duty-based ones. The principles of benevolence, paternalism, harm, honesty, and lawfulness are based on duties we have toward
others. The principles of autonomy, justice, and the various rights are based on moral rights.
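The "weight of the evidence" procedure described earlier, consulting several representative principles rather than relying on a single decisive test, can be made concrete with a toy scoring routine. The weights and the sample assessments below are invented for illustration; nothing in the chapter prescribes numeric weights or this particular aggregation.

```python
# Toy "weight of the evidence" aggregation across normative principles.
# Scores: +1 the action satisfies the principle, -1 it violates it, 0 not
# relevant. All weights and the sample case are invented for illustration.

PRINCIPLE_WEIGHTS = {
    "personal benefit": 1.0,
    "social benefit": 1.0,
    "harm": 2.0,        # duty-based principles weighted more heavily here
    "honesty": 2.0,
    "lawfulness": 2.0,
    "autonomy": 1.5,
    "justice": 1.5,
    "rights": 1.5,
}

def weigh_evidence(assessments: dict[str, int]) -> float:
    """Weighted sum of per-principle scores; the sign suggests the verdict."""
    return sum(PRINCIPLE_WEIGHTS[p] * score for p, score in assessments.items())

# Hypothetical case: an outsourcing deal that cuts costs but conceals a
# data-handling practice from customers.
case = {"personal benefit": 1, "social benefit": 0, "harm": -1,
        "honesty": -1, "lawfulness": 0, "autonomy": 0, "justice": 0,
        "rights": -1}

verdict = weigh_evidence(case)
print(f"weighted score: {verdict:+.1f} ->",
      "morally acceptable" if verdict > 0 else "morally problematic")
```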
Ethics and Culture

All cultures have a set of ethical values or rules concerning what is morally right and what is morally wrong (Anscombe, 1981). Through globalization, non-western cultures around the world are being exposed to the values of the West and, on a superficial level at least, appear to be adopting western culture. However, the adoption of the outward signs of western culture, such as business dress codes, does not necessarily mean that one culture has abandoned its own social, ethical and moral values in favor of those of the other. Indeed, the underlying values of non-western cultures appear to remain intact in the face of exposure to western culture. Within the e-business environment, while there is evidence that the processes of engineering and implementation of information technology systems are being successfully exported to non-western cultures as a consequence of globalization, the adoption of western social and ethical values by these other cultures is another matter. The ethical values of the world's cultures remain diverse. The field of computing is generating new and difficult ethical questions, and the variation in ethical and social norms across the globe merely adds to the complexity of finding answers to these questions. How can issues be answered if the ethical rules are not fixed? How, for example, can an ethical or moral question about the content of a Web site be considered when a hypertext link in that Web page may take the user not only to a different part of that site, but to a site in another part of the world where different ethical values may prevail? These notions bring cultural relativism into play. Cultural relativism recognizes that moral values vary from one society or culture to another, and that no culture's ethics are any better than any other's. This leads to the conclusion that the variations in values between cultures are all equally valid, and the variation between cultures can indeed be significant. Since the Second World War, the technological and economic growth of western nations has led to the almost universal exposure of other cultures around the globe to western culture. Indeed, the culture of the West is propounded by many in the West to be of universal validity, which can result in a form of ethical imperialism. Although western e-business organizations are beginning to understand the problems associated with efforts to universally apply western cultural values, this remains a problem for subsidiaries of western companies operating in the third world. Arguments by local, non-western cultures against this view of the universal validity of western values have invariably been dismissed in the West, particularly when issues such as royalties for western intellectual property rights and patents are at stake. A code of ethics developed by an organization will be a consensus of the moral and ethical values of the organization and the individual professionals within it (Weston, 1997). Can such a code be applied outside of the culture in which it was developed? Does there need to be some qualification of a code of ethics when it is applied to different cultures? Beyond individual countries' ethical values, there are some fundamental values that cross cultures, and some activities are wrong no matter where they take place. Therefore, it could be possible to define a set of ethical values that could be applied universally. As the Net develops, English will cease to be the dominant language, with Chinese, French, German and other languages generating webs within webs. Important for countries where English is the only spoken language is the fact that most of their citizens speak only a single language, while English is the most spoken second language in the world. In other words, as Web sources develop, non-American surfers, who
generally speak two or more languages, will have access to a larger web of services and contents. Enormous differences still remain from country to country. The U.S.-centric pattern wanes only with substantial and sustained infrastructure builds of the sort that have swept Europe, rolled into Asia, announced themselves in Latin America, and stalled in most of Africa.
Applied E-Business Ethics

The field of business ethics examines moral controversies relating to the social responsibilities of capitalist business practices, the moral status of corporate entities, deceptive advertising, insider trading, basic employee rights, job discrimination, affirmative action, drug testing, and whistle-blowing. Issues in environmental ethics often overlap with business and medical issues. These include the rights of animals, the morality of animal experimentation, the preservation of endangered species, pollution control, the management of environmental resources, whether ecosystems are entitled to direct moral consideration, and our obligation to future generations. The massive diffusion of information and telecommunications technology causes radical changes in public and private institutions in general, as well as in national and international information and communication policies in particular. This may concern:
• The creation of specialized (regional) knowledge markets
• The development of electronic commerce
• The publication and diffusion of scientific knowledge through the Internet
• The creation of services for public access to the Internet
• The promotion of local cultures in the digital medium
• The participation of individuals and groups in the political (communal, regional, national, international) processes
In the scope of information and telecommunications technology, we are speaking about information ethics. Information ethics as a descriptive theory explores the power structures influencing attitudes towards information and traditions in different cultures and epochs. Information ethics as an emancipatory theory develops criticisms of moral attitudes and traditions in the information field at an individual and collective level; it includes normative aspects. A basis for ethical thinking in the information field, which is the basis for e-business ethics as well, is the following set of principles from the Universal Declaration of Human Rights: respect for the dignity of human beings, confidentiality, equality of opportunity, privacy, the right to freedom of opinion and expression, the right to participate in the cultural life of the community, and the right to the protection of the moral and material interests concerning any scientific, literary or artistic production. The literature dealing specifically with the ways in which the Internet affects ethics and moral decision-making in e-business is scarce. Specifically in terms of credibility and verification of information, two of the core issues relating to new media ethics, there seem to be few or no published studies at hand. One possible definition of ethics in the e-business environment is: a set of principles of right conduct, the rules or standards governing the conduct of a person or of the members of a profession. It also includes the statement that e-business ethics in an organization relates to a corporate culture of values. Speed, freedom and individual power are thoroughly modern concepts that define the Information Age in which e-business exists. They triangulate to create a new kind of human being, particularly adapted to life in the networked society (Borgman, 2000). The result is that e-culture turns the once well-defined areas of social ethics into a huge gray area of individual and situational considerations that require research and reflection to navigate, but provides no time to do so.
Among the many issues related to applied e-business ethics is social responsibility. Social responsibility in the e-business environment is an organization's obligation to maximize its positive impact on stakeholders and to minimize its negative impact (Deborah, 1991). It includes legal, ethical, economic, and philanthropic dimensions:
• Legal dimension: obeying governmental laws and regulations. Civil law defines the rights and duties of individuals and organizations; criminal law prohibits specific actions and imposes fines and/or imprisonment as punishment for breaking the law.
• Ethical dimension: behaviors and activities that are expected or prohibited by organizational members, the community, and society, though not codified into law; standards, norms, or expectations that reflect the concerns of major stakeholders.
• Economic responsibilities: how resources for the production of goods and services are distributed within the social system.
• Philanthropic dimension: business' contributions to society.
In life, the people we trust most are those who provide close counsel to us: our spouses, families and friends. Likewise, secure and successful e-commerce business owners are most likely to extend trust to those that qualify, where a good relationship has been built up. This is true for most e-business on the Internet. Technically, this important trust relationship is built on the three ethical keys of e-business ethics: honesty, integrity and trustworthiness.

E-Business Outsourcing Ethics

Issues dealing with ethics in e-business outsourcing refer mostly to legislation, security, information, and the business itself. Legislation, codes and national standards relevant to the workplace include:

• Award and enterprise agreements
• National, state and regional legislative requirements
• Industry codes of practice
• Copyright laws
• Privacy legislation
• Intellectual property and confidentiality requirements
• Legal and regulatory policies affecting e-business

Security issues include:

• Security measures
• Privacy
• Confidentiality
• Information management
• Risk management
• Intellectual property
• Fraud prevention and detection
• Business ethics

Information and development support includes:

• Advice on information and communications technology issues and compatibility
• Protocols for electronic data interchange
• Protocols relating to legal or security issues
• Personal identification and passwords for online access, including electronic signatures
• Contact persons

Ethical issues include:

• Privacy legislation
• Confidentiality of records and information
• Intellectual property
• Fraud prevention and detection
• The Trade Practices Act
Changes in technology and business processes can outpace companies' ability to consider their ethical implications or to train employees to deal with them (Brenner, 1992). Almost everyone in e-business agrees that questionable ethical moves that compromise customer privacy for short-term marketing gain are bad for business in the long run. Online business is entering a more mature phase, and the issue of whom the customer trusts is becoming more of a competitive differentiator. Web sites are a reflection of businesses on the Internet. The three principal keys to doing e-business (honesty, integrity, and trustworthiness) carry over directly to the Web site framework and the Internet. This is especially true for the big online service providers. The consensus seems to be that offshore outsourcing operations are at higher risk of copyright or intellectual property theft, especially when those operations pertain to software development. The European Union governments and the U.S. have good policies on intellectual property theft and piracy compared to many other countries. Intellectual property protection laws are strictly implemented in first-world countries. Even Singapore, which accepts outsourcing jobs, has very firm intellectual property policies. This is part of what makes Singapore very attractive to outsourcing entrepreneurs, despite its high cost of labor and infrastructure. Still, it is not enough that the host country's government supports intellectual property efforts. The prime concern of entrepreneurs who seriously consider offshoring is cost efficiency, and many entrepreneurs in the software development field offshore their projects even with the knowledge that intellectual property protection laws are loose in the host country. The support of a host country's government is important in securing intellectual property rights in offshore operations. However, this is not to say that absolutely no software piracy occurs in first-world countries, where government control over intellectual property rights is known to be the strictest; intellectual property violations happen everywhere.
Development of E-Business Ethics as an Ongoing Process

The competitive pressures companies face to reduce costs and increase efficiency will not decrease anytime soon. The practice of offshoring, therefore, is a business reality, one that companies and their many stakeholders will face and need to manage far into the future. Employees, governments, communities, and others are best served not by opposing the offshoring trend, but by campaigning to encourage companies and governments to address the negative impacts and ensure the greatest spread of benefits to those affected. With all the recent headlines about company misconduct and ethics violations has come a significant, and long overdue, increase in the consideration of ethics among businesses. Companies have quickly penned ethics codes, instituted ethics compliance monitoring programs, or had high-level corporate officers visibly tout their company's ethics focus in the hope of regaining consumer confidence in a devastated economy. While businesses are fighting for survival in adverse conditions, they need to be looking to the future and building solid foundations upon which to base their future efforts. Even without investing vast financial resources, any company can reap tremendous benefits from considering and initiating an ethics program. In addition to the widely recognized value of an improved company image and a smoother, more effective and happier work environment, an ethics program can contribute to a better bottom line, through stronger and more solid client relationships and decreased expenses in a variety of areas. Attention to business ethics is critical during times of fundamental change (Madsen & Shafritz, 1990). In times of fundamental change, values
that were previously taken for granted are strongly questioned. Many of these values are no longer followed. Consequently, there is no clear moral compass to guide leaders through complex dilemmas about what is right or wrong. Attention to ethics in the workplace sensitizes leaders and staff to how they should act. Perhaps most important, attention to ethics in the workplace helps ensure that when leaders and managers are struggling in times of crisis and confusion, they retain a strong moral compass. Thus, attention to e-business ethics is the next step in developing the new e-business environment. Changes in technology and business processes can outpace companies' ability to consider their ethical implications or to train employees to deal with them. Few companies have formal programs of ethics training (Toffler, 1991); it has traditionally been seen as an add-on. Thus, it is necessary to bring ethics into the e-business context, since e-business raises ethical issues that may have existed before, but not in such clear form. In large part to address potential information technology related liabilities, both inside and outside a company, a growing number of businesses have high-level ethics executives or chief privacy officers to enforce company standards. The goal is to raise awareness, and to be proactive and preventive rather than punitive. As e-business moves more and more business processes and transactions online, information technology, and the people who manage it, are at the forefront of decisions with ethical implications. The debate over ethical standards in business is not new. What is new, or at least more apparent than ever, is the central role of information technology in some of the most important business-ethics issues of the day: privacy, the ownership of personal data, and the obligations created by extended e-business partnerships. How have these controversies affected information technology managers and others involved with technology? What ethical issues, if any, are business executives grappling with in connection with cutting-edge technology? And where do information technology professionals go for guidance in ethically ambiguous situations? Far from self-evident, the answers may be critical to the development of the trust and integrity needed to succeed at global e-business. Trust between workers and employers is another key issue putting information technology managers in the middle of ethical decisions. Most companies forbid employees from using company computers to access Web sites with material that is pornographic, violent, or hate-related. Most information technology managers and executives agree there needs to be more training in ethics, especially now that information technology has taken a central role in doing business. Indeed, thinking of business and ethics, or information technology and ethics, as opposing forces may be a false dichotomy.
Standardization of E-Business Ethics at the Company Level: Code of Ethics

An e-business company's ethics code ought to address both the general values for which the company stands and the particular principles specific to the daily operations of that enterprise. Thus some codes may focus on full disclosure of abilities, time estimates, and costs, while others might address safety and/or full acceptance of responsibility for the quality of a product. The key is to generate a code that is tailored to the activities and goals of a particular organization, while simultaneously upholding universal ethical principles. A code of ethics, and thus compliance, must be universal: what is appropriate for the organization as a whole applies to all individuals. An organization's code of ethics must not be waived for selected executives or board members. Proper use of company and customer property, electronic communication systems, information resources, material, facilities, and equipment is
the responsibility of employees. They should use and maintain these assets with the utmost care and respect, guarding against waste and abuse, and never borrow or remove them from company property without management's permission. While these assets are intended to be used for the conduct of business, it is recognized that occasional personal use by employees may occur without adversely affecting the interests of the company. Personal use of company assets must always be in accordance with corporate and company policy. Companies develop codes of ethics and ethics programs in order to allow employees and stakeholders to understand the values of the business, to comply with policies and codes of conduct, and to create the ethical climate of the business (Dean, 1992). Among the unethical behaviors reported by employees in many firms are sexual harassment, lying on reports or falsifying records, conflicts of interest, theft, lying to supervisors, discrimination, drug or alcohol abuse, improper accounting procedures, and violations of environmental laws. A code of ethics is defined as a formal statement of what an organization expects in the way of ethical behavior (what behaviors are acceptable or unacceptable), and it reflects senior management's organizational values, rules, and policies. In formulating and implementing a code of ethics in an organization, several steps should be considered:
• The code of ethics is distributed internally and externally.
• Employees are assisted in understanding the entire code of ethics.
• Management's role is specified in detail.
• Employees are instructed about their responsibility for understanding and accepting the code of ethics.
Ensuring the effectiveness and proper utilization of the code of ethics is specialized, professional work that should be supported by an ethics officer (Berenbeim, 1992). The ethics officer's primary job is to coordinate the ethics program with top management; to develop, revise, and disseminate the code of ethics; to develop effective ethics training tools; to establish audit and control systems; and to develop enforcement techniques that give the code of ethics in use some kind of legal standing. The process of developing and implementing effective e-business ethics falls under a special business function for business conduct. The ethics and business conduct staff manages and administers the code of ethics and the e-business conduct program, and should consist of full-time professionals who have responsibility for program development, including ethics training and revision of the code of ethics. They provide an objective resource available to assist employees in ethical decision making and in addressing allegations of unethical or illegal conduct. The code of ethics helps employees make ethically sound business decisions and provides an overview of the company's issue-resolution process. The code is often based on several key areas, including:
• Antitrust and Competition Laws
• Company Assets
• Conflict of Interest and Corporate Opportunities
• Employment Practices and Expectations
• Environmental Responsibility
• Full and Fair Disclosure
• Gifts and Entertainment
• Government Affairs and Reports
• Inside Information and Insider Trading Laws
• Intellectual Property and Confidentiality
• International Business Conduct
• Privacy
• Safety and Health
• Suppliers, Contractors and Customers
Ethical and Legal Guidelines

The legal guidelines under which the firm operates are in the hands of the corporation's general counsel. When e-business activities are initially undertaken, the general counsel should be consulted for their perspective on proposed methods of collection and sources. Final decisions about the legality of activities are the exclusive purview of the counsel. Ethical guidelines, however, are the realm of various subjects that share duties, obligations and liability, and geographical and cultural levels of acceptance for particular methods vary widely. Ethical and moral hazard arises in outsourcing for two principal reasons. First, businesses do not guard themselves prior to contract against their dependency on the supplier. Second, they fail to appreciate the power which transfers to the outsource supplier in respect of their own business activities. They can manage and control both of these, to some extent, if they appreciate how and why power transfers after the contract has been signed. In the case of very substantial outsourcing contracts placed by major institutions, there is an increasing dominance (some refer to it in non-legal terms as a monopoly) by a small number of very large outsourcing organizations controlling a significant share of the outsourcing revenues in that sector. That is a power and dependency that can only be controlled, in the longer term, by legal intervention and, in sectors such as financial services, regulatory intervention. How does dependency on outsourcers arise? There is an increasing trend for organizations to outsource their non-core activities but maintain the conduct of their core activities. Outsourcing organizations, which have matured into strategic partners, are engaged to conduct the non-core activities. The intentions are broadly to reduce the cost base for these services; to maintain and, where possible, improve the delivery of these services; and to enable the resource savings to be deployed in core activities for the benefit of the company and the shareholders. Outsourcing transactions have to be analyzed for the changes in power that they bring over the provision of service and the impact that has on the core business. This brings out the nature of the dependency and the power it gives to the supplier. Trust and confidence may be mitigating factors, but they are only tried and tested through the practical operation of an outsourcing agreement incorporating basic ethics standards. Besides continually striving to increase respect for and recognition of basic ethical issues, most codes of ethics involve several key subjects:
• To pursue one's duties with willingness and patience while maintaining the highest degree of professionalism and avoiding all unethical practices.
• To faithfully adhere to and abide by one's company's policies, objectives and guidelines.
• To comply with all applicable laws.
• To accurately disclose all relevant information, including one's identity and organization.
• To fully respect all requests for confidentiality of information.
• To promote and encourage full compliance with these ethical standards within one's company, with third party contractors, and within the entire profession.
Many organizations have now presented codes and instruments for measuring corporate social and environmental performance. These codes and instruments vary widely in their goals, authors, countries of origin, and effectiveness. Each contributes something to the effort to monitor corporate performance and to inform corporate stakeholders of a firm's successes and failures. Given this variety, there is an increasing need for cooperation and focus among the various global organizations
that seek to bring transparency, fairness and trust to global business operations.
Compliance with Code of Ethics

In the context of corporate governance, compliance means complying with the law. Ethics is the intent to observe the spirit of the law; it is the expressed intent to do what is right. In the wake of recent corporate scandals, a program that strongly emphasizes both ethics and compliance is good business. In fact, the business case for such a program is compelling. Within the U.S. e-business environment, the Sarbanes-Oxley Act of 2002, along with related mandates by the Securities and Exchange Commission and new listing rules instituted by the major stock exchanges, raises the ante for ethical behavior and effective corporate compliance programs. Public companies and their senior executives and board members may be held accountable not only for the financial reporting provisions of the new legislation, but also for the aspects pertaining to ethics and corporate compliance. Conversely, companies and leaders that comply with both the letter and the spirit of the law can achieve substantial benefits.

An interesting landscape is developing in ethics and corporate compliance. Public and private policies are being enacted that will force companies and their executives to behave better. However, is compliance with these policies and related legislation enough? In current e-business outsourcing and offshoring, corporate leaders should extend their efforts and their ethics beyond the provisions of the law. In reality, companies that follow both the letter and the spirit of the law by taking a values-based approach to ethics and compliance will have a distinct advantage in the marketplace. Such an approach requires senior executives to understand clearly the e-business culture and compliance controls that exist at all levels of their organizations. Companies must position ethics and compliance programs as a
responsibility of each employee and a respected part of the company culture, not just an obligation. Companies' senior executives and board members must adhere to the code of ethics and compliance policies in the same way that all other employees must. Benefits of this approach include improvements to a company's market performance, brand equity, and shareholder value, which are essential for success in the global market. Controls are used to safeguard corporate assets and resources, protect the reliability of organizational information and ensure compliance with regulations, laws and contracts. A control helps in:
• Limiting employee or management opportunism
• Ensuring that board members have access to timely and quality information
• Minimizing negative situations
• Anticipating and remedying organizational uncertainties that need to be hedged
Code of Ethics Compliance Audit

A code of ethics compliance audit is a systematic evaluation of an organization's ethics program and/or performance to determine its effectiveness, focusing on the key factors that influence how ethical decisions are made. A critical component of an effective ethics and compliance program is the ability to monitor and audit compliance. As e-business companies cross geographical and industry boundaries, it is becoming harder to perform this role in the traditional manner. Consequently, e-business companies are increasingly seeking technology solutions to help them identify potential unethical behaviors before the cost becomes too great. There are software tools, deployed through worldwide networks, that use search-and-retrieval technology coupled with powerful data and network analysis capabilities. These tools identify and analyze potential
indicators of misappropriation and financial statement fraud, as well as preserving the information as evidence for use in court. Employing these sophisticated technological tools, an e-business company is capable of proactively detecting unethical behavior, which helps it maintain compliance with its own policies and procedures.

Many e-business firms argue that they do not have the budget or staff to develop, implement and enforce full-scale ethics and compliance policies. This may be true, but it must not be an excuse, because ethics compliance is the forerunner of e-business outsourcing activities. To develop a solid compliance program, an e-business firm should pursue several activities, as follows (a minimal illustrative sketch of the kind of automated check mentioned above appears after this list):
• Develop open lines of communication: For a compliance program to be effective, the most important element is that employees feel comfortable asking questions and reporting possible violations.
• Identify the risks: Management must first search out the risks that the company faces, so the right factors can be monitored, audited and evaluated. A wide range of potential risks should be considered, including environmental risks, health and safety, and money laundering, especially when foreign entities are involved.
• Establish standards and procedures: Some fundamental standards and procedures should be included in any organization's compliance program.
• Designate a compliance official or committee: Every compliance program must be overseen by an individual or committee that has ultimate accountability.
• Conduct appropriate training and education: Every employee in the organization must receive both initial and periodic training to ensure that employees fully understand the company's compliance policies.
• Respond to detected offenses: When employees violate the company's policies, action must be swift and decisive. Corrective action must be taken, and any corrective action must be documented and communicated to all employees.
• Enforce disciplinary standards through well-publicized guidelines: Provide a detailed explanation of the consequences for breaches in conduct. Ensure that compliance officials, managers and employees are comfortable discussing ethical matters openly.
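The kind of automated compliance check referred to above can be illustrated with a minimal rule-based sketch. The example below is purely hypothetical: the Transaction record, the APPROVAL_LIMIT threshold, and the flag_indicators routine are illustrative assumptions, not the interface of any actual audit product, which in practice would combine search-and-retrieval with far richer data and network analysis.

# Minimal, hypothetical sketch of rule-based compliance flagging.
# Field names and the threshold are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Transaction:
    vendor: str
    amount: float
    submitter: str
    approver: str

APPROVAL_LIMIT = 10_000.0  # assumed single-approval threshold

def flag_indicators(transactions):
    """Return (transaction, reason) pairs that merit human review."""
    flags = []
    for t in transactions:
        if t.amount > APPROVAL_LIMIT:
            flags.append((t, "amount exceeds single-approval limit"))
        if t.submitter == t.approver:
            flags.append((t, "submitter approved own transaction"))
    return flags

if __name__ == "__main__":
    sample = [
        Transaction("Acme Ltd", 12500.0, "a.khan", "j.doe"),
        Transaction("Globex", 900.0, "a.khan", "a.khan"),
    ]
    for t, reason in flag_indicators(sample):
        print("REVIEW:", t.vendor, "-", reason)

In practice, such flags would be routed to the compliance official or committee designated above, and the underlying records preserved as evidence, as noted earlier.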
Many e-business organizations can also use a five-phase approach to assist them in creating or enhancing an ethics and compliance program.
• Phase One: Risk and cultural assessment. Through employee surveys, interviews, and document reviews, the culture of ethics and compliance at all levels of the organization is validated and a detailed work plan is produced.
• Phase Two: Program design and update. Creation of guideline documents that outline the reporting structures, communications methods, and other key components of the code of ethics and compliance program. This encompasses all aspects of the program, from grass-roots policies to structuring the board committees that oversee the program.
• Phase Three: Policies and procedures development. Development of the detailed policies of the program, including issues of financial reporting, antitrust, conflicts of interest, gifts and entertainment, records accuracy and retention, employment, the environment, global business, fraud, political activities, securities, and sexual harassment, among others.
• Phase Four: Communication, training, and implementation. This includes the institutionalization of the best policies and procedures so that they become part of the everyday work of an e-business organization.
• Phase Five: Ongoing self-assessment, monitoring, and reporting. The true test of an ethics and compliance program comes over time, and techniques such as employee surveys, internal controls, and monitoring and auditing programs are used to help achieve sustained success.
Business ethics procedures must be placed at the core of every business decision, based on standards of business conduct (Carroll, 1990). Also included are continuous education in ethical decision making and permanent monitoring of adherence to laws, company policies and guidelines. The current trend of increasing focus on ethics is a much-needed change, with recent events rightly drawing attention to its importance. If used well, ethics enhancement may not be just another expense for businesses that are already struggling, but the solution that reverses recent economic difficulties and builds a better way of doing business, and of living in general.
Globalization of Ethics for E-Business Offshore Outsourcing

Since e-business is built on information and communications technology, the global impact of technology is undoubtedly vital to all sectors. However, some authors state that, far from creating paradise on Earth, technology has instead produced an unsustainable contest for resources. For example, Mander (1992) surveys the major technologies shaping the new world order and new forms of globalization (computers, telecommunications, space exploration, genetic engineering, robotics, and the corporation itself) and warns that they are merging into a global megatechnology, with dire environmental and political results.

One generally expects it to be the responsibility of the host country to set fair standards for wages, working conditions and pollution. However, this does not work well in poor countries. There is an overabundance of potential workers and work sites
in poorer nations; therefore these countries do not have the negotiating power to insist on living wages, humane working conditions and reduced pollution (Sethi, 2003). The contemporary global digital economy is guided by market competition that brings efficiencies, so as to provide better quality goods and services at a lower cost (Negroponte, 1995). Globalization has also brought jobs, investment and new technologies to many poor nations. However, reports have documented the disparities in income and wealth that develop, as those people and nations who possess resources are able to obtain a greater share of the benefits, while those who have few resources or skills fall further behind (World Bank, 2004). Hence, there is a need for effective, internationally acceptable guidelines for the operation of the international digital economy that take into account current inequities, and thus the interests of all people.

Defining a code of ethics that would be acceptable to e-business organizations in all cultures has been said to be an impossible task. However, given that there are some ethical values that can and do cross cultural boundaries, it may be possible to select a set of ethical values and to construct a set of guiding principles that would be acceptable to all societies. Each cultural group would then, in turn, build these principles into a code of ethics appropriate for its cultural values. The set of guiding principles may be universally acceptable, but the details of their application will likely not be. An allowance for variation must be a component of any code of ethics that aims to be applied and accepted across multiple cultures. Multinational, multicultural e-business organizations may well find it necessary to translate corporate directives and policies for the various cultural groups across their organization, based on the ethical values of those cultures, just as they now must translate material into the various languages used by their workforce.

Current efforts on the international scene consider programs that encourage a culture
of mutual respect in which everyone understands and values the similarities and differences among employees, customers, communities and other stakeholders. An e-business company with offshore outsourcing must provide equal access to the best jobs in the world for people who are willing to compete, and equal employment opportunity to all employees regardless of age, race, color, national origin, sexual orientation, gender, disability or religion. Beyond these global ethics issues, there are nine key ethics issues on the global scene that should form the core of any particular company's code of ethics and of any international agreement or standard:
• Diversity, Equal Opportunity and Respect in the Workplace: It is policy to provide all employees with an environment of mutual respect that is free from any form of harassment and discrimination. Harassment and discrimination of any form are not acceptable and will not be tolerated. In some countries, harassment is against the law; in all countries, it is wrong under global ethics standards.
• Environment, Health and Safety: Protecting people and the environment is a key issue in the global standardization of ethics. Health and safety rules and procedures are designed to provide a safe and healthy work environment and to meet applicable health and safety laws.
• Financial Integrity: All of an e-business company's accounting records, and the reports produced from those records, must be kept and presented according to the laws of each applicable jurisdiction. Moreover, the records must accurately and fairly reflect the e-business company's assets, liabilities, revenues and expenses.
• Accurate Company Records: Laws and regulations require e-business company records to accurately reflect the events they represent. Falsifying business records is a serious offense that may result in criminal prosecution, civil action and/or disciplinary action up to and including termination of employment and closure of the whole offshore business unit.
• Conflicts of Interest: This means that employees, officers, and directors should avoid any investment, interest, association or activity that may cause others to doubt their or the e-business company's fairness or integrity, or that may interfere with their ability to perform job duties objectively and effectively.
• Obligations to Customers, Competitors and Regulators: This covers the commitment to free, fair and open business competition, and an equal commitment to competing ethically and in compliance with laws that foster competition in the marketplace. It also includes the freedom to gather competitive information.
• Computer Systems and Telecommunication Security Policy: This is a core issue for an e-business company. The application/data owner and information systems professionals share responsibility for protecting the e-business system from misuse; information is an e-business company's asset, and the company's information technology unit is responsible for all of its assets, including data and information.
• Safeguarding Important Information: This relies largely on the protection of confidential information such as trade secrets, proprietary know-how, personnel records, business plans and proposals, capacity and production information, marketing or sales forecasts, client and customer lists, pricing lists or strategies, construction plans, supplier data and so forth.
• Corporate Social Responsibility: The continuing development of international codes of conduct and principles governing corporate social responsibility are positive indicators for redefining business accountability for the 21st century.

The basic principles changing today's e-business offshore outsourcing scene concern two key issues: the responsibilities of businesses, and the economic and social impact of business. The responsibilities of businesses means that the value of a business to society is the wealth and employment it creates and the marketable products and services it provides to consumers at a reasonable price commensurate with quality. To create such value, a business must maintain its own economic health and viability, but survival is not a sufficient goal. Businesses have a role to play in improving the lives of all their customers, employees, and shareholders by sharing with them the wealth they have created. Suppliers and competitors, as well, should expect businesses to honor their obligations in a spirit of honesty and fairness. As responsible citizens of the local, national, regional and global communities in which they operate, businesses share a part in shaping the future of those communities.

The economic and social impact of business concerns the notion that e-businesses established in foreign countries to develop, produce or sell should also contribute to the social advancement of those countries by creating productive employment and helping to raise the purchasing power of their citizens. Businesses also should contribute to human rights, education, welfare, and the vitalization of the countries in which they operate. Businesses should contribute to economic and social development not only in the countries in which they operate, but also in the world community at large, through effective and prudent use of resources, free and fair competition, and emphasis upon innovation in technology, production methods, marketing and communications.

Since e-business outsourcing and offshoring is global in scale, many efforts are beginning to promote a universal, international
system of values, predominantly by shaping the form of an international code of ethics. Current work on the international scene promotes several global ethical issues related to sexual and racial discrimination, human rights, price discrimination, harmful products, pollution, and telecommunications issues.

The American Institute of Certified Public Accountants (AICPA) issued a new set of ethics requirements for members who outsource. The new requirements state that AICPA members must inform their clients that the firm will use a third-party service provider when providing professional services to the client. The new rules also clarify that AICPA members are responsible for all work performed by the service provider. Furthermore, AICPA members using third-party service providers are required under the new rules to enter into a contractual agreement with the third-party service provider to maintain the confidentiality of the client's information, and to be reasonably assured that the third-party service provider has appropriate procedures in place to prevent the unauthorized release of confidential client information. The new rules are effective for all professional services performed on or after July 1, 2005.
Caux Round Table

The Caux Round Table is an example of efforts to determine a universal conceptualization of ethical conduct to guide doing business internationally, and it is of core interest for e-business outsourcing and offshoring. The Caux Round Table is an international network of principled business leaders working to promote a moral capitalism, aiming to set Principles for Business through which principled capitalism can flourish, and sustainable and socially responsible prosperity can become the foundation for a fair, free and transparent global society. At the company level, the Caux Round Table advocates implementation of the principles for
business (Caux, 2002) that apply fundamental ethical norms to business decision making. The Principles have been translated into more than 15 languages and have been used as benchmarks for firms' codes throughout the world. The Caux principles are aspirational; they are proposed as a model, starting point and benchmark when executives write or attempt to improve their own firm's code of ethics. The basis for the Principles is the idea that the mobility of employment, capital, products and technology is making business increasingly global in its transactions and its effects. The principles have credibility because they were written by, and have been actively supported by, senior business executives from around the world.

A more proactive recent focus of the Caux Round Table is to contribute to the alleviation of world poverty and to make it possible for poor nations to share in global prosperity. To promote better outcomes for globalization, the Caux Round Table is working to raise the level of awareness of senior business leaders, thought leaders and elite opinion around the world about new opportunities to attack global poverty. These include legal and regulatory changes in developing countries that will improve the environment for productive investment of foreign and domestic equity capital. The Caux Round Table is working in alliance with global business leaders, international institutions and policy makers to improve investment environments in selected developing countries, also by suggesting certain principles for governments and the adoption of core standards for the transparent management of national financial institutions.
United Nations Global Compact with Business

The Global Compact seeks to advance responsible corporate citizenship so that business can be part of the solution to the challenges of globalization. To achieve these objectives, the Global Compact offers facilitation and engagement through several
mechanisms: policy dialogues, learning, local structures, and projects. This in turn helps organizations to redefine their strategies and courses of action so that all people can share the benefits of globalization.

The Global Compact is a network. At its core are the Global Compact Office and four UN agencies: the Office of the High Commissioner for Human Rights, the United Nations Environment Program, the International Labor Organization and the United Nations Development Program. The Global Compact involves all the relevant social actors: governments, who defined the principles on which the initiative is based; companies, whose actions it seeks to influence; labor, in whose hands the concrete process of global production takes place; civil society organizations, representing the wider community of stakeholders; and the United Nations, the world's only truly global political forum, as an authoritative convener and facilitator.

The United Nations Global Compact with business was born when United Nations Secretary General Kofi Annan was invited to give a major address to world business and political leaders at the World Economic Forum in 1999. He warned global business leaders that we have underestimated the fragility of the global economy. People around the world fear and distrust the resulting loss of jobs, the trashing of the environment, and the huge rewards that go to a few while the vast majority remain very poor. Annan warned that this could lead to widespread unrest and even civil wars and terrorism. To counter potential civil strife, Annan proposed a global compact for business firms. For firms that sign, the United Nations would, on the one hand, support the open global market; signing business firms, on the other hand, would pledge to support human rights, worker standards and sustainable environmental practices.

The Global Compact is based on the recognition that development and poverty reduction depend on prosperity, which can only come from efficient and profitable business. International trade and investment create new employment, raise skill levels and increase local economic activity. At
the same time, companies have a duty to manage all aspects of their business in a responsible and sustainable way. These universal ideals are specified in ten precise principles under four headings: human rights, labor standards, environment and anti-corruption. The Global Compact's 10 principles (United Nations, 2003) in the areas of human rights, labor, the environment and anti-corruption enjoy universal consensus and are derived from:
• The Universal Declaration of Human Rights
• The International Labor Organization's Declaration on Fundamental Principles and Rights at Work
• The Rio Declaration on Environment and Development
• The United Nations Convention against Corruption
The Global Compact principles, by prime category:

Human Rights
• Principle 1: Businesses should support and respect the protection of internationally proclaimed human rights.
• Principle 2: Make sure that they are not complicit in human rights abuses.

Labor Standards
• Principle 3: Businesses should uphold the freedom of association and the effective recognition of the right to collective bargaining.
• Principle 4: The elimination of all forms of forced and compulsory labor.
• Principle 5: The effective abolition of child labor.
• Principle 6: The elimination of discrimination in respect of employment and occupation.
Environment
• Principle 7: Businesses should support a precautionary approach to environmental challenges.
• Principle 8: Undertake initiatives to promote greater environmental responsibility.
• Principle 9: Encourage the development and diffusion of environmentally friendly technologies.
Anti-Corruption
• Principle 10: Businesses should work against all forms of corruption, including extortion and bribery.

The Global Compact is a symbol of leadership in a complex world. It goes back to basics by focusing on a concise set of fundamental principles for living and working in a global society. Its 10 principles addressing human rights, labor standards and the environment are truly universal: precise enough to be relevant, yet general enough to avoid cultural conflict. The Global Compact is not a regulatory regime or a code of conduct, but a platform and forward-looking forum for the exchange of good practices in order to achieve actual progress in creating a more prosperous and sustainable world.
The Global Reporting Initiative

The Global Reporting Initiative provides a structure whereby a firm can publicly report on its business activities against three sets of criteria: economic, social and environmental. More than 500 organizations in 50 countries currently participate by producing an annual report on economic, social and environmental issues. It also operates as an environmental reporting mechanism for the United Nations Global Compact and for the Organization for Economic Cooperation and Development (OECD) Guidelines for Multinational Enterprises. The OECD, in cooperation with many intergovernmental and nongovernmental groups, has also developed "A Working Set of Core
Indicators for Measuring Global Development Progress". These indicators are straightforward and quite usable measures of economic, social, environmental and general development in the respective countries.
ISO Standard on Social Responsibility

In order to set standards that are more precise and more appropriate for all stakeholders around the globe, a new standard is needed. This is especially true when e-business offshore outsourcing is in place. Most countries have recognized the two key issues discussed above, the responsibilities of businesses and the economic and social impact of business, as the promoters of a new, globally acceptable norm of social responsibility in the era of the digital economy. In 2005, the national standards institutes that are members of the International Organization for Standardization (ISO) approved the development of a standard on social responsibility that will provide guidance to organizations on social responsibility (ISO, 2005). The goal is to develop guiding principles with global relevance that will be useful to organizations worldwide in establishing, implementing, maintaining and improving the way they address social responsibility. By reducing the environmental damage caused by their operations, and by improving the living conditions and health of their workers, organizations have the ability to improve the quality of life of the communities in which they operate.

The real challenge for ISO is to design a meaningful standard for organizations that will supplement existing tools and build a bridge between national legislation and international norms on the one hand and recognized voluntary initiatives on social responsibility on the other. ISO is referring to this as a standard on "social responsibility", a somewhat misleadingly narrow term given that the standard is expected to address a broad range of organizational activities including social, labor, and environmental impacts. As ISO develops this new standard over the next
three years, it will transform how concepts like "corporate social responsibility" and "the triple bottom line" are defined, measured and reported on. Of equal importance, ISO has established a ground-breaking new approach to stakeholder involvement in its own standards development that could transform the world of standards making and spill over into the broader sphere of global governance. The goal of the ISO social responsibility standard is to encourage organizations around the world to improve their performance on key indicators of sustainable development. This new standard dealing with global social and environmental sustainability has been designated ISO 26000, with publication planned for 2008; there is enough time for all interested stakeholders to participate in preparing the final document.

ISO is working to make the development of its ISO 26000 standard as representative and inclusive as possible. The three-year project includes six sectors: industry, government, labor unions, consumers, non-governmental organizations, and others (primarily academics and consultants). Each participating country is encouraged to involve a representative from each sector. The new ISO 26000 social responsibility standard provides an unprecedented opportunity for global discussion and widespread involvement to implement the goals of sustainable development at many levels of organizational activity throughout the world.

At present, no internationally recognized standard exists to manage organizational ethics, compliance and business conduct programs. The ISO 26000 social responsibility standard could be a crucial actor in the process of harmonization and ethical development of e-business offshore outsourcing. It will be mutually compatible with the management system standards ISO 9000 for quality and ISO 14000 for the environment. It will also be a practical way for organizations around the world to integrate e-business ethics into their operations.
Discussion

Outsourcing infrastructure is crucial to e-business companies' success: if customers cannot access the application, how good is it, especially when competitors are on the e-market? In today's dynamic e-business environment, where fast time to market is imperative, where information and telecommunications technology is costly and changing rapidly, and where skilled technical resources are scarce, e-businesses need reliable, high-end outsourcing infrastructure and resources to ensure that mission-critical applications will be up and running when customers want to do business.

A shift to outsourcing moves employment from the company to its outsourcing vendor, and may move employees as well. The outsourcing vendor is virtually stealing employment from people in the company. Is this ethical? It does not feel like a good thing to do, but our economy is not driven by acts of goodness and mercy. The question remains: is it ethical? Answering the question is driven by a code of ethics. The overall preference is for a simple code that can be boiled down to basic ethical injunctions. The benefit of a simple code of ethics is that it gives us a relatively clear measure against which to judge ethical questions. None of the injunctions can be absolute, but they do provide a useful starting point.

Codes of ethics are properly silent about competing; competition is the accepted driver of the global economy. It is ethically acceptable to compete. It may be personally distasteful to enter a competition, but there is nothing ethically wrong with such competition. Offshore outsourcing vendors are ethically free to compete for business. In many cases, they are effectively importing the labor of people who have no right to work in the country. Thanks to the wonders of telecommunications, a telephone can be answered anywhere in the world. What does that do to immigration policies? What are the policy implications for the development of skilled local workers? Technology may be pushing
interdependence further than we are prepared to accept. That is not an ethical question, but it is a vitally important question about local values and their appropriate role in the global economy.

Employees working for a company considering outsourcing also face ethical questions. Fundamental to any good code of ethics is a requirement not to put personal interests above those of one's employer. Employees are obligated to work for the best interests of their employer, even when that might not be in their personal best interest. It is not ethical to skew the outcome of outsourcing contract talks just because someone does not want to be personally part of the outsourcing. That may be true in general, but what happens if the benefit of outsourcing to the employer is very low, and the cost to employees is very high? Judgment is required about when an employee's interests are so much more important than an employer's interests that the first injunction can be suspended.

Many offshore workers are unprotected by their country's trade laws, and job conditions can be less than ideal. It is true that the law in their home countries poorly protects laborers, with some countries not even having an efficient social security, medical and unemployment compensation system. However, this does not change the fact that offshoring creates opportunities for hardworking, skilled laborers to rise above poverty and to seek a better lifestyle than the one they used to struggle with. Offshoring companies are in fact bound to adhere to international rules and regulations preventing them from mistreating their employees or making them work in sweatshop conditions. It is a persistent myth, for example, that an offshore software development team is underpaid, overworked and, worse, undereducated. In order to stay competitive, offshore software development companies make it a point to hire only credentialed, skilled programmers who can keep up with market demand, minimize risk, and ensure absolute professionalism on the job.

Offshoring e-business, or the shifting of jobs from developed to developing countries, represents
a litmus test for corporate social responsibility, too. Viewed through one lens, the practice is irresponsible, as it strips workers in the developed world, where the companies are typically based, of their livelihoods. Viewed through another lens, the practice is the paragon of responsibility when implemented in a fair manner, as it infuses income into emerging economies.

The issue of ethical consumerism has captivated thinking in the corporate social responsibility field for some time, and survey after survey demonstrates consumer concern for a range of environmental and social factors in purchasing choices. However, the reality of ethical consumerism is very different: truly ethical consumers are very rare. Of course, whether consumers buy for ethical reasons or not, companies that claim a reputation for corporate social responsibility must be prepared to defend the ethics of their actions. This means decent working conditions, environmental responsibility, sensitivity to community needs and all the other good practices that make up corporate social responsibility.

Establishing an ethics program is not an exact science. It involves the input, interaction, cooperation, decision making and ongoing commitment of many people. Proper planning is important, but the effectiveness of any company's or organization's approach will also depend on characteristics that are unique to its culture: the leadership style of the president or chief executive officer and the executive team; the company's or organization's relationship with its board of directors; and so on. The benefits of developing clear ethics and compliance policies are immeasurable. Managers can have peace of mind knowing that every employee has a detailed understanding of the impact of their actions on the business. Moreover, the business will establish itself as a moral environment concerned about its impact on society. The company's stakeholders, from employees to customers, will have a positive feeling about the firm and confidence in its operations.
Historically, companies have moved activities from one place to another for various reasons, and their mobility has only increased in recent decades. In developing countries, this can encourage mass migration from villages to urban areas, a practice development experts generally consider unsustainable. In an era of telecommunications, it is better to move jobs to people than to move people to jobs. Advances in technology and the global networking of economies support this trend. Whether jobs move or people move, communities suffer losses. E-business companies with a commitment to corporate social responsibility must work to reduce the pain and stress of disruption in home countries while increasing the socioeconomic benefits of these jobs in the receiving country. Both must be done responsibly and to the best of the ability of the company.

E-businesses should develop an outsourcing ethics program. Such a program presents the analysis of the nature and the social impact of information technology and the corresponding formulation and justification of policies for the ethical use of technology. Ethics covers both social and personal policies for the ethical use of technology. It is a dynamic and complex field of study, which takes into account the relationships between facts, conceptualization, policies and values concerning ever-changing information technology. These notions are considered in efforts to produce a globally acceptable ethics program that would articulate both individual company and global interests in an appropriate way. Recent work on the international scene is promising, and we can expect offshore outsourcing activities to develop in accordance with globally accepted rules and codes of ethics: it will be the new age of global e-business ethics.
References

Anscombe, E. (1981). Ethics, religion and politics. Oxford: Blackwell.
Berenbeim, R. E. (1992). The corporate ethics test. Business and Society Review, 31(1), 77-80.
Madsen, P., & Shafritz, J. M. (1990). Essentials of business ethics. New York: Penguin Books.
Borgman, Ch. L. (2000). From Gutenberg to the global information infrastructure: Access to information in the networked world. Cambridge: MIT Press.
Mander, J. (1992). In the absence of the sacred. San Francisco: Sierra Club Books.
Brenner, S. N. (1992). Ethics programs and their dimensions. Journal of Business Ethics, 11, 391-399.

Carroll, A. B. (1990). Principles of business ethics: Their role in decision making and in initial consensus. Management Decision, 28(8), 21-23.
Negroponte, N. (1995). Being digital. London: Hodder and Stoughton. Rice, D. (2002). Refining the Zippo test: New trends on personal jurisdiction for Internet activities. In The Computer & Internet Lawyer. Prentice Hall Law & Business.
Caux Round Table. (2002). Principles for business. Saint Paul, MN: Caux Round Table Secretariat.
Sethi, S. (2003). Setting global standards: Guidelines for creating codes of conduct in multinational corporations. New York: John Wiley & Sons.
Cornford, J., Gillespie, A., & Richardson, R. (1999). Regional development in the information society. Boulder, CO: Rowman and Littlefield.
Toffler, B. (1991). Doing ethics: An approach to business ethics consulting. Moral Education Forum, 16(4), 14-20.
Dean, P. J. (1992). Making codes of ethics real. Journal of Business Ethics, 11, 285-290.
United Nations. (2003). The Global Compact: Corporate citizenship in the world economy. New York: Global Compact Office.
Deborah, B. (1991). Asking for help: A guide to using socially responsible consultants. Business Ethics Magazine, 11, 24-29. ISO Working Group on Social Responsibility. (2005). ISO 26000 standard on social responsibility guidelines. ISO.
Weston, A. (1997). A practical companion to ethics. New York: Oxford University Press. World Bank. (2004). World development report 2004: Making services work for poor people. New York: Oxford University Press.
This work was previously published in Outsourcing and Offshoring in the 21st Century: A Socio-Economic Perspective, edited by H. Kehal, pp. 87-121, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Compilation of References
(n.a.) (2005). IT, multiculturalism and global democracy – ethical challenges. In Technology in a Multicultural and Global Society. (eds.) Thorseth, M. & Ess. Trondheim: Programme for Applied Ethics, NTNU.
"Spammer X" (2004). Inside the Spam Cartel. Rockland, Massachusetts: Syngress.
4teachers.org. (2004). Dictionary main page. Retrieved February 22, 2004, from http://www.4teachers.org
6, Perri. (1998). Private life and public policy. In Lasky, K & Fletcher, A (eds), The future of privacy: Public trust in the use of private information (Vol. 2). Demos Medical Publishing.
Abbing, H D C R (1995). Genetic information and third party interests. How to find the right balance? Law and the Human Genome Review, 2, 35-53.
ABC News. (2007). Child pornographer pleads guilty. Retrieved May 17, 2007, from www.abc.net.au/news/items/200705/1918736.htm?tasmania
Abdallah S. Daar, Halla Thorsteinsdottir, Douglas Martin, Alyna C. Smith, Shauna Nast, & Peter Singer. (2002). Top ten biotechnologies for improving health in developing countries. Nature Genetics, 32.
Ackoff, R. (1979). The future of Operations Research is past. Journal of the Operations Research Society, 30, 93-104.
ACLU (2003). RFID position statement of consumer privacy and civil liberties organizations. American Civil Liberties Union (ACLU), November 30.
ACLU (2003). Total information compliance: The TIA's burden under the Wyden amendment, a preemptive analysis of the government's proposed super surveillance program. American Civil Liberties Union (ACLU), May 19.
ACM/IEEE-CS Joint Task Force on Software Engineering Ethics and Professional Practices (1999). Software engineering code of ethics and professional practice. Retrieved April 15, 2007 from www.acm.org/service/se/code.htm
Acuerdo para el fomento de la autorregulación sobre contenidos televisivos e infancia. (2005). Gobierno de España y Televisiones españolas. Retrieved April 15, 2005, from http://www.cnice.mecd.es/tv_mav/n/f6_normativa.htm
Adam, A. (2001). Computer ethics in a different voice. Information and Organization, 11, 235-261.
Adam, A. (2002). Cyberstalking and Internet Pornography: Gender and the Gaze. Ethics and Information Technology, 4, 133-142.
Adams, B., Breazeal, C., Brooks, R.A. & Scassellati, B. (2000). Humanoid Robots: A New Kind of Tool. IEEE Intelligent Systems and Their Applications: Special Issue on Humanoid Robotics, 15(4), 25-31.
Adeya, C. N., & Cogburn, D. L. (2001). Globalisation and the information economy: Challenges and opportunities for Africa. In G. Nulerns, N. Hafkin, L. Van Audenhoven, & B. Cammaerts (Eds.), The digital divide in developing countries: Towards an information society in Africa (pp. 77-112). Brussels: Brussel University Press.
Advisory Committee on Health Research (2002). Genomics and world health (Report). Canada: World Health Organization.
Alien Technology (2007, January 10). RFID tags. Retrieved from http://www.alientechnology.com/products/rfid-tags/
Agamben, G. (2006). Qu’est-ce qu’un dispositif? Paris: Rivages Poche / Petite Bibliothèque.
Al-Jabri, I. & Abdul-Gader, A. (1997). Software copyright infringements: An exploratory study of the effects of individual and peer beliefs. Omega, 25, 335-344.
Agar, N. (1998). Liberal eugenics. Public Affairs Quarterly, 12(2), 137-155.
Ahrens, F. (2006). Government, Internet firms in talks over browsing data. Washington Post, June 3, p. D3.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211.
Akdeniz, A. (2002). UK Government and the Control of Internet Content. Computer Law and Security Report.
Alba, J., Lynch, J., Weitz, B., Janiszewski, C., Lutz, R., Sawyer, A. & Wood, S. (1997). Interactive home shopping: Consumer, retailer, and manufacturer incentives to participate in electronic marketplace. Journal of Marketing, 61, 38-53.
Albrecht, K. (2002). Supermarket cards: The tip of the retail surveillance iceberg. Denver University Law Review, 79(4), 534-554.
Albrecht, K. and McIntyre, L. (2004). RFID: The big brother bar code. ALEC Policy Forum, 6(3), 49-54.
Alder, G. (1998). Ethical issues in electronic performance monitoring: A consideration of deontological and teleological perspectives. Journal of Business Ethics, 17, 729-744.
Alderson, P. & Morrow, V. (2004). Ethics, social research and consulting with children and young people. London: Barnardo's.
Alexandersson, M. & Runesson, U. (2006). The tyranny of the temporal dimension: Learning about fundamental values through the Internet. Scandinavian Journal of Educational Research, 50(4), 411-427.
Al-Ghorairi, M. (2005). The rise of social software: What makes software "social"? Oxford Brookes University.
Al-Jumaily, A., & Stonyer, H. (2000). Beyond teaching and research: Changing engineering academic work. G. J. of Engng. Educ., 4(1), 89-97.
Allan, B. & Lewis, D. (2006). The impact of membership of a virtual learning community on individual learning careers and professional identity. British Journal of Educational Technology, 37(6), 841–852.
Allan, S. (2002). Media, risk and science. Buckingham and Philadelphia: Open University Press.
AllBusiness (2001). Employee Records. Retrieved January 5, 2004, from http://www.allbusiness.com
Allen, A. (1995). Privacy in healthcare. Encyclopedia of Bioethics (pp. 2064-2073). New York, NY: Simon & Schuster Macmillan.
Allen, C., Varner, G., & Zinzer, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental and Theoretical Artificial Intelligence, 12(2000), 251-261.
Allen, C., Wallach, W., & Smit, I. (2006, July-August). Why machine ethics? IEEE Intelligent Systems, 21(4), 12-17.
Allen, L. & Voss, D. (1997). Ethics in technical communication: Shades of gray. New York: Wiley Computer Publishing, John Wiley & Sons, Inc.
Alliance for Human Research Protection. (2000). Harvard-Anhui case. Retrieved June 12, 2007, from http://www.ahrp.org/infomail/1200/20.php
Allio, R. (2000). Russell L. Ackoff, iconoclastic management authority, advocates a "systemic" approach to innovation. Strategy and Leadership, 31(3).
Alschuler, A.W. (2001). Law without values: The life, work, and legacy of justice Holmes. Chicago and London: University of Chicago Press.
Altenbaugh, R. E. (2003). The American people and their education—A social history. Upper Saddle River, NJ: Merrill Prentice Hall.
Altheide, D. L. (2003). Notes towards a politics of fear. Journal for Crime, Conflict and the Media, 1(1), 37-54.
Altmann, Jürgen, Gubrud, M. (2004). Anticipating military nanotechnology. IEEE Technology and Society Magazine, (Winter), 33-41.
Altmann, Jürgen. (2006). Military technology: Potential applications and preventive arms control. New York: Routledge.
Amason, A. C., & Schweiger, D. (1997). The Effects of Conflict on Strategic Decision Making Effectiveness and Organizational Performance. International Journal of Conflict Management, 5, 239-253.
Amason, A. C., Hochwarter, W. A., Thompson, K. R., & Harrison, A. W. (1995). Conflict: An Important Dimension in Successful Management Teams. Organizational Dynamics, 24(2), 20-35.
American Academy of Pediatrics (2001). Human embryo research. Pediatrics, 108, 813-816.
American Civil Liberties Union. (2004). Surveillance under the USA Patriot Act. Retrieved April 26, 2004, from http://www.aclu.org/SafeandFree/SafeandFree.cfm?ID=12263&c=206
AMIA (1997). A proposal to improve quality, increase efficiency, and expand access in the U.S. healthcare system. Journal of the American Medical Informatics Association, 4, 340-341.
Amos-Hatch, J. (1995). Ethical conflicts in classroom research: Examples from study of peer stigmatization in kindergarten. In J. Amos Hatch (Ed.), Qualitative research in early childhood settings. London: Praeger/Greenwood.
Anderson, A. (1997). Media, culture and the environment. London: UCL Press.
Anderson, A. (2006). Media and risk. In Walklate, S. & Mythen, G. (eds.), Beyond the risk society (pp. 114-31). Maidenhead: Open University Press.
Anderson, A., Allan, S., Petersen, A. & Wilkinson, C. (2005). The framing of nanotechnologies in the British newspaper press. Science Communication, 27(2), 200-220. Anderson, C. (1961). History of instructional technology, I: Technology in American education, 1650-1900. Washington DC: National Education Association. Anderson, T. (2000). The body and communities in cyberspace: A Marcellian analysis. Ethics and Information Technology, 2, 153–158. Andreasen, R.O. (1998). A new perspective on the race debate. British Journal for the Philosophy of Science, 49, 199–225. Andreasen, R.O. (2000). Race: Biological reality or social construct? Philosophy of Science, 67 (Proceedings), S653–S666. Andreasen, R.O. (2004). The cladistic race concept: A defense. Biology and Philosophy, 19(19), 425–442. Andrews, C., & Lewis, E.J.E. (2004). A systems approach to critical ethical decision making. 3r d International Conference on Systems Thinking in Management. Andrews, L B (2000). The clone age: Adventures in the new world of reproductive technology. Owl Books. Andrews, M. (2006). Decoding MySpace. U.S.News & World Report, 141(10), 46-58. Andrews, M. (2006). Make it predator-proof. U.S.News & World Report, 141(10), 52. Anonymous 1 (2007). Global positioning system. Wikipedia. Anonymous 10 (2007). What is RFID? Retrieved from http://www.spychips.com/what-is-rfid.html Anonymous 11 (2006, August 22). SmartDust & ubiquitous computing. Nanotechnology News. Retrieved from http://www.nanotech-now.com/smartdust.htm Anonymous 12 (2007). Famous inventors: GPS. Retrieved from http://www.famous-inventors.com/invention-of-gps.html
Anonymous 13 (2007, May 11). Google searches web's dark side. BBC News. Retrieved from http://news.bbc.co.uk/2/hi/technology/6645895.stm
Anonymous 14 (2007). What is a smart card? Retrieved from http://computer.howstuffworks.com/question322.htm
Anonymous 2 (2007). Radio frequency identification. Wikipedia.
Anonymous 3 (2007). Smart card. Wikipedia.
Anonymous 4 (2006). Smart dust. Wikipedia.
Anonymous 5 (2007). Part I: Active and passive RFID: Two distinct, but complementary, technologies for real-time supply chain visibility. Retrieved from http://www.autoid.org/2002_Documents/sc31_wg4/docs_501520/520_18000-7_WhitePaper.pdf
Anonymous 6 (2007). Pocketful of espionage: Beware the spy coins. CNN. Retrieved from http://www.cnn.com/2007/US/01/11/spy.coins.ap/index.html
Anonymous 7 (2004). Heterogeneous sensor networks. Intel. Retrieved from http://www.intel.com/research/exploratory/hetergeneous.htm
Anonymous 8 (2003, June 11). What is smart dust, anyway? Wired Magazine, pp. 10-11.
Anonymous 9 (2007). Data loss database. Retrieved from http://attrition.org/dataloss/
Anscombe, E. (1981). Ethics, religion and politics. Oxford: Blackwell.
Anthony, W. A. (2004). The principle of personhood: The field's transcendent principle. Psychiatric Rehabilitation Journal, 27, 205.
Anti-Phishing Working Group (2007). Phishing Activity Trends Report for the Month of April, 2007. Retrieved May 27, 2007 from http://www.antiphishing.org/reports/apwg_report_april_2007.pdf
Antoniou, A., Gayther, S., Stratton, J., Ponder, B., & Easton, D. (2000). Risk models for familial ovarian and breast cancer. Genetic Epidemiology, 18, 173-190.
Anyiam-Osigwe, M. C. (2002). Africa's new awakening and ICT. Toward attaining sustainable democracy in Africa. In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age. Current issues in Africa and the world (pp. 36-46). Jefferson, NC: McFarland.
AOL/NCSA (2007, December). AOL/NCSA Online Safety Study. AOL and the National Cyber Security Alliance. Retrieved from http://www.staysafeonline.info/pdf/safety_study_2005.pdf
APEC (2005). APEC Privacy Framework. Asia-Pacific Economic Cooperation (APEC). Retrieved from www.ag.gov.au/.../$file/APEC+Privacy+Framework.pdf
Apel, S B (2001). Privacy in genetic testing: Why women are different. Southern California Interdisciplinary Law Journal, 11(1), 1-26.
Apostel, L. (1981). African philosophy: Myth or reality. Gent, Belgium: Story-Scientia.
Appelbaum, S. H., Shapiro, B., & Elbaz, D. (1998). The Management of Multicultural Group Conflict. Team Performance Management, 4(5), 211-234.
Applegate, J. (2001). Are your employees costing you? Retrieved September 6, 2005, from http://www.entrepreneur.com/article/0,4621,289593,00.html
Arborio, A-M., & Fournier, P. (1999). L'enquête et ses méthodes : l'observation directe. Paris: Nathan.
Arendt, H. (1983). Vita Activa oder vom tätigen Leben. München: Piper.
Arendt, Hannah (1961). Crisis in Culture. In Between past and future: Six exercises in political thought. New York: Meridian.
Argyris, C. (1977). Overcoming organizational defences, facilitating organizational learning. Boston, MA: Allyn and Bacon.
Argyris, C. (1988). Problems in producing usable knowledge for implementing liberating alternatives. In Bell, Raiffa & Tversky (Eds.), Decision making: Descriptive, normative and prescriptive interactions (pp. 540-561).
Aristotle (2004). The art of rhetoric. London: Penguin Classics.
Armonaitis, K. (2004, July 23). Microsoft location server integration: The good, the bad, and the ugly. Retrieved September 6, 2005, from http://www.devx.com/DevX/Article/21583
Arne, P. H. & Bosse, M. M. (2003). Overview of Clickwrap Agreements. Practising Law Institute, January–March.
Arquilla, J. (1999). Ethics and information warfare. In Khalilzad, Z., White, J., & Marsall, A. (Eds.), Strategic appraisal: The changing role of information in warfare (pp. 379-401). Santa Monica, California: Rand Corporation.
Arras, K. & Cerqui, D. (2003). Do we want to share our lives and bodies with robots? Retrieved from http://asl.epfl.ch/index.htm?Content=member.php&SCIPER=112713
Artz, J. (1994). Virtue versus utility: Alternative foundations for computer ethics. Proc. of Conference on Ethics in the Computer Age, Gatlinburg, Tennessee, USA, 16-21.
AS/NZS 4360. (2004). AS/NZS 4360: 2004 risk management, Standards Australia. Sydney, Australia. Retrieved from www.standards.com.au
Asada, Y., Akiyama, S., Tsuzuki, M., Macer, N. Y. & Macer, D. R. J. (1996). High school teaching of bioethics in New Zealand, Australia, and Japan. Journal of Moral Education, 25, 401-420.
Asch, A. (2001). Disability, bioethics and human rights. In Albrecht, G. L. (et al.) (eds.), Handbook of disability studies (pp. 297-326). Thousand Oaks, etc.: Sage Publications.
Asch, A. et al. (2003). Respecting persons with disabilities and preventing disability: is there a conflict? In S. S. Herr et al. (Eds.), The human rights of persons with intellectual disabilities (pp. 319-346). Oxford: Oxford University Press.
Ashworth, P., Bannister, P. & Thorne, P. (1997). Guilty in whose eyes? University students' perceptions of teaching and plagiarism in academic work and assessment. Studies in Higher Education, 22(2), 137–148.
Associated Press (2007, January 10). There's an undercurrent of espionage in that currency: Canadian coins with transmitters planted on U.S. defense contractors, baffling both countries.
Associated Press (Saturday, January 27, 2007). Amnesia victim wandered 25 days. New Haven Register.
Associated Press (Sunday, January 28, 2007). Cell Phones Boost Developing Nations. New Haven Register.
Associated Press (Tuesday, March 6, 2007). Data Explosion Running out of Room. New Haven Register.
Association for Computing Machinery. (1992). ACM Code of Ethics and Professional Conduct. Retrieved September 9, 2003 from: http://www.acm.org/constitution/code.html
At-risk students. (n.d.). Wikipedia. Retrieved August 13, 2007, from Answers.com [online] http://www.answers.com/topic/at-risk-students
Attaway, M. (2001). Privacy in the workplace on the Web. Internal Auditor, 58, 30-35.
Attrition.org (2007, March 3). Data Loss Archive and Database (DLDOS). Attrition.org. Retrieved from http://attrition.org/dataloss/
Auh, T.S. (2001). Language divide and knowledge gap in cyberspace: Beyond digital divide. Accessed online May 27, 2007 from http://www.unesco.or.kr/cyberlang/auhtaeksup.htm
Australasian Centre for Policing Research. (2005). The Australasian Identity Crime Policing Strategy 2006–2008. Retrieved May 27, 2007 from http://www.acpr.gov.au/pdf/ID%20Crime%20Strat%2006-08.pdf
Australasian Centre for Policing Research. (2006). Standardisation of definitions of identity crime terms: A step towards consistency. Australasian Centre for Policing Research Report Series No 145.3. Canberra: Commonwealth of Australia.
Australia Law Reform Commission. (2003). ALRC 96: Essentially yours.
Australian Bureau of Statistics (2005). Household Use of Information Technology 2004–05, Cat. no. 8146.0. Canberra: Australian Bureau of Statistics.
Australian Defence Force. (1999). Joint publication 9 – Joint planning. Canberra: Commonwealth of Australia.
Auvinen, J. et al. (2004). The development of moral judgment during nursing education in Finland. Nurse Education Today, 24, 538-46.
Avison, D. E., & Wood-Harper, A. T. (1990). Multiview: An exploration in information systems development. Henley on Thames, UK: Alfred Waller (McGraw-Hill Publishing Company).
Avison, D., & Horton, J. (1992). Evaluation of information systems (Working Paper). Southampton: University of Southampton, Department of Accounting and Management Science.
Avison, D., Wood-Harper, A. T., Vidgen, R. T., & Wood, J. R. G. (1998). A further exploration into information systems development: The evolution of Multiview2. Information Technology and People, 11(2), 124-139.
Awad, N., & Fitzgerald, K. (2005, August). The deceptive behaviors that offend us most about spyware. Communications of the ACM, 48(8), 55-60.
Axelrod, R. (1997). The complexity of cooperation. NJ: Princeton University Press.
Axelrod, R., & Cohen, M. D. (2000). Harnessing complexity. US: Free Press.
Bachelard-Jobard, C. (2001). L'eugénisme, la science et le droit. Paris: Presses Universitaires de France.
Bader, R., Wanono, R., Hamden, S., & Skinner, H. A. (2007). Global youth voices: Engaging Bedouin youth in health promotion in the Middle East. Canadian Journal of Public Health, 98(1), 21-25.
Bagheri, A. (2001). Ethical codes in medical research and the role of ethics committees in the protection of human subjects. Eubios Journal of Asian and International Bioethics, 11, 8-10.
Bagheri, A. (2004). Ethical issues in collaborative international medical research. Iranian Journal of Diabetes and Lipid Disorders, Supplement Ethics in Clinical Research, 4, 59-70.
Bagheri, A., & Macer, D. (2005). Ethics review on externally-sponsored research in Japan. Eubios Journal of Asian and International Bioethics, 15, 138-40.
Baier, K. (2006). Welterschliessung durch Grundstimmungen als Problem interkultureller Phänomenologie. Daseinsanalyse, 22, 90-109.
Bain, P., & Taylor, P. (2000). Entrapped by the electronic panopticon? Worker resistance in the call centre. New Technology, Work and Employment, 15(1), 2-18.
Bainbridge, W. & Roco, M. (Eds.) (2002). Converging technologies for improving human performance: Nanotechnology, biotechnology, information technology and cognitive science. Arlington: National Science Foundation.
Bainbridge, W. S. (2002). Public attitudes towards nanotechnology. Journal of Nanoparticle Research, 4(6), 561-70.
Baird, D. & Vogt, T. (2004). Societal and ethical interactions with nanotechnology. Nanotechnology Law and Business, 1(4), 391-396.
Baird, F. E. (2002). Ancient philosophy. New Jersey: Prentice Hall.
Bakardjieva, M. and Feenberg, A. (2000). Involving the Virtual Subject. Ethics and Information Technology, 2, 233-240.
Bakhtin, M. (1935/1981 trans.). The dialogic imagination. Translation by K. Brostrom. Austin, TX: University of Texas Press.
Bal, J., & Gundry, J. (1999). Virtual Teaming in the Automotive Supply Chain. Team Performance Management: An International Journal, 5(6), 174-193.
Ball, K., & Wilson, D. (2000). Power, control and computer-based performance monitoring: A subjectivist approach to repertoires and resistance. Organization Studies, 21(3), 539-565.
Ball, P. (2004). Critical mass: How one thing leads to another. New York: Farrar, Straus and Giroux.
Ballantine, J., Levy, M., Munro, I., & Powell, P. (2003). An ethical perspective on information systems evaluation. International Journal of Agile Management Systems, 2(3), 233-241.
Banisar, D. (2004). The freedominfo.org global survey: Freedom of information and access to government record laws around the world. Retrieved October 12, 2004, from http://www.freedominfo.org/survey.htm
Barabasi, A.-L., Freeh, V. W., Jeong, H., & Brockman, J. B. (2001). Parasitic computing. Nature, 412, 894-897.
Barbour, I. G. (1993). Ethics in an age of technology. San Francisco, CA: Harper San Francisco.
Barbrook, R. (1998). The Hi-Tech Gift Economy. First Monday, 3(12). Retrieved from http://www.firstmonday.dk/issues/issue3_12/barbrook/
Barbujani, G., Magagni, A., Minch, E., & Cavalli-Sforza, L. L. (1997). An apportionment of human DNA diversity. Proceedings of the National Academy of Science USA, 94, 4516–4519.
Barger, R. N. (2001). Is computer ethics unique in relation to other fields of ethics? Retrieved September 9, 2003 from http://www.nd.edu/~rbarger/ce-unique.html
Barger, R. N. (2001). Philosophical belief systems. Retrieved September 9, 2003 from http://www.nd.edu/~rbarger/philblfs.html
Bariff, M., & Galbraith, J. R. (1978). Intraorganizational power considerations for designing information systems. Accounting, Organizations and Society, 3(1), 15-27.
Barlow, J. P. (2000). Censorship 2000. Posted on Internet mailing list. Available at: http://www.pbs.org/wgbh/pages/frontline/shows/porn/interviews/asher
Baroff, G. S. (2000). Eugenics, "Baby Doe", and Peter Singer: Toward a more "perfect" society. Mental Retardation, 38(11), 73-77.
Baron, R. (1998). What type am I? Discover who you really are. New York: Penguin Putnam Inc.
Barquin, R. C. (1992). In pursuit of a 'ten commandments' for computer ethics. Computer Ethics Institute. Retrieved September 9, 2003 from http://www.brook.edu/its/cei/papers/Barquin_Pursuit_1992.htm
Barry, B. (1976). Political Argument. London: Routledge and Kegan Paul.
Barthes, R. (1981). Camera Lucida: Reflections on Photography, Richard Howard (trans.). New York: Hill and Wang.
Barton, B., Byciuk, S., Harris, C., Schumack, D., & Webster, K. (2005). The emerging cyber risks of biometrics. Risk Management, 52(10), 26-30.
Barton, J. (2007). New trends in technology transfer: Implications for national and international policy. Issue paper No. 18, International Center for Trade and Sustainable Development. Accessed online May 27, 2007 from http://www.iprsonline.org/resources/docs/Barton%20%20New%20Trends%20Technology%20Transfer%200207.pdf
Basalla, G. (1988). The evolution of technology. Cambridge, England: Cambridge University Press.
Bashshur, R., & Sanders, J. (Eds.) (1997). Telemedicine: Theory and practice. Springfield, IL: Charles C. Thomas Publisher, LTD.
Baskin, M. (Winter, 1998). Is it time to revise your employee handbook? (Legal Report). Alexandria, VA: Society for Human Resource Management.
Bastir, M., & Rosas, A. (2004). Geometric morphometrics in paleoanthropology: Mandibular shape variation, allometry, and the evolution of modern human skull morphology. In A. M. T. Elewa (Ed.), Morphometrics: Applications in biology and paleontology (pp. 231–244). New York: Springer.
Bauer, K. (2001). Home-based telemedicine: A survey of ethical issues. Cambridge Quarterly of Healthcare Ethics, 10(2), 137-146.
Bauer, K. (2007). Wired patients: Implantable microchips and biosensors in patient care. Cambridge Quarterly of Healthcare Ethics, 16(3), 281-290.
Bauer, M. (2001). Villain-to-victim computing and applications: Abuse of protocol features. Retrieved September 9, 2003 from http://www1.informatik.uni-erlangen.de/~bauer/new/v2v.html
Baum, K. (2006, April). Identity theft, 2004: First estimates from the National Crime Victimization Survey. Bureau of Justice Statistics Bulletin. Retrieved May 27, 2007 from www.ojp.gov/bjs/pub/pdf/it04.pdf
Bayles, W. (2001, Spring). Network attack. Parameters, US Army War College Quarterly, 31, 44-58.
BBBOnLine (2008). A Review of Federal and State Privacy Laws. BBBOnLine, Inc. and the Council of Better Business Bureaus, Inc. Retrieved from http://www.bbbonline.org/UnderstandingPrivacy/library/fed_statePrivLaws.pdf
Beagle, D. R., Bailey, D. R. & Tierney, B. (2006). The information commons handbook. Neal-Schuman Publishers.
Beam, T. A., & Howe, E. G. (2003). A look toward the future. In T. Beam & L. Sparacino (Eds.), Military medical ethics, Vol. 2 (pp. 831-50). Falls Church, VA: Office of the Surgeon General.
Bearman, D., and Trant, J. (1998). Authenticity of Digital Resources. Archives and Museum Informatics. Available: www.archimuse.com. Accessed 15/05/2007.
Beasly, A., & Graber, G. (1984). The range of autonomy: Informed consent in medicine. Theoretical Medicine, 5, 31-41.
Beauchamp, T. L. (2003). The nature of applied ethics. In Frey, R. G. & Wellman, C. H. (Eds.), A companion to applied ethics (p. 1). Blackwell Publishing Ltd.
Beauchamp, T. L. & Childress, J. F. (1994). Principles of biomedical ethics (4th ed.). New York: Oxford University Press.
Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics (5th ed.). New York: Oxford University Press.
Becker, J. D. (Ed.) (2004). The overrepresentation of students of color in special education as a next generation, within-school racial segregation problem. Proceedings from ERASE Racism '04, Long Island, NY. Retrieved May 26, 2007 from http://www.eraseracismny.org/downloads/brown/speaker_comments/LIBrown_becker_comments.pdf
Belanger, Y. (2005, June). Duke University iPod First Year Experience Final Evaluation Report. Retrieved April 26, 2007.
Bell, D. (1973). The coming of post-industrial society: A venture in social forecasting. New York: Basic Books.
Bell, D. (1993). Communitarianism and Its Critics. Oxford: Clarendon Press.
Bell, D. (1999). The axial age of technology. In The coming of post-industrial society. New York: Basic Books.
Bell, D. (2000). East Meets West: Human Rights and Democracy in East Asia. Princeton: Princeton University Press.
Bell, D., Raiffa, H., & Tversky, A. (Eds.). (1988). Decision making: Descriptive, normative, and prescriptive interactions. Cambridge: Cambridge University Press.
Bell, G. (2004). A personal digital store. Communications of the ACM, 44(1), 86-94.
Bell, T. E. (2006). Reporting risk assessment of nanotechnology: A reporter's guide. Retrieved from http://www.nano.gov/html/news/reporting_risk_assessment_of_nanotechnology.pdf
Bellovin, S. M. (1989). Security problems in the TCP/IP protocol suite. ACM Computer Communications Review, 19(2), 32-48.
Belmont Report. (1979). Ethical principles and guidelines for the protection of human subjects of research. Retrieved June 12, 2007, from http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.htm
Beloof, D. E. (2005). The third wave of victims' rights: Standing, remedy and review. Brigham Young University Law Review, 2, 255-365.
Benabou, R. (2005). Inequality, technology and the social contract. Handbooks in Economics, 22, 1595-1638.
Bender, W. (2006). OLPC talks. From a seminar entitled Ars Electronica, September 2006. Posted online and accessed May 27, 2007 at http://www.olpctalks.com/walter_bender/walter_bender_ars_electronica.html
Benhabib, S. (1992). Situating the self. Cambridge: Polity Press.
Ben-Jacob, M. G. (2005). Integrating computer ethics across the curriculum: A case study. Educational Technology & Society, 8(4), 198-204.
Bentham, J. (1995 [1787]). Panopticon or Inspection-House. In M. Bozovik (Ed.), The Panopticon Writings. London: Verso Books.
Beran, T., & Li, Q. (2005). Cyber-harassment: A study of a new method for an old behavior. Journal of Educational Computing Research, 32(3), 265-277.
Berenbeim, R. E. (1992). The corporate ethics test. Business and Society Review, 31(1), 77-80.
Berg, M. (1997). Problems and promises of the protocol. Social Science & Medicine, 44(8), 1081-1088.
Berg, M. (1999). Patient care information systems and health care work: A sociotechnical approach. International Journal of Medical Informatics, 55, 87-101.
Berg, S. E. (2006). Recommendations for a comprehensive identity theft victimization survey framework and information technology prevention strategies. Master of Science in Information Technology thesis, Rochester Institute of Technology. Retrieved May 27, 2007 from https://ritdml.rit.edu/dspace/handle/1850/1647
Berghel, H. (2006, April). Phishing mongers and posers. Communications of the ACM, 49(4), 21-25.
Bergson, H. (1928). L'évolution créatrice. Paris: Alcan.
Berlin, I. (2000). From hope and fear set free. In Berlin, I., Hardy, H. & Hausheer, R. (Eds.), The proper study of mankind: An anthology of essays. Farrar Straus & Giroux.
Berman, B. J. (1992). The state, computers, and African development: The information non-revolution. In S. Grant Lewis & J. Samoff (Eds.), Microcomputers in African development: Critical perspectives (pp. 213-229). Boulder, CO: Westview Press.
Bernhauer, J., & Mahon, M. (1994). The ethics of Michel Foucault. In G. Gutting (Ed.), The Cambridge Companion to Foucault (pp. 141-158). Cambridge, UK: Cambridge University Press.
Bero, L., & Jadad, A. (1997). How consumers and policy makers can use systematic reviews for decision making. Annals of Internal Medicine, 127, 37-42.
Bezmen, T. L., & Depken, C. A. (2006). Influences on software piracy: Evidence from the various United States. Economics Letters, 90, 356-361.
Bhardwaj, M. & Macer, D. (1999). A comparison of bioethics in school textbooks in India and Japan. Eubios Journal of Asian & International Bioethics, 9, 56-9.
Bijker, W. (1995). Of bicycles, bakelites, and bulbs: Toward a theory of sociotechnical change. Cambridge, MA: MIT Press.
Bilbeny, N. (1997). La revolución en la ética. Hábitos y creencias en la sociedad digital. Barcelona: Anagrama.
Billinger, M. S. (2000). Geography, genetics, and generalizations: The abandonment of 'race' in the anthropological study of human biological variation. Unpublished Master's thesis, Carleton University, Ottawa.
Billinger, M. S. (2006). Beyond the racial paradigm: New perspective on human biological variation. Unpublished doctoral dissertation, University of Alberta, Edmonton.
Billinger, M. S. (2007). Another look at ethnicity as a biological concept: Moving anthropology beyond the race concept. Critique of Anthropology, 27(1), 5–35.
Billinger, M. S. (2007). Gene expression and ethnic differences. Science, 315(5318), 766.
Birsch, D. (2004). Moral responsibility for harm caused by computer system failures. Ethics and Information Technology, 6, 233-245.
Blaisdell, M. (2006, March). In POD we trust. The Journal, 33, 30-36.
Blanke, J. M. (2004). Copyright law in the digital age. In Brennan, L. L. & Johnson, V. E. (Eds.), Social, ethical and policy implications of information technology (pp. 223-233). Hershey, PA: Idea Group Inc.
Blanke, J. M. (2006). Robust Notice and Informed Consent: The Keys to Successful Spyware Legislation. Columbia Science and Technology Law Review, 7, 1-33.
Blindell, J. (2006). Review of the legal status and rights of victims of identity theft in Australia. Australasian Centre for Policing Research Report Series No 145.2. Canberra: Commonwealth of Australia.
Bloom, B., Hasting, J., & Madaus, G. (1971). Handbook of formative and summative evaluation for student learning. New York: McGraw-Hill.
Bloom, H. (1999). The kidnap of mass mind – Fundamentalism, Spartanism and the games subcultures play. History of the Global Brain XVIII, online forum. Heise Zeitschriften Verlag GmbH and Co. KG.
Bloomfield, B., & Coombs, R. (1992). Information technology, control and power: The centralization and decentralization debate revisited. Journal of Management Studies, 29(4), 459-484.
Bloor, D. & Henry, J. (1996). Scientific knowledge: A sociological analysis. London: Athlone.
Bloor, D. (1976). Knowledge and social imagery. London: Routledge.
Blum, R. W. (1998). Healthy youth development as a model for youth health promotion: A review. Journal of Adolescent Health, 22, 368-375.
Boar, B. (2001). Art of strategic planning for information technology (2nd ed.). New York: Wiley.
Bocij, P. (2003). Victims of cyberstalking: An exploratory study of harassment perpetrated via the Internet. First Monday, 8(10). Retrieved from http://firstmonday.org/issues/issue8_10/bocij/index.html
Bocij, P. & Sutton, M. (2004). Victims of cyberstalking: Piloting a web-based survey method and examining tentative findings. Journal of Society and Information [online], 1(2).
Boden, M. (1997). What is interdisciplinarity? In Cunningham, R. (Ed.), Interdisciplinarity and the organization of knowledge in Europe (pp. 13-26). European Community, Brussels.
Boehle, S. (2000). They're watching you: Workplace privacy is going, going…. Training, 37, 50-60.
Bok, S. (1978). Lying: Moral choice in public and private life. New York: Pantheon.
Bolter, J. D., and Grusin, R. (1999). Remediation: Understanding New Media. Cambridge, MA: MIT Press.
Bolton, L. E., Cohen, J. B., & Bloom, P. N. (2006). Does marketing products as remedies create "get out of jail free cards"? Journal of Consumer Research, 33(1), 71-81.
Bookstein, F. L., Schäfer, K., Prossinger, H., Seidler, H., Fieder, M., Stringer, C., Weber, G. W., Arsuaga, J.-L., Slice, D. E., Rohlf, F. J., Recheis, W., Mariam, A. J., & Marcus, L. F. (1999). Comparing frontal cranial profiles in archaic and modern Homo by morphometric analysis. The Anatomical Record (New Anatomist), 257, 217–224.
Bookstein, F. L., Gunz, P., Mitteroecker, P., Prossinger, H., Schaefer, K., & Seidler, H. (2003). Cranial integration in Homo reassessed: Morphometrics of the mid-sagittal plane in ontogeny and evolution. Journal of Human Evolution, 44, 167–187.
Borberg, E. (1995). Development, acceptance, and use patterns of computer-based education and support systems for people living with AIDS/HIV infection. Computers in Human Behavior, 11(2), 289-311.
Borgman, C. L. (2000). From Gutenberg to the global information infrastructure: Access to information in the networked world. Cambridge: MIT Press.
Borgmann, A. (1984). Technology and the character of contemporary life: A philosophical inquiry. Chicago: University of Chicago Press.
Borgmann, A. (1999). Holding on to reality: The nature of information at the turn of the millennium. Chicago: University of Chicago Press.
Bork, A. (1993). Technology in education: An historical perspective. In R. Muffoletto & N. N. Knupfer (Eds.), Computers in education: Social, political & historical perspectives (pp. 71-90). Cresskill, NJ: Hampton Press.
Boss, M. (1975). Grundriss der Medizin und der Psychologie. Bern: Huber.
Boudreau, M., Loch, K. D., Robey, D., & Straud, D. (1998). Going Global: Using Information Technology to Advance the Competitiveness of the Virtual Transnational Organization. Academy of Management Executive, 12(4), 120-128.
Boulding, K. (1963). Conflict and Defense. NY: Harper & Row.
Boulding, K. (1978). Ecodynamics: A New Theory of Societal Evolution. Beverly Hills, CA: Sage Publications.
Boulos, M. N. K., Maramba, I., & Wheeler, S. (2005). Wikis, blogs and podcasts: A new generation of Web-based tools for virtual collaborative clinical practice and education [Electronic version]. BMC Medical Education, 6(41).
Bouquet, B., & Voilley, P. (Eds.). (2000). Droit et littérature dans le contexte suédois. Paris: Flies.
Bourdieu, P. (1983). Economic capital, cultural capital, social capital. Soziale Welt, Supplement 2, 183-198.
Bourdil, P. (1996). Le temps. Paris: Ellipses/Édition Marketing.
Boutin, F., & Chinien, C. A. (1998). Retaining at-risk students in school using a cognitive-based instructional system: Teachers' reflection in action. Journal of Industrial Teacher Education, 36(1), 62-78.
Bower, B. (2003, August 30). Artificial Intelligence Meets Good Old-Fashioned Human Thought. Science News, 164(9), 136.
Bower, B. (2007, February 17). Net Heads. Science News, 171, 104-105.
Bowers, C. A. (2000). Let them eat data: How computers affect education, cultural diversity, and the prospects of ecological sustainability. Athens: The University of Georgia Press.
Bowrey, K. (2005). Law & Internet Cultures. Cambridge: Cambridge University Press.
Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate, special report. Stanford: Carnegie Foundation for the Advancement of Teaching.
Boyle, T. (2007). Pupils punished over Facebook comments: Five Grade 8ers bumped from end-of-year trip after ridiculing their teachers. Toronto Star.
Brace, C. L. (1964). A nonracial approach towards the understanding of human diversity. In A. Montagu (Ed.), The concept of race (pp. 103–152). New York: Free Press.
Brace, C. L. (1995). Region does not mean race: Reality versus convention in forensic anthropology. Journal of Forensic Sciences, 40(2), 171–175.
Brace, C. L. (1996 [2000]). A four-letter word called 'race.' In C. L. Brace (Ed.), Evolution in an anthropological perspective (pp. 283–322). Walnut Creek: AltaMira Press.
Brace, C. L. (2005). "Race" is a four-letter word: The genesis of the concept. New York: Oxford University Press.
Brain, M. and Harris, T. (2007, January). How GPS Works. Retrieved from http://electronics.howstuffworks.com/gps.htm
Brandon, D. G. (1980). The partnership of ethics and technology. In Kranzberg, M. (Ed.), Ethics in an age of pervasive technology. Boulder, CO: Westview Press.
Bransford, J. D., Brown, A. L. & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Braun, J. (1992). Caring, citizenship, and conscience: The cornerstones of a values education curriculum for elementary schools. International Journal of Social Education, 7(2), 47-56.
Braun, K. (2000). Menschenwürde und Biomedizin. Zum philosophischen Diskurs der Bioethik. Frankfurt/New York: Campus.
Braybrooke, D. (1974). Traffic Congestion Goes through the Issue Machine. Routledge and Kegan Paul.
Braybrooke, D. (1998). Moral Objectives, Rules, and the Forms of Social Change. Toronto: Toronto Studies in Philosophy-University of Toronto Press.
Braybrooke, D., Brown, B. and Schotch, P. K. (1995). Logic on the Track of Social Change. Oxford and New York: Clarendon Press/Oxford University Press.
Brazeal, C. (1999). Robot in society: Friend or appliance? Proceedings of the 1999 Autonomous Agents Workshop on Emotion-Based Agent Architectures, Seattle, WA, pp. 18-26.
Brazeal, C. (2002). Designing sociable robots. Cambridge: MIT Press.
Brazeal, C., Brooks, A., Gray, J., Hoffman, G., Kidd, C., Lee, H., Lieberman, J., Lockerd, A., & Mulanda, D. (n.d.). Humanoid robots as cooperative partners for people. Retrieved August 2006 from http://robotic.media.mit.edu/Papers/Breazeal-etal-ijhr04.pdf
Brenner, S. (2006, March 19). Bugs and dust. Cybercrim3. Retrieved from http://cyb3rcrim3.blogspot.com/2006/03/bugs-and-dust.html
Brenner, S. N. (1992). Ethics programs and their dimensions. Journal of Business Ethics, 11, 391-399.
Brenner, S. W., & Schwerha, J. J. (2004). Introduction - Cybercrime: A note on international issues. Information Systems Frontiers, 6(2), 111-114.
Brey, P. (2000, December). Disclosive computer ethics. Computers and Society, 10-16.
Brey, P., Floridi, L. and Grodzinsky, F. (2005). Ethics of new information technology. Ethics and Information Technology, 7, 109.
Breyer, S. (1993). Breaking the Vicious Circle. Cambridge, MA: Harvard University Press.
Briadis, T. (2006). U.S. warns about Canadian spy coins. Physorg.com. Retrieved from http://www.physorg.com/news87716264.html
Briggs, J. & Peck, M. (2003). A case study based analysis on behalf of the Office of Government Commerce. Retrieved May 17, 2007, from www.ogc.gov.uk/index.asp?id=2190
Bringsjord, S. (2007). Ethical robots: The future can heed us. AI and Society (online). Retrieved March 13, 2007, from http://www.springerlink.com
Britz, J. J. (2002). Africa and its place in the twenty-first century: A moral reflection. In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 5-6). Jefferson, NC: McFarland.
Broadhurst, R. (2006). Developments in the global law enforcement of cyber-crime. Policing: An International Journal of Police Strategies and Management, 29, 408-433.
Broadstock, M., Michie, S., & Marteau, T. (2000). The psychological consequences of predictive genetic testing: A systematic review. European Journal of Human Genetics, 8, 731-738.
Brocklesby, J., & Cummings, S. (1996). Foucault plays Habermas: An alternative philosophical underpinning for critical systems thinking. Journal of the Operational Research Society, 47(6), 741-754.
Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Cambridge: Harvard University Press.
Bronskill, J. (2007, January 14). Spy coin caper loses currency. Ocnus.net. Retrieved from http://www.ocnus.net/artman/publish/article_27531.shtml
Brooke, C. (2002). What does it mean to be "critical" in IS research? Journal of Information Technology, 17(2), 49-57.
Brown, N. (2003). Hope against hype: Accountability in biopasts, present and futures. Science Studies, 16(2), 3-21.
Brown, R. A., & Armelagos, G. J. (2001). Apportionment of racial diversity: A review. Evolutionary Anthropology, 10, 34–40.
Brown, W. H., & Wilson, E. O. (1954). The case against the trinomen. Systematic Zoology, 3(4), 174–176.
Brues, A. M. (1992). Forensic diagnosis of race: General race vs. specific populations. Social Science and Medicine, 34(2), 125–128.
Bryce, T. (2004). Tough acts to follow: The challenges to science teachers presented by biotechnological progress. International Journal of Science Education, 26, 717-733.
BSA. (2006). Third annual BSA and IDC global software piracy study. Retrieved May 30, 2007 from http://www.bsaa.com.au/
Buchanan, E. A. (2004). Ethics in library and information science: What are we teaching? Journal of Information Ethics, 13(4), 51-60.
Bunch, W. H. (2005). Ethics for evangelical Christians. Chapter 13: Virtue ethics, p. 2. Retrieved May 30, 2007, from http://faculty.samford.edu/~whbunch/Chapter13.pdf
Bunge, M. (1967). Scientific research II: The search for truth. New York: Springer.
Bunge, M. (1974). Por una tecnoética, en Ética, ciencia y técnica. Buenos Aires: Editorial Sudamericana.
Bunge, M. (1976). The philosophical richness of technology. Proceedings of the Biennial Meeting of the Philosophy of Science Association, Volume 2: Symposia and Invited Papers (pp. 153–172).
Bunge, M. (1977). Towards a technoethics. Monist, 60(1), 96–107.
Bureau of National Affairs. (2005). BNA employment guide. Washington, DC: Bureau of National Affairs.
Burke, M. D. (1997). Drugs in sport: Have they practiced too hard? A response to Schneider and Butcher. Journal of the Philosophy of Sport, XXIV, 47-66.
Burke, W., Pinsky, L., & Press, N. (2001). Categorizing genetic tests to identify their ethical, legal, and social implications. American Journal of Medical Genetics, 106, 233-240.
Burleigh, M. (2002). Death and deliverance: 'Euthanasia' in Germany, c. 1900-1945. Cambridge: Cambridge University Press.
Burmeister, O. K. (2000). HCI professionalism: Ethical concerns in usability engineering. In Selected papers from the second Australian institute conference on computer ethics (pp. 11-17). Canberra, Australia.
Burrell, G. (1988). Modernism, post modernism and organizational analysis: The contribution of Michel Foucault. Organization Studies, 9(2), 221-235.
Business Software Alliance (BSA). (2005). Second annual BSA and IDC global software piracy study. Retrieved January 7, 2006 from http://www.bsa.org/globalstudy/upload/2005-Global-Study-English.pdf
Business Software Alliance (BSA). (2007). Fourth annual BSA and IDC global software piracy study. Retrieved July 27, 2007 from http://www.ifap.ru/library/book184.pdf
Butt, A. (2006). A cross-country comparison of software piracy determinants among university students: Demographics, ethical attitudes & socio-economic factors. In Emerging trends and challenges in information technology management: Proceedings of the 2006 Information Resources Management Association International Conference. Hershey: Idea Group Publishing.
Byars, L. L., & Rue, L. W. (2004). Human resource management (7th ed.). Boston: McGraw-Hill, Irwin.
Bygrave, L. (2002). Data Protection Law: Approaching its Rationale, Logic and Limits. The Hague: Kluwer Law International.
Bynum, T. W. (Ed.) (1985). Computers and ethics. Basil Blackwell. (Published as the October 1985 issue of Metaphilosophy.)
Bynum, T. W. (1992). Human values and the computer science curriculum. Retrieved May 30, 2007, from http://www.southernct.edu/organizations/rccs/resources/teaching/teaching_mono/Bynum/Bynum_human_values.html
Bynum, T. W. (2000, Summer). A very short history of computer ethics. Newsletter of the American Philosophical Association on Philosophy and Computing. Retrieved July 2005, from http://www.southernct.edu/organizations/rccs/resources/research/introduction/bynum_shrt_hist.html
Bynum, T. W. (2001). Computer ethics: Basic concepts and historical overview. In The Stanford Encyclopedia of Philosophy.
Bynum, T. W. (2001). Computer ethics: Its birth and future. Ethics and Information Technology, 3, 109–112.
Bynum, T. W. (2003). Norbert Wiener's foundation of computer ethics. The Research Center on Computing & Society. Retrieved May 30, 2007, from http://www.southernct.edu/organizations/rccs/resources/research/introduction/Bynum_Wiener.html
Bynum, T. W., & Rogerson, S. (1996). Introduction and overview: Global information ethics. Science and Engineering Ethics, 2, 131-136.
Cabero, J. (2001). Tecnología educativa. Diseño y utilización de medios en la enseñanza. Barcelona: Paidós.
Cabinet Office. (2002). Identity fraud: A study. Retrieved May 27, 2007 from http://www.identitycards.gov.uk/downloads/id_fraud-report.pdf
Calabresi, G., & Melamed, D. A. (1972). Property rules, liability rules and inalienability: One view of the cathedral. Harvard Law Review, 85, 1089-1128.
Califf, R., Morse, M., & Wittes, J. (2003). Toward protecting the safety of participants in clinical trials. Controlled Clinical Trials, 24, 256-271.
California Department of Consumer Affairs (2006, February 14). Privacy Laws. California Department of Consumer Affairs Office of Privacy Protection. Retrieved from http://www.privacy.ca.gov/lawenforcement/laws.htm
California State University Chico (2002, October 2). http://rce.csuchico.edu/online/site.asp
Callahan, D. (1980). Goals in the teaching of ethics. In Callahan, D. and Bok, S. (Eds.), Ethics teaching in higher education. New York: Plenum Press.
Callon, M. (1986). Some elements of a sociology of translation: Domestication of the scallops and the fishermen of St. Brieuc Bay. In Law, J. (Ed.), Power, action, and belief: A new sociology of knowledge? Sociological Review Monograph, 32, 196-233. London, UK: Routledge & Kegan Paul.
Camardella, M. (2003). Electronic monitoring in the workplace. Employee Relations Today, 30, 91-100.
Cameron, N. (2006). Nanotechnology and the human future: Policy, ethics, and risk. Annals of the New York Academy of Science, 1093(Dec), 280-300.
Camino, L. A. (2000). Youth-adult partnerships: Entering new territory in community work and research. Applied Developmental Science, 4(Supplement 1), 11-20.
Campbell, G. (2005, November/December). There's something in the air: Podcasting in education. Educause Review, 40, 33-46. Retrieved May 11, 2006, from http://www.educause.edu/ir/library/pdf/erm0561.pdf
Campbell, M. A. (2005). Cyber bullying: An old problem in a new guise? Australian Journal of Guidance and Counselling, 15(1), 68-76.
Camps, V. (2003). Ética para las ciencias y técnicas de la vida. In A. Ibarra & L. Olivé (Eds.), Cuestiones éticas en ciencia y tecnología en el siglo XXI (pp. 225-244). Madrid: Biblioteca Nueva.
Canadian Alliance Against Software Theft (CAAST). (2005). Software piracy runs rampant on Canadian university campuses. Retrieved November 1, 2005 from http://www.caast.org/release/default.asp?aID=139
Candilis, P. (2002). Distinguishing law and ethics: A challenge for the modern practitioner. Psychiatric Times, 19(12). Retrieved September 8, 2005, from http://www.freedominfo.org/survey.htm
Cannistra, S. A. (2004). The ethics of early stopping rules: Who is protecting whom? Journal of Clinical Oncology, 22(9), 1542-1545.
Capurro, R. (1989). Towards an information ecology. In I. Wormell (Ed.), Information quality: Definitions and dimensions. Proceedings of the NORDINFO International Seminar "Information and Quality" (pp. 122-139). Copenhagen.
Capurro, R. (1995). Leben im Informationszeitalter. Berlin: Akademie Verlag.
Capurro, R. (2003). Angeletics – A message theory. In H. H. Diebner & L. Ramsay (Eds.), Hierarchies of Communication (pp. 58-71). Karlsruhe: ZKM Verlag. Retrieved October 1, 2006, from http://www.capurro.de/angeletics_zkm.html
Capurro, R. (2005). Privacy: An intercultural perspective. Ethics and Information Technology, 7, 37-47.
Capurro, R. (2006). Ethik der Informationsgesellschaft: Ein interkultureller Versuch. Beitrag zur Tagung Shapes of Things to Come, Humboldt-Universität, Berlin (15-17 February 2006). Retrieved October 1, 2006, from http://www.capurro.de/db.htm (printing in Japanese translation).
Capurro, R. (2006). Towards an ontological foundation of information ethics. Ethics and Information Technology, 8, 175-186.
Capurro, R. (2007). Information Ethics for and from Africa. International Review of Information Ethics, (7).
Capurro, R. (2008). Intercultural information ethics. In K. E. Himma & H. T. Tavani (Eds.), Handbook of Information and Computer Ethics. Wiley & Sons (forthcoming).
Capurro, R. and Pingel, C. (2002). Ethical Issues of Online Communication Research. Ethics and Information Technology, 4, 189-194.
Capurro, R., Frühbauer, J., & Hausmanninger, T. (Eds.) (2007). Localizing the Internet: Ethical aspects in intercultural perspective. München: Fink Verlag.
Carbo, T. & Almagno, S. (2001). Information ethics: The duty, privilege and challenge of educating information professionals. Library Trends, 49(3), 510-518.
Cardon, P. L. (2000, Winter-Spring). At-risk students and technology education: A qualitative study. Journal of Technology Studies [Online]. Available from http://scholar.lib.vt.edu/ejournals/JOTS/Winter-Spring2000/cardon.html
Carlin, F. (1996). The data protection directive: The introduction of common privacy standards. European Law Review, 21, 65-70.
Carlson, C. (2006, February 1). Unauthorized sale of phone records on the rise. eWeek.
Carroll, A. B. (1990). Principles of business ethics: Their role in decision making and in initial consensus. Management Decision, 28(8), 21-23.
Cartmill, M. (1997). The third man. Discover, 18(9). Electronic document, http://www.discover.com/issues/sep-97/departments/thethirdman1220/, accessed April 4, 2000.
Cartmill, M. (1998). The status of the race concept in physical anthropology. American Anthropologist, 100(3), 651–660.
Cartmill, M., & Brown, K. (2003). Surveying the race concept: A reply to Lieberman, Kirk, and Littlefield. American Anthropologist, 105(1), 114–115.
Carver, L. (2002). MESS model for more effective management of information systems under conditions of centralisation/decentralisation. Unpublished PhD thesis. Canberra: UNSW, Australian Defence Force Academy.
Casey, E. (2004). Digital evidence and computer crime: Forensic science, computers and the Internet (2nd ed.). London: Elsevier Academic Press.
Cassell, P. G. (2005). Recognizing victims in the federal rules of criminal procedure: Proposed amendments in light of the Crime Victims' Rights Act. Brigham Young University Law Review, 4, 835-925.
Castelfranchi, C. (2000, June). Artificial liars: Why computers will (necessarily) deceive us and each other. Ethics and Information Technology, 2(2), 113-119.
Castells, M. (1997). La era de la información. Economía, sociedad y cultura (3 vols.). Madrid: Alianza.
Castells, M. (2000). The information age: Economy, society, and culture, Volume I: The rise of the network society (2nd ed.). Oxford: Blackwell.
Castells, M. (2002). Tecnologías de la Información y de la Comunicación y desarrollo social. Revista de Economía Mundial, 7, 91-107.
Cate, F. H. (forthcoming). The failure of fair information practice principles. In Consumer protection in the age of the 'information economy.'
Catudal, J. N. (2001). Censorship, the Internet and Child Pornography Law of 1996: A Critique. In Richard A. Spinello and Herman T. Tavani (Eds.), Readings in Cyberethics. Sudbury, Mass: Jones and Bartlett Publishers.
Cauley, L. (2006, May 11). NSA has massive database of Americans' phone calls. USA Today. Retrieved from http://www.usatoday.com/news/washington/2006-05-10-nsa_x.htm
Caulfield, T. (2004). Nanotechnology: Facts and fictions? Health Law Reform, 12(3), 1-4. Retrieved from http://www.law.ualberta.ca/centres/hli/pdfs/hlr/v12_3/12-3-01%20Caulfield.pdf
Caulfield, T. (2004). Biotechnology and the popular press: Hype and the selling of science. Trends in Biotechnology, 22(7), 337-39.
Caux Round Table. (2002). Principles for business. Saint Paul, MN: Caux Round Table Secretariat.
Cavalier, R. (2005). The impact of the internet on our moral lives. NY: State University of New York Press.
Cavalli-Sforza, L. L., & Cavalli-Sforza, F. (1995). The great human diasporas: The history of diversity and evolution. Reading: Addison-Wesley.
Cavalli-Sforza, L. L., Menozzi, P., & Piazza, A. (1994). The history and geography of human genes. Princeton: Princeton University Press.
Cavanagh, A. (1999). Behaviour in Public: Ethics in Online Ethnography. Available: http://www.socio.demon.co.uk/6/cavanagh.html. Accessed 12/01/07.
Cave, D. (2001, August 2). The Parasite Economy. Salon.com. Retrieved from http://archive.salon.com/tech/feature/2001/08/02/parasite_capital/
Cavina, M. (2007). Il padre spodestato. L'autorità paterna dall'antichità a oggi. Roma-Bari: Laterza.
Cebeci, Z., & Tekdal, M. (2006). Using Podcasts as Audio Learning Objects [Electronic version]. Interdisciplinary Journal of Knowledge and Learning Objects, 2.
CEC (2002). Creating a safer information society by improving the security of information infrastructures and combating computer-related crime. Council of the European Communities (CEC). Retrieved from http://europa.eu.int/ISPO/eif/InternetPoliciesSite/Crime/CrimeCommEN.html#1.Opportunities%20and%20Threats%20in%20the%20Information%20Society
Centers for Disease Control and Prevention (2006). 2004 Assisted Reproductive Technology Success Rates - National Summary & Fertility Clinic Reports. Atlanta: US Department of Health and Human Services. Retrieved May 2, 2007, from http://www.cdc.gov/art/ART2004/index.htm
Cerqui, D. (1995). L'extériorisation chez Leroi-Gourhan. Lausanne: Institut d'Anthropologie et de Sociologie.
Cerqui, D. (1997). L'ambivalence du développement technique: entre extériorisation et intériorisation. Revue européenne des sciences sociales, 108, 77-91.
Cerqui, D. (2002). The future of humankind in the era of human and computer hybridisation: An anthropological analysis. Ethics and Information Technology, 4(2), 1-8.
Cerqui, D. (2005). La société de l'information, de la médiation à l'immédiat. In G. Berthoud, A. Kündig & B. Sitter-Liver (Eds.), Société de l'information: récits et réalités, actes du colloque 2003 de l'Académie suisse des sciences humaines (pp. 311-321). Fribourg: Academic Press.
Cerqui, D. (forthcoming). Humains, machines, cyborgs: Le paradigme informationnel dans l'imaginaire technicien. Genève: Labor et Fides (collection Champs éthique).
Cerqui, D. & Warwick, K. (2005). Can converging technologies bridge the gap? Proceedings of the CEPE 2005 Conference (Computer Ethics: Philosophical Enquiry), University of Twente, Netherlands.
Cerqui, D. & Warwick, K. (to be published). Prospects for thought communication: Brain to brain and brain to machine. In K. Kimppa, P. Duquenoy & C. George (Eds.), Ethical, Legal and Social Issues in Medical Informatics. Idea Group.
Chachra, D. (2005). Beyond course-based engineering ethics instruction: Commentary on "Topics and cases for online education in engineering". Science & Engineering Ethics, 11(3), 459-62.
Chadwick, R. (2005). Nutrigenomics, individualism and sports. In Tamburrini, C. & Tannsjo, T. (Eds.), Genetic Technology and Sport: Ethical Questions (pp. 126-135). Oxon and New York: Routledge.
Chaker, A. M. (2007). Schools act to short-circuit spread of 'cyberbullying'. Wall Street Journal (Eastern Edition), D1, D4.
Chan, A., & Garrick, J. (2002). Organisation theory in turbulent times: The traces of Foucault's ethics. Organization, 9(4), 683-701.
Chan, A. & Lee, M. J. W. (2005). An MP3 a day keeps the worries away: Exploring the use of podcasting to address preconceptions and alleviate pre-class anxiety amongst undergraduate information technology students. In D. H. R. Spennemann & L. Burr (Eds.), Good Practice in Practice: Proceedings of the Student Experience Conference, 5-7th September '05 (pp. 59-71). Wagga Wagga, NSW: Charles Sturt University.
Chan, H. Y. L. & Pang, S. M. C. (2007). Quality of life concerns and end-of-life care preferences of aged persons in long-term care facilities. Journal of Clinical Nursing, 16, 2158-2166.
Chan, M. (2003). Corporate espionage and workplace trust/distrust. Journal of Business Ethics, 42, 43-58.
Chan, T. S. (1992). Emerging Trends in Export Channel Strategy: An Investigation of Hong Kong and Singaporean Firms. European Journal of Marketing, 26(3), 18-26.
Chang, C., Denning, P. J., Cross II, J. H., Engel, G., Sloan, R., Carver, D., Eckhouse, R., King, W., Lau, F., Mengel, S., Srimani, P., Roberts, E., Shackelford, R., Austing, R., Cover, C. F., Davies, G., McGettrick, A., Schneider, G. M., & Wolz, U. (2001, December 15). Computing Curricula 2001: Report of the ACM/IEEE-CS Joint Curriculum Task Force. Retrieved from http://www.computer.org/portal/cms_docs_ieeecs/ieeecs/education/cc2001/cc2001.pdf. Also at http://www.sigcse.org/cc2001/
Chapman, M. (Ed.). (1993). Social and biological aspects of ethnicity. Oxford: Oxford University Press.
Chatterjee, S. (2006, October 3). India BPOs under scanner after TV channel exposes data leak. International Business Times. Retrieved from www.ibtimes.com/.../nasscom-kiran-karnik-hsbc-data-privacy-cyber-crimechannel-4-bpo-call-center.htm
Chatters, J. C. (2000). The recovery and first analysis of an early Holocene human skeleton from Kennewick, Washington. American Antiquity, 65(2), 291–316.
Check, J. (1985). The Effects of Violent and Non-Violent Pornography. Ottawa: Department of Justice, Canada.
Checkland, P. (1981). Systems thinking, systems practice. London: John Wiley and Sons.
Checkland, P. (1990). Information systems and systems thinking: Time to unite? In P. Checkland & J. Scholes (Eds.), Soft systems methodology in action (pp. 303-315). Chichester, UK: John Wiley & Sons Ltd.
Checkland, P., & Holwell, S. (1998). Information, systems and information systems: Making sense of the field. Chichester, UK: John Wiley and Sons.
Checkland, P., & Scholes, P. (1990). Soft systems methodology in action. Chichester: John Wiley and Sons.
Chee, E., & Schneberger, S. (1998). British Columbia's PHARMANET project. University of Western Ontario - Richard Ivey School of Business.
Chen, A. (2004, July 12). After slow start, location-based services are on the map. Retrieved September 6, 2005, from http://www.eweek.com/article2/0,1895,1621409,00.asp
Chen, Y.-H. and Barnes, S. (2007). Initial trust and online buyer behavior. Industrial Management and Data Systems, 107(1), 21-36.
Cheng, H. K., Sims, R. R., & Teegen, H. (1997). To purchase or to pirate software: An empirical study. Journal of Management Information Systems, 13(4), 49-60.
Chestnut, A. (2006, March 9). Cell phone GPS tracking - privacy issues. eZine Articles. Retrieved from http://ezinearticles.com/?Cell-Phone-GPS-Tracking--Privacy-Issues&id=159255
Cheung, C. M. K. and Lee, M. K. O. (2004/2005). The asymmetric effect of Web site attribute performance on Web satisfaction: An empirical study. E-Service Journal, 3(3), 65-105.
Cheung, C. M. K., Chan, G. W. W., and Limayem, M. (2005). A critical review of online consumer behavior: Empirical research. Journal of Electronic Commerce in Organizations, 3(4), 1-19.
Child, J. (1981). Culture, contingency and capitalism in the cross-national study of organization. Research in Organization Behavior, 3, 303-356.
Ching, C., Kafai, Y. & Marshall, S. (1998). Give Girls Some Space: Considering Gender in Collaborative Software Programming Activities. In Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 1998 (pp. 56–62). Chesapeake, VA: AACE.
Chinnery, G. M. (2006). Emerging Technologies - Going to the MALL: Mobile Assisted Language Learning [Electronic version]. Language Learning & Technology, 10(1), 9-16.
Chiou, W.-B. (2006). Adolescents' sexual self-disclosure on the Internet: Deindividuation and impression management. Adolescence, 41(163), 547-561.
Chisholm, J. F. (2006). Cyberspace violence against girls and adolescent females. Annals of New York Academy of Sciences, 1087, 74-89.
Choonara, I. et al. (Eds.) (2003). Introduction to paediatric and perinatal drug therapy. Nottingham: University Press.
Christensen, A. L., & Eining, M. M. (1991). Factors influencing software piracy: Implications for accountants. Journal of Information Systems, 5(Spring), 67-80.
Chromatius (2006, February 19). Dust: A ubiquitous surveillance technology. Blog Critics Magazine. Retrieved from http://blogcritics.org/archives/2006/02/19/111529.php
Chuang, C. P. & Chen, J. C. (1999). Issues in information ethics and educational policies for the coming age. Journal of Industrial Technology, 15, 1-6.
Churchman, C. W. (1968). The systems approach. New York: Dell.
Churchman, C. W. (1970). Operations research as a profession. Management Science, 17, B37-B53.
Churchman, C. W. (1979). The systems approach and its enemies. New York: Basic Books.
Citizens for Responsible Care and Research (CIRCARE) (2002). A human rights organization. April 17.
Clare, J., & Morgan, F. (forthcoming). Exploring the "Bright Figure" of crime: Analysing the relationship between assault victimization and victims' perceptions of criminality.
Clark, H., & Brennan, S. (1991). Grounding in Communication. In L. Resnick, J. Levine & S. Teasley (Eds.), Perspectives on Socially Shared Cognition. Washington: American Psychological Association.
Clark, J. M. (1923). Studies in the Economics of Overhead Costs. Chicago: University of Chicago Press.
Clarke, R. (1999). Internet privacy concerns confirm the case for intervention. Communications of the ACM, 42(2), 60-66.
Clarke, S. (2001). Information systems strategic management: An integrated approach. London: Routledge.
Clarke, S., & Lehaney, B. (2000). Mixing methodologies for information systems development and strategy: A higher education case study. Journal of the Operational Research Society, 51, 542-566.
Clarkeburne, H. M., Downie, J. R., Gray, C., & Matthew, R. G. S. (2003). Measuring ethical development in life sciences students: A study using Perry's developmental model. Studies in Higher Education, 28, 443-456.
Clynes, M. and Kline, N. (1960). Cyborgs and Space. Astronautics, September, 26-27, 74-75.
CNET News (2006). Three workers depart AOL after privacy uproar. Retrieved on August 23, 2006, from http://news.com.com/Three+workers+depart+AOL+after+privacy+uproar/2100-1030_3-6107830.html
CNN.com. (2001). Microsoft in China: Clash of titans. Retrieved March 19, 2006 from http://archives.cnn.com/2000/TECH/computing/02/23/microsoft.china.idg/
CNN.com. (2004). Brain implant devices approved for trials. Retrieved July 2, 2007, from http://www.webmd.com/stroke/news/20040415/Brain-Implants
Cobb, M. D. (2005). Framing effects on public opinion about nanotechnology. Science Communication, 27(2), 221-39.
Cobb, M. D. and Macoubrie, J. (2004). Public perceptions about nanotechnology: Risks, benefits and trust. Journal of Nanoparticle Research, 6(4), 395-405.
Cochrane, T. (2005). Podcasting: Do It Yourself Guide. Wiley.
Cockerham, W. (1993). The changing pattern of physician-patient interaction. In M. Clair & R. Allman (Eds.), Sociomedical perspectives on patient care (pp. 47-57). Lexington, KY: University Press of Kentucky.
Coffee, P. (2004, April 19). Privacy concerns dog RFID chips. eWeek.
Cogoi, C., Sobrado, L., Hawthorm, R. & Korte, A. (2005). ICT skills for guidance practitioners: Final results of the research. Work group, CD-ROM en AIOSP Conferencia Internacional. Lisboa.
Cohen, F. (1992). A formal definition of computer worms and some related results. IFIP-TC11 Computers and Security, 11(7), 641-652.
Cohen, F., and Koike, D. (2003). Leading attackers through attack graphs with deceptions. Computers and Security, 22(5), 402-411.
Cohen, J. (2005). AIDS research: Male circumcision thwarts HIV infection. Science, 2005, 860.
Cohen, J. E. (2001). Privacy, ideology, and technology: A response to Jeffrey Rosen. Georgetown Law Journal, 89, 2029.
Cohen, L., Manion, L. & Morrison, K. (2003). Research methods in education (5th ed.). London: Routledge Falmer.
Cohen, R., Singer, P. A., Rothman, A. I., & Robb, A. (1991). Assessing competency to address ethical issues in medicine. Academic Medicine, 66, 14-5.
Cohendet, P., Kern, F., Mehmanpazir, B., & Munier, F. (1999). Knowledge Coordination, Competence Creation and Integrated Networks in Globalised Firms. Cambridge Journal of Economics, 23, 225-241.
Coleman, D. J. (1998). Applied and academic geomatics into the 21st century. Proc. FIG Commission 2, XXI Inter. FIG Congress, 39-62. Brighton, UK.
Coleman, J. S. (1988). Social capital and the creation of human capital. American Journal of Sociology, 94(Supplement), S95-S120.
Coleman, K. G. (2001). Android arete: Toward a virtue ethic for computational agents. Ethics and Information Technology, 3, 247-265.
Coleman, S. & Gøtze, J. (2001). Bowling together: Online public engagement in policy deliberation. Hansard Society. Online at bowlingtogether.net
Cole-Turner, R. (1998). Do means matter? In Parens, E. (Ed.), Enhancing human traits: Ethical and social implications. Washington, DC: Georgetown University Press.
Colhoun, H., & Mckeigue, P. (2003). Problems of reporting genetic associations with complex outcomes. The Lancet, 361, 865-872.
Colla, P. S. (2000). Per la nazione e per la razza. Cittadini ed esclusi nel "modello svedese". Roma: Carocci.
Collard, M., & O'Higgins, P. (2001). Ontogeny and homoplasy in the papionin monkey face. Evolution and Development, 3, 322–331.
Collard, M. & Wood, B. (2000). How reliable are human phylogenetic hypotheses? Proceedings of the National Academy of Sciences USA, 97(9), 5003–5006.
Collins, C. (2002). AVAC announces appointment of new director of education and outreach: Ford Foundation makes two year award to AIDS Vaccine Advocacy Coalition for community education and mobilization. June 25, New York. Retrieved from avac.org/press releases.htm
Collins, C. (2002). Thai-WRAIR phase III HIV vaccine trial. AIDS Vaccine Advocacy Coalition. July 8, avac.org reports and documents. http://www.avac.org/index.htm
Collins, S. (1982). Selfless Persons. Cambridge University Press.
Colombo, A., Bendelow, G., Fulford, K. W. M., & William, S. (2003). Evaluating the influence of implicit models of mental disorder on processes of shared decision making with community-based multidisciplinary teams. Social Science & Medicine, 56, 1557-1570.
Colombo, G. & Franklin, C. (2005). Absolute Beginner's Guide to Podcasting (Absolute Beginner's Guide). Que.
Colvin, V. L. (2003). The potential environmental impact of engineered nanoparticles. Nature Biotechnology, 21(10), 1166-1170.
COMEST (The World Commission on the Ethics of Scientific Knowledge and Technology). (2004). The teaching of ethics. Paris: UNESCO.
Commonwealth of Australia. (2005). Anti-Terrorism Act 2005. Retrieved May 17, 2007, from http://www.comlaw.gov.au/ComLaw/Legislation/Act1.nsf/0/53D2DEBD3AFB7825CA2570B2000B29D5?OpenDocument
Community Oriented Policing Services (2006). A national strategy to combat identity theft. US Department of Justice. Retrieved May 27, 2007 from http://www.cops.usdoj.gov/mime/open.pdf?Item=1732
Computer Ethics Institute (CEI). (1992). The ten commandments of computer ethics. Retrieved May 30, 2007, from http://www.brook.edu/its/cei/overview/Ten_Commanments_of_Computer_Ethics.htm
Computer Professionals for Social Responsibility. (2001). The ten commandments of computer ethics. Retrieved September 9, 2003 from http://www.cpsr.org/program/ethics/cei.html
Conger, S. & Loch, K. (1995). Ethics and computer use. Communications of the ACM, 38(12), 30-32.
Conger, S., Mason, R. O., Mason, F., and Pratt, J. H. (2005, August). The connected home: Poison or paradise. In Proceedings of Academy of Management Meeting. Honolulu, HI.
Conger, S., Mason, R. O., Mason, F., & Pratt, J. H. (2006, December 10). Legal sharing, shadow sharing, leakages: Issues of personal information privacy have changed. In Proceedings of IFIPS 8.2 OASIS Meeting. Milwaukee, WI.
Conner, L. (2003). The importance of developing critical thinking in issues education. New Zealand Biotechnology Association Journal, 56, 58-71.
Conner, L. (2004). Assessing learning about social and ethical issues in a biology class. School Science Review, 86(315), 45-51.
Consequences of Computing: A Framework for Teaching. Project ImpactCS Report 1. Retrieved from http://www.seas.gwu.edu/~impactcs/paper/toc.html
Cooper, A. & Griffin-Shelley, E. (2002). A Quick Tour of On-Line Sexuality: Part 1. Annals of the American Psychotherapy Association, 5, 11-13.
Copyleft (2006). CopyLeft. Retrieved May 17, 2007, from http://www.gnu.org/copyleft/
Córdoba, J.-R., & Robson, W. D. (2003). Making the evaluation of information systems insightful: Understanding the role of power-ethics strategies. Electronic Journal of Information Systems Evaluation, 6(2), 55-64.
Corner, J., Buchanan, J., & Henig, M. (2002). A dynamic model for structuring decision problems. Retrieved January 3, 2004, from www.mngt.waikato.ac.nz/depts./mnss/JIM/ResearchArticles.htm
Cornford, J., Gillespie, A., & Richardson, R. (1999). Regional development in the information society. Boulder, CO: Rowman and Littlefield.
Correa, C. (2007). Trade related aspects of intellectual property rights: A commentary on the TRIPS agreement. Oxford University Press.
Corruccini, R. S. (1973). Size and shape in similarity coefficients based on metric characters. American Journal of Physical Anthropology, 38, 743–754.
Corruccini, R. S. (1987). Shape in morphometrics: Comparative analysis. American Journal of Physical Anthropology, 73, 289–303.
Corry, D., & Nutz, D. (2003). Employee e-mail and Internet use: Canadian legal issues. Journal of Labor Research, 24, 233-257.
Cortés, P. A. (2004). Una mirada psicoeducativa de los valores. Seminario aplicado a las nuevas tecnologías de la educación. Zaragoza: Prensas Universitarias de la Universidad de Zaragoza.
Cortés, P. A. (2005). Educational Technology as a means to an end. Educational Technology Review, 13(1), 73-90.
Cortés, P. A. (2005). Las preconcepciones sobre la tecnoética en los adultos. Revista Mexicana de Psicología, 22(2), 541-552.
Cortés, P. A. (2006). Valores y orientación profesional: líneas de investigación e intervención actuales. Contextos Educativos, 8-9, 223-238.
Cortés-Pascual, P. (2005). Educational technoethics: As a means to an end. AACE Journal, 13(1), 73-90.
Cortez, J. J. (2002). The origins of technology in education: The role of federal interventions and national emergencies during the early evolution of instructional technology and media. Unpublished master's thesis, University of Dayton.
Cortina, A. (1998). ¿Qué son los valores y para qué sirven? Temas para el debate, 42, 20-22.
Cortina, A. (2001). Alianza y Contrato. Política, ética y religión. Madrid: Editorial Trotta.
Coser, L. A. (1956). The Functions of Social Conflict. NY: Free Press.
Cottone, R. R. & Tarvydas, V. M. (2003). Ethical issues in counseling (2nd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.
Council for International Organizations of Medical Sciences (CIOMS) (1991). International Guidelines for Ethical Review of Epidemiological Studies. Geneva: CIOMS.
Council for International Organizations of Medical Sciences (CIOMS), in collaboration with the World Health Organization (WHO) (1993). International Ethical Guidelines for Biomedical Research Involving Human Subjects. Geneva.
Council for International Organizations of Medical Sciences (CIOMS), in collaboration with the World Health Organization (WHO) (2002). International Ethical Guidelines for Biomedical Research Involving Human Subjects. Geneva.
Council of Europe (1997). Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine. European Treaty Series, No. 164, Oviedo.
Council of Europe (2005). Additional Protocol to the Convention on Human Rights and Biomedicine Concerning Biomedical Research. European Treaty Series, No. 195, Strasbourg.
Cox, A. (2004). Business Relationship Alignment: On the Commensurability of Value Capture and Mutuality in Buyer and Supplier Exchange. Supply Chain Management: An International Journal, 9(5), 410-420.
Coye, M. (2007). Jogging into the sunset. Retrieved July 2, 2007, from http://www.healthtechcenter.org/Common_site/news/docs/Molly_MostWired112906.pdf
Coyle, J. (2007). Learning the Golden Rule the hard way. Toronto Star.
Crain, W. C. (1985). Theories of development. New York: Prentice-Hall.
Creative Commons (2007). Creative Commons Attribution 2.5. Retrieved May 18, 2007 from http://creativecommons.org/licenses/by/2.5
Creativity Unleashed Ltd. (2003). Home page. Retrieved December 30, 2003, from www.cul.co.uk
Cremin, L. (1980). American education: The national experience, 1783-1876. New York: Harper & Row, Inc.
Crews, D. E., & Bindon, J. R. (1991). Ethnicity as a taxonomic tool in biomedical and biosocial research. Ethnicity and Disease, 1, 42–49.
Crignon-De Oliveira, C., & Nikodimov, M. G. (2004). A qui appartient le corps humain? Médecine, politique et droit. Paris: Les Belles Lettres.
Cronin, B. & Davenport, E. (2001). E-rogenous Zones: Positing Pornography in the Digital Economy. The Information Society, 17, 33-48.
Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press.
Culler, D. E., & Mulder, H. (2004, August 2). Sensor Nets/RFID. Intel. Retrieved from http://www.intel.com/research/exploratory/smartnetworks.htm
Culnan, M. (2000). Protecting privacy online: Is self-regulation working? Journal of Public Policy & Marketing, 19(1), 20-26.
Culnan, M. J. (1993). How did they get my name? An exploratory investigation of consumer attitudes toward secondary information use. MIS Quarterly, 17(3), 341-363.
Culnan, M. J., & Armstrong, P. (1999). Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science, 10, 104-115.
Currat, M., & Excoffier, L. (2004). Modern humans did not admix with Neanderthals during their range expansion into Europe. PLoS Biology, 2(12), 2264–2274.
Curry, M. R. (1995). Geographic information systems and the inevitability of ethical inconsistencies. In J. Pickles (Ed.), Ground truth (pp. 68-87). London: The Guilford Press.
Curtis, B., & Hunt, A. (2007). The fellatio "epidemic": Age relations and access to the erotic arts. Sexualities, 10(1), 5-28.
Cyber bullying: Understanding and preventing online harassment and bullying. (2006). School Libraries in Canada, 25(4).
CyberAtlas Staff (2003). Colleges a gateway to software piracy. CyberAtlas. Retrieved September 30, 2003, from http://cyberatlas.Internet.com/big_picture/applications/article/0,,1301_3078901,00.html
CyberTipline (n.d.). CyberTipline: Annual report totals by incident type. Retrieved June 3, 2007, from http://www.cybertipline.com/en_US/documents/CyberTiplineReportTotals.pdf
CyberTipline (n.d.). CyberTipline fact sheet. Retrieved June 3, 2007, from http://www.cybertipline.com/en_US/documents/CyberTiplineFactSheet.pdf
D'Ovidio, R., & Doyle, J. (2003). A study on cyberstalking: Understanding investigative hurdles. FBI Law Enforcement Bulletin, 72(3), 10-17.
Dagys, A. J. (2005). Podcasting Now! Audio Your Way. Course Technology PTR.
Dahl, M. (2005). High-tech cheating comes to high schools. Detroit School News, September 24, 2005. Retrieved May 17, 2007, from http://www.detnews.com/2005/schools/0509/24/0scho-325779.htm
Dale, E. (1969). Audiovisual methods in teaching (3rd ed.). New York: Dryden Press.
Danielson, P. (1992). Artificial morality: Virtuous robots for virtual games. London: Routledge.
Darier, E. (1998). Time to be lazy: Work, the environment and modern subjectivities. Time & Society, 7(2), 193-208.
Darier, E. (1999). Foucault and the environment: An introduction. In E. Darier (Ed.), Discourses of the environment (pp. 1-33). Oxford: Blackwell.
Darling, J. R., & Fogliasso, C. E. (1999). Conflict Management across Cultural Boundaries: A Case Analysis from a Multinational Bank. European Business Review, 99(6), 383-392.
DARPA (2002). DarpaTech 2002 Symposium: Transforming fantasy. U.S. Defense Advanced Research Projects Agency.
Dasgupta, R. (2002). Sexworks/Networks: What do people get out of Internet porn? Available: http://sarai.net. Accessed 17/01/07.
Data Protection Directive 95/46/EC. (1995). Of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
Daugherty, C. (1995). Perceptions of cancer patients and their physicians involved in Phase I trials. Journal of Clinical Oncology, 13, 1062-1072.
David, L. (2003, November 5). Satellite navigation: GPS grows up, market lifts off. Space.com. Retrieved from http://www.space.com/businesstechnology/technology/satcom_gps_overview_031105.html
Davis, D. (2001). Genetic dilemmas: Reproductive technology, parental choices, and children's futures. London: Routledge Publishers.
Davis, E. S. (2003). A world wide problem on the World Wide Web: International responses to transnational identity theft via the Internet. Journal of Law and Policy, 12, 201-227.
Davis, K. (1991). Inequality and access to health care. The Milbank Quarterly, 69(2), 253-273.
Davis, L. J. (2002). Bending over backwards: Disability, dismodernism & other difficult positions. New York & London: New York University Press.
Dawson, A. J., & Yentis, S. M. (2007). Contesting the science/ethics distinction in the review of clinical research. Journal of Medical Ethics, 33, 165-167.
Dawson, V., & Taylor, P. (1997). The inclusion of bioethics education in biotechnology courses. Eubios Journal of Asian & International Bioethics, 7(6), 171-175.
Day, M., & Gehringer, E. (n.d.). Educators and pornography: The "unacceptable use" of school computers. Retrieved May 17, 2007, from http://research.csc.ncsu.edu/efg/ethics/papers/acceptableuse.pdf
Day, S., & Schneider, P. L. (2002). Psychotherapy using distance technology: A comparison of face-to-face, video, and audio treatment. Journal of Counseling Psychology, 49, 499-503.
de Bono, E. (1992). Serious creativity. London: HarperCollins.
De Botton, A. (2002). The Art of travel. New York: Pantheon Books.
De Castro, L. (1995). Exploitation in the use of human subjects for medical experimentation. Bioethics, 9, 259-268.
De Dreu, C. K., & Weingart, L. R. (2003). Task versus Relationship Conflict, Team Performance, and Team Member Satisfaction: A Meta-Analysis. Journal of Applied Psychology, 88(4), 741-749.
De Jong, K. A., & Schultz, A. C. (1988). Using Experience-Based Learning in Game Playing. Proceedings of the Fifth International Machine Learning Conference, Ann Arbor, Michigan, June 12-14, 1988, 284-290.
De Sola, C. (1994). Privacy and genetic data: Cases of conflict. Law and the Human Genome Review, 1, 173-185.
Dean, M., Stephens, C., Winkler, C., Lomb, D. A., Ramsburg, M., Boaze, R., Stewart, C., Charbonneau, L., Goldman, D., Albough, B. J., Goedert, J. J., Beasley, P., Hwang, L-V., Buchbinder, S., Weedon, M., Johnson, P. A., Eichelberger, M., & O'Brien, S. J. (1994). Polymorphic admixture typing in human ethnic populations. American Journal of Human Genetics, 55, 788–808.
Dean, P. J. (1992). Making codes of ethics real. Journal of Business Ethics, 11, 285-290.
Deborah, B. (1991). Asking for help: A guide to using socially responsible consultants. Business Ethics Magazine, 11, 24-29.
Deciphering AIDS Vaccine. (2006). An anthology of VAx and IAVI Report articles explaining key concepts in AIDS vaccine research and clinical trials. July 2006.
Declaration of Helsinki, Paragraph 30. (2004). Retrieved June 12, 2007, from http://www.wma.net/e/ethicsunit/helsinki.htm/
Declaration of Helsinki. (2000). World Medical Association. Retrieved March 10, 2007, from http://www.wma.net/e/policy/b3.htm
Defillippi, R. J. (2002). Organizational Models for Collaboration in the New Economy. Human Resource Planning, 25(4), 7-18.
DeGeorge, R. (2006). Information technology, globalization and ethics. Ethics and Information Technology, 8, 29-40.
Deirmenjian, J. M. (2002). Pedophilia on the Internet. Journal of Forensic Science, 47(5), 1-3.
Deluna, J. (2006). Infusing technology into the K-12 classroom. Youngstown, NY: Cambria Press.
Deniker, J. (1900 [1904]). The races of man: An outline of anthropology and ethnography. London: Walter Scott Publishing Co. Ltd.
Dennett, D. (1998). When HAL kills, who's to blame? Computer ethics. In D. Stork (Ed.), HAL's legacy: 2001's computer as dream and reality (pp. 351-365). Cambridge, MA: MIT Press.
Dennett, D. (2003). Freedom evolves. New York, New York: Penguin Books.
Dennett, D. C. (1978). Intentional Systems. In Brainstorms: Philosophical Essays on Mind and Psychology. MIT Press, 3-22.
Dennett, D. C. (1978). Conditions of Personhood. In Brainstorms: Philosophical Essays on Mind and Psychology. MIT Press, 267-285.
Dennett, D. C. (1987). The Intentional Stance. Cambridge, MA; London: MIT Press.
Denning, D. (2001). Is cyber terror next? Social Science Research Council. Retrieved from http://www.ssrc.org/sept11/essays/denning.htm
Denning, D., & Baugh, W. (1999). Hiding crimes in cyberspace. Information, Communication and Society, 2(3), 251-276.
Dent, D. (1969). Landmarks in learning: The story of SVE. Chicago, IL: Society for Visual Education.
Denton, I. (1993). Telemedicine: A new paradigm. Healthcare Informatics, 10(11), 44-46, 48, 50.
Department of Health, Education, and Welfare (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved from http://history.nih.gov/laws/pdf/belmont.pdf
Department of Justice Canada (2006). Copyright Act, R.S., 1985, c. C-42. Retrieved March 4, 2006, from http://laws.justice.gc.ca/en/C-42/index.html
Department of Justice Canada. (2004). Employment Equity Act. Retrieved October 14, 2004, from http://laws.justice.gc.ca/en/E-5.401/50057.html
Dertouzos, M. (1997). What will be: How the world of information will change our lives. San Francisco: Harper.
DeSanctis, G., & Monge, P. (1998). Communication Processes for Virtual Organizations. Journal of Computer-Mediated Communication, 3(4), 0-0.
Deutsch, K. W. (1963). The Nerves of Government: Models of Political Communication and Control. New York: Free Press.
Deutsch, M. (1973). The Resolution of Conflict. New Haven: Yale University Press.
DeVille, K. (2003). The ethical implications of handheld medical computers in medicine. Ethics & Health Care, 6(1), 1-4.
DeWaal, F. (1996). Good natured: The origins of right and wrong in humans and other animals. Cambridge, MA: Harvard University Press.
DeWaal, F. (2006). Primates and philosophers: How morality evolved. Princeton, NJ: Princeton University Press.
Dewey, J. (1903). Ethical principles underlying education (as cited in Yeaman, A. R. J., 2005).
Dewey, J. (1938/1997). Experience and education. New York: Macmillan.
Dewey, J. (1972). The collected works of John Dewey. In Hickman, L. (1996), Techne and politeia revisited: Pragmatic paths to technological revolution. Society for Philosophy and Technology, 1(3–4) (September). Retrieved May 17, 2007, from http://scholar.lib.vt.edu/ejournals/SPT/v1_n3n4/Hickman.html
Dewey, J. (1993). John Dewey and American democracy. New York: Cornell University Press.
Dewey, J. (1997). Democracy and education. Washington, DC: Free Press.
DeWolfe, C. (2007). The MySpace generation. Forbes, 179(10), 72.
Dhillon, G. (2004). Dimensions of power and IS implementation. Information & Management, 41, 635-644.
Di Paolo, E. A. (2001). Artificial life and historical processes. In J. Kelemen & P. Sosik (Eds.), Advances in Artificial Life, Proceedings of ECAL 2001 (pp. 649-658). Berlin Heidelberg: Springer-Verlag.
Dickens, B. M. (1991). Issues in preparing ethical guidelines for epidemiological studies. Law, Medicine and Health Care, 19(3-4), 175-183.
Dickens, B. M. (2001). The challenge of equivalent protection. Commissioned Paper.
Dictionary.com (2007). Definitions of privacy. Retrieved from http://www.dallasnews.com/sharedcontent/dws/bus/stories/092306dnbushp.7a661b5.html
Diederichsen, U. (1984). Der Allgemeine Teil des Buergerlichen Gesetzbuches fuer Studienanfaenger (5th extended ed.). Heidelberg: C. F. Mueller jur. Verlag.
Dietrich, E. (2001). Homo sapiens 2.0: Why we should build the better robots of our nature. Journal of Experimental and Theoretical Artificial Intelligence, 13(4), 323-328.
Digital Divide Organisation. (2007). Ushering in the second digital revolution. Retrieved May 17, 2007, from www.digitaldivide.org/dd/index.html
Dillon, G., & Torkzadeh, G. (2006). Value-focused assessment of information system security in organizations. Information Systems Journal, 16, 293-314.
Dinev, T., & Hart, P. (2006). An extended privacy calculus model for E-commerce transactions. Information Systems Research, 17(1), 61-80.
Directive 2001/20/EC. (2001). Of the European Parliament and the Council of April 2001 on the approximation of the laws, regulations and administrative provisions of the member states relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use, Section 3.
Directive 95/46/EC. (1995). Of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
Doctor, R. (1998). Justice and social equity in cyberspace. In Stichler, R. N., & Hauptman, R. (Eds.), Ethics, information and technology readings. Jefferson, NC: McFarland & Company, Inc., Publishers.
Doherty, N., & King, M. (2001). An investigation of the factors affecting the successful treatment of organisational issues in systems development. European Journal of Information Systems, 10, 147-160.
Doherty, S. (2001). Monitoring and privacy: Is your head still in the sand? Retrieved June 3, 2002, from http://www.nwc.com/1213/1213fl.html
Dolgin, J. (2001). Ideologies of discrimination: Personhood and the genetic group. Studies in History and Philosophy of Biological and Biomedical Sciences, 32(4), 705-721.
Dolya, A. (2007, April 18). Internal IT threats in Europe 2006. CNews.ru. Retrieved from http://eng.cnews.ru/cgi-bin/oranews/get_news.cgi?tmpl=top_print_eng&news_id=246325
Donnelly, K. M., & Berge, Z. L. (2006). Podcasting: Co-opting MP3 players for education and training purposes. Online Journal of Distance Learning Administration, 9(3). Retrieved April 29, 2007, from http://www.westga.edu/~distance/ojdla/fall93/donnelly93.htm
Doolin, B. (2004). Power and resistance in the implementation of a medical management information system. Information Systems Journal, 14(4), 343-362.
Doolin, B., Dillon, S., Thompson, F., & Corner, J. (2005). Perceived risk, the Internet shopping experience and online purchasing behavior: A New Zealand perspective. Journal of Global Information Management, 13(2), 66-88.
Dooyeweerd, H. (1969). A New critique of theoretical thought, Vols. I-IV (transl. from Dutch by D. H. Freeman & W. S. Young). S.l.: The Presbyterian and Reformed Publishing Company.
Douglas, J., & Marmar, C. R. (1998). Trauma, Memory, and Dissociation. Washington, DC: American Psychiatric Press.
Douglas, M. (1986). Risk acceptability according to the social sciences. New York: Russell Sage Foundation.
Dower, N. (2005). The earth charter and global ethics. Worldviews: Environment, Culture, Religion, 8, 15-28.
Downing, R. W. (2005). Shoring up the weakest link: What lawmakers around the world need to consider in developing comprehensive laws to combat cybercrime. Columbia Journal of Transnational Law, 43, 705-762.
Doyal, L., Hurwitz, B., & Yudkin, J. S. (1987). Teaching medical ethics symposium: Medical ethics and the clinical curriculum: A case study. Journal of Medical Ethics, 13, 144-149.
Drake, C. E., Oliver, J. J., & Koontz, E. J. (2004). Anatomy of a phishing email. Proceedings of the First Conference on Email and Anti-Spam. Retrieved May 10, 2007, from www.ceas.cc/papers-2004/114.pdf
Drazen, J. M., & Curfman, M. D. (2004). Public access to biomedical research. The New England Journal of Medicine, 351, 1343.
Dreifus, C. (2004). A lament for ancient games in modern world of doping. New York Times. Retrieved from http://www.nytimes.com/2004/08/03/Health/Psychology/03conv.html
Drennan, J., Mort, G., & Previte, J. (2006, January-March). Privacy, risk perception, and expert online behavior: An exploratory study of household end users. Journal of Organizational and End User Computing, 18(1), 1-22.
Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT.
Drexler, E. (1986). Engines of creation: The coming era of nanotechnology. New York: Anchor Press/Doubleday.
Dreyfus, H. (2002). Thinking in action: On the Internet. London: Routledge.
Dreyfus, H., & Dreyfus, S. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. Oxford: Basil Blackwell.
Dreyfuss, R. C., & Nelkin, D. (1992). The jurisprudence of genetics. Vanderbilt Law Review, 45(2), 313-348.
Drozdek, A. (1995). Moral dimension of man in the age of computers. Lanham, Maryland: University Press of America.
Dryzek, J. (2001). Deliberative democracy and beyond: Liberals, critics, contestations. Oxford: Oxford University Press.
Dukowitz, G. (2006). Out on MySpace, then out the door. Advocate (Los Angeles, Calif.), 22.
Duncan, N. (1996). Body Space. London: Routledge.
Duncker, E. (2002). Cross-cultural usability of the library metaphor. In Proceedings of the Second Joint Conference on Digital Libraries (JCDL) of the Association of Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers Computer Society (IEEE-CS) 2002, Portland, Oregon (pp. 223-230).
Dunham, W. (2006, May 22). Personal data on millions of U.S. veterans stolen. Computerworld. Retrieved from http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9000678
Dunn, L. C. (1951). Race and biology. Paris: UNESCO.
Dunn, R. (1990). Understanding the Dunn and Dunn learning styles model and the need for individual diagnosis and prescription. Reading, Writing and Learning Disabilities, 6, 223-247.
Dunn, R. (1992). Learning styles network mission and belief statements adopted. Learning Styles Network Newsletter, 13(2), 1.
Dunn, R., Beaudry, J. S., & Klavas, A. (1989). Survey of research on learning styles. Educational Leadership, 46(6), 50-58.
Dunnigan, J., & Nofi, A. (2001). Victory and deceit, second edition: Deception and trickery in war. San Jose, CA: Writers Club Press.
Dunwoody, S. (1999). Scientists, journalists, and the meaning of uncertainty. In Friedman, S., Dunwoody, S., & Rogers, C. L. (Eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 59-80). Lawrence Erlbaum: New York.
Dustnetworks.com (2006). Technology overview. Retrieved January 10, 2007, from http://dustnetworks.com/about/index.shtml
Dworkin, G. (1987). Intention, Foreseeability, and Responsibility. In Schoeman, F. (Ed.), Responsibility, Character, and the Emotions: New Essays in Moral Psychology. Cambridge: Cambridge University Press, 338-354.
e3L project website. (2005). http://e3learning.edc.polyu.edu.hk/main.htm
Eassom, S. B. (1995). Playing games with prisoners' dilemmas. Journal of the Philosophy of Sport, XXII, 26-47.
Ebbesen, M., Andersen, S., & Besenbacher, F. (2006). Ethics in nanotechnology: Starting from scratch? Bulletin of Science, Technology & Society, 26(6), 451-462.
Ebell, M. H., Becker, L. A., Barry, H. C., & Hagen, M. (1998). Survival after in-hospital cardiopulmonary resuscitation: A meta-analysis. Journal of General Internal Medicine, 13(12), 805-816.
Eberl, J. T. (2000). The beginning of personhood: A Thomistic biological analysis. Bioethics, 14, 134-157.
Echeverría, J. (2001). Tecnociencia y sistema de valores. In J. A. López & J. M. Sánchez (Eds.), Ciencia, tecnología, sociedad y cultura (pp. 221-230). Madrid: Biblioteca Nueva.
Echeverría, J. (2004). Los riesgos de la globalización. In Luján, J. L. y Echeverría, J. (Eds.), Gobernar los riesgos: Ciencia y valores en la sociedad del riesgo (pp. 187-205). Madrid: Biblioteca Nueva.
Edgar (1992). Objectivity, bias and truth. In Belsey, A., & Chadwick, R. (Eds.), Ethical issues in journalism and the media (pp. 112-29). London: Routledge.
Editorial (2007). An unwieldy hybrid. Nature, 447(7143), 353-354.
Editorial. (2007, August 2). A sporting chance: Bans on drug enhancement in sport may go the way of earlier prohibitions on women and remuneration. Nature, 448, 512.
Edwards, I. (2004). An ambiguous participant: The crime victims and criminal justice decision making. British Journal of Criminology, 44(6), 967-982.
Edwards, R. (1997). Changing places? Flexibility, lifelong learning and a learning society. New York: Routledge.
Edwards, W. (1977). How to use multiattribute utility measurement for social decisionmaking. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7(5), 326-340.
Eisen, A., & Parker, K. P. (2004). A model for teaching research ethics. Science & Engineering Ethics, 10(4), 693-704.
Eisenberg, N. (1992). The caring child. Cambridge, MA: Harvard University Press.
Eisenstein, E. L. (1979). The printing press as an agent of change. Cambridge University Press.
El Akkad, O., & McArthur, K. (2007). The hazards of Facebook's social experiment. The Globe and Mail.
Electronic Privacy Information Centre (EPIC). (2006). The Amy Boyer case. Retrieved May 17, 2007, from http://www.epic.org/privacy/boyer/
Elgesem, D. (2005). Deliberative Technology? In Thorseth, M., & Ess, C. (Eds.), Technology in a Multicultural and Global Democracy (pp. 61-77). Trondheim: Programme for Applied Ethics, NTNU.
Elgin, M. (2007). School iPod ban is cheating students. PC Advisor, May 12, 2007. Retrieved May 17, 2007, from http://www.pcadvisor.co.uk/news/index.cfm?newsid=9340
Eliot, T. S. (1952). Choruses from "The Rock." In Complete Poems and Plays (p. 96). New York: Harcourt, Brace.
Ellis, T. S., & Griffith, D. (2001). The evaluation of IT ethical scenarios using a multidimensional scale. The Database for Advances in Information Systems, 32(1), 75-85.
Ellul, J. (1964). The technological society. NY: Vintage Books.
Ely, D. P. (1997, Jan.-Feb.). Professional education in educational media and technology. TechTrends, 43(1), 17-22.
Emanuel, E. J., Wendler, D., & Grady, C. (2000). What makes clinical research ethical? Journal of the American Medical Association, 283, 2701-2711.
Emerson, R. W. (2006). On education. In Lee A. Jacobus (Ed.), A world of ideas (7th ed.).
Emmans, C. (2000, Spring). Internet ethics won't go away. The Education Digest, 66(1), 24-26.
Emmison, M. J., & Smith, P. D. (2000). Researching the visual: Images, objects, contexts and interactions in social and cultural inquiry. London: Sage Publications.
Eng, T., & Gustafson, D. (1999). Wired for health and well-being: The emergence of interactive health communication. Washington, DC: Science Panel on Interactive Communication, U.S. Department of Health and Human Services.
Engman, L. R. (1989). School effectiveness characteristics associated with black student mathematics achievement. Focus on Learning Problems in Mathematics, 11(4), 31-42.
Entwistle, V. A., Sheldon, T. A., Sowden, A. J., & Watt, I. S. (2001). Supporting consumer involvement in decision making: What constitutes quality in consumer health information. International Journal of Quality in Health Care, 8(5), 425-437.
EPC Global (2005, September). Guidelines on EPC for consumer products. Retrieved from http://www.epcglobalinc.org/public/ppsc_guide/
EPolicy Institute. (2004). EPolicy handbook. Retrieved September 13, 2004, from http://www.epolicyinstitute.com
Epstein, R. (1997). The case of the killer robot. New York: John Wiley and Sons.
Equal Opportunities Commission. (2004). Relevant legislation. Retrieved October 14, 2004, from http://www.epolicyinstitute.com
Ercegovac, Z., & Richardson, J., Jr. (2004). Academic dishonesty, plagiarism included, in the digital age: A literature review. College and Research Libraries. Retrieved June 30, 2007, from http://privateschool.about.com/cs/forteachers/a/cheating.htm
ERIC (2007). Effects of technology. Retrieved April 15, 2007, from http://www.eduref.org/cgi-bin/printresponses.cgi/Virtual/Qa/archives/Educational_Technology/Effects_of_Technology/negeffects.html and http://www.eduref.org/cgi-bin/printresponses.cgi/Virtual/Qa/archives/Educational_Technology/Effects_of_Technology/edtech.html
Ericsson, K. (1996). The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games. Mahwah, NJ: Erlbaum.
Eriksen, C. W. (1962). Behavior and Awareness: A Symposium of Research and Interpretation. Durham, NC: Duke University Press.
Eriksson, C., & Wadendal, I. (2002). Pappan: Det var inget hedersmord. Svenska Dagbladet, 29(1-02).
ESHRE PGD Consortium Steering Committee (2002). ESHRE Preimplantation Genetic Diagnosis Consortium: Data collection III (May 2001). Human Reproduction, 17(1), 233-246.
Esquirol, J. (Ed.) (2003). Tecnoética: Actas del II Congreso Internacional de Tecnoética. Barcelona: Publicaciones Universitat de Barcelona.
Ess, C. (2002). Introduction. Ethics and Information Technology, 4(3), 177-188.
Ess, C. (2005). "Lost in translation"?: Intercultural dialogues on privacy and information ethics (Introduction to special issue on Privacy and Data Privacy Protection in Asia). Ethics and Information Technology, 7, 1-6.
ETC Group (2003). The big down: Atomtech—Technologies converging at the nano-scale. Winnipeg: Canada.
Ethical Conduct for Research Involving Humans: Tri-Council Statement. (1998).
Ethical consideration for HIV/AIDS clinical and epidemiological research. (2000). Department of Health, HIV/AIDS/STD Directorate, Republic of South Africa.
Ethics. (2007). In Encyclopædia Britannica. Retrieved June 4, 2007, from Encyclopædia Britannica Online: http://www.britannica.com/eb/article-252578
Ettenheim, S., Furger, L., Siegman, & McLester (2000). Tips for getting girls involved. Technology & Learning, 20(8), 34-36.
Etzioni, A. (1999). The Limits of Privacy. New York: Basic Books.
EU (1995). Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to processing of personal data and on the free movement of that data. Council of the European Union (EU).
EULABOR (2005). European and Latin American systems of ethics regulation of biomedical research: Comparative analysis of their pertinence and application for human subjects protection. Madrid: EPSON Entreprise.
European Commission (2004). Ethical, legal and social aspects of genetic testing: Research, development and clinical applications. Luxembourg: Office for Official Publications of the European Communities.
European Convention on Human Rights. (2005). The European Convention on Human Rights and its five protocols. Retrieved August 19, 2005, from http://www.epolicyinstitute.com
European Forum for Good Clinical Practice (1997). Guidelines and recommendations for European ethics committees. Brussels: EFGCP.
Evans, H. (1993). High tech vs. high touch: The impact of technology on patient care. In M. Clair & R. Allman (Eds.), Sociomedical perspectives on patient care (pp. 82-95). Lexington, KY: University Press of Kentucky.
Evans, J., & Marteau, T. (2001b). Genetic testing: Studies point to variable prognostic abilities and question if testing results in behavioral change. Genomics and Genetics Weekly, June 1, 11-12.
Evans, J., Skrzynia, C., & Burke, W. (2001a). The complexities of predictive genetic testing. British Medical Journal, 322, 1052-1056.
Excoffier, L., Smouse, P. F., & Quattro, J. M. (1992). Analysis of molecular variance inferred from metric distances among DNA haplotypes: Applications to human mitochondrial DNA restriction data. Genetics, 131, 479–491.
Eysenbach, G. (2002). Infodemiology: The epidemiology of (mis)information. American Journal of Medicine, 113(9), 740-745.
Eze, E. C. (2001). Achieving our humanity: The idea of a postracial future. New York: Routledge.
Faber, B. (2005). Popularizing nanoscience: The public rhetoric of nanotechnology, 1986-1999. http://people.clarkson.edu/~faber/pubs/nano.tech.tcq.3.0.doc
Faber, P. (2007, January 9). RFID strategy—RFID privacy and security issues. Industry Week. Retrieved from http://www.industryweek.com/ReadArticle.aspx?ArticleID=13371
Faden, R., & Beauchamp, T. (1986). A history and theory of informed consent. New York: Oxford University Press.
Fagelson, D. (2002). Perfectionist Liberalism, Tolerance and American Law. Res Publica, 8(1), 41-70.
Fairweather, N. B., & Rogerson, S. (2001). A moral approach to electronic patient records. Medical Informatics and the Internet in Medicine, 26(3), 219-234.
Faludi, A. (1973). Planning theory. Oxford: Pergamon Press.
Fancher, R. (1983). Biographical Origins of Francis Galton's Psychology. Isis, 74, 227–233.
Farbey, B., Land, F., & Targett, D. (1999). Moving IS evaluation forward: Learning themes and research issues. Journal of Strategic Information Systems, 8(2), 189-207.
Farkas, B. G. (2005). Secrets of Podcasting: Audio Blogging for the Masses. Peachpit Press.
Fauriel, I., & Moutel, G. (2003). Protection des personnes et recherche biomedicale en France. La Presse Medicale, 20, 1887-1891.
Federal Communications Commission. (2005). Enhanced 911—Wireless services. Retrieved September 6, 2005, from http://www.fcc.gov/911/enhanced/
Federal Trade Commission (2007). Consumer fraud and identity theft complaint data January–December 2006. Retrieved May 27, 2007, from http://www.consumer.gov/sentinel/pubs/Top10Fraud2006.pdf
Feenberg, A. (1992). Subversive Rationalization: Technology, Power, and Democracy. Inquiry, 35(3/4).
Feenberg, A. (1998). Escaping the iron cage, or subversive rationalization and democratic theory. In R. Schomberg (Ed.), Democratising technology: Ethics, risk, and public debate. Tilburg: International Centre for Human and Public Affairs.
Feenberg, A. (1999). Questioning technology. Routledge Press.
Feenberg, A. (2002). Transforming technology: A critical theory revisited. New York: Oxford University Press.
Feenberg, A. (2002). Democratic rationalization: Technology, power and freedom. Published in the online journal Dogma. Accessed online May 27, 2007, from http://dogma.free.fr/txt/AF_democratic-rationalization.htm
Feezell, R. M. (1988). On the wrongness of cheating and why cheaters can't play the game. Journal of the Philosophy of Sport, XV, 57-68.
Feinberg, J. (1986). Harm to self. New York: Oxford University Press.
Feldacker, B. (1990). Labor guide to labor law. Englewood Cliffs, NJ: Prentice-Hall.
Felder, R. M. (1993). Reaching the second tier: Learning and teaching styles in college science education. Journal of College Science Teaching, 23(5), 286-290.
Felkins, L. (2003). Social dilemmas. Retrieved January 3, 2004, from http://perspicuity.net/sd/sd.html
Feminist Health Care Ethics Research Network (1998). The politics of health: Geneticization versus health promotion. In Sherwin, S. (Ed.), The politics of women's health: Exploring agency and autonomy. Temple University Press.
Ferner, M. (2006, February 4). Pentagon database leaves no child alone. Counterpunch. Retrieved from http://www.counterpunch.org/ferner02042006.html
Ferster, C. B., & Perrott, M. C. (1968). Behavior principles. New York: Appleton-Century-Crofts.
Field, M. (Ed.) (1996). Telemedicine: A guide to assessing telecommunications in health care. Washington, DC: National Academy Press.
Fieser, J. (2006). Ethics. In The Internet encyclopedia of philosophy. Retrieved May 30, 2007, from http://www.iep.utm.edu/e/ethics.htm
Finch, E. (2003). What a tangled web we weave: Identity theft and the Internet. In Y. Jewkes (Ed.), Dot.cons: Crime, deviance and identity on the Internet (pp. 86-104). Cullompton: Willan.
Finch, E. (2007). The problem of stolen identity and the Internet. In Y. Jewkes (Ed.), Crime on-line (pp. 29-43). Cullompton: Willan.
Findeli, A. (1994). Ethics, aesthetics, and design. Design Issues, 10(2), 49-68.
Finelli, C. J., Klinger, A., & Budny, D. D. (2001, October). Strategies for improving the classroom environment. Journal of Engineering Education, 90(4), 491-501.
Finn, J. (2001). Domestic violence organizations online: Risks, ethical dilemmas, and liability issues. Paper commissioned by Violence Against Women Online Resources. Retrieved May 29, 2007, from http://www.mincava.umn.edu/documents/commissioned/online_liability/online_liability.pdf
Finn, J. (2004). A survey of online harassment at a university campus. Journal of Interpersonal Violence, 19(4), 468-483.
Finnegan, R. (1973). Literacy versus non-literacy: The great divide. In Horton & Finnegan (Eds.), Modes of thought. London: Faber and Faber.
Finta, C., Warner, S. C., & Zaphiropoulos, P. (2002). Intergenic mRNAs: Minor gene products or tools of diversity? Histology and Histopathology, 17(2), 677-682.
Finucane, T. E., Christmas, C., & Travis, K. (1999). Tube feeding in patients with advanced dementia: A review of the evidence. Journal of the American Medical Association, 282, 1365-1370.
Fischer, J. M., & Ravizza, M. (1993). Perspectives on Moral Responsibility. Ithaca and London: Cornell University Press.
Fischer, J. M., & Ravizza, M. (1998). Responsibility and Control: A Theory of Moral Responsibility. Cambridge: Cambridge University Press.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Fisher, C., & Kingma, B. R. (2001). Criticality of data quality as exemplified in two disasters. Information and Management, 39, 109-116.
Fisher, C., Lauria, E., Chengalur-Smith, S., & Wang, R. (2006). Introduction to information quality. MIT Information Quality Program.
Fisher, W., & Barak, A. (2001). Internet Pornography: A Social Psychological Perspective on Internet Sexuality. Journal of Sex Research, 38, 313-323.
Fisher, W., & Wesolkowski, S. (1999). Tempering technostress. Technology and Society Magazine, 18(1), 28-42.
Fishkin, J. (1997). Voice of the people. Yale University Press.
Flecha, R., & Rotger, J. M. (2004). Innovación, democratización y mejora de la docencia universitaria en el marco de la Sociedad de la Información. Contextos Educativos, 6-7, 159-166.
Fleischmann, A. (1995, October). Personal data security: Divergent standards in the European Union and the United States. Fordham International Law Journal, 19, 143-180.
Fleischmann, K. R., & Wallace, W. A. (2005). A covenant with transparency: Opening the black box of models. Communications of the ACM, 48(5), 93-97.
Fleischmann, K. R., & Wallace, W. A. (2006). Ethical implications of values embedded in computational models: An exploratory study. Proceedings of the 69th Annual Conference of the American Society for Information Science and Technology, Austin, TX.
Fleischmann, K. R., & Wallace, W. A. (in press). Ensuring transparency in computational modeling: How and why modelers make models transparent. Communications of the ACM.
Fleischmann, K. R. (2003). Frog and cyberfrog are friends: Dissection simulation and animal advocacy. Society and Animals, 11(2), 123-143.
Fleischmann, K. R. (2004). Exploring the design-use interface: The agency of boundary objects in educational technology. Doctoral dissertation, Rensselaer Polytechnic Institute, Troy, NY.
Fleischmann, K. R. (2005). Virtual dissection and physical collaboration. First Monday, 10(5).
Fleischmann, K. R. (2006). Boundary objects with agency: A method for studying the design-use interface. The Information Society, 22(2), 77-87.
Fleischmann, K. R. (2006). Do-it-yourself information technology: Role hybridization and the design-use interface. Journal of the American Society for Information Science and Technology, 57(1), 87-95.
Fleischmann, K. R. (2007). Standardization from below: Science and technology standards and educational software. Educational Technology & Society, 10(4), 110-117.
Fleischmann, K. R. (2007). Digital libraries and human values: Human-computer interaction meets social informatics. Proceedings of the 70th Annual Conference of the American Society for Information Science and Technology, Milwaukee, WI.
Fleischmann, K. R. (2007). Digital libraries with embedded values: Combining insights from LIS and science and technology studies. Library Quarterly, 77(4), 409-427.
Fleming, M. J., Greentree, S., Cocotti-Muller, D., Elias, K. A., & Morrison, S. (2006). Safety in cyberspace: Adolescents' safety and exposure online. Youth & Society, 38(2), 135-154.
Fletcher, J. (1974). The ethics of genetic control: Ending reproductive roulette. New York: Anchor Books.
Flicker, S., & Guta, A. (in press). Ethical approaches to adolescent participation in sexual health research. Journal of Adolescent Health.
Flicker, S., Goldberg, E., Read, S., Veinot, T., McClelland, A., Saulnier, P., et al. (2004). HIV-positive youth's perspectives on the Internet and e-health. Journal of Medical Internet Research, 6(3), e32.
Flicker, S., Haans, D., & Skinner, H. A. (2003). Ethical dilemmas in research on Internet communities. Qualitative Health Research, 14(1), 124-134.
Flood, R. L. (1990). Liberating systems theory. New York: Plenum Press.
Flood, R. L., & Jackson, M. C. (1991). Total systems intervention: A practical face to critical systems thinking. Systems Practice, 4, 197-213.
Flood, R. L., & Jackson, M. C. (Eds.). (1991). Critical systems thinking: Directed readings. Chichester: John Wiley and Sons.
Flood, R. L., & Romm, N. (1996). Diversity management: Triple loop learning. Chichester: John Wiley and Sons.
Floridi, L. (1999). Information ethics: On the philosophical foundation of computer ethics. ETHICOMP98—The Fourth International Conference on Ethical Issues of Information Technology. Retrieved August 2007, from http://www.wolfson.ox.ac.uk/~floridi/ie.htm
Floridi, L. (1999). Information ethics: On the theoretical foundations of computer ethics. Ethics and Information Technology, 1(1), 37-56.
Floridi, L. (2001). Information ethics: An environmental approach to the digital divide. Philosophy in the Contemporary World, 9(1). Retrieved June 2005, from www.wolfson.ox.ac.uk/~floridi/pdf/ieeadd.pdf
Floridi, L. (2002). What is the philosophy of information? Metaphilosophy, 33(1/2).
Floridi, L. (2003). On the intrinsic value of information objects and the infosphere. Ethics and Information Technology, 4(4), 287-304.
Floridi, L. (2006). Information ethics, its nature and scope. SIGCAS Computers and Society, 36(3), 36. Retrieved May 30, 2007, from http://doi.acm.org/10.1145/1195716.1195719
Floridi, L., & Sanders, J. W. (1999). Entropy as evil in information ethics. Etica & Politica, Special Issue on Computer Ethics, I.2.
Floridi, L., & Sanders, J. W. (2001). Artificial evil and the foundation of computer ethics. Ethics and Information Technology, 3(1), 55-66.
Floridi, L., & Sanders, J. W. (2002). Mapping the foundationalist debate in computer ethics. Ethics and Information Technology, 4, 1-9.
Floridi, L., & Sanders, J. W. (2003). Computer ethics: Mapping the foundationalist debate. Ethics and Information Technology, 4(1), 1-24.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379.
Floridi, L., & Sanders, J. W. (2007). On the morality of artificial agents. Retrieved May 13, 2007, from http://www.roboethics.org/site/modules/mydownloads/download/Floridi_contribution.pdf
Foertsch, J., Moses, G., Strikwerda, J., & Litzkow, M. (2002, July). Reversing the lecture/homework paradigm using e-TEACH Web-based streaming video software. Journal of Engineering Education, 91(3), 267-275.
Fogg, B. (2003). Persuasive technology: Using computers to change what we think and do. San Francisco, CA: Morgan Kaufmann.
Ford, C. V. (1996). Lies! Lies!! Lies!!! The Psychology of Deceit. Washington, DC: American Psychiatric Press.
Ford, R. C., & Richardson, W. D. (1994). Ethical decision making: A review of the empirical literature. Journal of Business Ethics, 13, 205-221.
Forelle, C. (2004, May 14). On the road again, but now the boss is sitting beside you. The Wall Street Journal (Eastern Edition), p. A1.
Foresight International. (2003). Home page. Retrieved January 3, 2004, from www.foresightinternational.com.au
Forester, J. (1989). Planning in the face of power. Berkeley, CA: University of California Press.
Forester, T., & Morrison, P. (1990). Computer ethics: Cautionary tales and ethical dilemmas in computing. Cambridge, MA: The MIT Press.
Forrest, D. W. (1974). Francis Galton: The life and work of a Victorian genius. London: Paul Elek.
Forrest, K., Simpson, S. A., Wilson, B. J., van Teijlingen, E. R., McKee, L., Haites, N., & Matthews, E. (2003). To tell or not to tell: Barriers and facilitators in family communication about genetic risk. Clinical Genetics, 64, 317-326.
Fortwengel, G. (2005). Issues to be considered in the informed consent process. Paper/presentation and conference summary, Clinical Trial Summit, Munich.
Fortwengel, G., Ostermann, H., & Staudinger, R. (2007). Informed consent and the incapable patient. Good Clinical Practice Journal, 14(8), 18-21.
Fost, N. (1986). Banning drugs in sports: A skeptical view. Hastings Center Report, 16, 5-10.
Foucault, M. (1954). Maladie mentale et personnalité. Paris: Presses Universitaires de France. Japanese translation by Nakayama, G. (1997). Seishin shikann to paasonarithi. Tokyo: Chikuma-syobou.
Foucault, M. (1975). Surveiller et punir. Paris: Gallimard.
Foucault, M. (1977). The history of sexuality volume one: The will to knowledge (Vol. 1). London: Penguin.
Foucault, M. (1979). Discipline & Punish: The birth of the prison. New York: Vintage Books.
Foucault, M. (1980). Power/knowledge: Selected interviews and other writings 1972-1977. Gordon, C. (Ed. & Trans.). New York: Pantheon.
Foucault, M. (1980). Truth and power. In P. Rabinow (Ed.), The Foucault reader: An introduction to Foucault's thought (pp. 51-75). London: Penguin.
Foucault, M. (1980). Two lectures. In C. Gordon (Ed.), Power/knowledge: Selected interviews and other writings Michel Foucault (pp. 78-108). New York: Harvester Wheatsheaf.
Foucault, M. (1982). Afterword: The subject and power. In H. Dreyfus & P. Rabinow (Eds.), Michel Foucault: Beyond structuralism and hermeneutics (pp. 208-226). Brighton: The Harvester Press.
Foucault, M. (1982). On the genealogy of ethics: An overview of work in progress. In P. Rabinow (Ed.), The Foucault reader: An introduction to Foucault's thought (pp. 340-372). London: Penguin.
Foucault, M. (1983). Discourse and truth: The problematization of parrhesia. Presentation at University of California at Berkeley, October-November. Retrieved October 1, 2006, from http://foucault.info/documents/parrhesia/
Foucault, M. (1984). The ethics of the concern of the self as a practice of freedom (R. Hurley et al., Trans.). In P. Rabinow (Ed.), Michel Foucault: Ethics, subjectivity and truth: Essential works of Foucault 1954-1984 (pp. 281-301). London: Penguin.
Foucault, M. (1984). The history of sexuality volume two: The use of pleasure. London: Penguin.
Foucault, M. (1984). What is enlightenment? (C. Porter, Trans.). In P. Rabinow (Ed.), The Foucault reader: An introduction to Foucault's thought (pp. 32-50). London: Penguin.
Foucault, M. (1986). Of Other Spaces. Diacritics, 16(1), 22-27.
Fraleigh, W. (1982). Why the good foul is not good. Journal of Physical Education, Recreation and Dance, January, 41-42.
Fraleigh, W. P. (1985). Performance enhancing drugs in sport: The ethical issue. Journal of the Philosophy of Sport, XI, 23-29.
Francastel, C., Schübeler, D., Martin, D., & Groudine, M. (2000). Nuclear compartmentalization and gene activity. Nature Reviews Molecular Cell Biology, 1, 137.
Frances, C., Pumerantz, R., & Caplan, J. (1999, July/August). Planning for instructional technology: What you thought you knew could lead you astray, pp. 25-33.
Frankfurt, H. (1971). Freedom of the Will and the Concept of a Person. Journal of Philosophy, LXVIII, 5-21.
Frankfurt, H. (1993). Identification and Wholeheartedness. In Fischer, J. M., & Ravizza, M. (Eds.), Perspectives on Moral Responsibility. Cornell University Press, 170-187.
Franklin, S., & Graesser, A. (1996). Is it an agent, or just a program? A taxonomy for autonomous agents. Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages. Springer-Verlag.
Franko Aas, K. (2006). The body does not lie: Identity, risk and trust in technoculture. Crime, Media, Culture, 2(2), 143-158.
Frauenheim, E. (2004). Sony, Samsung complete LCD plant. The Economist.
Freedman, D. (2006). The technoethics trap. Inc. Magazine. Retrieved June 30, 2007, from http://www.inc.com/magazine/20060301/column-freedman_Printer_Friendly.html
Freeh, V. W. (2002). Anatomy of a Parasitic Computer. Dr. Dobb's Journal, January, 63-67.
Freeman, L. A., & Peace, A. G. (2005). Revisiting Mason: The last 18 years and onward. In Freeman, L. A., & Peace, A. G. (Eds.), Information ethics: Privacy and intellectual property (pp. 1-18). Hershey, PA: Idea Group Inc.
Freeman, L. A., & Urbaczewski, A. (2005). Why do people hate spyware? Communications of the ACM, 48(8).
Freeman, R. (1984). Strategic management: A stakeholder approach. Boston: Pitman.
Freeman, S. (Ed.) (2003). The Cambridge companion to Rawls. Cambridge, UK: Cambridge University Press.
Freiberger, P. A., & Swaine, M. R. (2003). Computers. In Encyclopædia Britannica 2003 [CD-ROM]. London: Encyclopedia Britannica.
Freire, P. (1970). Pedagogy of the oppressed. New York: Continuum.
Freire, P. (1970/1993). Pedagogy of the oppressed. New York: Continuum.
Freire, P. (1973). Education for critical consciousness. New York: Continuum Publishing.
Freire, P. (1985). The politics of education: Culture, power and liberation. Hadley, MA: Bergin & Garvey.
Fried, C. (1968). Privacy. The Yale Law Journal, 77(3), 475-493.
Friedenberg, J. E. (1999). Predicting dropout among Hispanic youth and children. Journal of Industrial Teacher Education, 36(3).
Friedman, B. (1997). Social judgments and technological innovation: Adolescents' understanding of property, privacy, and electronic information. Computers in Human Behavior, 13(3), 327-351.
Friedman, M. (1993 [1970]). The Social Responsibility of Business is to Increase Its Profits. New York Times Magazine (1 September 1970). Reprinted in Beauchamp, T., & Bowie, N. (Eds.), Ethical Theory and Business. Englewood Cliffs, NJ: Prentice-Hall.
Friedman, R. A., & Currall, S. C. (2003). Conflict Escalation: Dispute Exacerbating Elements of E-mail Communication. Human Relations, 56(11), 1325-1347.
Friedman, R., Chi, S. C., & Liu, L. A. (2006). An Expectancy Model. Journal of International Business Studies, 37, 76-91.
Friedman, S. M., & Egolf, B. P. (2005). Nanotechnology, risks and the media. IEEE Technology and Society Magazine, 24(4), 5-11.
Friedman, T. L. (2006). The world is flat: A brief history of the 21st century. New York: Farrar, Straus and Giroux.
Friend, J. (2001). The strategic choice approach. In Rosenhead and Mingers, pp. 121-158.
Friends of the Earth (2006). Nanomaterials, sunscreens and cosmetics: Small ingredients, big risks. FoE, May. http://www.foe.org/camps/comm/nanotech/nanocosmetics.pdf
Froehlich, T. J. (1992). Ethical considerations of information professionals. Annual Review of Information Science and Technology, 27.
From Awareness to Action: Integrating Ethics and Social Responsibility across the Computer Science Curriculum; Project ImpactCS Report 3. http://www.seas.gwu.edu/~impactcs/paper3/toc.html. Retrieved June 12, 2007.
Frow, J. (1997). Time and Commodity Exchange. Oxford: Clarendon Press.
Fulford, K. W. M. (2004). Ten principles of values-based medicine. In J. Radden (Ed.), The philosophy of psychiatry: A companion (pp. 205-234). New York: Oxford University Press.
Funtowicz, S. O., & Ravetz, J. (2003). Post-normal science. International encyclopaedia of ecological economics, 1-10. Retrieved October 24, 2006, from http://www.ecoeco.org/pdf/pstnormsc.pdf
Funtowicz, S. O., & Ravetz, J. R. (1990). Uncertainty and quality in science for policy. Dordrecht: Kluwer Academic Publishers.
Furmston, M. P. (1996). Cheshire, Fifoot & Furmston's Law of Contract. London: Butterworths.
Gadow, S. A. (1980). Existential advocacy: Philosophical foundation of nursing. In S. F. Spicker & S. Gadow (Eds.), Images and ideals: Opening dialogue with the humanities (pp. 79-101). New York: Springer.
Gallivan, M. J., & Depledge, G. (2003). Trust, control and the role of interorganizational systems in electronic partnerships. Information Systems Journal, 13, 159-190.
Galtung, J. (1998). Frieden mit friedlichen Mitteln—Friede und Konflikt, Entwicklung und Kultur. Opladen: Leske + Budrich.
Galván, J. (2001). Technoethics: Acceptability and social integration of artificial creatures. Retrieved June 30, 2007, from http://www.eticaepolitica.net/tecnoetica/jmg_acceptability%5Bit%5D.htm
Galván, J. M. (2001). Technoethics: Acceptability and social integration of artificial creatures. Retrieved January 12, 2007, from http://www.eticaepolitica.net/tecnoetica/jmg_acceptability%5Bit%5D.htm
Gan, L. L., & Koh, H. C. (2006). An empirical study of software piracy among tertiary institutions in Singapore. Information & Management, 43, 640-649.
Gan, Z. (1995). Critical biography of Sun Simiao [Sun Simiao ping zhuan]. Nanjing: Nanjing University Press. In Chinese.
Gandalf. (2005). Data on Internet activity worldwide (hostcount). Retrieved July 2005, from http://www.gandalf.it/data/data1.htm
Gantz, J., & Rochester, J. B. (2004). Pirates of the digital millennium. Upper Saddle River, NJ: Prentice Hall.
García Pascual, E. (1998). Los siete pecados capitales de las nuevas tecnologías. Acción Educativa, Madrid, 97, 5-7.
García Pascual, E. (1998). Las siete virtudes de las nuevas tecnologías. Acción Educativa, Madrid, 98, 5-8.
Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. New York, NY: Basic Books.
Gardner, M. L. S. (2005). Peer influence on risk taking, risk preference, and risky decision making in adolescence and adulthood: An experimental study. Developmental Psychology, 41(4), 625-635.
Garson, G. D. (2000). Social dimensions of information technology: Issues for the new millennium. Hershey, PA: Idea Group.
Gaskell, G., Ten Eyck, T., Jackson, J., & Veltri, G. (2005). Imagining nanotechnology: Cultural support for technological innovation in Europe and the United States. Public Understanding of Science, 14(1), 81-90.
Gasson, M., Hutt, B., Goodhew, I., Kyberd, P., & Warwick, K. (2002). Bi-directional human machine interface via direct neural connection. Proceedings of the IEEE International Conference on Robot and Human Interactive Communication, 25-27 September 2002 (pp. 265-270), Berlin, Germany. New York: IEEE Press.
Gasson, M., Hutt, B., Goodhew, I., Kyberd, P., & Warwick, K. (2005). Invasive neural prosthesis for neural signal detection and nerve stimulation. International Journal of Adaptive Control and Signal Processing, 19(5), 365-375.
Gates, B. (1996). The road ahead. London: Penguin Books.
Gaudin, S. (2007, April 11). Security breaches cost $90 to $305 per lost record. Information Week. Retrieved from http://www.informationweek.com/shared/printableArticle.jhtml?articleID=19900022
Gauzente, C. (2004). Web merchants' privacy and security statements: How reassuring are they for consumers? A two-sided approach. Journal of Electronic Commerce Research, 5(3), 181-198.
Gearhart, D. (2005). Topic: The ethical use of technology and the Internet in research and learning. Presented at DSU Center of Excellence Symposium, April 2005.
Geertz, C. (1983). Local knowledge: Further essays in interpretive anthropology. New York: Basic Books.
Gefen, D., Karahanna, E., & Straub, D. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51-90.
Gefenas, E. (2005). The concept of risk: Linking research ethics and research integrity. Presentation at the Responsible Conduct of Basic and Clinical Research Conference, Warsaw.
Gehlen, A. (1997). Der Mensch: Seine Natur und seine Stellung in der Welt (13th ed.). Wiesbaden: UTB.
Gehring, J. (2001). Not enough girls. Education Week, 20(35), 18-19.
Gellman, B. (2005, November 6). The FBI's secret scrutiny. The Washington Post, A01.
General education requirements. (n.d.). Wikipedia. Retrieved August 13, 2007, from answers.com, http://www.answers.com/topic/general-education-requirements
Geoghegan, M., & Klass, D. (2005). Podcast Solutions: The Complete Guide to Podcasting (Solutions). Friends of ED; Bk&CD-Rom edition.
Gerard, G. J., Hillison, W., & Pacini, C. (2004). Identity theft: The US legal environment and organisations' related responsibilities. Journal of Financial Crime, 12(1), 33-43.
Gershenfeld, N. (1999). When things start to think. New York: Henry Holt and Company.
Gethin, R. (1998). The Foundations of Buddhism. Oxford University Press.
Gibbons, P., Karp, B., Ke, Y., Nath, S., & Seshan, S. (2003, October-December). IrisNet: An architecture for a worldwide sensor web. Pervasive Computing, 2(4), 22-33.
Giddens, A. (1990). The consequences of modernity. Cambridge, UK: Polity Press.
Gifford, S. M. (1986). The meaning of lumps: A case study of the ambiguities of risk. In C. Janes, R. Stall, & S. M. Gifford (Eds.), Anthropology and epidemiology: Interdisciplinary approaches to the study of health and disease (pp. 213-246). Dordrecht: D. Reidel.
Gigerenzer, G., & Selten, R. (2001). Bounded rationality: The adaptive toolbox. Cambridge, MA: MIT Press.
Gilbert, A. (2005, July 8). Will RFID-guided robots rule the world? CNet News.com. Retrieved from http://news.com.com/2102-7337_3-5778286.html
Gilbert, S., & Sarkar, S. (2000). Embracing complexity: Organicism for the 21st century. Developmental Dynamics, 219, 1-9.
Gilbert, W. (1993). A vision of the grail. In Kevles, D., & Hood, L. (Eds.), The code of codes: Scientific and social issues in the human genome project. Harvard University Press.
Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438, 900-901.
Gilfillan, S. C. (1970). The sociology of invention. Cambridge, MA: MIT Press.
Gill, D. W. (1997). Educating for meaning and morality: The contribution of technology. Bulletin of Science, Technology & Society, 17(5-6), 249-260.
Gillespie, A. A. (2005). Indecent Images of Children: The Ever-Changing Law. Child Abuse Review, 14, 430-443.
Gilligan, C. (1977). Concepts of the self and of morality. Harvard Educational Review, 481-517. (Reprinted in M. Pearsall (Ed.), Women and values: Readings in recent feminist philosophy. Belmont, CA: Wadsworth.)
Gilligan, C. (1993). In a different voice: Psychological theory and women's development. Cambridge, MA: Harvard.
Ginwright, S., & James, T. (2002). From assets to agents of change: Social justice, organizing, and youth development. New Directions for Youth Development, 96, 27-46.
Dimdins, G., & Montgomery, H. (2007). Egalitarian vs. proportional voting in various contexts: An experimental study. Paper presented at the Workshop Democracy in a Globalised World in Oñati, Basque Country, April 20. Forthcoming in Hart publication.
Gitlin, T. (1980). The whole world is watching: Mass media in the making and unmaking of the new left. Berkeley: University of California Press.
Givon, M., Mahajan, V., & Muller, E. (1995). Software piracy: Estimation of lost sales and the impact on software diffusion. Journal of Marketing, 59(1), 29-37.
Glaser, B., & Strauss, A. (1967). The discovery of grounded theory. Chicago: Aldine.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: de Gruyter.
GlaxoSmithKline. (2006). Position paper on global public policy issue. A publication of GlaxoSmithKline government affairs, Europe & Corporate. Retrieved June 13, 2007, from http://www.gsk.com/responsibility/Downloads/clinical_trials_in_the_developing_world.pdf
Global Forum for Health Research (2004). The 10/90 report on health research (fourth report). Retrieved March 1, 2007, from http://www.globalforumhealth.org/Site/000_Home.php
Global Forum on Bioethics in Research. (2005). What happens when the research is over? Post trial obligations of researchers and sponsors. Sixth annual meeting. Retrieved May 30, 2007, from http://www.gfbronline.com/
Godfrey-Smith, P. (2000). Information, arbitrariness, and selection: Comments on Maynard Smith. Philosophy of Science, 67, 202-207.
Godwin-Jones, R. (2005). Emerging Technologies - Skype and Podcasting: Disruptive Technologies for Language Learning [Electronic version]. Language Learning & Technology, 9(3), 9-12.
Gold, E.R. (1996). Body parts: Property rights and the ownership of human biological materials. Washington, DC: Georgetown University Press.
Goldberg, P. (1983). The intuitive edge. Wellingborough: Aquarian Press (Thorsons).
Goldfarb, N. (2006). The two dimensions of subject vulnerability. Journal of Clinical Research Best Practices, 2.
Goldin, I. M., Ashley, K. D., & Pinkus, R. L. (2001). Introducing PETE: Computer support for teaching ethics. Proceedings of the Eighth International Conference on Artificial Intelligence and Law (ICAIL-2001). Association of Computing Machinery, New York.
Goldner, J. A. (2000). Dealing with conflict of interest in biomedical research: IRB oversight as the next best solution to the abolitionist approach. Journal of Law, Medicine and Ethics, 28, 379-404.
Goldstein, K. (1934). Der Aufbau des Organismus. Haag: Martinus Nijhoff.
Goldstein, R.S., Drukker, M., Reubinoff, B.E., & Benvenisty, N. (2002). Integration and differentiation of human embryonic stem cells transplanted to the chick embryo. Developmental Dynamics, 225(1), 80-86.
Goldstein, W., Hogarth, R., Arkes, H., Lopes, L., & Baron, J. (Eds.). (1997). Research on judgment and decision making: Currents, connections, and controversies. Cambridge: Cambridge University Press.
Goodell, R. (1986). How to kill a controversy: The case of recombinant DNA. In S. Friedman, S. Dunwoody & C. Rogers (Eds.), Scientists and journalists: Reporting science as news (pp. 70-181). New York: Free Press.
Goodin, R. (2007). Enfranchising all affected interests, and its alternatives. Philosophy and Public Affairs, 35, 40-68.
Goodman, P.S. (2005, September 11). Yahoo says it gave China Internet data. Washington Post Foreign Service, A30. Retrieved from http://www.washingtonpost.com/wp-dyn/content/article/2005/09/10/AR2005091001222.html
Goodwin, B. (2003, June 17). Tell staff about e-mail snooping or face court, new code warns. Computer Weekly, 38, 5.
Goody, J. (1977). The domestication of the savage mind. Cambridge, UK: Cambridge University Press.
Goody, J. (1986). The logic of writing and the organization of society. New York: Cambridge University Press.
Goody, J., & Watt, I. (1968). The consequences of literacy. In J. Goody (Ed.), Literacy in traditional societies (pp. 27-68). New York: Cambridge University Press.
Gopal, R. D., & Sanders, G. L. (1998). International software piracy: Analysis of key issues and impacts. Information Systems Research, 9(4), 380-397.
Gopal, R. D., Sanders, G. L., & Bhattacharjee, S. (2004). A behavioral model of digital music piracy. Journal of Organizational Computing and Electronic Commerce, 14(2), 89-105.
Gordijn, B. (2003). Nanoethics: From utopian dreams and apocalyptic nightmares towards a more balanced view. Paris, France: UNESCO. Available at http://portal.unesco.org/shs/en/ev.php-URL_ID=6603&URL_DO=DO_TOPIC&URL_SECTION=201.html
Gordley, P. (1991). The Philosophical Origins of the Modern Contract Doctrine. Oxford: Clarendon Press.
Gordon, J. R. (1991). A Diagnostic Approach to Organizational Behaviour. Boston: Allyn & Bacon.
Gordon, G. N. (1965, December). The end of an era in American education. Educational Technology, 12(12), 15-19.
Gordon, S., & Ford, R. (2006). On the definition and classification of cybercrime. Journal in Computer Virology, 2(1), 13-20.
Gormly, E. K. (1996, December). Critical perspectives on the evolution of technology in American public schools. Journal of Educational Thought/Revue de la Pensée Educative, 30(3), 263-286.
Górniak-Kocikowska, K. (1996). The computer revolution and the problem of global ethics. Science and Engineering Ethics, 2, 177-190.
Górniak-Kocikowska, K. (2004). The global culture of digital technology and its ethics. The ETHICOMP E-Journal, 1(3). Retrieved May 2005, from http://www.ccsr.cse.dmu.ac.uk/journal
Goss, G. (2006, May 1). RFID standards released by IT vendors, privacy groups. News Service.
Gossett, J. L., & Byrne, S. (2002). Click Here: A Content Analysis of Internet Rape Sites. Gender & Society, 16(5), 689-709.
Gostin, L. (1995). Genetic privacy. Journal of Law, Medicine and Ethics, 23, 320-330.
Gotterbarn, D. (1991). Computer ethics: Responsibility regained. National Forum, 71(3), 26-32. Retrieved May 30, 2007, from http://csciwww.etsu.edu/Gotterbarn/artpp1.htm
Gotterbarn, D. (1992). A capstone course in computer ethics. In Bynum, T.W., Maner, W., & Fodor, J.L. (Eds.), Teaching computer ethics. First presented at and published in Proceedings of the National Conference on Computing and Values, New Haven, CT.
Gotterbarn, D. (1992). The use and abuse of computer ethics. The Journal of Systems and Software, 17(1), 1. Retrieved May 30, 2007, from http://www.southernct.edu/organizations/rccs/resources/teaching/teaching_mono/Gotterbarn02/Gotterbarn02_intro.html
Gotterbarn, D. (1999, November-December). How the new Software Engineering Code of Ethics affects you. IEEE Software, 16(6), 58-64.
Gotterbarn, D. (2001). Software engineering ethics. In J. Marciniak (Ed.), Encyclopedia of Software Engineering (2nd ed.). New York: Wiley-Interscience.
Gotterbarn, D. (2004). An ethical decision support tool: Improving the identification and response to the ethical dimensions of software projects. Ethicomp Journal, 1(1).
Gould, S.J. (1996). The mismeasure of man (revised and expanded edition). New York: W.W. Norton and Co.
Gouldson, T. (2001, July 27). Hackers and crackers bedevil business world. Computing Canada, 27(16), 13.
Government of Alberta. (2003). Stakeholder analysis. Retrieved December 30, 2003, from www3.gov.ab.ca/cio/costbenefit/stake_tsk.htm
Government of the United States of America (1998). The digital millennium copyright act of 1998. Washington, DC: U.S. Copyright Office. Retrieved May 17, 2007, from www.copyright.gov/legislation/dmca.pdf
Grabe, M., & Grabe, C. (2007). Integrating technology for meaningful learning (5th ed.). New York: Houghton Mifflin Company.
Grabosky, P. (2004). The global dimension of cybercrime. Global Crime, 6(1), 146-157.
Gracia Guillén, D. (1995). Medical ethics: History of Europe, Southern Europe. In T. W. Reich (Ed.), Encyclopedia of bioethics (Vol. 3, pp. 1556-1563). New York: Simon and Schuster Macmillan.
Granit, R., & Phillips, C. G. (1956). Excitatory and Inhibitory Processes Acting upon Individual Purkinje Cells in the Cerebellum of Cats. The Journal of Physiology, 133, 520-547.
Granitz, N., & Loewy, D. (2007). Applying Ethical Theories: Interpreting and Responding to Student Plagiarism. Journal of Business Ethics, 72, 293-306.
Grant Lewis, S., & Samoff, J. (1992). Introduction. In S. Grant Lewis & J. Samoff (Eds.), Microcomputers in African development: Critical perspectives (pp. 1-24). Boulder, CO: Westview Press.
Greenaway, K., & Chen, Y. (2006). Theoretical explanations for firms' information privacy. Journal of the Association for Information Systems-Online, 6, 1.
Greening, T., Kay, J., & Kummerfield, B. (2004). Integrating ethical content into computing curricula. In Raymond Lister & Alison Young (Eds.), Conferences in Research and Practice in Information Technology, Vol. 30.
Greenwald, R. A., Ryan, M. K., & Mulvihill, J. E. (1982). Human subjects research: A handbook for institutional review boards. New York: Plenum Press.
Gregory, A. (2000). Problematizing participation: A critical review of approaches to participation in evaluation theory. Evaluation, 6(2), 179-199.
Gregory, A., & Jackson, M. C. (1992). Evaluation methodologies: A system for use. Journal of the Operational Research Society, 43(1), 19-28.
Gregory, R., Flynn, J., & Slovic, P. (2001). Technological stigma. In Flynn, J., Slovic, P., & Kunreuther, H. (Eds.), Risk, Media and Stigma: Understanding Public Challenges to Modern Science and Technology (pp. 3-8). London: Earthscan.
Gregory, W. J. (1992). Critical systems thinking and pluralism: A new constellation. Unpublished doctoral dissertation, City University, London.
Griffiths, A.J.F., Miller, J.H., Suzuki, D.T., Lewontin, R.C., & Gelbart, W.M. (2000). An introduction to genetic analysis (7th ed.). New York: W.H. Freeman.
Griffiths, P. (2001). Genetic information: A metaphor in search of a theory. Philosophy of Science, 68, 394-412.
Griffiths, P. (2005). The fearless vampire conservator: Philip Kitcher, genetic determinism and the informational gene. In E.M. Neumann-Held & C. Rehmann-Sutter (Eds.), Genes in development: Rereading the molecular paradigm (pp. 175-197). Durham, NC: Duke University Press.
Grigsby, J., & Kaehny, M. (1993). Analysis of expansion of access through use of telemedicine and mobile health services. Denver, CO: University of Colorado Health Science Center.
Groark, M., Oblinger, D., & Choa, M. (2001). Term Paper Mills, Anti-plagiarism Tools, and Academic Integrity. EDUCAUSE Review, 36(5), 40-48.
Groce, N.E., & Marks, J. (2001). The Great Ape Project and disability rights: Ominous undercurrents of eugenics in action. American Anthropologist, 102(4), 818-822.
Gross, M.L. (2006). Bioethics and armed conflict: Moral dilemmas of medicine and war. The MIT Press.
Grunwald, A. (2005). Nanotechnology - A new field of ethical inquiry? Science and Engineering Ethics, 11, 187-201.
Grusec, J., & Redler, E. (1980). Attribution, reinforcement, and altruism: A developmental analysis. Developmental Psychology, 16, 525-534.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage Publications.
Gudykunst, W. B., & Nishida, T. (1984). Individual and Cultural Influences on Uncertainty Reduction. Communication Monographs, 51, 26-36.
Guetzkow, H., & Gyr, J. (1954). An Analysis of Conflict in Decision Making Groups. Human Relations, 7, 367-381.
Guidelines for Good Practice in the Conduct of Clinical Trials in Human Participants in South Africa. (2000). Clinical trials guidelines. Republic of South Africa: Department of Health.
Gulati, C. (2001). Genetic antidiscrimination laws and health insurance: A misguided solution. Quinnipiac Health Law Journal, 4(2), 149-210.
Gupta, G. R. (2002). Assuring access to AIDS vaccine by talking to the experts. International Center for Research on Women. International AIDS Vaccine Initiative. A satellite meeting prior to the XIV International AIDS Conference, 6 July 2002, Barcelona.
Gura, M., & Percy, B. (2005). Recapturing technology for education: Keeping tomorrow in today's classrooms. Lanham, MD: Scarecrow Education.
Gustafson, D. H., Hawkins, R., Boberg, E. W., Pingree, S., Serlin, R. E., Graziano, F., et al. (1999). Impact of a patient-centered computer-based health information/support system. American Journal of Preventive Medicine, 16(1), 1-9.
Gustafson, K. L., & Branch, R. M. (2002). Survey of instructional development models (4th ed.). Syracuse, NY: ERIC Clearinghouse on Information & Technology.
Gutek, G. L. (2001). Historical and philosophical foundations of education—A biographical introduction. Upper Saddle River, NJ: Merrill Prentice Hall.
Gutmann, A., & Thompson, D. (1996). Democracy and disagreement. Cambridge, MA: Belknap Press.
Gutmann, A., & Thompson, D. (1997). Deliberating about bioethics. Hastings Center Report, 27(3), 38-41.
Haartsen, J., et al. (1998, October). Bluetooth: Vision, goals, and architecture. ACM Mobile Computing and Communications Review, 2(4), 38-45.
Haberberg, A. (2000). Swatting SWOT. Strategy Magazine Archives, Strategic Planning Society. Retrieved December 30, 2003, from www.sps.org.uk/d8.htm
Habermas, J. (1968). Technik und Wissenschaft als "Ideologie". Frankfurt am Main: Suhrkamp.
Habermas, J. (1981). Erkenntnis und Interesse. Frankfurt am Main: Suhrkamp.
Habermas, J. (1981). Theorie des kommunikativen Handelns (Band I/II). Frankfurt a. M.: Suhrkamp Verlag.
Habermas, J. (1983). Moralbewusstsein und kommunikatives Handeln. Frankfurt am Main: Suhrkamp.
Habermas, J. (1984). The theory of communicative action, Vol. 1: Reason and the rationalization of society. Cambridge, UK: Polity Press.
Habermas, J. (1987). The theory of communicative action, Vol. 2: The critique of functionalist reason. Cambridge, UK: Polity Press.
Habermas, J. (1989). The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society (Thomas, B., Trans.). Cambridge, MA: MIT Press.
Habermas, J. (1990). Moral consciousness and communicative ethics. Cambridge, MA: MIT Press.
Hager, R.M., & Smith, D. (2003). The public school's special education system as an assistive technology funding source: The cutting edge (2nd ed., pp. 35-45). Buffalo, NY: Author.
Hall, J. S. (2007). Beyond AI. New York: Prometheus Books.
Hall, S., Clarke, J., Critcher, C., Jefferson, T., & Roberts, B. (1978). Policing the Crisis: Mugging, the State and Law and Order. London: Macmillan.
Hallowell, N., Foster, C., Eeles, R., Ardern-Jones, A., Murday & Watson, M. (2003). Balancing autonomy and responsibility: The ethics of generating and disclosing genetic information. Journal of Medical Ethics, 29, 74-79.
Hamaguchi, E. (1993). Nihongata moderu to wa nanika (About the characteristics of Japanese culture and society). Tokyo: Shinyosya.
Handa, H., Bonnard, G., & Grienenberger, J. (1996). The rapeseed mitochondrial gene encoding a homologue of the bacterial protein Ccl1 is divided into two independently transcribed reading frames. Molecular & General Genetics, 252(3), 292-302.
Handler, J. (2000). The third way or the old way. U. Kan. L. Rev., 48, 800.
Handler, J. F. (2001). The paradox of inclusion: Social citizenship and active labor market policies. University of California, Los Angeles School of Law Research Paper Series, 01-20.
Handy, C. (1976). Understanding organizations. Aylesbury: Penguin.
Handyside, A.H., Pattinson, J.K., Penketh, R.J., Delhanty, J.D., Winston, R.M., & Tuddenham, E.G. (1989). Biopsy of human preimplantation embryos and sexing by DNA amplification. Lancet, 1(8634), 347-349.
Hanlon, G., et al. (2005). Knowledge, technology and nursing: The case of NHS Direct. Human Relations, 58(2), 147-171.
Hansen, A. (1994). Journalistic practices and science reporting in the British press. Public Understanding of Science, 3, 111-134.
Hansen, A., Cottle, S., Negrine, R., & Newbold, C. (1998). Mass communication research methods. London: Macmillan.
Hanson, M., & Callahan, D. (Eds.). (1999). The goals of medicine: The forgotten issue in healthcare reform. Washington, DC: Georgetown University Press.
Hansson, S.O. (1996). Decision making under great uncertainty. Philosophy of the Social Sciences, 26(3), 369-386.
Haraway, D. (1991). The Cyborg Manifesto. In Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge.
Haraway, D. J. (1997). Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and technoscience. Routledge.
Hargreaves, A. (1999). Profesorado, cultura y postmodernidad. Madrid: Ediciones Morata.
Harmon, A. (2004, February 5). Geeks put the unsavvy on alert: Learn or log off. New York Times. Retrieved February 6, 2004, from http://www.nytimes.com/2004/02/05/technology/05VIRU.html
Harrer, A., McLaren, B. M., Walker, E., Bollen, L., & Sewall, J. (2006). Creating cognitive tutors for collaborative learning: Steps toward realization. The Journal of Personalization Research, 16, 175-209.
Harrington, S. J. (2002). Software piracy: Are Robin Hood and responsibility denial at work? In Salehnia, A. (Ed.),
Ethical issues of information systems (pp. 177-188). Hershey, PA: IRM Press.
Harris, C.E., Pritchard, M.S., & Rabins, M.J. (2000). Engineering ethics: Concepts and cases. Belmont, CA: Wadsworth.
Harris, J., & Keywood, K. (2001). Ignorance, information and autonomy. Theoretical Medicine and Bioethics, 22(5), 415-436.
Harris, L., & Spence, L. (2002). The ethics of ebanking. Journal of Electronic Commerce Research, 3(2), 59-66.
Harris, T. M., Weiner, D., Warner, T. A., & Levin, R. (1995). Pursuing social goals through participatory geographic information systems: Redressing South Africa's historical political ecology. In J. Pickles (Ed.), Ground truth (pp. 196-222). London: The Guilford Press.
Hart, D. (1997). Modeling the political aspects of information systems projects using "information wards." Failures and Lessons Learned in Information Technology Management, 1(1), 49-56.
Hart, D. (1999). Ownership, organizational politics and information systems. Unpublished PhD thesis. Canberra: UNSW, Australian Defence Force Academy.
Hartle, A. (1989). Moral issues in military decision making. Lawrence: University of Kansas Press.
Hartley, R., & Almuhaidib, S.M.Y. (2007). User oriented techniques to support interaction and decision making with large educational databases. Computers & Education, 48(2), 268-284.
Hartley, R.V.L. (1928). Transmission of information. Bell System Technical Journal, 7, 535-563.
Hartman, L. (1998). The rights and wrongs of workplace snooping. Journal of Business Strategy, 19, 16-20.
Harvati, K. (2004). 3-D geometric morphometric analysis of temporal bone landmarks in Neanderthals and modern humans. In A.M.T. Elewa (Ed.), Morphometrics: Applications in biology and paleontology (pp. 245-258). New York: Springer.
Harvati, K., Frost, S.R., & McNulty, K.P. (2004). Neanderthal taxonomy reconsidered: Implications of 3D primate models of intra- and interspecific differences. Proceedings of the National Academy of Science USA, 101(5), 1147-1152.
Hashimoto, M. (1975). Ukiyo no sisou (Ways of thought based upon views on this transitory world with resignation). Tokyo: Koudansya.
Hauptman, R. (1988). Ethical challenges in librarianship. Phoenix, AZ: Oryx Press.
Havelock, E. A. (1963). Preface to Plato. Cambridge, MA: Harvard University Press.
Hawk, S. (1994). The effects of computerized performance monitoring: An ethical perspective. Journal of Business Ethics, 13, 949-958.
Hawkridge, D. (1991). Challenging educational technology. ETTI, 28(2), 102-110.
Hawks, J., & Wolpoff, M.H. (2003). Sixty years of modern human origins in the American Anthropological Association. American Anthropologist, 105(1), 89-100.
Haygood, R., & Hensley, R. (2006). Preventing identity theft: New legal obligations for businesses. Employment Relations Today, 33(3), 71-83.
Health on the Net Foundation. (2007). Home page. Retrieved May 4, 2007, from http://www.hon.ch
Health research policy in South Africa. (2001). Republic of South Africa: Department of Health.
Healy, J.M. (1999, April). The mad dash to compute. School Administrator. Retrieved May 28, 2007, from http://findarticles.com/p/articles/mi_m0JSD/is_4_56/ai_77196005
Hebert, P., Meslin, E. M., Dunn, E. V., Byrne, N., & Reid, S.R. (1990). Evaluating ethical sensitivity in medical students: Using vignettes as an instrument. Journal of Medical Ethics, 16, 141-145.
Hecht, I., Higgerson, M., Gmelch, W., & Tucker, A. (1999). The department chair as academic leader.
Phoenix, AZ: The American Council on Education and The Oryx Press.
Hedgecoe, A. (2000). Narratives of geneticization: Cystic fibrosis, diabetes and schizophrenia. PhD thesis, University of London.
Hegel, G. W. F. (1977). Phenomenology of Spirit (A. V. Miller, Trans.). Oxford University Press.
Heidegger, M. (1953). Die Frage nach der Technik. In Gesamtausgabe Band 7. Frankfurt am Main: Vittorio Klostermann Verlag.
Heidegger, M. (1976). Sein und Zeit. Tübingen: Max Niemeyer Verlag (Engl. trans. Being and Time, J. Macquarrie & E. Robinson, Oxford, 1987).
Heidegger, M. (1977). The question concerning technology. In Lovitt, W. (Ed.), The question concerning technology and other essays (pp. 13-39). New York: Harper and Row.
Heim, M. (1993). The metaphysics of virtual reality. New York: Oxford University Press.
Heinich, R., Molenda, M., & Russell, J. D. (1989). Instructional media and the new technologies of instruction (3rd ed.). New York: Macmillan Publishing Company.
Heinich, R., Molenda, M., & Russell, J. D. (1996). Instructional media and the new technologies of instruction (5th ed.). New York: Macmillan Publishing Company.
Heinrichs, B. (2006). Medical research involving minors. Retrieved from http://www.drze.de/themen/blickpunkt/kinder-en
Hempel, L., & Töpfer, E. (2004, August). CCTV in Europe. Center for Technology & Society and UrbanEye.net.
Hendrix, J. R. (1993). The continuum: A teaching strategy for science and society issues. American Biology Teacher, 55, 178-181.
Herkenhoff, L. M. (2006). Podcasting and VODcasting as Supplementary Tools in Management Training and Learning. Retrieved April 28, 2007, from http://www.iamb.net/CD/CD06/MS/71_Herkenhoff.pdf
Herman, J. L. (2003). The mental health of crime victims: Impact of legal intervention. Journal of Traumatic Stress, 16(2), 159-166.
Herrnstein, R.J., & Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York: Free Press.
Hettinger, E. C. (1989). Justifying intellectual property. Philosophy and Public Affairs, 18(1), 31-52.
Hickman, L. A. (1990). John Dewey's pragmatic technology. Bloomington, IN: Indiana University Press.
Higgins, G. E., & Makin, D. A. (2004). Does social learning theory condition the effects of low self-control on college students' software piracy? Journal of Economic Crime Management, 2(2), 1-22.
Higgs, E., Light, A., & Strong, D. (2000). Technology and the good life. Chicago: Chicago University Press.
Hilgartner, S. (1990). The dominant view of popularization: Conceptual problems, political uses. Social Studies of Science, 20(3), 519-539.
Hill, C. W. (2007). Digital piracy: Causes, consequences, and strategic responses. Asia Pacific Journal of Management, 24, 9-25.
Hillier, L., & Harrison, L. (2007). Building realities less limited than their own: Young people practising same-sex attraction on the internet. Sexualities, 10(1), 82-100.
Himma, K. (2004). The ethics of tracing hacker attacks through the machines of innocent persons. International Journal of Information Ethics, 2(11).
Himma, K. E. (2007). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? In L. Hinman, P. Brey, L. Floridi, F. Grodzinsky, & L. Introna (Eds.), Proceedings of CEPE 2007: The 7th International Conference of Computer Ethics: Philosophical Enquiry (pp. 163-180). Enschede, the Netherlands: Center for Telematics and Information Technology (CTIT).
Hinchcliff, J. (2000). Overcoming the Anachronistic Divide: Integrating the Why into the What in Engineering Education. Global J. of Engng. Educ., 4(1), 13-18.
Hinduja, S. (2001). Correlates of internet software piracy. Journal of Contemporary Criminal Justice, 17(4), 369-382.
Hinman, L. (1984). Ethics update glossary. http://ethics.asusd.edu/Glossary.html (as cited in Milson, A. J., 2001).
Hinman, L. M. (2002). Basic moral orientations. Retrieved May 30, 2007, from http://ethics.sandiego.edu/presentations/Theory/BasicOrientations/index.asp
HIPAA Privacy Rule at 45 CFR Parts 160 and 164 and guidance. (http://www.hhs.gov/ocr/hipaa)
Hirschheim, R., & Smithson, S. (1999). Evaluation of information systems: A critical assessment. In L. Willcocks & S. Lester (Eds.), Beyond the IT productivity paradox (pp. 381-409). Chichester, UK: John Wiley and Sons.
HM Government (2005). The government's outline programme for public engagement on nanotechnologies. August 2005, HM Government in consultation with the Devolved Administrations. http://www.ost.gov.uk/policy/issues/programme12.pdf
Hocking, J. (2003). Counter-terrorism and the criminalisation of politics: Australia's new security powers of detention, proscription and control. Australian Journal of Politics and History, 49(3), 355-371.
Hodel-Widmer, T.B. (2006). Designing databases that enhance people's privacy without hindering organizations. Ethics and Information Technology, 8, 3-15.
Hoedemaekers, R., & Ten Have, H. (1999). The concept of abnormality in medical genetics. Theoretical Medicine and Bioethics, 20(6), 537-561.
Hoesle, V. (1992). The third world as a philosophical problem. Social Research, 59, 227-263.
Hoet, P. H. M., Bruske-Hohlfeld, I., & Salata, O.V. (2004). Nanoparticles: Known and unknown health risks. Journal of Nanobiotechnology, 2(12).
Hoffman, E. (2005, October). Brain training against stress: Theory, methods and results from an outcome study. Retrieved May 21, 2007, from http://www.mentalfitness.dk/?download=Stress%20report%204.2.pdf
Hoffman, L.J., Lawson-Jenkins, K., & Blum, J. (2006). Trust beyond security: An expanded trust model. Communications of the ACM, 49(7), 94-101.
Hoffman, T. (2003, March 24). Smart dust: Mighty motes for medicine, manufacturing, the military and more. Computerworld.
Hofstede, G. (1984). Culture's Consequences: International Differences in Work-related Values. CA: Sage.
Hofstede, G. (1997). Cultures and organizations: Software of the mind. New York: McGraw Hill.
Hoglund, G., & McGraw, G. (2007). Exploiting online games: Cheating massively distributed systems. Reading, MA: Addison-Wesley.
Hogue, M. S. (2007). Theological ethics and technological culture: A biocultural approach. Zygon, 42(1), 77-95.
Hohfeld, W.N. (1923). Fundamental legal conceptions. New Haven, CT: Yale University Press.
Holland, J. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press.
Holley, W., Jennings, K., & Wolters, R. (2001). The labor relations process (7th ed.). Fort Worth: Dryden Press.
Holliss, E. (2002). The application of Threat Assessment Models against small non-state groups in an information operations environment. Thesis. Canberra: School of Computer Science, University of New South Wales at the Australian Defence Force Academy.
Holloway, S. L., & Valentine, G. (2003). Cyberkids: Children in the information age. London: RoutledgeFalmer.
Holm, S. (2002). Moral pluralism. In The ethical aspects of biomedical research in developing countries. Proceedings of the Round Table Debate, European Group on Ethics, Brussels.
Holmes, A. (2006, March 25). The Profits in Privacy. CIO Magazine, 19(11), 3. Retrieved from http://www.cio.com/archive/031506/privacy.html
Holt, T. J., & Graves, D. C. (2007). A qualitative analysis of advance fee fraud e-mail schemes. International Journal of Cyber Criminology, 1(1), 137-154.
Holtfreter, R. E., & Holtfreter, K. (2006). Gauging the effectiveness of US identity theft legislation. Journal of Financial Crime, 13, 56-64.
Holz, T., & Raynal, F. (2005, June). Detecting honeypots and other suspicious environments. In Proc. 6th SMC Information Assurance Workshop, West Point, NY, 29-36.
Hongladarom, S., & Ess, C. (Eds.) (2007). Information technology ethics: Cultural perspectives. Hershey, PA: Idea Group.
Hongladarom, S. (2007). Analysis and justification of privacy from a Buddhist perspective. In S. Hongladarom & C. Ess (Eds.), Information Technology Ethics: Cultural Perspectives (pp. 108-122). Hershey, PA: Idea Group Reference.
Hood, S., et al. (1996). Children as research subjects: A risky enterprise. Children & Society, 2, 117-128.
Hornig Priest, S. (2005). Risk reporting: Why can't they ever get it right? In Allan, S. (Ed.), Journalism: Critical Issues (pp. 199-209). Maidenhead and New York: Open University Press.
Hornig Priest, S. (2005). Commentary: Room at the bottom of Pandora's box: Peril and promise in communicating nanotechnology. Science Communication, 27(2), 292-299.
Horsey, K. (2006, June 26). Three million IVF babies born worldwide. BioNews. Retrieved May 25, 2007, from http://www.bionews.org.uk/new.lasso?storyid=3086
Horton, K. S. (2000). The exercise of power and information systems strategy: The need for a new perspective. Proceedings of the 8th European Conference on Information Systems (ECIS), Vienna.
Horton, M. (2005, February 9-10). The future of sensory networks. Xbow.com. Retrieved from http://www.xbow.com
Hoskin, P. (2006). Emerging fraud risks and countermeasures in government welfare programs. Paper presented at The Australian & New Zealand Society of Criminology 19th Annual Conference, Sydney, Australia.
Hospital Authority Hong Kong. (2002). Guidelines on life-sustaining treatment in the terminally ill. Hong Kong: Hospital Authority Head Office.
Hottois, G. (1999). Essai de philosophie bioéthique et biopolitique. Paris: Vrin.
Hough, M. (1990). Threats: Findings from the British Crime Survey. International Review of Victimology, 1, 169-180.
Houkes, W.N. (2006). Knowledge of artifact functions. Studies in the History and Philosophy of Science, 37, 102-113.
Houlihan, B. (1999). Dying to win: Doping in sport and the development of anti-doping policy. Strasbourg: Council of Europe Publishing.
Howard, N. (2001). The manager as politician and general: The metagame approach to analyzing cooperation and conflict. In Rosenhead and Mingers, pp. 239-261.
Howe, E. (2001). Should ethics consultants use telemedicine? A comment on Pronovost and Williams. The Journal of Clinical Ethics, 12(1), 73-79.
Howell, T., & Sack, R. L. (1991). The ethics of human experimentation in psychiatry: Toward a more informed consensus. Psychiatry, 44(2), 113-132.
Howells, W.W. (1973). Cranial variation in man: A study by multivariate analysis of patterns of difference among recent human populations. Papers of the Peabody Museum of Archaeology and Ethnology, Volume 67. Cambridge: Harvard University Press.
Howells, W.W. (1976). Explaining modern man: Evolutionists versus migrationists. Journal of Human Evolution, 6, 477-495.
Howells, W.W. (1989). Skull shapes and the map: Craniometric analyses in the dispersion of modern Homo. Papers of the Peabody Museum of Archaeology and Ethnology, Volume 79. Cambridge: Harvard University.
Howells, W.W. (1995). Who's who in skulls: Ethnic identification of crania from measurements. Papers of the Peabody Museum of Archaeology and Ethnology, Volume 82. Cambridge: Harvard University.
HRW. (2006). The race to the bottom: Corporate complicity in Chinese Internet censorship. Human Rights Watch (HRW). Retrieved from http://www.hrw.org/reports/2006/china0806/china0806webwcover.pdf
Hudson, F., & Moyle, K. (2004). Open source software suitable for use in Australian and New Zealand schools: A review of technical documentation. Department of Education and Children's Services, South Australia. Retrieved May 17, 2007, from http://www.mceetya.edu.au/verve/_resources/open_source_aust_nz.pdf
Huey, L., & Rosenberg, R. S. (2004). Watching the web: Thoughts on expanding police surveillance opportunities under the Cyber-Crime Convention. Canadian Journal of Criminology and Criminal Justice, 46(5), 597-606.
Huff, C., & Frey, W. (2005). Moral pedagogy and practical ethics. Science and Engineering Ethics, 11, 389-408.
Huff, C., & Martin, C.D. (1995). Computing consequences: A framework for teaching ethical computing. Communications of the ACM, 38(12), 75-84.
Hui, E., Ho, S.C., Tsang, J., Lee, S.H., & Woo, J. (1997). Attitudes toward life-sustaining treatment of older persons in Hong Kong. Journal of the American Geriatrics Society, 45(10), 1232-1236.
Hui, E.C. (2002). At the Beginning of Life: Dilemmas in Theological Ethics. Downers Grove, IL: InterVarsity Press.
Hulse, F.S. (1969). Ethnic, caste and genetic miscegenation. Journal of Biosocial Science, Supplement No. 1. Oxford: Blackwell Scientific Publications.
Humphries, C.J. (2002). Homology, characters and continuous variables. In N. MacLeod & P.L. Forey (Eds.), Morphology, shape and phylogeny (pp. 8-26). New York: Taylor & Frances.
Hunt, G., & Mehta, M. (2006). Nanotechnology: Risk, ethics and law. London: Earthscan.
Hunt, G. (2006). The global ethics of nanotechnology. In Hunt, G., & Mehta, M. (Eds.), Nanotechnology: Risk, ethics, law (pp. 183-195). London: Earthscan.
Hunt, L. (1997). The Invention of Pornography: Obscenity and the Origins of Modernity, 1500-1800. Cambridge, MA: MIT Zone Books.
Hurley, S. (2004). Imitation, media violence, and freedom of speech. Philosophical Studies, 117(1/2), 165-218.
Hürst, W., & Waizenegger, W. (2006). An overview of different approaches for lecture casting. Proceedings of IADIS International Conference on Mobile Learning 2006, July 2006.
Husted, B. W. (2000). The impact of national culture on software piracy. Journal of Business Ethics, 26, 197-211.
Hutcheson, R. (2006, October 17). U.S. Government has long history of abusing personal information. Common Dreams News Center. Retrieved June 26, 2007, from http://www.commondreams.org/headlines06/051304.htm
Hutchinson, A., & Stuart, C. (2004). The Ryerson-Wellesley social determinants of health framework for urban youth. Toronto, ON: Ryerson University & Wellesley Institute.
Huxley, J.S., & Haddon, A.C. (1935). We Europeans: A survey of 'racial' problems. London: J. Cape.
Hyder, A. A., & Wali, S. A. (2006). Informed consent and collaborative research: Perspectives from the developing world. Developing World Bioethics, 6(1), 33-40.
Iannone, A. P. (1987). Contemporary Moral Controversies in Technology. New York and London: Oxford University Press.
Iannone, A. P. (1994). Philosophy as Diplomacy: Essays in Ethics and Policy Making. Atlantic Highlands, NJ: Humanities Press.
Iannone, A. P. (1999). Philosophical Ecologies: Essays in Philosophy, Ecology, and Human Life. Atlantic Highlands, NJ and Amherst, NY: Humanity Books and Humanity Press.
Iannone, A.P., & Briggle, A. (2005). Information overload and speed entries. In Carl Mitcham (Gen. Ed.), Encyclopedia of Science, Technology, and Ethics. New York: Macmillan.
ICAMS (2005, April). The emergence of a global infrastructure for mass registration and surveillance. International Campaign Against Mass Surveillance (ICAMS). Retrieved from http://www.i-cams.org/ICAMS1.pdf
Identity Theft Resource Centre. (2003). Identity theft: The aftermath 2003. Retrieved March 2, 2007, from http://www.idtheftcenter.org/idaftermath.pdf
Identity Theft Resource Centre. (2005). Identity theft: The aftermath 2004. Retrieved March 2, 2007, from http://www.idtheftcenter.org/idaftermath2004.pdf
IDTechEx (2007, January 12). The RFID Knowledgebase. IDTechEx. Retrieved from http://rfid.idtechex.com/knowledgebase/en/nologon.asp
IFPI (2006). The recording industry 2006: Piracy report. Retrieved May 29, 2007, from http://www.ifpi.org/content/library/piracy-report2006.pdf
IFPI (2007). IFPI:07. Digital music report. Retrieved May 29, 2007, from http://www.ifpi.org/content/library/digital-music-report-2007.pdf
Ignatieff, M. (2000). The rights revolution. Toronto: House of Anansi Press.
Illich, I. (1971). Deschooling society. New York: Harper & Row.
Iltis, A.S. (2005). Stopping trials early for commercial reasons: The risk-benefit relationship as moral compass. Journal of Medical Ethics, 31, 410-414.
Implementing the tenth strand: Extending the curriculum requirements for computer science. Project ImpactCS Report 2. Retrieved June 12, 2007, from http://www.seas.gwu.edu/~impactcs/paper2/toc.html
Independent Television News Limited (2007). Teachers warn of cyber-bully menace. Available: http://www.channel4.com/news/articles/uk/teachers+warn+of+cyberbully+menace/641947
Indian Council of Medical Research, New Delhi (2000). Ethical Guidelines for Biomedical Research on Human Subjects, 58.
Inspector General. (2005). Top issues facing social security administration management: Fiscal year 2006. Pub. L. No. 106-531. Retrieved May 27, 2007, from http://www.ssa.gov/oig/ADOBEPDF/2006TopMgmtChallenges.pdf
International Association for Educational and Vocational Guidance (IAEVG, 2003). International competencies for educational and vocational guidance practitioners. Retrieved April 14, 2004, from http://www.iaevg.org/iaevg/nav.cfm?lang=4&menu=1&submenu=5
International Centre for Studies in Creativity. (2003). Home page. Retrieved from www.buffalostate.edu/centers/creativity/
International Conference on Harmonization (ICH) (1996). ICH harmonized tripartite guideline for good clinical practice. ICH.
International ethical guidelines for biomedical research involving human subjects. (1993). CIOMS.
International guidelines for ethical review of epidemiological studies. (1991). CIOMS, Geneva.
International Institute for Environment and Development. (2003). Stakeholder power analysis. Retrieved from www.iied.org/forestry/tools/stakeholder.html
International Intellectual Property Alliance (IIPA). (2004). 2004 special 301 report: Pakistan. Retrieved November 20, 2005, from http://www.iipa.com/rbc/2004/2004SPEC301PAKISTAN.pdf
International Intellectual Property Alliance (IIPA). (2006). 2006 special 301: Canada. Retrieved March 21, 2006, from http://www.iipa.com/rbc/2006/2006SPEC301CANADA.pdf
International Risk Governance Council (2006). White paper on nanotechnology risk governance, June. Geneva: International Risk Governance Council.
International Society for Technology in Education. (2003). ISTE NETS main page. Retrieved December 10, 2003, from http://cnets.iste.org/index.shtml
Internet Architecture Board. (1989). Retrieved June 4, 2007, from http://tools.ietf.org/html/rfc1087
The Internet Encyclopedia of Philosophy (2001). Immanuel Kant (1724-1804): Metaphysics. Retrieved September 9, 2003, from http://www.utm.edu/research/iep/k/kantmeta.htm
Internet Home Alliance (IHA) (2005). Industry summaries. Retrieved from http://www.internethomealliance.org/resrch_reports/industry_summaries.asp
Introna, L. (2005). Phenomenological approaches to ethics and information technology. In The Stanford encyclopedia of philosophy. Accessed online at http://plato.stanford.edu/entries/ethics-it-phenomenology/
Introna, L. D. (1997). Management, information and power: A narrative of the involved manager. Basingstoke: Macmillan.
Introna, L., & Nissenbaum, H. (2000). Shaping the Web: Why the politics of search engines matter. The Information Society, 16, 169-185.
Involve (2006). The nanotechnology engagement group. Policy Report One, March 2006. London: Involve.
IOM. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Irani, Z. (2002). Information systems evaluation: Navigating through the problem domain. Information & Management, 40, 11-24.
Irani, Z., & Fitzgerald, G. (2002). Editorial. Information Systems Journal, 12(4), 263-269.
Irani, Z., & Love, P. E. (2001). Information systems evaluation: Past, present and future. European Journal of Information Systems, 10(4), 189-203.
Irani, Z., Love, P. E., Elliman, T., Jones, S., & Themistocleus, M. (2005). Evaluating e-government: Learning from the experiences of two UK local authorities. Information Systems Journal, 15(1), 61-82.
Irrgang, B. (2006). Ethical Acts in Robotics. Ubiquity, 7(34). Retrieved from www.acm.org/ubiquity
ISO Working Group on Social Responsibility. (2005). ISO 26000 standard on social responsibility guidelines. ISO.
Jablonski, N. (2006). Skin: A natural history. Berkeley: University of California Press.
Jackson, M. C. (1982). The nature of soft systems thinking: The work of Churchman, Ackoff and Checkland. Journal of Applied Systems Analysis, 9, 17-29.
Jackson, M. C. (1992). An integrated programme for critical thinking in information systems research. Information Systems Journal, 2, 83-95.
Jackson, M. C. (1999). Towards coherent pluralism in management science. Journal of the Operational Research Society, 50(1), 12-22.
Jackson, M. C. (2000). Systems approaches to management. London: Kluwer Academic/Plenum Publishers.
Jackson, M. C. (2003). Creative holism: Systems thinking for managers. Chichester, UK: John Wiley and Sons.
Jackson, M. C., & Keys, P. (1984). Towards a system of system methodologies. Journal of the Operational Research Society, 35, 473-486.
Jackson, M., & Ligertwood, J. (2006). Identity management: Is an identity card the solution for Australia? Prometheus, 24(4), 379-387.
Jacob, F. (1976). La logique du vivant. Paris: Gallimard.
Jacob, F. (1987). La statue intérieure. Paris: Seuil.
Jacobs, J. (1961). The death and life of great American cities. New York: Random House and Vintage Books.
Jacobs, K. (2004). Pornography in Small Places and Other Spaces. Cultural Studies, 18(1), 67-83.
Jaeger, P.T., & Fleischmann, K.R. (2007). Public libraries, values, trust, and e-government. Information Technology & Libraries, 26(4), 35-43.
James, G. (2004, March 1). Can't hide your prying eyes. Computerworld, 38, 35-36.
Jamieson, D. (1995). Teaching ethics in science and engineering: Animals in research. Science & Engineering Ethics, 1, 185-186.
JAMRS.org (2005). Joint advertising market research & studies. Retrieved from http://www.jamrs.org/
Janis, I. (1989). Crucial decisions. New York: Free Press.
Janssen, P.H.M., Petersen, A.C., Van der Sluijs, J.P., Risbey, J., & Ravetz, J.R. (2005). A guidance for assessing and communicating uncertainties. Water Science and Technology, 52(6), 125-131.
Jarvis, S., Hickford, J., & Conner, L. (1998). Biodecisions. Lincoln: Crop & Food Research Institute.
Jasperson, J. S., Carte, T., Saunders, C. S., Butler, B. S., Croes, H. J. P., & Zheng, W. (2002). Power and information technology research: A metatriangulation review. MIS Quarterly, 26(4), 397-459.
Javelin Strategy & Research. (2007). 2007 identity fraud survey report - Consumer version: How consumers can protect themselves. Retrieved May 27, 2007, from www.javelinstrategy.com
Jefferson, J. (2004). Police and identity theft victims - Preventing further victimisation. Australasian Centre for Policing Research, No. 7. Retrieved March 2, 2007, from http://www.acpr.gov.au/publications2.asp?Report_ID=154
Jehn, K. A. (1997). A Qualitative Analysis of Conflict Types and Dimensions in Organizational Groups. Administrative Science Quarterly, 42, 530-557.
Jehn, K. A., & Mannix, E. A. (2001). The Dynamic Nature of Conflict: A Longitudinal Study of Intragroup Conflict and Group Performance. Academy of Management Journal, 44(2), 238-251.
Jenkins, P. (2001). Beyond tolerance: Child pornography on the internet. New York University Press.
Jensen, R. (1993). TLC career guidance. Currículo. Salt Lake City, Utah.
Jesiek, B. (2003). Democratizing software: Open source, the hacker ethic, and beyond. First Monday, 8(10).
Johnson, D. G. (1985). Computer ethics (1st ed.). Englewood Cliffs, NJ: Prentice Hall.
Johnson, D. (1994). Computer ethics (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.
Johnson, D. (n.d.). Ethical issues in engineering. Englewood Cliffs, NJ: Prentice-Hall.
Johnson, D. G. (1997). Is the global information infrastructure a democratic technology? Computers and Society, 27, 20-26.
Johnson, D. G. (1999). Computer ethics in the 21st century. In Proceedings of ETHICOMP99, Rome, Italy.
Johnson, D. G. (2000). Introduction: Why computer ethics? Computer Ethics, 3, 1-256. Pearson Education. Retrieved May 30, 2007, from http://www.units.it/~etica/1999_2/Johnson.html
Johnson, D. G. (2001). Computer ethics (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Johnson, D.G., & Nissenbaum, H. (1995). Computers, ethics & social values. Upper Saddle River, NJ: Prentice Hall.
Johnson, D.G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195-204.
Johnson, D.W., Johnson, R.T., & Smith, K.A. (1991). Active learning: Cooperation in the college classroom. Edina, MN: Interaction Book Company.
Johnson, D.W., Johnson, R.T., & Smith, K.A. (n.d.). Cooperative learning: Increasing college faculty instructional productivity. Washington, DC: The George Washington University, School of Education and Human Development.
Johnson, E., & Huber, G. (1977). The technology of utility assessment. IEEE Transactions on Systems, Man, and Cybernetics, SMC-7(5), 311-325.
Johnson, F., Ave, E., & Hill, J. (2006, August). CCD data file: National public education financial survey FY 2004 (SY 2003-04). National Center for Education Statistics. Retrieved May 30, 2007, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2006443
Johnston, D. (1991). Cheating: Reflections on a moral dilemma. Journal of Moral Education, 20, 283-291.
Johnston, M., & Cooley, N. (2001). Supporting new models of teaching and learning through technology. Arlington, VA: Educational Research Service.
Jonas, H. (1979). The imperative of responsibility: In search of ethics for the technological age. Chicago: Chicago University Press.
Jonas, H. (1985). On technology, medicine and ethics. Chicago: Chicago University Press.
Jonas, H. (1990). Le principe responsabilité. Paris: Cerf.
Jonas, H. (1998). Pour une éthique du futur. Paris: Payot.
Jones, D.G., & Towns, C.R. (2006). Navigating the quagmire: The regulation of human embryonic stem cell research. Human Reproduction, 21(5), 1113-1116.
Jones, T. M. (1991). Ethical decision making for individuals in organizations: An issue contingent model. Academy of Management Review, 16(February), 366-395.
Joseph, V., & Conrad, A. P. (1995). Essential steps for ethical problem-solving. Retrieved from http://www.socialworkers.org/pubs/code/oepr/steps.htm
Jotterand, F. (2006). The politicization of science and technology: Its implications for nanotechnology. Journal of Law, Medicine & Ethics, Winter, 658-666.
Juengst, E. (1995). The ethics of prediction: Genetic risk and the physician-patient relationship. Genome Science and Technology, 1(1), 21-36.
Juengst, E. T. (2003). Editorial: What's next for human gene therapy. British Medical Journal, 326, 1410-1411.
Julian-Reynier, C., Welkenhuysen, M., Hagoel, L., Decruyenaere, M., & Hopwood, P. (on behalf of the
CRISCOM Working Group) (2003). Risk communication strategies: State of the art and effectiveness in the context of cancer genetic services. European Journal of Human Genetics, 11, 725-736.
Jullien, F. (1985). La valeur allusive. Paris: Presses Universitaires de France.
Jullien, F. (1995). Le détour et l'accès. Stratégies du sens en Chine, en Grèce. Paris: Grasset.
Jullien, F. (Ed.) (1982-). Revue Extrême-Orient Extrême-Occident. Presses Universitaires de Vincennes.
Jung, C.G. (n.d.). Psychological types. In R.F.C. Hull (Ed.), The collected works of C. G. Jung (Vol. 6). Princeton, NJ: Princeton University Press.
Kahan, D. M., Slovic, P., Braman, D., Gastil, J., & Cohen, G. L. (2007). Affect, values, and nanotechnology risk perceptions: An experimental investigation. In Nanotechnology risk perceptions: The influence of affect and values. Washington, DC: Woodrow Wilson International Center for Scholars, Center Project on Emerging Nanotechnologies.
Kahn, J.M., & Warneke, B.A. (2006, August 22). Smart dust and ubiquitous computing. In Nanotechnology Now.
Kane, J., & Wall, A. (2005). Identifying the links between white-collar crime and terrorism. U.S. Department of Justice.
Kant, I. (1781/1787). Critique of pure reason (trans. N. Kemp Smith in 1927 as Immanuel Kant's Critique of Pure Reason). London: Macmillan Co. Ltd.
Kant, I. (1964). The critique of judgment (J. C. Meredith, Trans.). Oxford: Clarendon.
Kaplan, B. (2000). Culture counts: How institutional values affect computer use. MD Computing, 17(1), 23-26.
Kaplan, D. (2004). Readings in the philosophy of technology. Rowman & Littlefield.
Kaplan, S., & Greenfield, S. (1989). Assessing the effects of physician-patient interactions on the outcomes of chronic disease. Medical Care, 27, S100-S127.
Karon, P. (1986). Software industry groups set sail against pirates in academe. PC Week, 9 December, 62.
Karpin, I. (2005). Genetics and the legal conception of the self. In Mykitiuk, R., & Shildrick, M. (Eds.), Ethics of the body: Postconventional challenges (Basic Bioethics). The MIT Press.
Kashmeery, A. (2004). Who owns the human genes? An East-West cross-cultural analysis. Oxford Research Forum Journal, 2(1), 81-85.
Kass, N., Dawson, L., & Loyo-Berrios, N. (2003). Ethical oversight of research in developing countries. IRB Ethics & Human Research, 25(2), 1-10.
Kaszycha, K.A., & Štrkalj, G. (2002). Anthropologists' attitudes towards the concept of race: The Polish sample. American Anthropologist, 43(2), 329-335.
Kaszycha, K.A., & Strzałko, J. (2003). Race: Tradition and convenience, or taxonomic reality? More on the race concept in Polish anthropology. Anthropological Review, 66, 23-37.
Katz, S. B. (1992). The ethic of expediency: Classical rhetoric, technology, and the Holocaust. College English, 54(3), 255-275.
Kaufman, K. R., & Gerner, R. (2005). Modafinil in sports: Ethical considerations. British Journal of Sports Medicine, 39, 241-244.
Kaufmann, J.-C. (1997). L'entretien compréhensif. Paris: Nathan.
Kaupins, G. (2004). Ethical perceptions of corporate policies associated with employee computer humor. Ethics and Critical Thinking Quarterly Review, 2004(1), 16-35.
Kawooya, D. (2002). The digital divide. An ethical dilemma for information professionals in Uganda? In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 28-35). Jefferson, NC: McFarland.
Kay, A. (2007). Children learning by doing: Etoys on the XO. Draft document published on the OLPC web and accessed May 27, 2007, at http://www.laptop.org/OLPCEtoys.pdf
Ke, G. S., & Karger, P. A. (2005, November 8). Preventing security and privacy attacks on machine readable travel documents (MRTDs). IBM Document RC 23788.
Kearnes, M., Macnaghten, P., & Wilsdon, J. (2006). Governing at the nanoscale: People, policies and emerging technologies. London: Demos.
Keefer, M.W. (2005). Making good use of online case study materials. Science and Engineering Ethics, 11, 413-429.
Keeney, R. (1988). Value-focused thinking and the study of values. Chapter 21 in Bell et al., pp. 465-494.
Keeney, R. (1992). Value-focused thinking: A path to creative decisionmaking. Cambridge: Harvard University Press.
Keeney, R., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and tradeoffs. Cambridge: Cambridge University Press.
Keita, S.O.Y., & Kittles, R. (1997). The persistence of racial thinking and the myth of racial divergence. American Anthropologist, 99(3), 534-544.
Keller, E. F. (1985). Reflections on gender and science. New Haven, CT: Yale University Press.
Keller, F.S. (1968). Goodbye teacher... Journal of Applied Behavior Analysis, 1, 79-89.
Kelley, K., & Bonner, K. (2001). Courseware ownership in distance education: Issues and policy models. Unpublished manuscript, University of Maryland University College, USA.
Kendrick, W. (1996). The Secret Museum: Pornography in Modern Culture. Berkeley: University of California Press.
Kennedy, K.A.R. (1995). But professor, why teach race identification if races don't exist? Journal of Forensic Sciences, 40(5), 797-800.
Kennedy, K.A.R., & Chiment, J. (1992). Racial identification in the context of prehistoric-historic biological continua: Examples from South Asia. Social Science and Medicine, 34(2), 119-123.
Kennedy, P., & Bakay, R. (1998). Restoration of neural output from a paralyzed patient by direct brain connection. NeuroReport, 9(8), 1707-1711.
Kepner-Tregoe. (2003). Home page. Retrieved January 3, 2004, from www.kepner-tregoe.com
Kermani, F., & Bonacossa, P. (2003). New ways to recruit trial subjects. Applied Clinical Trials, 38-42.
Kern, S. (1983). The Culture of Time and Space 1880-1918. Cambridge, MA: Harvard University Press.
Ketterl, M., Mertens, R., & Morisse, K. (2006). Alternative content distribution channels for mobile devices. Retrieved April 30, 2007, from http://www2.informatik.uni-osnabrueck.de/papers_pdf/2006_02.pdf
Khan, K. S. (1991). Epidemiology and ethics: The people's perspective. Law, Medicine and Science, 19(3-4), 202-206.
Khoury, M., Millikan, R., Little, J., & Gwinn, M. (2004). The emergence of epidemiology in the genomics age. International Journal of Epidemiology, 33, 936-944.
Khushf, G. (2004). The ethics of nanotechnology: On the vision and values of a new generation of science and engineering. In National Academy of Engineering, Emerging Technologies and Ethical Issues in Engineering (pp. 29-56). Washington, DC: National Academies Press.
Ki, E., Chang, B., & Khang, K. (2006). Exploring influential factors on music piracy across countries. Journal of Communication, 56, 406-426.
Kibbey, M., & Costello, B. (1999). Displaying the Phallus: Masculinity and Performance of Sexuality on the Internet. Men and Masculinities, 1(4), 352-364.
Kieran, M. (1998). Objectivity, impartiality and good journalism. In Kieran, M. (Ed.), Media Ethics (pp. 23-36). London: Routledge.
Kiesler, S., & Sproull, L. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402-413.
Kiesler, S., Siegel, J., & McGuire, T. W. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39, 1123-1134.
Kim, J. B., & Michell, P. (1999). Relationship Marketing in Japan: The Buyer-Supplier Relationships of Four Automakers. Journal of Business & Industrial Marketing, 14(2), 118-129.
Kim, P., Eng, T. R., Deering, M. J., & Maxfield, A. (1999). Published criteria for evaluating health related web sites: Review. British Medical Journal, 318, 647-649.
Kimura, B. (1972). Hito to hito to no aida. Tokyo: Koubundo. German translation by E. Weinmayr (1995). Zwischen Mensch und Mensch. Strukturen japanischer Subjektivität. Darmstadt: Wissenschaftliche Buchgesellschaft.
Kincheloe, J.L., & McLaren, P. (2000). Rethinking critical theory and qualitative research. In K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.
King, D.S. (1999). Preimplantation genetic diagnosis and the 'new' eugenics. Journal of Medical Ethics, 25, 176-182.
King, R., Bambling, M., Lloyd, C., Gomurra, R., Smith, S., Reid, W., & Wegner, K. (2006). Online counselling: The motives and experiences of young people who choose the Internet instead of face to face or telephone counselling. Counselling and Psychotherapy Research, 6(3), 169-174.
Kini, R. B., Ramakrishna, H.V., & Vijayaraman, B. S. (2003). An exploratory study of moral intensity regarding software piracy of students in Thailand. Behavior & Information Technology, 22(1), 63-70.
Kirda, E., & Kruegel, C. (2006). Protecting Users against Phishing Attacks. The Computer Journal, 49, 554-559.
Kirkpatrick, D. (1976). Evaluation of training. In R. L. Craig (Ed.), Training and development handbook (2nd ed., chap. 18). New York: McGraw Hill.
Kitchener, G. (2006, March 16). Pentagon plans cyber-insect army. BBC News. Retrieved from http://www.bbc.co.uk/2/hi/americas/480342.stm
Kitcher, P. (2003). In Mendel's mirror: Philosophical reflections on biology. New York: Oxford University Press.
Kitiyadisai, K. (2005). Privacy rights and protection: Foreign values in modern Thai context. Ethics and Information Technology, 7, 17-26.
Kitzinger, J. (2006). The role of media in public engagement. In Turney, J. (Ed.), Engaging science: Thoughts, deeds, analysis and action (pp. 45-49). London: Wellcome Trust. http://www.wellcome.ac.uk/doc_WTX032706.html
Klang, M. (2001). Who do you trust? Beyond encryption, secure e-business. Decision Support Systems, 31, 293-301.
Klang, M. (2003). Spyware: Paying for software with our privacy. International Review of Law, Computers & Technology, 17(3), 313-322.
Klang, M. (2004). Spyware - the ethics of covert software. Ethics and Information Technology, 6(3), 193-202.
Klapp, O. E. (1986). Overload and Boredom: Essays on the Quality of Life in the Information Society. New York: Greenwood Press.
Klein, G. (1989). Strategies for decision making. Military Review, May, 56-64.
Klein, G. (2000). Sources of power (sixth printing). Cambridge, MA: MIT Press.
Klein, G. (2002). Intuition at work. New York: Doubleday.
Klein, G., Orasanu, J., Calderwood, R., & Zsambok, C. (1993). Decision making in action: Models and methods. Norwood: Ablex.
Klein, H.K., & Hirschheim, R. (1991). Rationality concepts in information system development methodologies.
Accounting, Management and Information Technology, 1(2), 157-187.
Kling, R. (1998). Technological and social access to computing, information, and communication technologies. White paper for the Presidential Advisory Committee on High-Performance Computing and Communications, Information Technology, and the Next Generation Internet. Accessed online May 27, 2007, from http://rkcsi.indiana.edu/archive/kling/pubs/NGI.htm
Klipstein, S. (2005). Preimplantation genetic diagnosis: Technological promise and ethical perils. Fertility and Sterility, 83, 1347-1353.
Kneppreth, N., Gustafson, D., Leifer, R., & Johnson, E. (1974). Techniques for the assessment of worth (TR 254, AD 784629). Arlington: U.S. Army Research Institute.
Knoppers, B. M. (1998). Professional disclosure of familial genetic information. American Journal of Human Genetics, 62, 474-483.
Knoppers, B. M. (1998). Towards a reconstruction of the genetic family: New principles? IDHL, 49(1), 249.
Knoppers, B. M., Godard, B., & Joly, Y. (2004). A comparative international overview. In Rothstein, M. A. (Ed.), Genetics and life insurance: Medical underwriting and social policy (Basic Bioethics). The MIT Press.
Koch, L., & Svendsen, M. N. (2005). Providing solutions - defining problems: The imperative of disease prevention in genetic counselling. Social Science and Medicine, 60, 823-832.
Kocikowski, A. (1999). Technologia informatyczna a stary problem totalitaryzmu. Nauka, 1, 120-126.
Koehler, W. (2003). Professional values and ethics as defined by "The LIS Discipline." Journal of Education for Library and Information Science, 44(2), 99-119.
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. Chicago: Rand-McNally.
Kolb, D.A. (1981). Learning styles and disciplinary differences. In Chickering, A., et al. (Eds.), The modern
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall.
Kolmos, A. (1996). Reflections on project work and problem-based learning. European Journal of Engineering Education, 21(2), 141-148.
Konidala, D., Kim, W.-S., & Kim, K. (2006). Security assessment of EPCglobal architecture framework (White Paper #WP-SWNET-017). Daejeon, Korea: Auto-ID Labs.
Korman, R. (2003, May 12). Geeking in the third world. O'Reilly Media. Accessed online May 27, 2007 from http://www.oreilly.com/pub/a/oreilly/news/ethan_0503.html
Korn, D., Murray, M., Morrison, M., Reynolds, J., & Skinner, H. A. (2006). Engaging youth about gambling using the Internet. Canadian Journal of Public Health, 97(6), 448-453.
Kostrewski, B. J., & Oppenheim, C. (1980). Ethics in information science. Journal of Information Science, 1(5), 277-283.
Kovach, G. C., & Campo-Flores, A. (2007). The trail of cheers. Newsweek, 149(3), 17.
KPMG. (2006). Commonwealth of Australia. Health and social services smart card initiative. Volume 1: Business case public extract. Retrieved February 1, 2007 from http://www.humanservices.gov.au/modules/resources/access_card/kpmg_access_card_business_case.pdf
Krause, J. (2006). Stolen lives: Victims of identity theft start looking for damages from companies that held their personal financial information. ABA Journal, 92, 36-41, 64.
Krebs, V. (2004). Social network analysis of the 9-11 terrorist network. Retrieved July 16, 2004, from Orgnet.com
Kreuter, E. A. (2003). The impact of identity theft through cyberspace. Forensic Examiner, 12(5-6), 30-35.
Kroes, P. A., & Meijers, A. (2000). Introduction: A discipline in search of its identity. In C. Mitcham, P. A. Kroes, & A. W. M. Meijers (Eds.), The empirical turn in the philosophy of technology (pp. xvii-xxxv). Stanford: JAI Press.
Krone, T., & Johnson, H. (2006). Internet purchasing: Perceptions and experiences of Australian households. Trends & Issues in Crime and Criminal Justice, No. 330. Canberra: Australian Institute of Criminology.
Kuipers, G. (2006). The social construction of digital danger: Debating, defusing and inflating the moral dangers of online humour, pornography in the Netherlands and the US. New Media and Society, 13(3), 379-400.
Kulinowski, K. (2006). Nanotechnology: From "wow" to "yuck"? In G. Hunt & M. D. Mehta (Eds.), Nanotechnology: Risk, ethics and law (pp. 13-24). London: Earthscan.
Kumano, Y. (1991). Why does Japan need STS: A comparative study of secondary science education between Japan and the U.S. focusing on an STS approach. Bulletin of Science, Technology & Society, 11, 322-330.
Kwong, T. C. H., & Lee, M. K. O. (2002). Behavioral intention model for the exchange mode internet music piracy. Proceedings of the 35th Annual Hawaii International Conference on System Sciences, Volume 7 (p. 191). Washington, DC, USA.
Lacey, D., & Cuganesan, S. (2004). The role of organizations in identity theft response: The organization-individual victim dichotomy. The Journal of Consumer Affairs, 38(2), 244-261.
Ladd, J. (1997). Ethics and the computer world: A new challenge for philosophers. Computers and Society, 27(3), 8-9.
Lane, F. S. (2000). Obscene profits: The entrepreneurs of pornography in the cyber age. New York: Routledge.
Langton, R., & West, C. (1999). Scorekeeping in a pornographic language game. Australasian Journal of Philosophy, 77(3), 303-319.
Lanzendorf, S. E., Boyd, C. A., Wright, D. L., Muasher, S., Oehninger, S., & Hodgen, G. D. (2001). Use of human gametes obtained from anonymous donors for the production of human embryonic stem cell lines. Fertility and Sterility, 76, 132-137.
Lanzetta, J. T., & Roby, T. B. (1956). Effects of work-group structure and certain task variables on group performance. Journal of Abnormal and Social Psychology, 53.
Lapsley, D. K., Enright, R. D., & Serlin, R. (1989). Moral and social education. In J. Worell & F. Danner (Eds.), The adolescent as decision-maker: Applications to development and education (pp. 111-143). San Diego, CA: Academic Press.
Lapsley, D., & Narvaez, D. (2004). A social-cognitive view of moral character. In D. Lapsley & D. Narvaez (Eds.), Moral development: Self and identity (pp. 189-212). Mahwah, NJ: Erlbaum.
LaRue, J. (1985). Steal this program. Library Software Review, 4(5), 298-301.
Lathrop, A., & Foss, K. (2000). Student cheating and plagiarism in the Internet era: A wake-up call. Englewood, CO: Libraries Unlimited Inc.
Latter, B. D. H. (1980). Genetic differences within and between populations of the major human subgroups. American Naturalist, 116, 220-237.
Lau, E. K. W. (2003). An empirical study of software piracy. Business Ethics, 12(3), 233-245.
Laudon, K. (1996). Markets and privacy. Communications of the ACM, 39(9), 92-105.
Law Reform Commission of Hong Kong. (2006). Report on substitute decision-making and advance directives in relation to medical treatment. Hong Kong Government.
Lazar, J., Jones, A., Hackley, M., & Shneiderman, B. (2006). Severity and impact of computer user frustration: A comparison of student and workplace users. Interacting with Computers, 18(2), 187-207.
Lazarus, D. (2006, June 21). AT&T rewrites rules: Your data isn't yours. San Francisco Chronicle. Retrieved on July 9, 2007, from http://sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2006/06/21/BUG9VJHB9C1.DTL&type=business
Le Roy, E. (1928). Les origines humaines et l'évolution de l'intelligence [Human origins and the evolution of intelligence]. Paris: Boivin.
Leaman, O. (1988). Cheating and fair play in sport. In W. J. Morgan & K. V. Meier (Eds.), Philosophic inquiry in sport. Illinois: Human Kinetics.
Leap, T., & Crino, M. (1998). How serious is serious. HR Magazine, 44, 43-48.
Ledley, F. D. (1994). Distinguishing genetics and eugenics on the basis of fairness. Journal of Medical Ethics, 20, 157-164.
Legge, J. (1971). Confucian analects, the great learning and the doctrine of the mean. New York: Dover.
Legge, K. (1984). Evaluating planned organizational change. London: Academic Press.
Lehman, C. K. (1981). Can cheaters play the game? Journal of the Philosophy of Sport, VII, 41-46.
Lehman, P., & Lowry, T. (2007). The marshal of MySpace: How Hemanshu Nigam is trying to keep the site's 'friends' safe from predators and bullies. Business Week, (4031), 86.
Leitch, S., & Warren, M. (2000). Ethics and electronic commerce. In Selected papers from the second Australian Institute conference on computer ethics, Canberra, Australia (pp. 56-59).
Lemke, J. L. (1993). Education, cyberspace, and change. Originally published in the Electronic Journal on Virtual Culture, 1(1). Archived as ERIC document #ED356767. Accessed online Sep. 23, 2007 from ERIC: http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/13/b5/9d.pdf
Lenat, D. (1995). CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11).
Lerman, C., Lustbader, E., Rimer, B., Daly, M., Miller, S., Sands, C., & Balshem, A. (1995). Effects of individualized breast cancer risk counseling: A randomized trial. Journal of the National Cancer Institute, 87, 286-292. Leroi-Gourhan, A. (1993). Gesture and speech. Cambridge, MA: MIT Press.
Lenhart, A., & Fox, S. (2006). Bloggers: A portrait of the Internet’s new storytellers. Washington, DC: Pew Internet & American Life Project.
Leslie, L. (2004). Mass communication ethics: Decision making in postmodern culture, 2nd ed. Boston, MA: Houghton Mifflin.
Lenhart, A., & Madden, M. (2007). Social networking websites and teens: An overview. Washington, DC: Pew Internet & American Life Project.
Lessick, M., Wickman, R., Chapman, D., Phillips, M., & McCaffrey, S. (2001). Advances in genetic testing for cancer risk. Medsurg Nursing, 10(3), 123-127.
Lenhart, A., & Madden, M. (2007). Teens, privacy & online social networks: How teens manage their online identities and personal information in the age of MySpace. Washington, DC: Pew Internet & American Life Project.
Leurkittikul, S. (1994). An empirical comparative analysis of software piracy: The United States and Thailand. Unpublished doctoral dissertation, Mississippi State University.
Lenhart, A., Horrigan, J. B., & Fallows, D. (2004). Content creation online. Washington, DC: Pew Internet & American Life Project.
Lenhart, A., Horrigan, J. B., Rainie, L., Allen, K., Boyce, A., Madden, M., et al. (2003). The ever-shifting Internet population: A new look at Internet access and the digital divide. Washington, DC: Pew Internet & American Life Project.
Lenhart, A., Madden, M., & Hitlin, P. (2005). Teens and technology: Youth are leading the transition to a fully wired and mobile nation. Washington, DC: Pew Internet & American Life Project.
Leonard, L., & Haines, R. (2007). Computer-mediated group influence on ethical behaviour. Computers in Human Behavior, 23(5), 2302-2320.
Lepofsky, R. (2006). Cyberextortion by denial-of-service attack. Risk Management, 53(6), 40.
Lerman, C., Hughes, C., Croyle, R. T., Main, D., Durham, C., & Snyder, C. (2000). Prophylactic surgery decisions and surveillance practices one year following BRCA1/2 testing. Preventive Medicine, 31(1), 75-80.
Levant, R. F. (2005). Evidence-based practice in psychology. Monitor on Psychology, 36(2). Retrieved April 19, 2005, from http://www.apa.org/monitor/feb05/pc.html
Levijoki, S. (2004). Privacy vs location awareness. Retrieved October 12, 2004, from http://www.hut.fi/~slevijok/privacy_vs_locationawareness.htm
Levin, R. (1983). Informed consent in research and clinical practice: Similarities and differences. Archives of Internal Medicine, 143, 1229-1231.
Levine, B. D., & Stray-Gundersen, J. (1997). 'Living high - training low': Effect of moderate-altitude exposure simulated with nitrogen tents. Journal of Applied Physiology, 83, 102-112.
Levine, B. D. (2006). Editorial: Should 'artificial' high altitude environments be considered doping? Scandinavian Journal of Medicine and Science in Sports, 16, 297-301.
Levine, J. (2002). Harmful to minors: The perils of protecting children from sex. Minneapolis, MN: University of Minnesota Press.
Levine, R. (1988). Uncertainty in clinical research. Law, Medicine and Healthcare, 16, 174-182.
Levinson, R., & Reiss, M. J. (Eds.). (2003). Key issues in bioethics: A guide for teachers. London: RoutledgeFalmer.
Li, B. F. (2004). Informed consent in research involving human subjects. The Journal of Clinical Ethics, 15(1), 35-37.
Levi-Strauss, C. (1958). Race and history. Paris: UNESCO.
Li, Q. (2007). New bottle but old wine: A research of cyberbullying in schools. Computers in Human Behavior, 23, 1777-1791.
Levy, N. (2002). Virtual child pornography: The erotization of inequality. Ethics and Information Technology, 4, 319-323.
Lewens, T. (2002). Developmental aid: On ontogeny and ethics. Studies in History and Philosophy of Biological and Biomedical Sciences, 33, 195-217.
Lewenstein, B. V. (1995). Science and media. In S. Jasanoff, G. E. Markle, J. C. Petersen, & T. Pinch (Eds.), Handbook of science and technology studies. Thousand Oaks, CA: Sage.
Lewenstein, B. V. (2006). What counts as a social and ethical issue in nanotechnology? In J. Schummer & D. Baird (Eds.), Nanotechnology challenges: Implications for philosophy, ethics and society (pp. 201-216). London: World Scientific.
Lewenstein, B. V., Radin, J., & Diels, J. (2004). Nanotechnology in the media: A preliminary analysis. In M. C. Roco & W. S. Bainbridge (Eds.), Societal implications of nanoscience and nanotechnology II: Maximizing human benefit. Report of the National Nanotechnology Initiative Workshop, December 3-5, 2003, Arlington, VA. Washington, DC: National Science & Technology Council and National Science Foundation.
Lewis, E. (1999). Using the risk-remedy method to evaluate outsourcing tenders. Journal of Information Technology, 14(2), 203-211.
Lewis, P. (1994). Information systems development: Systems thinking in the field of information systems. London: Pitman Publishing.
Lewontin, R. C. (1972). The apportionment of human diversity. Evolutionary Biology, 6, 381-398.
Lewontin, R. C. (2001). It ain't necessarily so. New York: New York Review Books.
Libet, B., Freeman, A., & Sutherland, K. (Eds.). (1999). The volitional brain: Towards a neuroscience of free will. Thorverton, UK: Imprint Academic.
Lichtblau, E. (2006, September 25). Europe panel faults sifting of bank data. The New York Times.
Lieberman, D. E. (2000). Ontogeny, homology, and phylogeny in the hominid craniofacial skeleton: The problem of the browridge. In P. O'Higgins & M. J. Cohn (Eds.), Development, growth and evolution: Implications for the study of the hominid skeleton (pp. 86-115). New York: Academic Press.
Lieberman, L. (1968). The debate over race: A study in the sociology of knowledge. Phylon, 39(2), 127-141.
Lieberman, L., Kirk, R. C., & Littlefield, A. (2003). Perishing paradigm: Race, 1931-99. American Anthropologist, 105(1), 110-113.
Lieberman, L., Stevenson, B. W., & Reynolds, L. T. (1989). Race and anthropology: A core concept without consensus. Anthropology and Education Quarterly, 20, 67-73.
Limayem, M., Khalifa, M., & Chin, W. W. (1999). Factors motivating software piracy: A longitudinal study. International Conference on Information Systems (pp. 124-131). Association for Information Systems.
Lin, H., & Kolb, J. (2006). Ethical issues experienced by learning technology practitioners in design and training situations. Paper presented at the Academy of Human Resource Development International Conference (AHRD) (Columbus, OH, Feb 22-26), pp. 1168-1175.
Lindsay, P. H., & Norman, D. A. (1972). Human information processing. New York: Academic Press.
Linstone, H. (1984). Multiple perspectives for decision making: Bridging the gap between analysis and action. New York: North-Holland.
Lippman, A. (1992). The geneticization of health and illness: Implications for social practice. Romanian Journal of Endocrinology, 29(1/2), 85-90.
Liptak, A. (2006, August 2). U.S. wins access to reporter phone records. The New York Times.
Loland, S. (2002). Fair play in sport: A moral norm system. London & New York: Routledge.
Liu, W., & Williams, M. (1999). A framework for multi-agent belief revision, part I: The role of ontology. In N. Foo (Ed.), AI’99, LNAI 1747 (pp.168-179). Berlin Heidelberg: Springer Verlag.
Lombardo, C., Zakus, D., & Skinner, H. A. (2002). Youth social action: Building a latticework through information and communication technologies. Health Promotion International, 17(4), 363-371.
Livingstone, F. B. (1958). Anthropological implications of sickle cell gene distribution in West Africa. American Anthropologist, 60(3), 533-562.
London School of Economics and Political Science Identity Project. (2007, April 24). Submission to the House of Commons Home Affairs Committee inquiry into “A surveillance society?” Retrieved May 27, 2007 from http://identityproject.lse.ac.uk/LSE_HAC_Submission.pdf
Livingstone, F. B. (1962). On the non-existence of human races. Current Anthropology, 3(3), 279-281.
Llobet, E., Hines, E. L., Gardner, J. W., & Franco, S. (1999). Non-destructive banana ripeness determination using a neural network-based electronic nose. Measurement Science and Technology, 10, 538-548.
Lloyd, F., Reyna, V., & Whalen, P. (2001). Accuracy and ambiguity in counseling patients about genetic risk. Archives of Internal Medicine, 161, 2411-2414.
Loch, K., & Conger, S. (1996). Evaluating ethical decision making and computer use. Communications of the ACM, 39(7), 74-84.
Loch, K., Conger, S., & Oz, E. (1998). Ownership, privacy, and monitoring in the workplace: A debate on technology and ethics. Journal of Business Ethics, 17, 653-654.
Lock, R., & Miles, C. (1993). Biotechnology and genetic engineering: Students' knowledge and attitudes. Journal of Biological Education, 27, 267-272.
Locke, J. (1690). Of civil government: Second treatise. Accessed online May 27, 2007 from http://www.constitution.org/jl/2ndtreat.txt
Locklear, F. Z. (2004). IDC says piracy loss figure is misleading. Retrieved March 21, 2006 from http://arstechnica.com/news.ars/post/20040719-4008.html
Logan, R. A. (1991). Popularization versus secularization: Media coverage of health. In L. Wilkins & P. Patterson (Eds.), Risky business: Communicating issues of science, risk and public policy (pp. 43-59). New York: Greenwood.
London School of Economics and Political Science Identity Project. (2005). The identity project: An assessment of the UK Identity Cards Bill and its implications. Retrieved May 27, 2007 from http://identityproject.lse.ac.uk/identityreport.pdf
López, J. A. (2003). Ciencia, técnica y sociedad [Science, technique and society]. In A. Ibarra & L. Olivé (Eds.), Cuestiones éticas en ciencia y tecnología en el siglo XXI [Ethical questions in science and technology in the 21st century] (pp. 113-158). Madrid: Biblioteca Nueva.
LoPucki, L. M. (2001). Human identification theory and the identity theft problem. Texas Law Review, 80, 89-135.
Lor, P. J., & Britz, J. J. (2002). Information imperialism: Moral problems in information flows from south to north. In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 15-21). Jefferson, NC: McFarland.
Lorenzoni, I., Jones, M., & Turnpenny, J. (2007). Climate change, human genetics, and post-normality in the UK. Futures, 39(1), 65-82.
Losch, A. (2006). Anticipating the futures of nanotechnology: Visionary images as means of communication. Technology Analysis and Strategic Management, 18(3/4), 393-409.
Loui, M. C. (2005). Educational technologies and the teaching of ethics in science and engineering. Science and Engineering Ethics, 11, 435-446.
Lovgren, S. (2005, October 13). One-fifth of human genes have been patented, study reveals. National Geographic News. Retrieved May 5, 2007, from http://news.nationalgeographic.com/news/2005/10/1013_051013_gene_patent.html
Lü Yao-Huai (2005). Privacy and data privacy issues in contemporary China. Ethics and Information Technology, 7, 7-15.
Luger, G. (2005). Artificial intelligence: Structures and strategies for complex problem solving (5th ed.). Addison-Wesley.
Luhmann, N. (1987). Soziale Systeme [Social systems]. Frankfurt am Main: Suhrkamp.
Lujan, H. L., & DiCarlo, S. E. (2006). First year students prefer multiple learning styles. Advances in Physiology Education, 30, 13-16.
Luján, J. L., & López, J. A. (2003). The social dimension of technology and the precautionary principle. Política y Sociedad, 40, 53-60.
Luján, J. L., & Todt, O. (2007). Precaution in public: The social perception of the role of science and values in policy making. Public Understanding of Science, 16(1), 97-109.
Lukes, S. (1974). Power: A radical view. London: Macmillan.
Luppicini, R. (2003). Towards a cyber-constructivist perspective (CCP) of educational design. Canadian Journal of Learning and Technology, 29(1). Retrieved April 26, 2006, from http://www.cjlt.ca/content/vol29.1/01_luppicini.html
Luppicini, R. (2005). A systems definition of educational technology in society. Educational Technology & Society, 8(3), 103-109.
Lupton, D. (1993). Risk as moral danger: The social and political functions of risk discourse in public health. International Journal of Health Services, 23(3), 425-435.
Lurie, P., & Wolfe, S. M. (1997). Unethical trials of interventions to reduce perinatal transmission of the human immunodeficiency virus in developing countries. New England Journal of Medicine, 337(12), 853-856.
Lurigio, A. J., & Resnick, P. (1990). Healing the psychological wounds of criminal victimization: Predicting postcrime distress and recovery. In A. J. Lurigio, W. G. Skogan, & R. C. Davis (Eds.), Victims of crime: Problems, policies and programs (pp. 50-67). Newbury Park, CA: Sage.
Lynch, J. (2005). Identity theft in cyberspace: Crime control methods and their effectiveness in combating phishing attacks. Berkeley Technology Law Journal, 20, 259-300.
Lyon, D. (2001). Facing the future: Seeking ethics for everyday surveillance. Ethics and Information Technology, 3, 171-181.
Lyytinen, K., & Hirschheim, R. (1989). Information systems and emancipation, promise or threat? In H. K. Klein & K. Kumar (Eds.), Systems development for human progress (pp. 115-139). Amsterdam: North-Holland.
Lyytinen, K., & Hirschheim, R. (1987). Information systems failures - A survey and classification of the empirical literature. Oxford Surveys in Information Technology, 4, 257-309.
Macer, D., & Ong, C. C. (1999). Bioethics education among Singapore high school science teachers. Eubios Journal of Asian & International Bioethics, 9, 138-144.
Macer, D. R. J. (1994). Bioethics for the people by the people. Christchurch: Eubios Ethics Institute.
Macer, D. R. J. (1998). Bioethics is love of life: An alternative textbook. Christchurch: Eubios Ethics Institute.
Macer, D. R. J. (2002). The next challenge is to map the human mind. Nature, 420, 121.
Macer, D. R. J. (2004). Bioethics education for informed citizens across cultures. School Science Review, 86(315), 83-86.
Macer, D. R. J., Asada, Y., Tsuzuki, M., Akiyama, S., & Macer, N. Y. (1996). Bioethics in high schools in Australia, New Zealand and Japan. Christchurch: Eubios Ethics Institute.
Macer, D. R. J. (Ed.). (2004). Challenges for bioethics from Asia. Christchurch: Eubios Ethics Institute.
Macer, D. R. J. (Ed.). (2006). A cross-cultural introduction to bioethics. Christchurch: Eubios Ethics Institute. Retrieved from http://eubios.info/ccib.htm, http://www.unescobkk.org/index.php?id=2508
Macer, D. R. J. (Ed.). (2004). Bioethics for informed citizens across cultures. Christchurch: Eubios Ethics Institute.
Macer, D., Obata, H., Levitt, M., Bezar, H., & Daniels, K. (1997). Biotechnology and young citizens: Biocult in New Zealand and Japan. Eubios Journal of Asian & International Bioethics, 7, 111-114.
Macintyre, S. (1997). Social and psychological issues associated with the new genetics. Philosophical Transactions: Biological Sciences, 352(1357), 1095-1101.
Magretta, J. (1998). The power of virtual integration: An interview with Dell Computer's Michael Dell. Harvard Business Review, 72-84.
Mahon, A., et al. (1996). Researching children: Methods and ethics. Children and Society, 2, 145-154.
Mahoney, M. S. (1988). The history of computing in the history of technology. Annals of the History of Computing, 10, 113-125.
Makridakis, S. (1990). Forecasting, planning, and strategy for the 21st century. London: Free Press.
Malcom, S. (1988). Technology in education: Looking toward 2020. In R. S. Nickerson & P. P. Zodhiates (Eds.), Educating a diverse population (pp. 213-230). Hillsdale, NJ: Lawrence Erlbaum Associates.
Malhotra, N., Kim, S., & Agarwal, J. (2004). Internet users' information privacy concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research, 15(4), 336-355.
Macklin R. (2001). After Helsinki: Unresolved issues in international research, Kennedy Institute Ethics Journal, 11(1), 17-36.
Mallen, M., Vogel, D. & Rochlen, A. (2005). The practical aspects of online counseling: Ethics, training, technology, & competency. Counseling Psychologist, 33(6), 776-818.
Maclean, A. (1993). The elimination of morality. Reflections on utilitarianism and bioethics. London & New York: Routledge.
Maloka, E., & le Roux, E. (2001). Africa in the new millennium: Challenges and prospects. Pretoria: Africa Institute of South Africa.
MacLeod, N., & Forey, P. L. (Eds.). (2002). Morphology, shape and phylogeny. New York: Taylor & Francis.
Mamchur, C. (n.d.). Cognitive type theory and learning style. Association for Supervision and Curriculum Development.
Macnamara, D. (2007). Leveraging IP for learning knowledge, a key resource for the 21st century. Australian College of Educators edventures, 8 (January).
Macoubrie, J. (2006). Nanotechnology: Public concerns, reasoning and trust in government. Public Understanding of Science, 15(2), 221-241.
Madsen, P., & Shafritz, J. M. (1990). Essentials of business ethics. New York: Penguin Books.
Maekawa, F., & Macer, D. R. J. (2005). How Japanese students reason about agricultural biotechnology. Science & Engineering Ethics, 10(4), 705-716.
Mander, J. (1992). In the absence of the sacred. San Francisco: Sierra Club Books. Maner, W. (1978/1980). Starter kit on teaching computer ethics. Self published in 1978. Republished in 1980 by Helvetia Press in cooperation with the National Information and Resource Center for Teaching Philosophy. Maner, W. (1998). ICEE: Online ethical scenarios with interactive surveys and real-time demographics. Proceedings of an International Conference on the Ethical Issues
of Using Information Technology, Erasmus University, Rotterdam, 25-27 March, 1998, 462-470.
Maner, W. (1999). Is computer ethics unique? Etica & Politica, Special Issue on Computer Ethics 2. Retrieved May 30, 2007, from http://www.units.it/~etica/1999_2/Maner.html
Markus, M. L. (2002). Power, politics and MIS implementation. In M. Myers & D. Avison (Eds.), Qualitative research in information systems. London: Sage.
Marra, T. (n.d.). Authentic learning. Retrieved August 14, 2007 from http://www-personal.umich.edu/~tmarra/authenticity/authen.html
Maner, W. (2002). Procedural ethics. Retrieved January 3, 2004, from http://csweb.cs.bgsu.edu/maner/heuristics/toc.htm
Marron, D. B. & Steel, D. G. (2000). Which countries protect intellectual property? The case of software piracy. Economic Inquiry, 38, 159-174.
Manion, M., & Goodrum, A. (2000, June). Terrorism or civil disobedience: Toward a hacktivist ethic. Computers and Society (ACM SIGCAS), 30 (2), 14-19.
Marshall, A. M., & Tompsett, B. C. (2005). Identity theft in an online world. Computer Law & Security Report, 21, 128-137.
Manjoo, F. (2003). Carpal study stress syndrome. Retrieved October 15, 2003, from http://www.wired.com/news/politics/0,1283,44400,00.html
Marteau, T., & Weinman, J. (2006). Self-regulation and the behavioral response to DNA risk information: A theoretical analysis and framework for research and practice. Social Science and Medicine, 62, 1360–1368.
Mann, C., & Stewart, F. (2000). Internet communication and qualitative research: A handbook for researching online. London: Sage.
Manning, P. (2001). News and news sources: A critical introduction. London: Sage.
Manosevitch, E. (2006). Democratic values, empowerment and giving voice: Children's media discourse in the aftermath of the assassination of Yitzhak Rabin. Learning, Media & Technology, 31(2), 163-179.
Mansfield, S., Clark, R., Puttaraju, M., & Mitchell, G. (2002). Spliceosome-mediated RNA trans-splicing (SMaRT): A technique to alter and regulate gene expression. Blood Cells Molecules and Diseases, 28(3), 338-338.
March, J. G. (1982). Theories of choice and making decisions. Society, 20(Nov-Dec), 29-39.
Marckmann, G. (1999). Telemedicine and ethics. Biomedical Ethics: Newsletter for the European Network for Biomedical Ethics, 4(2), 59-62.
Marcuse, H. (1968). One dimensional man. London: Sphere Books Ltd.
Marks, J. (1995). Human biodiversity: Genes, race and history. New York: Aldine de Gruyter.
Martín Gordillo, M. (2006). Controversias tecnocientíficas [Technoscientific controversies]. Barcelona: Octaedro OEI.
Martin, C. D., & Weltz, E. (1999). From awareness to action: Integrating ethics and social responsibility into the computer science curriculum. Computers and Society, 29(2).
Martin, C. D. (1999). From awareness to responsible action (part 2): Developing a curriculum with progressive integration of ethics and social impact. SIGCSE Bulletin, 31(4).
Martin, C. D., & Holz, H. J. (1992). Integrating social impact and ethics issues across the computer science curriculum. Information Processing 92: Proceedings of the 12th World Computer Congress, Madrid, Spain, September, Vol. II: Education and Society, pp. 239-245.
Martin, C. D., Huff, C., Gotterbarn, D., & Miller, K. (1996). A framework for implementing and teaching the social and ethical impact of computing. Education and Information Technologies, 1(1).
Martin, C. D., Huff, C., Gotterbarn, D., & Miller, K. (1996). Implementing a tenth strand in the CS curriculum. Communications of the ACM, 39(12), 75-84.
Martindale, L. (2002, November 1). Bridging the digital divide in South Africa. Linux Journal. Accessed online May 27, 2007 from http://www.linuxjournal.com/article/5966
Martínez, F., & Área, M. (2003). El ámbito docente e investigador de la tecnología educativa en España: Algunos apuntes para el debate [The teaching and research field of educational technology in Spain: Some notes for the debate]. Paper presented at the Meeting of the area of Didactics and Scholastic Organization, University of Valencia, Spain.
Martínez, S. (2003). Ética de científicos y tecnólogos [The ethics of scientists and technologists]. In A. Ibarra & L. Olivé (Eds.), Cuestiones éticas en ciencia y tecnología en el siglo XXI [Ethical questions in science and technology in the 21st century] (pp. 277-300). Madrid: Biblioteca Nueva.
Martino, J. (1972). Technological forecasting for decision-making. Elsevier.
Martins, H., & García, J. L. (2003). Dilemas da Civilização Tecnológica [Dilemmas of technological civilization]. Lisbon: Imprensa de Ciências Sociais.
Marwick, C. (2000). National health service corps faces reauthorization during risky time. Journal of the American Medical Association, 283(20), 2641-2642.
Marx, G. (2001). Murky conceptual waters: The public and the private. Ethics and Information Technology, 3, 157-169.
Marziali, E., Serafini, J. M. D., & McCleary, L. (2005). A systematic review of practice standards and research ethics in technology-based home health care intervention programs for older adults. Journal of Aging and Health, 17(6), 679-696.
Masahiro Morioka. (1994). Toward international and cross-cultural bioethics. Proceedings of the Third International Bioethics Seminar in Fukui (pp. 293-295). Eubios Ethics Institute.
Maslow, A. (1954). Motivation and personality. New York: Harpers Press.
Maslow, A. (1971). The farther reaches of human nature. New York: The Viking Press.
Mason, R. O. (1986). Four ethical issues of the information age. MIS Quarterly, 10(1), 5-12.
Mason, R. O., Mason, F. M., & Culnan, M. J. (1995). Ethics of information management. Thousand Oaks, CA: Sage.
Mather, K. (2005). Object oriented goodness: A response to Mathiesen's 'What is information ethics?'. Computers and Society, 34(4). Retrieved May 30, 2007, from http://www.computersandsociety.org/sigcas_ofthefuture2/sigcas/subpage/sub_page.cfm?article=919&page_number_nb=911
Mathews, S., & Birney, G. (1921). A dictionary of religion and ethics. London: Waverley Book Company, Ltd.
Mathiesen, K. (2004). What is information ethics? Computers and Society, 32(8). Retrieved May 30, 2007, from http://www.computersandsociety.org/sigcas_ofthefuture2/sigcas/subpage/sub_page.cfm?article=909&page_number_nb=901
Mathis, R., & Jackson, J. (2003). Human resource management (10th ed.). Mason, OH: Southwestern.
Matthews, H., et al. (1998). The geography of children: Some ethical and methodological considerations for project and dissertation work. Journal of Geography in Higher Education, 3, 311-324.
Matthews, K. (2006). Research into podcasting technology including current and possible future uses. Retrieved April 28, 2007, from http://mms.ecs.soton.ac.uk/2007/papers/32.pdf
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175-183.
Matthias, A. (2004). Responsibility ascription to nonhumans: Climbing the steps of the personhood ladder.
In Ikäheimo, H., Kotkavirta, J., Laitinen, A. & Lyyra, P. (eds.), Personhood. Workshop papers of the Conference “Dimensions of Personhood” (August 13-15, 2004). University of Jyväskylä, Publications in Philosophy 68.
McCarthy, M. (2000). Keystroke cops: New software raises troubling questions on worker privacy. Retrieved March 7, 2000, from http://www.msnbc.com/news/3/8/00.asp
Maxwell (2006). Tracing the dynabook: A study of technocultural transformations. Accessed online May 27, 2007 from http://thinkubator.ccsp.sfu.ca/Dynabook/Maxwell-DynabookFinal.pdf
McColgan, M. D., & Giardino, A. P. (2005). Internet poses multiple risks to children and adolescents. Pediatric Annals, 34(5), 405-414.
Maynard Smith, J. (2000). The concept of information in biology. Philosophy of Science, 67, 177-194.
Mayr, E. (1942). Systematics and the origin of species. New York: Columbia University Press.
Mayr, E. (1969). Principles of systematic zoology. New York: McGraw-Hill.
Mayr, E. (2002). The biology of race and the concept of equality. Daedalus (Winter 2002), 89-94.
Maznevski, M. L., & Chudoba, K. M. (2000). Bridging space over time: Global virtual team dynamics and effectiveness. Organization Science, 11(5), 473-492.
McAfee. (2005). Virtual criminology report: North American study into organized crime and the Internet. McAfee, Inc. Retrieved from www.softmart.com/mcafee/docs/McAfee%20NA%20Virtual%20Criminology%20Report.pdf
McCormick, J. (1997). Habermas's discourse theory of law: Bridging Anglo-American and continental legal traditions. The Modern Law Review, 60, 734-743.
McCrickard, M. P., & Butler, L. T. (2005). Cybercounseling: A new modality for counselor training and practice. International Journal for the Advancement of Counselling, 27(1), 101-110.
McCullagh, D. (2005, April 19). New RFID travel cards could pose privacy threat. C/Net. Retrieved from http://www.news.com/2102-1028_3-606574.html
McCullagh, D. (2005, January 13). Snooping by satellite. C/Net. Retrieved from http://www.news.com/2102-1028_3-5533560.html
McDermott, J. J. (1976). The culture of experience: Philosophical essays in the American grain. New York: New York University Press.
McAlearney, S. (2001). Parasitic computing relatively benign. Security Wire Digest, 3(68). Retrieved September 9, 2003 from http://infosecuritymag.techtarget.com/2001/sep/digest06.shtml
McDonald, D. (2007, February 7). What's that on your chinn, Ron?? Disability Activists Work Group (DAWG Oregon). Retrieved August 11, 2007 from http://dawgoregon.blogspot.com/2007/02/whats-that-on-yourchinn-ron.html
McArthur, R. L. (2001). Reasonable Expectations of Privacy. Ethics and Information Technology, 3, pp. 123–128.
McGee, E. (1999). Implantable brain chips? Time for debate. Hastings Center Report, 29(1), 7-13.
McCabe, D. (2001). Students cheating in American high schools. Retrieved May 17, 2007, from http://www.academicintegrity.org/hs01.asp
McGrath, D. (2006, January 26). RFID market to grow 10 fold by 2016, firm says. EE Times. Retrieved on June 3, 2007, from http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=177104240
McCarter, C. (2006, January 19). RFID market $2.71Bn in 2006 rising to $12.35Bn in 2010. RFIDTimes.org. Retrieved from http://rfidtimes.org/2006/01/rfid-market271bn-in-2006-rising-to.html
McGrath, M. G., & Casey, E. (2002). Forensic psychiatry and the Internet: Practical perspectives on sexual predators and obsessional harassers in cyberspace. Journal
of the American Academy of Psychiatry and the Law, 30(1), 81-94.
McHaffie, P. (1995). Manufacturing metaphors: Public cartography, the market, and democracy. In J. Pickles (Ed.), Ground truth (pp. 113-129). New York: The Guilford Press.
McKeown, A. H., & Jantz, R. L. (2005). Comparison of coordinate and craniometric data for biological distance studies. In D. E. Slice (Ed.), Modern morphometrics in physical anthropology (pp. 215-246). New York: Kluwer Academic.
MacKinnon, C. (n.d.). Feminism unmodified: Discourses on life and law. Cambridge, MA: Harvard University Press.
McKnight, H., Choudhury, V., & Kacmar, C. (2004). Dispositional and distrust distinctions in predicting high and low risk internet expert advice site perceptions. E-Service Journal, 3(2), 35-59.
McLaren, B. M. (2006). Computational models of ethical reasoning: Challenges, initial steps, and future directions. IEEE Intelligent Systems, 21(4), 29-37.
McLelland, M. (2001). Out and about on the Japanese gay net. In C. Berry, F. Martin, & A. Yue (Eds.), Mobile cultures: New media and queer Asia. Durham, NC: Duke University Press.
McLucas, A. (2003). Decision making: Risk management, systems thinking and situation awareness. Canberra: Argos.
McLuhan, M. (1962). The Gutenberg galaxy. Toronto: University of Toronto Press.
McLuhan, M. (1964). Understanding media: The extensions of man. New York: McGraw-Hill.
McManamon, F. P. (2000). Determination that the Kennewick human skeletal remains are "Native American" for the purposes of the Native American Graves Protection and Repatriation Act (NAGPRA). National Parks Service, United States Department of the Interior. Electronic document, http://www.cr.nps.gov/aad/kennewick/c14memo.htm, accessed April 25, 2005.
McMichael, A. J. (1995, ©1993). Planetary overload: Global environmental change and the health of the human species. Cambridge, England; New York: Cambridge University Press.
McMillan, R. (2006, August 6). Defcon: Cybercriminals taking cues from Mafia, says FBI. Computerworld Security. Retrieved from http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=cybercrime_hacking&articleId=9002230&taxonomyId=82
McNeil, P. M. (1993). The ethics and politics of human experimentation. School of Medicine, University of New South Wales. Cambridge: Cambridge University Press.
McTeer, M. A. (1995). A role for law in matters of morality. McGill Law Journal, 40, 893.
Meaney, M. (2001). Maternal care, gene expression, and the transmission of individual differences in stress reactivity across generations. Annual Review of Neuroscience, 24, 1161-1192.
Medical News Today. (2005). Smart fabric to keep patients healthy. Medical News Today. Retrieved July 1, 2007, from http://www.medicalnewstoday.com/medicalnews.php?newsid=21338
Medical research guidelines on ethics for medical research. (1993). Found at http://www.mrc.ac.za/ethics/epidemiolgical.htm
Medrano, C., & Cortés, P. A. (2007). Teaching and learning of values through television. International Review of Education, 53(1), 5-21.
Meili, C. (2005). The "ten commandments" of nano communication - or how to deal with public perception. Innovation Society. http://www.innovationsgesellschaft.ch/images/publikationen/Ten%20Commandments%20of%20Nano-Communication.pdf
Meller, P. (2006, May 30). Officials downplay EU data privacy concerns. NetworkWorld.com, IDG News Service. Retrieved from http://www.networkworld.com/news/2006/053006-officials-downplay-eu-dataprivacy.html
Memott, M. (2007, January 4). Bush says feds can open mail without warrants. USA Today. Retrieved from http://blogs.usatoday.com/ondeadline/2007/01/bush_says_feds_.html
Mendels, P. (2000, February 16). Online ethics should begin in classroom, educators say. The New York Times, Technology section.
Meng, P. (2005). Podcasting & vodcasting: A white paper: Definitions, discussions & implications. Columbia: University of Missouri, IAT Services. Retrieved April 27, 2007, from http://edmarketing.apple.com/adcinstitute/wp-content/Missouri_Podcasting_White_Paper.pdf
Menkiti, I. A. (1979). Person and community in African traditional thought. In R. A. Wright (Ed.), African philosophy (pp. 157-168). New York: University Press of America.
Merchant, C. (1980). The death of nature: Women, ecology, and the scientific revolution. San Francisco: Harper and Row.
Mercuri, R. T. (2006). Scoping identity theft. Communications of the ACM, 49(5), 17-21.
Merleau-Ponty, M. (1942). La structure du comportement [The structure of behavior]. Paris: Presses Universitaires de France.
Merleau-Ponty, M. (1945). Phénoménologie de la perception [Phenomenology of perception]. Paris: Presses Universitaires de France.
Merriam-Webster. (2004). Dictionary main page. Retrieved February 18, 2004, from http://www.m-w.com
Mertens, D. (1999). Inclusive evaluation: Implications of transformative theory for evaluation. American Journal of Evaluation, 20(1), 1-14.
Merton, R. K., & Kendall, P. L. (1946). The focused interview. American Journal of Sociology, 51, 541-557.
Metros, S., & Woolsey, K. (2006). Visual literacy: An institutional imperative. EDUCAUSE Review, 41(3), 80-81.
Mettler, L. E., Gregg, T. G., & Schaffer, H. G. (1988). Population genetics and evolution (2nd ed.). Englewood Cliffs: Prentice Hall.
Miah, A., & Rich, E. (2006). Genetic tests for ability? Talent identification and the value of an open future. Sport, Education & Society, 11, 259-273.
Miah, A. (2004). Genetically modified athletes: Biomedical ethics, gene doping and sport. London and New York: Routledge.
Miah, A. (2006). Rethinking enhancement in sport. In W. S. Bainbridge & M. C. Roco (Eds.), Progress in convergence: Technologies to improve human well-being. New York Academy of Sciences, 1093, 301-320.
Miah, A., & Eassom, S. B. (Eds.). (2002). Sport technology: History, philosophy & policy. Research in philosophy & technology. Oxford: Elsevier Science.
Michalko, M. (1991). Thinkertoys. Berkeley: Ten Speed.
Michelson, E. S., & Rejeski, D. (2006). Falling through the cracks: Public perception, risk and the oversight of emerging technologies. IEEE Report. http://www.nanotechproject.org/reports
Michelson, L., & Ray, W. J. (1996). Handbook of dissociation: Theoretical, empirical, and clinical perspectives. New York: Plenum Press.
Michie, S., Bobrow, M., & Marteau, T. (2001). Predictive genetic testing in children and adults: A study of emotional impact. Journal of Medical Genetics, 38(8), 519-526.
Michie, S., Lester, K., Pinto, J., & Marteau, T. (2005). Communicating risk information in genetic counseling: An observational study. Health Education and Behavior, 32(5), 589-598.
Microsoft. (2005). Microsoft MapPoint Location Server. Retrieved September 6, 2005, from http://www.microsoft.com/mappoint/products/locationserver/default.mspx
Midgley, G. (1996). What is this thing called CST? In R. L. Flood & N. Romm (Eds.), Critical systems thinking: Current research and practice (pp. 11-24). New York: Plenum Press.
Midgley, G. (1997). Mixing methods: Developing systemic intervention. In J. Mingers & A. Gill (Eds.), Multimethodology: The theory and practice of combining management science methodologies (pp. 249-290). Chichester, UK: John Wiley and Sons.
Midgley, G. (2000). Systemic intervention: Philosophy, methodology and practice. New York: Kluwer Academic/Plenum.
Miles, S. (2002). Ad-Aware maker LavaSoft frustrates Internet advertisers. The Wall Street Journal Online. URL: http://online.wsj.com/article/0,,SB1035830...231.djm,00.html
Miles, S. H., Bannick-Mohrland, S., & Lurie, N. (1990). Advance-treatment planning discussions with nursing home residents: Pilot experience with simulated interviews. Journal of Clinical Ethics, 2, 108-112.
Milford, C., Wassenaar, D., & Slack, C. (2006). Resources and needs of research ethics committees in Africa: Preparation for HIV vaccine trials. IRB: Ethics & Human Research, 28(2), 1-9.
Military uses of nanotechnology [summary of a UN committee report]. Retrieved from http://ubertv/envisioning/clippings/2005/03/005882.html
Miller, C. (2006). Cyber harassment: Its forms and perpetrators. Law Enforcement Technology, 33(4), 26, 28-30.
Miller, C. (2006). Cyber stalking & bullying: What law enforcement needs to know. Law Enforcement Technology, 33(4), 18, 20-22, 24.
Miller, J. G. (1960, February). Information input overload and psychopathology. American Journal of Psychiatry, 116, 695-704.
Miller, K. (1988). Integrating ethics into the computer science curriculum. Computer Science Education, 1(1), 37-52.
Miller, M. M., & Riechert, B. P. (2000). Interest group strategies and journalistic norms: News media framing of environmental issues. In S. Allan, B. Adam, & C. Carter (Eds.), Environmental risks and the media (pp. 45-54). London: Routledge.
Miller, P. (2006). German hackers clone RFID e-passports. Retrieved from http://www.engadget.com
Miller, R. (1991). Interpretations of conflict: Ethics, pacifism, and the just-war tradition. Chicago, IL: University of Chicago Press.
Miller, S., & Weckert, J. (2000). Privacy, the workplace, and the Internet. Journal of Business Ethics, 28, 255-266.
Mills, K., & Fledderman, C. (2005). Getting the best from nanotechnology: Approaching social and ethical implications openly and proactively. IEEE Technology and Society Magazine, 24(4), 18-26.
Milman, N. B., & Kilbane, C. R. (2005). Digital teaching portfolios: Catalysts for fostering authentic professional development. Canadian Journal of Learning and Technology, 31(3). Retrieved April 26, 2006, from http://www.cjlt.ca/content/vol31.3/milman.html
Milne, G. R., Rohm, A. J., & Bahl, S. (2004). Consumers' protection of online privacy and identity. The Journal of Consumer Affairs, 38(2), 217-232.
Milson, A. J. (2001, Spring/Summer). Fostering civic virtue in a high-tech world. International Journal of Social Education, 16(1), 87-93.
Minch, R. (2004). Privacy issues in location-aware mobile devices. In HICSS-37 Proceedings. IEEE Press.
Mingers, J. (1984). Subjectivism and soft systems methodology: A critique. Journal of Applied Systems Analysis, 11, 85-113.
Mingers, J. (1992). Technical, practical and critical OR: Past, present and future? In M. Alvesson & H. Willmott (Eds.), Critical management studies (pp. 90-112). London: Sage.
Mingers, J. (2005). 'More dangerous than an unanswered question is an unquestioned answer': A contribution to the Ulrich debate. Journal of the Operational Research Society, 56(4), 468-474.
Mingers, J., & Gill, A. (1997). Multimethodology: The theory and practice of combining management science methodologies. Chichester, UK: John Wiley & Sons Ltd.
Ministry of Education. (2007). McGuinty government doing more to make schools safer [Electronic version]. Retrieved June 2, 2007 from http://ogov.newswire.ca
Mintzberg, H. (2000). The rise and fall of strategic planning. London: Prentice-Hall (Pearson).
Miquel, C., & Ménard, G. (1988). Les ruses de la technique: Le symbolisme des techniques à travers l'histoire [The ruses of technology: The symbolism of techniques through history]. Montréal: Boréal / Méridiens Klincksieck.
Mish, F. (Ed.). (1987). Webster's ninth new collegiate dictionary. Springfield, MA: Merriam-Webster.
Mishra, A., Akman, I., & Yazici, A. (2006). Software piracy among IT professionals in organizations. International Journal of Information Management, 26(5), 401-413.
Mitcham, C. (1997). Thinking ethics in technology: Hennebach lectures and papers, 1995-1996. Golden, CO: Colorado School of Mines Press.
Mitcham, C. (1994). Thinking through technology: The path between engineering and philosophy. Chicago: Chicago University Press.
Mitcham, C. (2005). Encyclopedia of science, technology, and ethics. Detroit: Macmillan Reference USA.
Mitcham, C., Briggle, A., & Ryder, M. (2005). Technology overview. In The encyclopedia of science, technology, and ethics. Stamford, CT: Thompson and Gale.
Mitchell, K. J., Wolak, J., & Finkelhor, D. (2007). Trends in youth reports of sexual solicitations, harassment and unwanted exposure to pornography on the Internet. Journal of Adolescent Health, 40(2), 116-126.
Mitchell, K., Wolak, J., & Finkelhor, D. (2005). Police posing as juveniles online to catch sex offenders: Is it working? Sexual Abuse: A Journal of Research and Treatment, 17(3), 241-267.
Mitchell, W. J. (1999). Telematics takes command: E-topia. Cambridge, MA: MIT Press.
Mitnick, K. (2002). The art of deception. New York: Cyber Age Books.
Mitrakas, A. (2006). Information security and law in Europe: Risks checked? Information and Communications Technology Law, 15(1), 33-53.
Mitteroecker, P., Gunz, P., Bernhard, M., Schaefer, K., & Bookstein, F. (2004). Comparison of cranial ontogenetic trajectories among great apes and humans. Journal of Human Evolution, 46, 679-698.
Mnyusiwalla, A., Daar, A. S., & Singer, P. A. (2003). Mind the gap: Science and ethics in nanotechnology. Nanotechnology, 14, R9-R13. http://www.utoronto.ca/jcb/pdf/nanotechnology_paper.pdf
Mobileinfo.com. (2002). Location-based services. Retrieved October 12, 2004, from http://www.mobileinfo.com/locationbasedservices/market_outlook.htm
Mockus, A., Fielding, R. T., & Herbsleb, J. D. (2002). Two case studies of open source software development: Apache and Mozilla. ACM Transactions on Software Engineering and Methodology, 11(3), 309-346.
Mohoney, D. (2006, September 1). Same threats, different technology. MRT Magazine. Retrieved from http://mrtmag.com/mag/radio_threats_different_technology/
Moinian, F. (2006). The construction of identity on the Internet: Oops! I've left my diary open to the whole world! Childhood, 13(1), 49-68.
Molander, R., & Siang, S. (1998, Fall). The legitimization of strategic information warfare: Ethical considerations. AAAS Professional Ethics Report, 11(4). Retrieved November 23, 2005, from www.aaas.org/spp/sfrl/sfrl.htm
Molnar, A. R. (1969, June). Ten years of educational technology. Educational Broadcasting Review, 3(3), 52-59.
Molnar, S. (2002). Human variation: Races, types, and ethnic groups. Upper Saddle River: Prentice Hall.
Montagu, A. (1941). The concept of race in the human species in the light of genetics. The Journal of Heredity, 32, 243-247.
Montagu, A. (1942). The genetical theory of race, and anthropological method. American Anthropologist, 44(3), 369–375.
Moor, J. H. (2005). Why we need better ethics for emerging technologies. Ethics and Information Technology, 7, 111-119.
Montagu, A. (1997). Man’s most dangerous myth: the fallacy of race. 6th edition. Walnut Creek: AltaMira Press.
Moor, J. H. (1997). Toward a theory of privacy for the information age. Computers and Society, 27(3), 27-32.
Montgomery, K. C. (2007). Generation digital: Politics, commerce, and childhood in the age of the Internet. Cambridge, MA: MIT Press.
Moor, J.H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18-21.
Moon, J. (1999). How to ... stop students from cheating. The Times Higher Education Supplement, September 3, 1999. Retrieved May 17, 2007, from http://72.14.253.104/search?q=cache:2IyJTIUJdhUJ:www.jiscpas.ac.uk/images/bin/underwoodtertiary.doc+%22The+Times+Higher+Education+Supplement+September+3,+1999&hl=en&ct=clnk&cd=1
Moore, A. (2000). Employee monitoring and computer technology: Evaluative surveillance v. privacy. Business Ethics Quarterly, 10, 697-710.
Moor, J., & Weckert, J. (2004). Nanoethics: Assessing the nanoscale from an ethical point of view. In D. Baird, A. Nordmann, & J. Schummer (Eds.), Discovering the nanoscale (pp. 301-310). Amsterdam: IOS Press.
Moor, J. (1989). How to invade and protect privacy with computer ethics. In C. C. Gould (Ed.), The information web (pp. 57-70). Boulder: Westview Press.
Moor, J. (1997). Towards a theory of privacy in the information age. In J. van den Hoven (Ed.), Computer ethics: Philosophical enquiry. Department of Philosophy, Erasmus University.
Moor, J. (1998). Reason, relativity, and responsibility in computer ethics. Computers and Society, 28, 14-21.
Moor, J. H. (1985). What is computer ethics? In T. W. Bynum (Ed.), Computers and ethics (pp. 266-275). Basil Blackwell.
Moor, J. H. (1995). What is computer ethics? In D. G. Johnson & H. Nissenbaum (Eds.), Computers, ethics & social values (pp. 7-15). Saddle River, NJ: Prentice Hall.
Moore, J. H. (1994). Putting anthropology back together again: The ethnogenetic critique of cladistic theory. American Anthropologist, 96(4), 925-948.
Moores, T. T., & Dhaliwal, J. (2004). A reversed context analysis of software piracy issues in Singapore. Information & Management, 41, 1037-1042.
Moores, T. T. (2003). The effect of national culture and economic wealth on global software piracy rates. Communications of the ACM, 46(9), 207-215.
Mora, M., Gelman, O., Cervantes, F., Mejia, M., & Weitzenfeld. (2003). A systemic approach for the formalization of information systems concept: Why information systems are systems? In J. Cano (Ed.), Critical reflections in information systems: A systemic approach (pp. 1-29). Hershey, PA: Idea Group Publishing.
Moravec, H. (1998). ROBOT: Mere machine to transcendent mind. Oxford University Press.
Morell, V. (1998). Kennewick Man's contemporaries. Science, 280(5361), 191.
Moreno, J. D. (2001). Goodbye to all that: The end of moderate protectionism in human subject research. The Hastings Center Report, 31(3), 9-17.
Moreno, J. (2001). Undue risk: Secret state experiments on humans. New York: Routledge.
Moreno, J. D. (2000). Undue risk: Secret state experiments on humans. New York: Freeman & Co.
Moreno, J. D. (2006, November 10). The role of brain research in national defense. Chronicle of Higher Education Review, B6-B7.
Moriarty, D. E., Schultz, A. C., & Grefenstette, J. J. (1999). Evolutionary algorithms for reinforcement learning. Journal of Artificial Intelligence Research, 11, 199-229.
Morone, J. A. (2003). Hellfire nation: The politics of sin in American history. New Haven and London: Yale University Press.
Morris, T., Terra, E., Miceli, D., & Domkus, D. (2005). Podcasting for dummies. For Dummies Series.
Morton, S. (1839). Crania Americana. Philadelphia: J. Dobson.
Moss, J. (2002). Power and the digital divide. Ethics and Information Technology, 4(2), 159-165.
Moss, L. (2001). Deconstructing the gene and reconstructing molecular developmental systems. In S. Oyama, P. Griffiths, & R. D. Gray (Eds.), Cycles of contingency: Developmental systems and evolution (pp. 85-97). Cambridge, MA: MIT Press.
Motion Picture Association of America & Junior Achievement (2003). What's the diff? JA.org. Retrieved July 21, 2004, from http://www.ja.org/programs/programs_supplements_citizenship.shtml
Mottus, R., Whitehead, I., Ogrady, M., Sobel, R., Burr, R. H. L., Spiegelman, G., & Grigliatti, T. (1997). Unique gene organization: Alternative splicing in Drosophila produces two structurally unrelated proteins. Gene, 198(1-2), 229-236.
Muffoletto, R. (2003, November/December). Ethic: A discourse of power. TechTrends, 47(6), 62-67.
Mulago, V. (1969). Vital participation: The cohesive principle of the Bantu community. In K. Kickson & P. Ellinworth (Eds.), Biblical revelation and African beliefs (pp. 137-158). London: Butterworth.
Mulvey, L. (1975). Visual pleasure and narrative cinema. Screen, 16, 6-18.
Muotri, A. R., Nakashima, K., Toni, N., Sandler, V. M., & Gage, F. H. (2005). Development of functional human embryonic stem cell-derived neurons in mouse brain. Proceedings of the National Academy of Sciences of the United States of America, 102(51), 18644-18648.
Murdock, G., Petts, J., & Horlick-Jones, T. (2003). After amplification: Rethinking the role of the media in risk communication. In N. Pidgeon, R. E. Kasperson, & P. Slovic (Eds.), The social amplification of risk (pp. 156-178). Cambridge: Cambridge University Press.
Murphy, Timothy F. (in progress). Ethics in military medicine.
Murray, T. (1997). Genetic exceptionalism and "future diaries": Is genetic information different from other medical information? In M. Rothstein (Ed.), Genetic secrets: Protecting privacy and confidentiality (pp. 60-73). New Haven: Yale University Press.
Murray, T. H. (1983). The coercive power of drugs in sports. Hastings Center Report, August, 24-30.
Murray, T. H. (1984). Drugs, sports, and ethics. In T. H. Murray, W. Gaylin, & R. Macklin (Eds.), Feeling good and doing better. Clifton, New Jersey: Humana Press.
Murray, T. H. (1986). Guest editorial: Drug testing and moral responsibility. The Physician and Sportsmedicine, 14(11), 47-48.
Murray, T. H. (1987). The ethics of drugs in sport. In Drugs and performance in sports. London: W.B. Saunders Company.
Muskavitch, K. M. T. (2005). Cases and goals for ethics education: Commentary on "Connecting case-based ethics instruction with educational theory." Science and Engineering Ethics, 11(3), 431-434.
Mutz, D. (2005). Social trust and e-commerce: Experimental evidence for the effects of social trust on individuals' economic behavior. Public Opinion Quarterly, 69(3), 393-416.
Myers, L., Pasternak, D., Gardella, R., and the NBC Investigative Unit (2005, December 14). Is the Pentagon
spying on Americans? MSNBC. Retrieved from http://www.msnbc.msn.com/id/10454316/
Myers, L. B., & McCaulley, M. H. (1985). Manual: A guide to the development and use of the Myers-Briggs Type Indicator. Palo Alto, CA: Consulting Psychologists Press, Inc.
Nadeau, J. E. (2006). Only androids can be ethical. In K. Ford & C. Glymour (Eds.), Thinking about android epistemology (pp. 241-248). Menlo Park, CA: AAAI Press (American Association for Artificial Intelligence); Cambridge, MA: MIT Press.
Nakada, M. (2005). Are the meanings of Japanese popular songs part of Seken-Jinseikan meanings? Ronsyuu-gendaibunnka-koukyouseisaku, 1(1), 105-130.
Nakada, M. (2006). Privacy and Seken in Japanese information society: Privacy within Seken as old and indigenous world of meaning in Japan. In F. Sudweeks, H. Hrachovec, & C. Ess (Eds.), Cultural attitudes towards technology and communication 2006 (pp. 564-579). Perth: Murdoch University.
Nakada, M. (2007). Japanese traditional values behind political interests in the information age: Japanese political minds within Seken as old and indigenous world of meaning in Japan. In T. W. Bynum, S. Rogerson, & K. Murata (Eds.), ETHICOMP 2007: Globalization: Bridging the Global Nature of Information and Communication Technology and the Local Nature of Human Beings (pp. 427-435). Tokyo: Meiji University.
Nakada, M., & Capurro, R. (2007). The public/private debate between the "Far West" and the "Far East". (Unpublished six-month e-mail debate between Rafael Capurro and Makoto Nakada. This debate provided the authors with theoretical foundations and the empirical material underlying this chapter, and will be published on Capurro's website: http://www.capurro.de/)
Nakada, M., & Tamura, T. (2005). Japanese conceptions of privacy: An intercultural perspective. Ethics and Information Technology, 7, 27-36.
Nance, K. L., & Strohmaier, M. (1994). Ethical accountability in the cyberspace. Ethics in the Computer Age (pp. 115-118). Gatlinburg, TN: ACM.

Nano Ethics Bank (2007). Nano ethics bank. Retrieved May 17, 2007, from http://hum.iit.edu/NanoEthicsBank/index.php

NanoForum report (2004). Benefits, risks, ethical, legal and social aspects of nanotechnology, June. http://med.tn.tudelft.nl/~hadley/nanoscience/references/nanoforum1.pdf

Naraine, M. (2006, March 15). Dutch researchers create RFID malware. eWeek.

Narcissistic, but in a good way. (2007). 199864 (983), 6-6.

Nardin, T. (Ed.) (1998). The ethics of war and peace. Princeton, NJ: Princeton University Press.

Narvaez, D. (2001). Moral text comprehension: Implications for education and research. Journal of Moral Education, 30(1), 43-54.

Narvaez, D. (in press). The Neo-Kohlbergian tradition and beyond: Schemas, expertise and character. In C. Pope-Edwards & G. Carlo (Eds.), Nebraska Symposium Conference Papers, Vol. 51. Lincoln, NE: University of Nebraska Press.

Narvaez, D., & Rest, J. (1995). The four components of acting morally. In W. Kurtines & J. Gewirtz (Eds.), Moral behavior and moral development: An introduction (pp. 385-400). New York: McGraw-Hill.

Narvaez, D., Bock, T., & Endicott, L. (2003). Who should I become? Citizenship, goodness, human flourishing, and ethical expertise. In W. Veugelers & F. K. Oser (Eds.), Teaching in moral and democratic education (pp. 43-63). Bern, Switzerland: Peter Lang.

Narvaez, D., Endicott, L., Bock, T., & Lies, J. (in press). Foundations of character in the middle school: Developing and nurturing the ethical student. Chapel Hill, NC: Character Development Publishing.
Nash, R. J. (2002). Real world ethics: Frameworks for educators and human service professionals (2nd ed.). New York: Teachers College Press, Columbia University.

Nathan, O. & Norden, H. (1960). Einstein on peace. New York: Simon and Schuster.

National Cancer Institute. (2000). Research-based web design usability guidelines. Retrieved June 2, 2004, from http://www.usability.gov/guidelines/index.html

National Center for Education Statistics (NCES) (n.d.). Search for public school districts. Retrieved May 21, 2007, from http://nces.ed.gov/ccd/districtsearch/district_detail.asp?start=0&ID2=3904696

National Partnership for Women and Families on behalf of the Coalition for Genetic Fairness (2004). Faces of genetic discrimination: How genetic discrimination affects real people. Retrieved February 28, 2005, from http://www.nationalpartnership.org/site/DocServer/FacesofGeneticDiscrimination.pdf?docID=971

National Public Radio. (2001, August 29). All things considered. Retrieved September 9, 2003, from http://www.npr.org/ramfiles/atc/20010829.atc.14.ram

National White Collar Crime Center and the Federal Bureau of Investigation. (2007). Internet crime report, January 1, 2006 – December 31, 2006. Retrieved May 28, 2007, from http://www.ic3.gov/media/annualreport/2006_IC3Report.pdf

National Workrights Institute. (2004). Electronic monitoring in the workplace: Common law and federal statutory protection. Retrieved October 12, 2004, from http://www.workrights.org/issue_electronic/em_common_law.html

National Workrights Institute. (2005). On your tracks: GPS tracking in the workplace. Retrieved October 10, 2005, from http://www.workrights.org/issue_electronic/NWI_GPS_Report.pdf

Neale, S. (1986). Sexual difference in the cinema: Issues of fantasy, narrative and the look. Oxford Literary Review, 8, 123-132.
Neelameghan, A. (1981). Some issues in information transfer: A third world perspective. International Federation of Library Associates (IFLA) Journal, 7, 8-18.

Negroponte, N. (1995). Being digital. London: Hodder and Stoughton.

Nei, M., & Roychoudhury, A.K. (1997). Genetic relationship and evolution of human races. In N.E. Gates (Ed.), The concept of ‘race’ in the natural and social sciences (pp. 29–88). New York: Garland Publishing Inc.

Nelkin, D. (1995). Selling science: How the press covers science and technology. New York: W.H. Freeman and Company.

Netzer, C., & Biller-Andorno, N. (2004). Pharmacogenetic testing, informed consent and the problem of secondary information. Bioethics, 18(4), 344-360.

Nevgi, A., Virtanen, P. & Niemi, H. (2006). Supporting students to develop collaborative learning skills in technology-based environments. British Journal of Educational Technology, 37(6), 937-947.

New approaches to HIV prevention: Accelerating research and ensuring future access. Global HIV Prevention Working Group, August 2006. http://www.paho.org

Newby, T. J., Stepich, D. A., Lehman, J., & Russell, J. D. (2000). Instructional technology for teaching and learning (2nd ed.). Englewood Cliffs, NJ: Merrill.

Newitz, A. (2004). The RFID hacking underground. Wired Magazine, 13(12). Retrieved from http://www.wired.com/wired/archive/14.05/rfid_pr.html

Newitz, A. (2006, November 30). Nike + iPod = surveillance. Wired Magazine. Retrieved from http://www.wired.com/science/discoveries/news/2006/11/72202

Newman, D. (1990). Opportunities for research on the organizational impact of school computers. Educational Researcher, 19(3), 8-13.

Newman, M. (2006). Internet site hires official to oversee users’ safety. The New York Times, C3(L).
Ng’etich, K. A. (2001). Harnessing computer-mediated communication technologies in the unification of Africa: Constraints and potentials. In E. Maloka & E. le Roux (Eds.), Africa in the new millennium: Challenges and prospects (pp. 77-85). Pretoria: Africa Institute of South Africa.

Nichols, R. G. (1987). Toward a conscience: Negative aspect of educational technology. Journal of Visual/Verbal Languaging, 7(1), 121-137.

Nichols, R. G. (1994). Searching for moral guidance about educational technology. Educational Technology, 34(2), 40-48.

Nicholson, P. J. M. (2007, March 9). The intellectual in the infosphere. The Chronicle of Higher Education.

Nisbet, L. (1977). The ethics of the art of teaching. In S. Hook, P. Kurtz & M. Todorovich (Eds.), The ethics of teaching and scientific research. Buffalo, NY: Prometheus Books.

Nisbet, M.D., Brossard, D., & Kroepsch, A. (2003). Framing science: The stem cell controversy in an age of press/politics. Harvard International Journal of Press/Politics, 8(2), 36-70.

Nisbet, M.D. & Lewenstein, B.V. (2002). Biotechnology and the American media: The policy process and the elite press, 1970 to 1999. Science Communication, 23(4), 359-391.

Nissenbaum, H. (1995). Should I copy my neighbor’s software? In D. Johnson & H. Nissenbaum (Eds.), Computers, ethics, and social responsibility. New Jersey: Prentice-Hall.

Noe, R., Hollenbeck, J., Gerhart, B., & Wright, P. (2004). Fundamentals of human resource management. New York: McGraw Hill Irwin.

Nolan, D. (2003). Privacy and profitability in the technological workplace. Journal of Labor Research, 24, 207-232.
Nolo. (2006). Intellectual property. Retrieved March 19, 2006, from http://www.nolo.com/definition.cfm/Term/519BC07C-FA49-4711-924FD1B020CABA92/alpha/I/

Norman, C. D., & Skinner, H. A. (2006). eHEALS: The eHealth Literacy Scale. Journal of Medical Internet Research, 8(4), e27.

Norman, C. D., & Skinner, H. A. (2006). eHealth literacy: Essential skills for consumer health in a networked world. Journal of Medical Internet Research, 8(2), e9.

Norman, C. D., & Skinner, H. A. (in press). Engaging youth in eHealth promotion: Lessons learned from a decade of TeenNet research. Adolescent Medicine: State of the Art Reviews, 18(2), 357-369.

Norman, C. D., Maley, O., & Skinner, H. A. (2000). CyberIsle: Using information technology to promote health in youth. CyberMed Catalyst, 1(2).

Norman, C. D., Maley, O., Li, X., & Skinner, H. A. (in press). Using the Internet to initiate and assist smoking prevention and cessation in schools: A randomized controlled trial. Health Psychology.

Norris, C. & Armstrong, G. (1999). The maximum surveillance society: The rise of CCTV. Oxford: Berg.

Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the Internet worldwide (communication, society and politics). Cambridge: Cambridge University Press.

Norwegian Parliament. (2000). Act of 14 April 2000 No. 31, relating to the processing of personal data (Personal Data Act). Retrieved October 12, 2004, from http://www.personvern.uio.no/regler/peol_engelsk.pdf

Novas, C., & Rose, N. (2000). Genetic risk and the birth of the somatic individual. Economy and Society, 29(4), 485-513.

Nucleus Research. (2007). Research note. Spam: The repeat offender. Retrieved May 21, 2007, from http://www.nucleusresearch.com/research/h22.pdf

Nuffield Council on Bioethics. (2002). The ethics of research related to healthcare in developing countries. London: Nuffield Council on Bioethics.
Nuffield Council on Bioethics. (2005). A follow up discussion paper: The ethics of research related to healthcare in developing countries. London: Nuffield Council on Bioethics.

Nunes, J. (2004, Spring). The uncertain and the unruly: Complexity and singularity in biomedicine and public health. Paper presented at The Annual New England Workshop on Science and Social Change (NewSSC), Woods Hole, MA.

Nys, H., Dreezen, I., Vinck, I., Dierick, K., Dequeker, E., & Cassiman, J. J. (2002). Genetic testing: Patients’ rights, insurance and employment. A survey of regulations in the European Union. European Commission, Directorate-General for Research.

O’Brien, G. V. (1999). Protecting the social body: Use of the organism metaphor in fighting the “menace of the feeble-minded”. Mental Retardation, 37(3), 188–200.

O’Connell, K. (2005). The devouring: Genetics, abjection, and the limits of law. In M. Shildrick & R. Mykitiuk (Eds.), Ethics of the body: Postconventional challenges. MIT Press.

O’Connor, T., & Wong, H. (2002). Emergent properties. The Stanford Encyclopedia of Philosophy (Winter). Retrieved December 30, 2003, from http://plato.stanford.edu/archives/win2002/entries/properties-emergent/

O’Neill, B., & Xiao, J. J. (2005). Consumer practices to reduce identity theft risk: An exploratory study. Journal of Family and Consumer Sciences, 97, 33-38.

O’Neill, J. (2002). The rhetoric of deliberation: Some problems in Kantian theories of deliberative democracy. Res Publica, 8, 249-268.

O’Neill, O. (1990). Constructions of reason: Explorations of Kant’s practical philosophy. Cambridge: Cambridge University Press.

O’Neill, O. (2001). Informed consent and genetic information. Studies in History and Philosophy of Biological and Biomedical Sciences, 32(4), 689-704.

O’Neill, O. (2002). Autonomy and trust in bioethics (Gifford Lectures, 2001). Cambridge University Press.
O’Neill, T. (1995). Implementation frailties of Guba and Lincoln’s fourth generation evaluation theory. Studies in Educational Evaluation, 21(1), 5-21.

O’Reilly, K. (2005). Ethnographic methods. New York: Routledge.

O’Toole, L. (1998). Pornocopia: Porn, sex, technology and desire. London: Serpent’s Tail.

Occupational and Environmental Health Center. (2004). Ergonomic technology center. Retrieved September 18, 2006, from http://www.oehc.uchc.edu/ergo/index.htm

OECD (2000). Guidelines on the protection of privacy and transborder flows of personal data. Organization for Economic Co-operation and Development (OECD).

OECD (2001). OECD health data 2001: A comparative analysis of 30 countries. Paris: OECD.

OECD (2003). Privacy online: Policy and practical guidance. Organization for Economic Co-operation and Development (OECD).

OECD (2005). Informe de la Organización para la Cooperación y el Desarrollo Económicos (OCDE) [Report of the Organisation for Economic Co-operation and Development (OECD)]. Retrieved April 15, 2007, from http://www.oecd.org/home/0,2987,en_2649_201185_1_1_1_1_1,00.html (Consulted December 2004.)

OECD (2006). Report of cross-border enforcement of privacy laws. Organization for Economic Co-operation and Development (OECD).

Office of Community Oriented Policing Services. (2006). National strategy to combat identity theft. Retrieved May 27, 2007, from http://www.ncjrs.gov/App/Publications/AlphaList.aspx

Office of Government Commerce (2002). Open source software: Guidance on implementing UK Government policy. Retrieved May 17, 2007, from www.ogc.gov.uk/index.asp?id=2190

Office of Government Commerce (2004). Government open source software trials final report. Retrieved May 17, 2007, from www.ogc.gov.uk/index.asp?id=2190
Ogundipe-Leslie, M. (1993). African women, culture and another development. In S. M. James & A.P.A. Busia (Eds.), Theorising black feminism: The visionary pragmatism of black women (pp. 102-117). London: Routledge.

Ohbuchi, K. Y., & Takahashi, Y. (1994). Cultural styles of conflict management in Japanese and Americans: Passivity, covertness and effectiveness of strategies. Journal of Applied Social Psychology, 24(15), 1345-1366.

Ohkubo, M., Suzuki, K., & Kinoshita, S. (2005). RFID privacy issues and technical challenges. Communications of the ACM, 48(9), 66-71.

Oka, T. & Macer, D. R. J. (2000). Change in high school student attitudes to biotechnology in response to teaching materials. Eubios Journal of Asian & International Bioethics, 10(6), 174-179.

Olcott, D. (2002). Ética y tecnología: Desafíos y elecciones inteligentes en una sociedad tecnoética [Ethics and technology: Challenges and intelligent choices in a technoethical society]. In D.E. Hanna (Ed.), La enseñanza universitaria en la era digital (pp. 225-243). Barcelona: EUB.

Oliga, J. (1996). Power, ideology and control. New York: Plenum.

Olinger, H. N., Britz, J. J., & Olivier, M. S. (2007). Western privacy and/or Ubuntu? Some critical comments on the influences in the forthcoming data privacy bill in South Africa. International Information and Library Review, 39, 31-43.

Olsen, S. (2007, January 9). RFID coming soon to scooters, diapers. ZDNet. Retrieved from http://www.zdnet.com.au/news/hardware/soa/RFID_coming_to_scooters_diapers/0,130061702,339272981,00.htm?ref=search
Olt, M. R. (2002). Ethics and distance education: Strategies for minimizing academic dishonesty in online assessment. Online Journal of Distance Learning Administration, 5(3).

Onvomaha, P. T., Kass, N., & Akweongo, P. (2006). The informed consent process in a rural African setting: A case study of the Kasena-Nankana District of Northern Ghana. IRB Ethics and Human Research, 28(3), 1-6.

Open University. Accessed October 2, 2002, from http://www.open.ac.uk.ezproxy.uow.edu.au:2004

Oprean, C. (2006). The Romanian contribution to the development of engineering education. Global Journal of Engineering Education, 10(1), 45-50.

Organization for Economic Cooperation and Development. (2000). OECD guidelines on the protection of privacy and transborder flows of personal data. Paris: OECD Publication Service. Retrieved October 12, 2004, from http://www1.oecd.org/publications/ebook/9302011E.pdf

Ormerod, R. (1996). Information systems strategy development at Sainsbury’s supermarket using “soft” OR. Interfaces, 16(1), 102-130.

Ormerod, R. (2005). Putting soft OR methods to work: The case of IS strategy development for the UK Parliament. Journal of the Operational Research Society, 56(12), 1379-1398.

Ortega Carrillo, J.A. & Chacón Medina, A. (Coords.) (2007). Nuevas tecnologías para la educación en la era digital [New technologies for education in the digital era]. Madrid: Ediciones Pirámide.

Ortega Carrillo, J.A. & García Martínez, F.A. (2007). Ética en los medios de comunicación e Internet: Promoviendo la cultura de paz [Ethics in the media and the Internet: Promoting the culture of peace]. In J.A. Ortega Carrillo & A. Chacón Medina (Coords.), Nuevas tecnologías para la educación en la era digital (pp. 331-353). Madrid: Ediciones Pirámide.

Ortega Ruíz, P. (2004). La educación moral como pedagogía de la alteridad [Moral education as a pedagogy of otherness]. Revista Española de Pedagogía, 227, 5-30.

Ostbye, T., & Hurlen, P. (1997). The electronic house call: Consequences of telemedicine consultation for physicians, patients, and society. Archives of Family Medicine, 6(3), 266-271.

Oswell, D. (1999). The dark side of cyberspace: Internet content regulation and child protection. Convergence, 5(4), 42-62.

Oswell, D. (2006). When images matter: Internet child pornography, forms of observation and an ethics of
the virtual. Information, Communication and Society, 9(2), 244-265.

Overman, S. (2003, October 28). EU directives drive changes in UK employment law. HR News — Society for Human Resource Management.

Ow, J. (2000). The revenge of the yellowfaced cyborg: The rape of digital geishas and the colonization of cyber-coolies in 3D Realms’ Shadow Warriors. In B. Kolko (Ed.), Race in cyberspace (pp. 51-67). New York: Routledge.

Owen, R. (1965). The patrilocal band: A linguistically and culturally hybrid social unit. American Anthropologist, 67, 675–690.

Owne, E. et al. (2004, October 29). GPS—privacy issues. M/Cyclopedia. Retrieved from http://wiki.media-culture.org/au/index.php/GPS_-_Privacy_Issues

Oyama, S. (2000). Causal democracy and causal contributions in developmental systems theory. Philosophy of Science, 67, 332-347.

Pack, T. (2006). Keeping cyberteens safe. Information Today, 23(4), 37-39.

Padian, N.S., van der Straten, A., Ramjee, G., Chipato, T., de Bruyn, G., Blanchard, K., Shiboski, S., Montgomery, E.T., Fancher, H., Cheng, H., Rosenblum, M., van der Laan, Jewell, N., & McIntyre, J. (2007). Diaphragm and lubricant gel for prevention of HIV acquisition in southern African women: A randomized controlled trial. Lancet. DOI: 10.1016/S0140-6736(07)60950-7. www.thelancet.com

Paget, F. (2007). Identity theft. McAfee Avert Labs technical white paper No. 1. Retrieved May 27, 2007, from http://www.mcafee.com/us/local_content/white_papers/wp_id_theft_en.pdf

Paige, R. (2004). Toward a new golden age in American education. National educational technology plan website. Retrieved May 25, 2005, from http://www.nationaledtechplan.org/

Palloff, R., & Pratt, K. (1999). Building learning communities in cyberspace. San Francisco: Jossey-Bass.
Pandian, C. & Macer, D. R. J. (1998). An investigation in Tamil Nadu with comparisons to Australia, Japan and New Zealand. In Azariah J., Azariah H., & Macer D.R.J. (Eds.), Bioethics in India (pp. 390-400). Christchurch: Eubios Ethics Institute. Pang, M.C., Leung, W.K., Pang, L.Z. & Shi, Y.X. (2005b). Prognostic awareness, will to live and health care expectation in patients with terminal cancer. Chinese Medical Ethics, 18(5), 28-31. Pang, M.C., Volicer, L. & Leung, W.K. (2005c). An empirical analysis of making life-sustaining treatment decisions for patients with advanced dementia in the United States. Chinese Journal of Geriatrics, 24(4), 300-304. Pang, M.C., Wong, K.S., Dai, L.K., Chan, K.L. & Chan, M.F. (2006). A comparative analysis of Hong Kong general public and professional nurses’ attitude towards advance directives and the use of life-sustaining treatment in end-of-life care. Chinese Medical Ethics, 3(107), 11-15. Pang, M.C.S. (2003). Nursing ethics in modern China: Conflicting values and competing role requirements. Amsterdam-New York: Rodopi. Pang, S.M.C. (1999). Protective truthfulness: The Chinese way of safeguarding patients in informed treatment decisions. Journal of Medical Ethics, 25, 247-253. Pang, S.M.C., Chan, K.S., Chung, B.P.M., Lau, K.S., Leung, E.M.F., Leung, A.W.K., Chan, H.Y.L. & Chan, M.F. (2005a). Assessing quality of life of patients with advanced chronic obstructive pulmonary disease in the end of life. Journal of Palliative Care, 21(3), 180-187. Pang, S.M.C., Tse, C.Y., Chan, K.S., Chung, B.P.M., Leung, A.K.A., Leung, E.M.F. & Ko, S.K.K. (2004). An empirical analysis of the decision-making of limiting life-sustaining treatment for patients with advanced chronic obstructive pulmonary disease in Hong Kong, China. Journal of Critical Care, 19(3), 135-144. Pangaro, P. (1991). Cybernetics: A definition. In Macmillan encyclopedia of computers. Macmillan Publishing.
Panteli, N., & Fineman, S. (2005). The sound of silence: The case of virtual team organising. Behaviour & Information Technology, 24(5), 347-352.

Panteli, N., & Sockalingham, S. (2004). Trust and conflict within virtual inter-organizational alliances: A framework for facilitating knowledge sharing. Decision Support Systems, 39, 599-617.

Pantoja, A. (2004). La intervención psicopedagógica en la sociedad de la información: Educar y orientar con nuevas tecnologías [Psychopedagogical intervention in the information society: Educating and guiding with new technologies]. Madrid: Editorial EOS.

Papert, S., & Harel, I. (1991). Constructionism. Norwood, NJ: Ablex.

Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books.

Paradice, D.B. & Dejoie, R.M. (1988). Ethics and MIS: A preliminary investigation. Decision Sciences Institute Proceedings - 1988, Las Vegas, pp. 598-600.

Parascandola, M., Hawkins, J., & Danis, M. (2002). Patient autonomy and the challenge of clinical uncertainty. Kennedy Institute of Ethics Journal, 12(3), 245-264.
Parasitic Computing. (2001). Retrieved September 9, 2003, from http://www.nd.edu/~parasite/

Parent, W. A. (1983). Privacy, morality and the law. Philosophy & Public Affairs, 12(4), 269-288.

Parfit, D. (1986). Reasons and persons. Oxford University Press.

Parker, D. B. (1968). Rules of ethics in information processing. Communications of the ACM, 11(3), 198-201.

Parker, D. B. (1981). Ethical conflicts in computer science and technology. Arlington, VA: AFIPS Press.

Parker, D. B. (1982). Ethical dilemmas in computer technology. In W. M. Hoffman & J. M. Moore (Eds.), Ethics and the management of computer technology. Cambridge, MA: Oelgeschlager, Gunn & Hain.

Parker, D. B. (1990). Ethical conflicts in information and computer science, technology, and business. Wellesley, MA: QED Information Sciences.

Parker, M. M., Benson, R., & Trainor, H. E. (1988). Information economics: Linking business performance to information technology. Englewood Cliffs, NJ: Prentice Hall.

Parker, R. (1974). A definition of privacy. Rutgers Law Review, 27, 275-296.

Parliament of Australia Parliamentary Library. (2004). Civil and human rights. Retrieved October 14, 2004, from http://www.aph.gov.au/library/intguide/law/civlaw.htm

Parrish, D. M. (1999). Scientific misconduct and correcting the scientific literature. Academic Medicine, 74(3), 221-230.

Pascual-Leone, J. (1978). La teoría de los operadores constructivos [The theory of constructive operators]. In J. Delval (Comp.), Lecturas de psicología del niño (pp. 25-36). Madrid: Alianza.

Pask, G. (1975). Conversation, cognition and learning. New York: Elsevier.

Passmore, J. (1974). Man’s responsibility for nature. London: Duckworth.

Patton, J. W. (2000). Protecting privacy in public? Surveillance technologies and the value of public places. Ethics and Information Technology, 2, 181–187.

Paul, D. (1995). Controlling human heredity: 1865 to the present. Atlantic Highlands, NJ: Humanities Press.

Payne, J., & Bettman, J. (2001). Preferential choice and adaptive strategy use. Ch. 9 in Gigerenzer & Selten (pp. 124-145).
PC Webopedia. (2004). Dictionary main page. Retrieved February 26, 2004, from http://www.pcwebopedia.com

Pelesko, J. A. (2005, July 24-27). Self assembly promises and challenges. In Proceedings of International Conference on MEMS, NANO and Smart Systems 2005 (pp. 427-428). IEEE.

Pellegrino, E. (1981). Being ill and being healed: Some reflections on the grounding of medical morality. Bulletin of the New York Academy of Medicine, 57(1), 70-79.
Pellegrino, E., & Thomasma, D. (1993). The virtues in medical practice. New York: Oxford University Press.

Peltonen, L., & McKusick, V.A. (2001). Genomics and medicine: Dissecting human disease in the postgenomic era. Science, 291, 1224-1229.

Penrose, L.S. (1954). Distance, size and shape. Annals of Eugenics, 18, 337–343.

Perednia, D., & Allen, A. (1995). Telemedicine technology and clinical applications. The Journal of the American Medical Association, 273(6), 483-488.

Perednia, D., & Brown, N. (1995). Teledermatology: One application of telemedicine. Bulletin of the Medical Library Association, 83(1), 42-47.

Peretz, H. (1998). Les méthodes en sociologie: l’observation [Methods in sociology: Observation]. Paris: La Découverte.

Perrolle, J. (1987). Computers and social change: Information, property, and power. Belmont, CA: Wadsworth Publishing.

Peter, J., & Valkenburg, P. M. (2006). Adolescents’ exposure to sexually explicit online material and recreational attitudes toward sex. Journal of Communication, 56(4), 639-660.

Petersen, A. (1999). The portrayal of research into genetic-based differences of sex and sexual orientation: A study of “popular” science journals, 1980-1997. Journal of Communication Inquiry, 23(2), 163-182.

Petersen, A. (2001). Biofantasies: Genetics and medicine in the print news media. Social Science and Medicine, 52, 1255-1268.

Petersen, A., Anderson, A. & Allan, S. (2005). Science fiction/science fact: Medical genetics in fictional and news stories. New Genetics and Society, 24(3), 337-353.

Petersen, A., Anderson, A., Allan, S., & Wilkinson, C. (in press). Opening the black box: Scientists’ views on the role of the mass media in the nanotechnology debate. Public Understanding of Science.

Petersen, A., Anderson, A., Wilkinson, C. & Allan, S. (2007). Nanotechnologies, risk and society: Editorial. Health, Risk and Society, 9(2), 117-124.

Pétonnet, C. (1982). L’observation flottante. L’exemple d’un cimetière parisien [Floating observation: The example of a Parisian cemetery]. L’Homme, 22(4), 37-47.

Piaget, J. (1968). Genetic epistemology. Columbia University Press.
Piccoli, G., & Ives, B. (2005). IT-dependent strategic initiatives and sustained competitive advantage: A review and synthesis of the literature. MIS Quarterly, 29(4), 747-776.

Pickles, J. (1995). Representations in an electronic age: Geography, GIS, and democracy. In J. Pickles (Ed.), Ground truth (pp. 1-30). New York: The Guilford Press.

Pieterse, A.H., Van Dulmen, A.M., Ausems, M., Beemer, F.A., & Bensing, J.M. (2005). Communication in cancer genetic counseling: Does it reflect counselees’ previsit needs and preferences? British Journal of Cancer, 92, 1671-1678.

Pinkley, R. L. (1990). Dimensions of conflict frame: Disputant interpretations of conflict. Journal of Applied Psychology, 75(2), 117-126.

Pistole, J. (2003). Fraudulent identification documents and the implications for Homeland Security. Statement for the record, John S. Pistole, Federal Bureau of Investigation Assistant Director, Counterterrorism Division, before the House Select Committee on Homeland Security. Retrieved August 19, 2007, from http://www.globalsecurity.org/security/library/congress/2003_h/031001-pistole.doc

Pitt, J. (2006). When is an image not an image? In J. Schummer & D. Baird (Eds.), Nanotechnology challenges: Implications for philosophy, ethics and society (pp. 131-141). London: World Scientific.

Plagiarism.org (2007). Homepage. Retrieved June 30, 2007, from http://www.plagiarism.org/

Pocar, P. (2004). New challenges for international rules against cyber-crime. European Journal on Criminal Policy and Research, 10, 27-37.

Poneman, L. (2006, July 15). Privacy is good business. CIO Magazine.
Porter, W., & Griffaton, M. (2003). Between the devil and the deep blue sea: Monitoring the electronic workplace. Defense Counsel Journal, 65-77.

Posner, R. A. (1984). An economic theory of privacy. In F. Schoeman (Ed.), Philosophical dimensions of privacy: An anthology. Cambridge: Cambridge University Press.

Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage.

Postman, N. (1995). Making a living. Making a life: Technology reconsidered. College Board Review, 176-177, 8-13.

Postman, N. (1998). Education and technology: Virtual students, digital classroom. In R. N. Stichler & R. Hauptman (Eds.), Ethics, information and technology readings. Jefferson, NC: McFarland & Company, Inc., Publishers.

Postrel, V. (2003). The substance of style: How the rise of aesthetic value is remaking commerce, culture, and consciousness. New York: Harper Collins.

Powell, T. (2006). Cultural context in medical ethics: Lessons from Japan. Philosophy, Ethics, and Humanities in Medicine, 1(4).

Prasad, K., & Akhilesh, K. (2002). Global virtual teams: What impacts their design and performance. Team Performance Management, 8(5/6), 102-112.

PRC (2007, February 24). A chronology of data breaches. Privacy Rights Clearinghouse (PRC). Retrieved from http://www.privacyrights.org/ar/ChronDataBreaches.htm

PRC. (2003, November 20). RFID position statement of consumer privacy and civil liberties organizations. Privacy Rights Clearinghouse (PRC). Retrieved from http://www.privacyrights.org/ar/RFIDposition.htm

PRC. (2006, October). When a cell phone is more than a phone: Protecting your privacy in the age of the superphone. Privacy Rights Clearinghouse (PRC). Retrieved from http://www.privacyrights.org/fw/fw2b-cellprivacy.htm
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 10-15.

President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. (1982). Making health care decisions: A report on the ethical and legal implications of informed consent in the patient-practitioner relationship. Washington, DC: U.S. Government Printing Office.

President’s Identity Theft Task Force (2007). Combating identity theft: A strategic plan. Retrieved May 28, 2007, from http://www.identitytheft.gov/reports/StrategicPlan.pdf

Preston, C. (2006). The promise and threat of nanotechnology: Can environmental ethics guide us? In J. Schummer & D. Baird (Eds.), Nanotechnology challenges: Implications for philosophy, ethics and society (pp. 217-248). London: World Scientific.

Preston, J. (1994). The telemedical handbook: Improving care with interactive video. Austin, TX: Telemedical Interactive Services, Inc.

Prince, M. (2001). Employers should establish clear rules on e-mail. Business Insurance, 35, 25-28.

Prior, L. (2001). Rationing through risk assessment in clinical genetics: All categories have wheels. Sociology of Health and Illness, 23(5), 570-593.

Privacy Rights Clearinghouse. (2007). Chronology of data breaches 2006: Analysis. Retrieved May 27, 2007, from http://www.privacyrights.org/ar/DataBreaches2006-Analysis.htm

Project on Emerging Nanotechnologies (2007). Project on emerging nanotechnologies. Retrieved May 17, 2007, from http://www.nanotechproject.org/

Proserpio, L., Salvemini, S. & Ghiringhelli, V. (2004). Entertainment pirates: Understanding piracy determinants in the movie, music and software industries. The 8th International Conference on Arts & Cultural Management. Retrieved January 7, 2006, from http://www.hec.ca/aimac2005/PDF_text/ProserpioL_SalveminiS_GhiringhelliV.pdf
Protecting personal health information in research: Understanding the HIPAA privacy rule. Retrieved from http://privacyruleandresearch.nih.gov/pr_02.asp

Pruzan, P., & Thyssen, O. (1994). The renaissance of ethics and the ethical accounting statement. Educational Technology, 34(1), 23-28.

Pugh, E. W. & Aspray, W. (1996, Summer). Creating the computer industry. IEEE Annals of the History of Computing, 18(2), 7-17.

Pullen, D. & Cusack, B. (2007). Schooling without borders: Using ICT for flexible delivery of education. In V. Green & J. Sigafoos (Eds.), Technology and teaching: A casebook for educators. Hauppauge, NY: Nova Science Publishers.

Purcell-Robertson, R.M., & Purcell, D.F. (2000). Interactive distance learning. In Distance learning technologies: Issues, trends and opportunities. Hershey, PA: IDEA Publishing Group.

Qiu, R. (1982). Philosophy of medicine in China (1930-1980). Metamedicine, 3, 35-73.

Questions often asked by parents about special education services (1999). Retrieved August 13, 2007, from http://www.nichcy.org/pubs/ideapubs/lg1txt.htm

Quinn, M. (2006). Ethics for the information age. Boston: Pearson Addison-Wesley.

Quinn, M.J. (2006). On teaching computer ethics within a computer science department. Science and Engineering Ethics, 12, 335-343.

Quinn, M.J. (2006). Case-based analysis. Proceedings of the Special Interest Group for Computer Science Education ’06, March 1-5. Houston, TX.

Quinn, M.J. (2006). Ethics for the information age (2nd ed.). Boston, MA: Addison-Wesley.

Rachels, J. (Ed.). (1998). Introduction in ethical theory. New York: Oxford University Press.

Rachels, J. (1975). Why privacy is important. Philosophy & Public Affairs, 4(4), 323-333.

Radetzki, M., Radetzki, M., & Juth, N. (2003). Genes and insurance: Ethical, legal and economic issues. Cambridge University Press.

Rahim, M. M., Seyal, A. H. & Abd. Rahman, M. N. (2001). Factors affecting softlifting intention of computing students: An empirical study. Journal of Educational Computing Research, 24(4), 385-405.
Ramsey, J. (1993). The science education reform movement: Implications for social responsibility. Science Education, 77, 235-258.

Rananand, P. R. (2007). Information privacy in a surveillance state: A perspective from Thailand. In S. Hongladarom & C. Ess (Eds.), Information technology ethics: Cultural perspectives (pp. 124-137). Hershey, PA: Idea Group Reference.

Rand, A. (1957). Atlas shrugged. Random House.

Rand, A. (1964). The virtue of selfishness. New York: New American Library.

Ranum, M. (2004). The myth of homeland security. Indianapolis, IN: Wiley.

Rapp, R. (1988). Chromosomes and communication: The discourse of genetic counselling. Medical Anthropology Quarterly, 2(2), 143-157.

Rapp, R. (2000). Testing women, testing the fetus: The social impact of amniocentesis in America. New York and London: Routledge.

Rasmussen, S., Chen, L., Deamer, D., Krakauer, D., Packard, N., Stadler, P., & Bedau, M. (2004). Transitions from nonliving to living matter. Science, 303, 963-965.

Ratcliffe, M. & Grace, M. (2003). Science for citizenship: Teaching socio-scientific issues. Maidenhead: Open University Press.

Ravitz, J. (1997). Ethics in scholarly communications: Intellectual property and new technologies. In National Convention of the Association for Educational Communications & Technology, 19th, Albuquerque.

Rawls, J. (1995). Political liberalism: Reply to Habermas. The Journal of Philosophy, 92(3), 132-180.
Rawls, J. (1973). A theory of justice. Oxford: Oxford University Press.

Rawls, J. (1993). Political liberalism. New York: Columbia University Press.

Rawls, J. (1999). The law of peoples. Cambridge, MA: Harvard University Press.

Rayburn, L.G. (1996). Cost accounting (6th ed.). Chicago: Irwin.

Read, B. (2005). Seriously, iPods are educational [Electronic version]. The Chronicle of Higher Education, 51(28).

Reardon, J., & Hasty, R. W. (1996). International vendor relations: A perspective using game theory. International Journal of Retail & Distribution Management, 24(1), 15-23.

Reddy Pynchon, M., & Borum, R. (1999). Assessing threats of targeted group violence: Contributions from social psychology. Journal of Behavioural Sciences and the Law, 17, 339-355.

Regan Shade, L. (1996). Is there free speech on the Internet? Censorship in the global information infrastructure. In R. Shields (Ed.), Cultures of Internet: Virtual spaces, real histories, living bodies. London: Sage.

Regan, P. M. (1995). Legislating privacy: Technology, social values, and public policy. Chapel Hill, NC: University of North Carolina Press.

Regan, P. M. (2002). Privacy as a common good in a digital world. Information, Communication and Society, 5(3), 382-405.

Rehg, W. (1997). Reason and rhetoric in Habermas’ theory of argumentation. In W. Jost & M. J. Hyde (Eds.), Rhetoric and hermeneutics in our time. New Haven/London: Yale University Press.

Reid, A. S. (2005). The rise of third generation phones: The implications for child protection. Information and Communications Technology Law, 14(2), 89-113.

Reid, D. A., Pullins, E. B., & Buehrer, R. E. (2004). Measuring buyers’ perceptions of conflict in business-to-business sales interactions. Journal of Business & Industrial Marketing, 19(4), 236-249.

Reidar K. Lie, Ezekiel J. Emanuel, & Christine Grady (2006). Circumcision and HIV prevention research: An ethical analysis. Lancet, 368, 522-525.
Reig, S. (1996). Correspondence between interlandmark distances and caliper measurements. In L.F. Marcus, M. Corti, A. Loy, G.J.P. Naylor, & D.E. Slice (Eds.), Advances in morphometrics (pp. 371–386). New York: Plenum Press.

Reiman, J. H. (1976). Privacy, intimacy and personhood. Philosophy & Public Affairs, 6(1), 26-44.

Reiss, M.J. (1999). Teaching ethics in science. Studies in Science Education, 34, 115-140.

Remenyi, D., & Sherwood-Smith, M. (1999). Maximise information systems value by continuous participative evaluation. Logistics Information Management, 12(1/2), 145-156.

Renn, O. and Roco, M.C. (2006). Nanotechnology and the need for risk governance. Journal of Nanoparticle Research, 8(2), 153-191.

Rennie, S., Muula, A.S., & Westreich, D. (2007). Male circumcision and HIV prevention: Ethical, medical and public health trade offs in low-income countries. Journal of Medical Ethics, 33, 357-361.

Repetto, E. & Malik, B. (1998). Nuevas tecnologías aplicadas a la orientación [New technologies applied to guidance]. In R. Bisquerra, Modelos de orientación e intervención psicopedagógica. Barcelona: Praxis.

Repetto, E., Rus, V. & Puig, J. (1994). Orientación educativa e intervención psicopedagógica [Educational guidance and psychopedagogical intervention]. Madrid: UNED.

Requicha, A. (2003, November). Nanorobots, NEMS, and nanoassembly. Proceedings of the IEEE, 91(11), 1922-1933.

Rest, J. R. (1986). Moral development: Advances in research and theory. New York: Praeger Press.

Rest, J. R., Narvaez, D., Bebeau, M., & Thoma, S. (1999). Postconventional moral thinking: A neo-Kohlbergian approach. Mahwah, NJ: Erlbaum.
Rest, J.R. (1983). Morality. In P.H. Mussen (Series Ed.), J. Flavell & E. Markman (Vol. Ed.), Handbook of child psychology: Vol. 3, Cognitive development (4th ed.), pp. 556-629. New York: Wiley. Rest, J.R., & Narvaez, D. (Eds.) (1994). Moral development in the professions: Psychology and applied ethics. Hillsdale, NJ: Lawrence Erlbaum.
Reubinoff, B.E., Pera, M.F., Fong, C.Y., Trounson, A., & Bongso, A. (2000). Embryonic stem cell lines from human blastocysts: Somatic differentiation in vitro. Nature Biotechnology, 18, 399-404.

Reuters (2006, September 26). GE: Laptop with data on 50,000 staffers stolen. Reuters News Service.

Rheinberger, H.J. (1997). Toward a history of epistemic things: Synthesizing proteins in the test tube. Stanford: Stanford University Press.

Rhodes, R. (1998). Genetic links, family ties and social bonds: Rights and responsibilities in the face of genetic knowledge. Journal of Medicine and Philosophy, 23(1), 10-30.

Ribble, M., & Bailey, G. (2004). Digital citizenship: Focus questions for implementation. Learning & Leading with Technology, 32(2), 12-15.

Rice, D. (2002). Refining the Zippo test: New trends on personal jurisdiction for Internet activities. In The Computer & Internet Lawyer. Prentice Hall Law & Business.

Richman, H., Gobet, F., Staszewski, J., & Simon, H. (1996). Perceptual and memory processes in the acquisition of expert performance. In Ericsson (pp. 167-187).

Richta, R. (1969). La civilisation au carrefour [Civilization at the crossroads]. Paris: Anthropos.

Rickards, T. (1990). Creativity and problem-solving at work. Aldershot: Gower.

Ricker, T. (2006). Dutch RFID e-passport cracked—US next? http://www.engadget.com

Rickert, V. I., & Ryan, O. (2007). Is the Internet the source? Journal of Adolescent Health, 40(2), 104-105.

Rickover, H. G. (1959). Education and freedom. New York: Dutton.

Ridgley, A., Maley, O., & Skinner, H. A. (2004). Youth voices: Engaging youth in health promotion using media technologies. Canadian Issues, Fall 2004, 21-24.

Ridley, M. (2004). Evolution. Cambridge: Blackwell Science.

Rieback, M., Crispo, B., & Tanenbaum, A. (2006). RFID malware: Truth vs. myth. IEEE Security and Privacy, 4(4), 70-72.

Riley, J. (2004). Ethical drivers of the implementation of instructional technology. Paper presented at the Teaching Online in Higher Education Conference, Fort Wayne.
RMIT Test Lab. (2006). A study on server based Internet filters: Accuracy, broadband performance degradation and some effects on the user experience. Report commissioned by NetAlert Limited. Retrieved June 7, 2007, from http://www.netalert.net.au/03100-A-Study-on-ServerBased-Internet-Filters---26-May-2006.pdf

Robbins, R.W. & Hall, D.J. (2007). Decision support for individuals, groups, and organizations: Ethics and values in the context of complex problem solving. Proc. 2007 Americas Conference on Information Systems. Association for Information Systems.

Robbins, R.W. & Wallace, W.A. (2007). Decision support for ethical problem solving: A multi-agent approach. Decision Support Systems, 43(4), 1571-1587.

Robbins, R.W. (2005). Understanding ethical problem solving in individuals and groups: A computational ethics approach. Doctoral dissertation, Rensselaer Polytechnic Institute, Troy, NY.

Robbins, R.W., Wallace, W.A., & Puka, B. (2004). Supporting ethical problem solving: An exploratory investigation. Proc. 2004 ACM SIGMIS conference on computer personnel research: Careers, culture, and ethics in a networked environment, pp. 134-143. ACM Press.

Roberson, S. (Ed.) (2001). Defining travel: Diverse visions. Jackson, MS: University Press of Mississippi.
Roberts, L. D. (forthcoming). Cyber-stalking: Jurisdictional and definitional concerns with computer-mediated interpersonal crimes. In K. Jaishankar (Ed.) International perspectives on crime and justice. Robey, D., & Azevedo, A. (1994). Cultural analysis of the organizational consequences of information technology. Accounting, Management and Information Technologies, 4(1), 23-37. Robey, D., & Boudreau, M. (1999). Accounting for the contradictory organizational consequences of information technology: Theoretical directions and methodological implications. Information Systems Research, 10(2), 167-185.
Roblyer, M. D. & Edwards, J. (2000). Integrating educational technology into teaching (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Robots may one day ask for citizenship. (2007). Retrieved May 13, 2007, from http://media.www.guilfordian.com/media/storage/paper281/news/2007/03/23/World/Robots.May.One.Day.Ask.For.Citizenship-2788091.shtml

Roco, M. C. (2006). Progress in governance of converging technologies integrated from the nanoscale. Ann. N.Y. Acad. Sci., 1093, 1–23.

Roco, M.C. & Bainbridge, S. (2002). Societal implications of nanoscience and nanotechnology. Dordrecht, the Netherlands: Kluwer Academic Publishers.

Roco, M.C., & Bainbridge, W. S. (Eds.) (2001). Societal implications of nanoscience and nanotechnology. Dordrecht: Kluwer Academic Publishers.

Rodowick, D. N. (1982). The difficulty of difference. Wide Angle, 5, 4-15.

Rogers, E. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.

Rogerson, S., & Bynum, T. (1995). Cyberspace: The ethical frontier. The Times Higher Education Supplement, No. 1179, 9 (June), p. iv.

Rohlf, J.F., & Marcus, L.F. (1993). A revolution in morphometrics. Trends in Ecology and Evolution, 8(4), 129–132.

Rolston, H. (1975). Is there an ecological ethic? Ethics, 85, 93-109.

Romano, A. (2006). Facebook’s ‘news feed’: [International edition]. Newsweek, International ed.

Rorvik, D. M. (1971). Brave new baby: Promise and peril of the biological revolution. Garden City, New York: Doubleday & Co.

Rosanvallon, P. (1995). La nouvelle question sociale: Repenser l’État providence [The new social question: Rethinking the welfare state]. Seuil.

Rose, K. (2005). Better medicines for children - where we are now, and where do we want to be? British Journal of Clinical Pharmacology, 6, 657-659.

Rose, N. (2005). Will biomedicine transform society? The political, economic, social and personal impact of medical advances in the twenty first century. Clifford Barclay lecture. Retrieved May 24, 2007, from http://www.lse.ac.uk/collections/LSEPublicLecturesAndEvents/pdf/20050202-WillBiomedicine-NikRose.pdf

Rosenberg, D. (1995). The concept of cheating in sport. International Journal of Physical Education, 32, 4-14.

Rosenhead, J., & Mingers, J. (Eds.). (2001). Rational analysis for a problematic world revisited. Chichester: Wiley.

Rosenzweig, P., Kochems, A., & Schwartz, A. (2004, June 21). Biometric technologies: Security, legal and policy implications. The Heritage Foundation. Retrieved from http://www.heritage.org/research/homelanddefense/lm12.cfm

Rossmore House. (1984). 269 NLRB 1176.

Rotenberg, M. (1998). Communications privacy: Implications for network design. In R. N. Stichler & R. Hauptman (Eds.), Ethics, information and technology readings. Jefferson, NC: McFarland & Company, Inc., Publishers.
Rotenberg, M., Hoofnagle, C., Kapadia, D., & Roschke, G. (2005, July 15). The Pentagon recruiting database and the Privacy Act (memo). Electronic Privacy Information Center.

Rousseau, J.J. (1762). Emile, or on education (A. Bloom, Trans.). New York: Basic Books, 1979. Accessed online from Columbia’s ILT Web May 27, 2007: http://projects.ilt.columbia.edu/pedagogies/rousseau/

Rousseau, J.J. (1762). The social contract or principles of political right (G. D. H. Cole, Trans.), public domain. Accessed online May 27, 2007, from http://www.constitution.org/jjr/socon.htm

Rout, M., Metlikovec, J. & Dowsley, A. (2007, April 13). Bully cam shot by students. Herald Sun. Retrieved May 17, 2007, from http://www.news.com.au/heraldsun/story/0,21985,21546947-661,00.html

Rouvroy, A. (2000). Informations génétiques et assurance, discussion critique autour de la position prohibitionniste du législateur belge [Genetic information and insurance: A critical discussion of the Belgian legislature’s prohibitionist position]. Journal des Tribunaux, 585-603.

Rouvroy, A. (2007, in press). Human genes and neoliberal governance: A Foucauldian critique. London & New York: Routledge-Cavendish.

Rowan, R. (1986). The intuitive manager. Aldershot: Wildwood House.

Rowe, N. (2006, January). Measuring the effectiveness of honeypot counter-counterdeception. In Proc. Hawaii International Conference on Systems Sciences, Poipu, HI.

Rowe, N. (2007, May). Finding logically consistent resource-deception plans for defense in cyberspace. In Proc. 3rd International Symposium on Security in Networks and Distributed Systems, Niagara Falls, Ontario, Canada. New York: IEEE Press.

Rowe, N., & Rothstein, H. (2004, July). Two taxonomies of deception for attacks on information systems. Journal of Information Warfare, 3(2), 27-39.

Rowe, N., Duong, B., & Custy, E. (2006, June). Fake honeypots: A defensive tactic for cyberspace. In Proc.
7th IEEE Workshop on Information Assurance, West Point, New York. New York: IEEE Press.

Rowell, L. (2007). Reinventing the PC. ACM netWorker, 11(2).

Rowland, D. & McDonald, E. (2000). Information technology law (2nd ed.). London: Cavendish Publishing.

Rowlands, M. (2000). The environmental crisis: Understanding the value of nature. New York: St Martin’s Press.

Rowlinson, M., & Carter, C. (2002). Foucault and history in organization studies. Organization, 9(4), 527-547.

RS/RAE (2004). Nanoscience and nanotechnologies: Opportunities and uncertainties. London: Royal Society & Royal Academy of Engineering.

Rubin, G. (2005). Cyber bullying. The Times Educational Supplement, F11.

Rugarcia, A., Felder, R.M., Woods, D.R. & Stice, J.E. (2000). The future of engineering education I: A vision for a new century. Chemical Engineering Education, 34(1), 16-25.

Rumbo, B. (2006). La educación de las personas adultas: un ámbito de estudio y de investigación [Adult education: A field of study and research]. Revista de Educación, 339, 625-635.

Ruse, M.S. (2005). Technology and the evolution of the human: From Bergson to the philosophy of technology. Essays in Philosophy, 6(1).

RUSHSAP (Regional Unit for Social & Human Sciences in Asia & the Pacific, UNESCO). (2006). Action plan for bioethics education. Developed at the 2006 UNESCO Asia-Pacific Conference on Bioethics Education. Found at http://www.unescobkk.org/index.php?id=apse

Rushton, J.P. (1995). Race, evolution, and behavior: A life history perspective. New Brunswick: Transaction Publishers.

Russell, G. (2006). Globalisation, responsibility, and virtual schools. Australian Journal of Education, 50(2), 140-154.
Russell Research. (2006). CA/NCSA social networking study report. Retrieved June 6, 2007, from http://staysafeonline.org/features/SocialNetworkingReport.ppt#0

Russell, D. (1998). Dangerous relationships: Pornography, misogyny, and rape. Thousand Oaks, CA: Sage.

Ryan, F., Vendar, M., & Sweeder, J. (1999). Technology, narcissism, and the moral sense: Implications for instruction. British Journal of Educational Technology, 30, 115-128.

Ryman, N., Chakraborty, R., & Nei, M. (1983). Differences in the relative distribution of human genetic diversity between electrophoretic and red and white cell antigens. Human Heredity, 33, 93–102.

Saaty, T., & Vargas, L. (1982). The logic of priorities: Applications in business, energy, health, and transportation. Boston: Kluwer-Nijhoff.
Sachs, L. (1999). Knowledge of no return: Getting and giving information about genetic risk. Acta Oncologica, 38(6), 735-740.

Sackett, D. L., Straus, S. E., Scott-Richardson, W., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). Edinburgh: Churchill Livingstone.

Sadler, T. D., & Zeidler, D. L. (2005). Patterns of informal reasoning in the context of socio-scientific decision making. Journal of Research in Science Teaching, 42(1), 112-138.

Sadler, T.D., Amirshokoohi, A., Kazempour, M., & Allspaw, K.M. (2006). Socioscience and ethics in science classrooms: Teacher perspectives and strategies. Journal of Research in Science Teaching, 43(4), 353-356.

Saettler, P. (1968). A history of instructional technology. New York: McGraw-Hill Book Company.

Saettler, P. (1990). The evolution of American educational technology. Englewood, CO: Libraries Unlimited.

Saettler, P. (1997, Jan-Feb). Antecedents, origins, and theoretical evolution of AECT. TechTrends, 43(1), 51-57.

Said, C., & Kirby, C. (2001, March 19). GPS cell phones may cost privacy. San Francisco Chronicle.

Salheim, S. (2005, April 16). Sprint launches tracking service. Retrieved September 6, 2005, from http://www.eweek.com/article2/0,1895,1815843,00.asp

Samek, T. (2005). Ethical reflection on 21st century information work: An address for teachers and librarians. Progressive Librarian, 25, 48-61.

Samuel, C. (2003). RNA editing minireview series. Journal of Biological Chemistry, 278(3), 1389-1390.

Sandel, M. (1998). Liberalism and the limits of justice (2nd ed.). Cambridge: Cambridge University Press.

Sandel, M. J. (2007). The case against perfection: Ethics in the age of genetic engineering. Cambridge, MA: The Belknap Press of Harvard University Press.

Sandholtz, J., Ringstaff, C., & Dwyer, D. (1990). Student engagement: Views from technology-rich classrooms. Retrieved May 29, 2007, from http://www.apple.com/euro/pdfs/acotlibrary/rpt21.pdf

Sandler, R., & Kay, W. D. (2006). The national nanotechnology initiative and the social good. Journal of Law, Medicine & Ethics, Winter, 675-681.

Sano, N., & Yamaguchi, S. (1999). Is silence golden? A cross-cultural study on the meaning of silence. In T. Suguman, M. Karasawa & J. Liu (Eds.), Progress in Asian social psychology: Theoretical and empirical contributions. NY: Wiley.
Santelli, J. S., Smith Rogers, A., Rosenfeld, W. D., DuRant, R. H., Dubler, N., Morreale, M., et al. (2003). Guidelines for adolescent health research: A position paper of the Society for Adolescent Medicine. Journal of Adolescent Health, 33(5), 396-409.

Saponas, T.S., Lester, J., Hartung, C., & Kohno, T. (2006, November 30). Devices that tell on you: The Nike+iPod sport kit (working paper, University of Washington). Retrieved from http://www.cs.washington.edu/research/systems/privacy.html
Sarker, R., Abbass, H., & Newton, C. (2002). Heuristics and optimization for knowledge discovery. Hershey: Idea Group.

Sarlemijn, A. (1993). Designs are cultural alloys: STeMPJE in design methodology. In M.J. de Vries, N. Cross & D.P. Grant (Eds.), Design methodology and relationships with science (pp. 191-248). Dordrecht, the Netherlands: Kluwer Academic Publishers.

Sasaki, K., Markon, S. & Makagawa, M. (1996). Elevator group supervisory control system using neural networks. Elevator World, 1.

Sass, H. M. (1999). Educating and sensitizing health professionals on human rights and ethical considerations: The interactive role of ethics and expertise. International Journal of Bioethics, 10(3), 69-81.

Sauer, N.J. (1992). Forensic anthropology and the concept of race: If races don’t exist, why are forensic anthropologists so good at identifying them? Social Science and Medicine, 34(2), 107–111.

Sauer, N.J. (1993). Applied anthropology and the concept of race: A legacy of Linnaeus. In C.C. Gordon (Ed.), Race, ethnicity, and applied bioanthropology (pp. 79–84). NAPA Bulletin 13. National Association for the Practice of Anthropology: American Anthropological Association.

Savage, S., Wetherall, D., Karlin, A., & Anderson, T. (2000). Practical network support for IP traceback. Proceedings of the 2000 ACM SIGCOMM Conference, August (pp. 295-306).

Savona, E. U., & Mignone, M. (2004). The fox and the hunters: How IC technologies change the crime race. European Journal on Criminal Policy and Research, 10, 3-26.

Sawyer, R.K. (Ed.) (2006). The Cambridge handbook of the learning sciences. Cambridge: Cambridge University Press.

Scalet, S. (2006, February 3). The never-ending ChoicePoint story. CSO. Retrieved June 26, 2007, from http://www.csoonline.com/alarmed/02032006.html
Scanlon, T. (1975). Thomson on privacy. Philosophy & Public Affairs, 4(4), 315-322.

Schaefer III, S. J. (2002). Telecommunications infrastructure in the African continent, 1960-2010. In T. Mendina & J. J. Britz (Eds.), Information ethics in the electronic age: Current issues in Africa and the world (pp. 22-27). Jefferson, NC: McFarland.

Scharf, A. (1974). Art and photography. Baltimore, MD: Penguin Books.

Scharf, P. (1978). Moral education. Davis, CA: Responsible Action.

Schell, B. H., Martin, M. V., Hung, P. C. K., & Rueda, L. (2007). Cyber child pornography: A review paper of the social and legal issues and remedies - and a proposed technological solution. Aggression and Violent Behavior, 12, 45-63.

Schement, J., & Curtis, T. (1995). Tendencies and tensions of the information age: The production and distribution of information in the United States. New Jersey: Transaction Publishers.

Scheufele, D.A. (2006). Messages and heuristics: How audiences form attitudes about emerging technologies. In J. Turney (Ed.), Engaging science: Thoughts, deeds, analysis and action (pp. 20-25). London: The Wellcome Trust. http://www.wellcome.ac.uk/doc_WTX032706.html

Schmidt, T. S. (2006). Inside the backlash against Facebook. Time Magazine (online edition).

Schmidtz, D. & Willott, E. (2002). Environmental ethics: What really matters, what really works. New York: Oxford University Press.

Schneider, A. J. & Butcher, R. B. (2000). A philosophical overview of the arguments on banning doping in sport. In T. Tännsjö & C. Tamburrini (Eds.), Values in sport: Elitism, nationalism, gender equality, and the scientific manufacture of winners. London: E & FN Spon.

Schneider, A. J., & Butcher, R. B. (1994). Why Olympic athletes should avoid the use and seek the elimination of performance enhancing substances and practices
from the Olympic Games. Journal of the Philosophy of Sport, XXI, 64-81.

Schön, D. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.

Schonfeld, E. (2006). Analysis: Yahoo’s China problem. CNNMoney.com. Retrieved May 16, 2007, from http://money.cnn.com/2006/02/08/technology/yahoo_china_b20/

Schroeder, S. (2001). Prospects for expanding health insurance coverage. The New England Journal of Medicine, 344(11), 847-852.

Schulman, M. (2000). Little brother is watching you. In R. M. Baird, R. Ramsower & S. E. Rosenbaum (Eds.), Cyberethics: Social & moral issues in the computer age (pp. 155-161). Amherst, NY: Prometheus Books.

Schultz, A.C. (1991). Using a genetic algorithm to learn strategies for collision avoidance and local navigation. Proceedings of the Seventh International Symposium on Unmanned Untethered Submersible Technology, University of New Hampshire Marine Systems Engineering Laboratory, September 23-25, 1991, 213-215.

Schultz, A.C. (1994). Learning robot behaviors using genetic algorithms. Proceedings of the International Symposium on Robotics and Manufacturing, August 14-18, Washington, DC.

Schultze, U., & Orlikowski, W. J. (2001). Metaphors of virtuality: Shaping an emergent reality. Information and Organization, 11, 45-77.

Schuman, E. (2006, February 27). The RFID hype effect. eWeek.
Schummer, J., & Baird, D. (2006). Nanotechnology challenges: Implications for philosophy, ethics and society. London: World Scientific.
Schummer, J. (2004). Interdisciplinary issues in nanoscale research. In D. Baird, A. Nordmann, & J. Schummer (Eds.), Discovering the nanoscale (pp. 9-20). Amsterdam: IOS Press.
Schuurman, E. (2003). Faith and hope in technology. Toronto, Ontario: Clements Publishing.
Schwaig, K., Kane, G. C., & Storey, V. C. (2005). Privacy, fair information practices and the Fortune 500: The virtual reality of compliance. SigMIS Database, 36, 49-63.
Schwartz, O. (1993). L'empirisme irréductible. In N. Anderson (Ed.), Le hobo, sociologie du sans-abri (pp. 265-368). Paris: Nathan.
Schwartz, P. (1991). The art of the long view: Planning for the future in an uncertain world. Doubleday.
Schwartz, P. M., & Janger, E. J. (2007). Notification of data security breaches. Michigan Law Review, 105, 913-984.
Schwartz, S. H., & Bilsky, W. (1987). Towards a universal psychological structure of human values. Journal of Personality and Social Psychology, 53(3), 550-562.
Scott, M. (2005). An assessment of biometric identities as a standard for e-government services. International Journal of Services and Standards, 1(3), 271-286.
Scott, R. J. (2007). A new definition of software piracy. Retrieved May 30, 2007, from http://blawg.bsadefense.com/2007/04/a_new_definition_of_software_p.html
Scribner, S., & Cole, M. (1981). The psychology of literacy. Cambridge, MA: Harvard University Press.
Scruton, R. (1994). Modern philosophy: An introduction and survey. London: Penguin Books.
Scully, J. L. (2005). Admitting all variations? Postmodernism and genetic normality. In M. Shildrick & R. Mykitiuk (Eds.), Ethics of the body: Postconventional challenges (Basic Bioethics). The MIT Press.
Seager, W. (2000). Physicalism. In Newton-Smith (Ed.), A companion to the philosophy of science (pp. 340-342). Oxford: Blackwell.
Seale, D. A., Polakowski, M., & Schneider, S. (1998). It's not really theft: Personal and workplace ethics that enable software piracy. Behavior & Information Technology, 17, 27-40.
SearchCIO.com. (2006). Digital Millennium Copyright Act. Retrieved March 19, 2006, from http://searchcio.techtarget.com/sDefinition/0,290660,sid19_gci904632,00.html
Sebba, L. (1996). Third parties, victims and the criminal justice system. Columbus: Ohio State University Press.
Second-rater, mediocrity. (n.d.). WordNet 1.7.1. Retrieved August 13, 2007, from Answers.com, http://www.answers.com/topic/second-rater-mediocrity
Seedhouse, D. (2005). Values based health care: The fundamentals of ethical decision-making. Chichester, UK: Wiley & Sons.
Seffers, G. (2000, November 2). DOD database to fight cybercrime. Federal Computer Week. Retrieved from http://www.fcw.com/fcw/articles/2000/1030/web-data11-02-00.asp
Seiberth, H. W. (2005). Pharmakologische Besonderheiten im Kindes- und Jugendalter. In C. Brochhausen & H. W. Seibert (Eds.), Kinder in klinischen Studien - Grenzen medizinischer Machbarkeit? Lit-Publ.
Seielstad, M. T., Minch, E., & Cavalli-Sforza, L. L. (1998). Genetic evidence for a higher female migration rate in humans. Nature Genetics, 20, 278-280.
Self, D., Wolinsky, F. D., & Baldwin, D. C. (1989). The effect of teaching medical ethics on medical students' moral reasoning. Academic Medicine, 64, 755-759.
Senge, P. M. (2006). The fifth discipline: The art & practice of the learning organisation. London: Random House.
Senghor, L. (1966). Negritude: A humanism of the 20th century. Optima, 16, 1-8.
Senglaub, M., Harris, D., & Raybourn, E. (2001). Foundations for reasoning in cognition-based computational representations of human decision making (SANDIA report). New Mexico: Sandia National Laboratories.
Serafeimidis, V., & Smithson, S. (1999). Rethinking the approaches to information systems evaluation. Logistics Information Management, 12(1-2), 94-107.
Serafeimidis, V., & Smithson, S. (2003). Information systems evaluation as an organizational institution: Experience from a case study. Information Systems Journal, 13, 251-274.
Serre, D., Langaney, A., Chech, M., Teschler-Nicola, M., Paunovic, M., Mennecier, P., Hofreiter, M., Possnert, G., & Pääbo, S. (2004). No evidence of Neandertal mtDNA contribution to early modern humans. PLOS Biology, 2(3), 0313-0317.
Sethi, S. (2003). Setting global standards: Guidelines for creating codes of conduct in multinational corporations. New York: John Wiley & Sons.
SETI@home. (2003). Retrieved September 9, 2003, from http://setiathome.ssl.berkeley.edu/
Setiloane, G. M. (1986). African theology: An introduction. Johannesburg: Skotaville Publishers.
Severson, R. (1997). The principles of information ethics. Armonk, NY: M.E. Sharpe.
Seyoum, B. (1996). The impact of intellectual property rights on foreign direct investment. Columbia Journal of World Business, 31(1), 50.
Shakespeare, T. (1999). Losing the plot? Medical and activist discourses of the contemporary genetics and disability. In P. Conrad & J. Gabe (Eds.), Sociological perspectives on the new genetics (pp. 171-190). Oxford: Blackwell Publishers.
Shallit, J. Public networks and censorship. In P. Ludlow (Ed.), High noon on the electronic frontier (pp. 275-289). Cambridge, MA: MIT Press.
Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana, IL: The University of Illinois Press.
Shannon, T. A., & Walter, J. J. (2003). The new genetic medicine. Lanham, MD: Rowman & Littlefield Publishers, Inc.
Shariff, S. (2005). Cyber-dilemmas in the new millennium: School obligations to provide student safety in a virtual school environment. McGill Journal of Education, 40(3), 467-487.
Sharp, T., Shreve-Neiger, A., Fremouw, W., Kane, J., & Hutton, S. (2004). Exploring the psychological and somatic impact of identity theft. Journal of Forensic Sciences, 49(1), 131-136.
Sharrock, S. R. (1974). Crees, Cree-Assiniboines and Assiniboines: Interethnic social organization on the far Northern Prairies. Ethnohistory, 21, 95-122.
Shaw, H. E., & Shaw, S. F. (2006). Critical ethical issues in online counseling: Assessing current practices with an ethical intent checklist. Journal of Counseling & Development, 84, 41-53.
Shea, S. (1994). Security versus access: Trade-offs are only part of the story. Journal of the American Medical Informatics Association, 1(4), 314-315.
Shelton, J. (2007, January 21). E-nailed. New Haven Register.
Sheppard, S., Charnock, D., & Gann, B. (1999). Helping patients access high quality health information. British Medical Journal, 319, 764-766.
Sherman, T. (2007, May 10). 'New strain' of terrorism: Hatched in the U.S. Newark Star-Ledger.
Sherratt, D., Rogerson, S., & Fairweather, N. B. (2005). The challenges of raising ethical awareness: A case-based aiding system for use by computing and ICT students. Science and Engineering Ethics, 11, 299-315.
Sherwin, S., & Simpson, C. (1999). Ethical questions in the pursuit of genetic information: Geneticization and BRCA1. In A. K. Thompson & R. F. Chadwick (Eds.), Genetic information: Acquisition, access, and control. Springer.
Shields, M. A. (1996, February). Academe and the technology totem. The Education Digest, 61(2), 43-47.
Shiva, V. (1989). Staying alive: Women, ecology, and development. London: Zed Books.
Shrader-Frechette, K. (1997). Technology and ethical issues. In K. Shrader-Frechette & L. Westra (Eds.), Technology and values (pp. 25-32). Lanham, MD: Rowman & Littlefield Publ.
Shukla, S., & Fui-Hoon Nah, F. (2005). Web browsing and spyware intrusion. Communications of the ACM, 48(8), 85-90.
Shulman, S., Beisser, S., Larson, T., & Shelley, M. (2002). Digital citizenship: Service-learning meets the digital divide. Drake.edu. Retrieved April 14, 2004, from http://www.drake.edu/artsci/faculty/sshulman/ITR/digitalcitizenship.htm
Shutte, A. (1993). Philosophy for Africa. Cape Town: University of Cape Town Press.
Sichel, B. A. (1993). Ethics committees and teacher ethics. In K. A. Strike & P. L. Ternasky (Eds.), Ethics for professionals in education: Perspectives for preparation and practice. New York, NY: Teachers College Press, Columbia University.
Siegfried, R. M. (2004). Student attitudes on software piracy and related issues of computer ethics. Ethics and Information Technology, 6, 215-222.
Siegler, M., Rezler, A. G., & Connell, K. J. (1982). Using simulated case studies to evaluate a clinical ethics course for junior students. Journal of Medical Education, 57, 380-385.
Simelela, N. (2002). The South African Government on HIV/AIDS. Science in Africa.
Simmel, G. (1950). The sociology of Georg Simmel. Glencoe, IL: The Free Press.
Simmel, G. (1955). Conflict and the web of group-affiliations. New York: Free Press.
Simon, H. (1982). Models of bounded rationality, Vol. 2: Behavioral economics and business organization. Cambridge, MA: The MIT Press.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99-118.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129-138.
Simon, H. A. (1988). Rationality as process and as product of thought. In Bell, Raiffa, & Tversky (Eds.), ch. 3, pp. 58-77.
Simon, R. L. (1985). Response to Brown & Fraleigh. Journal of the Philosophy of Sport, XI, 30-32.
Simonette, M. (2007). Gays, lesbians and bisexuals lead in using online social networks. 138000, 8(17), 9-9.
Simpson, P. M., Banerjee, D., & Simpson Jr., C. L. (1994). Softlifting: A model of motivating factors. Journal of Business Ethics, 13(6), 431-438.
Sims, B., Yost, B., & Abbott, C. (2005). Use and nonuse of victim services programs: Implications from a statewide survey of crime victims. Criminology and Public Policy, 4(2), 361-384.
Sims, B., Yost, B., & Abbott, C. (2006). The efficacy of victims services programs: Alleviating the psychological suffering of crime victims. Criminal Justice Policy Review, 17, 387-406.
Sims, R. (2003). Ethics and corporate social responsibility: Why giants fall. Westport, CT: Praeger.
Sims, R. R., Cheng, H. K., & Teegen, H. (1996). Toward a profile of student software piraters. Journal of Business Ethics, 15, 839-849.
Sindelar, M., Shuman, L., Besterfield-Sacre, M., Miller, R., Mitcham, C., Olds, B., Pinkus, R., & Wolfe, H. (2003). Assessing engineering students' abilities to resolve ethical dilemmas. Thirty-Third ASEE/IEEE Frontiers in Education Conference, November 5-8, 2003, Boulder, CO.
Singer, P. (2001). Beyond Helsinki: A vision for global health ethics: Improving ethical behavior depends on strengthening capacity. British Medical Journal, 322, 747-748.
Singer, E., et al. (1998). Trends: Genetic testing, engineering, and therapy: Awareness and attitudes. Public Opinion Quarterly, 52(4), 633-664.
Singer, M. (2003, October 24). Smart dust collecting in the enterprise. Retrieved from http://siliconvalley.internet.com/news/
Singer, P. (1974). Animal liberation: A new ethics for our treatment of animals. New York: Avon.
Singer, P. (Ed.). (1985). In defence of animals. New York: Blackwell.
Singer, P. (1990). Animal liberation (2nd ed.). New York: Avon Books.
Singer, P. (1993). Practical ethics (2nd ed.). New York: Cambridge University Press.
Singer, P. (1994). Ethics. New York: Oxford.
Singer, P. (1995). Applied ethics. In T. Honderich (Ed.), The Oxford companion to philosophy (1st ed.). Oxford University Press.
Singer, P. (Ed.). (2006). In defense of animals: The second wave. Malden, MA: Blackwell.
Singer, P. A., Cohen, R., Robb, A., & Rothman, A. I. (1993). The ethics objective structured clinical examination (OSCE). Journal of General Internal Medicine, 8, 23-28.
Sipior, J., Ward, B., & Roselli, G. (2005, August). A United States perspective on the ethical and legal issues of spyware. In Proceedings of ICEC, Xi'an, China.
Siponen, M. (2004). A justification for software rights. ACM Computers and Society, 34(3), 3.
Skinner, H. A., Biscope, S., & Poland, B. (2003). Quality of Internet access: Barrier behind Internet use statistics. Social Science & Medicine, 57(5), 875-880.
Skinner, H. A., Maley, O., & Norman, C. D. (2006). Developing Internet-based eHealth promotion programs: The spiral technology action research (STAR) model. Health Promotion Practice, 7(4), 406-417.
Skinner, H. A., Maley, O., Smith, L., & Morrison, M. (2001). New frontiers: Using the Internet to engage teens in substance abuse prevention and treatment. In P. Monte & S. Colby (Eds.), Adolescence, alcohol, and substance abuse: Reaching teens through brief interventions. New York: Guilford.
Skinner, H. A., Morrison, M., Bercovitz, K., Haans, D., Jennings, M. J., Magdenko, L., et al. (1997). Using the Internet to engage youth in health promotion. International Journal of Health Promotion & Education, IV, 23-25.
Slater, D. (2002). Making things real: Ethics and order on the Internet. Theory, Culture and Society, 19(5/6), 227-245.
Slice, D. E. (Ed.). (2005). Modern morphometrics in physical anthropology. New York: Kluwer Academic.
Slive, J., & Bernhardt, D. (1998). Pirated for profit. Canadian Journal of Economics, 31(4), 886-899.
Sloan, D. (1980). The teaching of ethics in the American undergraduate curriculum, 1876-1976. In D. Callahan & S. Bok (Eds.), Ethics teaching in higher education. New York: Plenum Press.
Slosarik, K. (2002). Identity theft: An overview of the problem. The Justice Professional, 15(4), 329-343.
Smallman, M., & Nieman, A. (2006, November). Small talk: Discussing nanotechnologies. London: Think-Lab.
Smart, J. K. (2003). Real problem solving. Harlow: Prentice Hall.
Smedley, A. (1999). Race in North America: Origin and evolution of a worldview. Boulder: Westview Press.
Smith, A. (1976). An inquiry into the nature and causes of the wealth of nations (R. H. Cambell & A. S. Skinner, Eds.). Oxford, UK: Clarendon Press.
Smith, A. D. (2004). Cybercriminal impacts on online business and consumer confidence. Online Information Review, 28(3), 224-234.
Smith, G. (2004, August 23). Police to seek greater powers to snoop. Globe and Mail. Retrieved from http://www.theglobeandmail.com/servlet/Page/document/v5/content/subscribe?user_URL=http://www.theglobeandmail.com%2Fservlet%2Fstory%2FRTGAM.20040823.wxpolice23%2FBNStory%2FNational%2F&ord=2967516&brand=theglobeandmail&force_login=true
Smith, G. D., & Ebrahim, S. (2003). Mendelian randomization: Can genetic epidemiology contribute to understanding environmental determinants of disease? International Journal of Epidemiology, 32, 1-22.
Smith, H. (2004). Information privacy and its management. MIS Quarterly Executive, 3(4), 201-213.
Smith, H., Milberg, S., & Burke, S. (1996). Information privacy: Measuring individuals' concerns about organizational practices. MIS Quarterly, 20(2), 167-196.
Smith, H. J. (2002). Ethics and information systems. ACM SIGMIS Database, 33(3), 8-22.
Smith, J. C. (2000). Covert shells. Retrieved September 9, 2003, from http://gray-world.net/papers/covertshells.txt
Smith, M. W. (1992). Professional ethics in the information systems classroom: Getting started. Journal of Information Systems Education, 4(1), 6-11.
Smith, R. (2002). Counter terrorism simulation: A new breed of federation. Simulation Interoperability Workshop Spring.
Smith, W. (1992). A process framework for teaching bioethics. Woodrow Wilson National Fellowship Foundation.
Smithson, S., & Tsiavos, P. (2004). Re-constructing information systems evaluation. In C. Avgerou, C. Ciborra, & F. Land (Eds.), The social study of information and communication technology: Innovation, actors and contexts (pp. 207-230). Oxford: Oxford University Press.
Smits, M. (2004). Taming monsters: The cultural domestication of new technology. Unpublished doctoral dissertation, University of Eindhoven, Eindhoven.
Snekkenes, E. (2001, October). Concepts for personal location privacy policies. In Proceedings of the 3rd ACM Conference on Electronic Commerce, Tampa, Florida, October 14-17, 2001 (pp. 48-57). ACM.
Snow, B. (1998, June). Consenting adults: The challenge of informed consent. Bay Area Reporter.
Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: Harvard University Press.
Society for Adolescent Medicine. (2003). Guidelines for adolescent health research. Journal of Adolescent Health, 33(5), 410-415.
Society for Human Resource Management Research Department. (2004). Does Europe matter? Workplace Visions, (1), 2-7.
Socol, I. (2006, May 25). Stop chasing high-tech cheaters. Inside Higher Ed. Retrieved May 17, 2007, from http://www.insidehighered.com/views/2006/05/25/socol
Solomon, R. C. (1970). Normative and meta-ethics. Philosophy and Phenomenological Research, 31(1), 97-107.
Solove, D. J. (2004). The digital person. NYU Press.
Solove, D. J. (2006, January). A taxonomy of privacy. University of Pennsylvania Law Review, 154(3), 477-560.
Sontag, S. (1977). On photography. New York: Penguin Books.
South African AIDS Vaccine Initiative (SAAVI). (2007). The community and the scientist: A synergistic relationship. http://www.saavi.org.za
Southern Association of Colleges and Schools. (1996). Criteria for accreditation. Commission on Colleges, Decatur, GA.
Spafford, E. (1992). Are computer hacker break-ins ethical? Journal of Systems and Software, 17, 41-47.
Sparrow, R. (1999). The ethics of terraforming. Environmental Ethics, 21(3), 227-240.
Special education. (n.d.). The American Heritage Dictionary of the English Language (4th ed.). Retrieved August 13, 2007, from Answers.com, http://www.answers.com/topic/special-education
Spencer, J. (1999). Crime on the Internet: Its presentation and representation. The Howard Journal, 38(3), 241-251.
Spencer Brown, G. (1973). Laws of form. New York: Bantam Books.
Spinello, R. (2003). CyberEthics: Morality and law in cyberspace. Sudbury, MA: Jones and Bartlett.
Spinello, R. A., & Tavani, H. T. (2004). Readings in cyberethics (2nd ed.). Sudbury, MA: Jones and Bartlett.
Spitzberg, B. H. (2006). Preliminary development of a model and measure of computer-mediated communication (CMC) competence. Journal of Computer-Mediated Communication, 11, 629-666.
Spitzberg, B. H., & Hoobler, G. (2002). Cyberstalking and the technologies of interpersonal terrorism. New Media & Society, 4(1), 71-92.
Spitzner, L. (2005). Honeytokens: The other honeypot. Retrieved May 30, 2005, from www.securityfocus.com/infocus/1713
Splete, H. (2005). Technology can extend the reach of a bully: Cyber bullying by girls, who 'share so much ... when they are friends,' can be particularly devastating. Family Practice News, 35(12), 31-32.
Spong, J. S. (1977). The living commandments (ch. 11). New York: Seabury Press. Retrieved May 30, 2007, from http://www.religion-online.org/showchapter.asp?title=540&C=620
Sproull, L., & Kiesler, S. (1991). Connections: New ways of working in the networked organization. Cambridge, MA: MIT Press.
St. John, E., Hill, J., & Johnson, F. (2007, January). An historical overview of revenues and expenditures for public elementary and secondary education by state: Fiscal years 1990-2002 (Statistical analysis report). National Center for Education Statistics. Retrieved May 21, 2007, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2007317
Stack, S., Wasserman, I., & Kern, R. (2004). Adult social bonds and use of Internet pornography. Social Science Quarterly, 85(1), 75-89.
Stafford, T. F., & Urbaczewski, A. (2004). Spyware: The ghost in the machine. Communications of the Association for Information Systems, 14, 291-306.
Stahl, B. C. (2003). The moral and business value of information technology: What to do in case of a conflict? In N. Shin (Ed.), Creating business value with information technology: Challenges and solutions (pp. 187-202). Hershey, PA: Idea Group Publishing.
Stahl, B. C. (2004). Responsible management of information systems. Hershey, PA: Idea Group Publishing.
Stahl, B. C. (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology, 8, 205-213.
Stallard, C. H., & Cocker, J. S. (2001). The promise of technology in schools: The next 20 years. Lanham, MD: Scarecrow Press, Inc.
Stallman, R. (1995). Why software should be free. In D. Johnson & H. Nissenbaum (Eds.), Computers, ethics & social values (pp. 190-199). Englewood Cliffs, NJ: Prentice Hall.
Stallman, R. (2001). Copyright and globalization in the age of computer networks. MIT Communications Forum. Retrieved May 27, 2007, from http://web.mit.edu/m-i-t/forums/copyright/transcript.html
Stancliff, S. B., & Nechyba, M. C. (2000). Learning to fly: Modeling human control strategies in an aerial vehicle. Machine Intelligence Laboratory, Electrical and Computer Engineering, University of Florida. Retrieved June 3, 2007, from http://www.mil.ufl.edu/publications
Standler, R. (1997). Privacy law in the United States. Retrieved April 26, 2004, from http://www.rbs2.com/privacy.htm#anchor444444
Stanford encyclopedia of philosophy. Retrieved from http://plato.stanford.edu/entries/metaethics/
Stanley, J. (2004, August). The surveillance industrial complex: How the American government is conscripting businesses and individuals in the construction of a surveillance society. American Civil Liberties Union.
Stanley, J., & Steinhart, B. (2003, January). Bigger monster, weaker chains: The growth of an American surveillance society. American Civil Liberties Union.
State of Idaho Office of the Governor. (1998). Executive Order 98-05: Establishing statewide policies on computer, the Internet, and electronic mail usage by state employees. Retrieved October 12, 2004, from http://ww2.state.id.us/gov/execord/EO98-05.htm
Stempsey, W. E. (2006). The geneticization of diagnostics. Medicine, Health Care and Philosophy, 9(2), 193-200.
Stephens, L. F. (2004, March 3-7). News narratives about nano: How journalists and the news media are framing nanoscience and nanotechnology initiatives and issues. Paper presented to the Imaging and Imagining Nanoscience and Engineering Conference, University of South Carolina.
Stephens, L. F. (2005). News narratives about nano S&T in major U.S. and non-U.S. newspapers. Science Communication, 27(2), 175-199.
Steptoe, P. C., & Edwards, R. G. (1978). Birth after the reimplantation of a human embryo. Lancet, 2(8085), 366.
Sterelny, K., Smith, K., & Dickison, M. (1996). The extended replicator. Biology and Philosophy, 11(3), 377-403.
Stevens, W. R. (1994). TCP/IP illustrated, Volume 1. Reading, MA: Addison-Wesley.
Stewart, W. (2006). Private lives laid bare on MySpace. The Times Educational Supplement, (4699), 10.
Stibbe, M. (2004, February). Feature: Technologies that will change our lives. Real Business. Retrieved from http://www.realbusiness.co.uk/ARTICLE/FEATURE-Technologies-that-will-change-our-lives/1035.aspx
Stocking, S. H. (1999). How journalists deal with scientific uncertainty. In S. M. Friedman, S. Dunwoody, & C. L. Rogers (Eds.), Communicating uncertainty: Media coverage of new and controversial science (pp. 23-41). Mahwah, NJ: Lawrence Erlbaum.
Stokes, S. (1998). Pathologies of deliberation. In J. Elster (Ed.), Deliberative democracy (pp. 123-139). Cambridge, UK: The Press Syndicate of the University of Cambridge.
Stolarz, D., & Felix, L. (2006). Hands-on guide to video blogging and podcasting: Emerging media standards for business communication. Focal Press.
Stoler, A. L. (1997). Racial histories and their regimes of truth. Political Power and Social Theory, 11, 183-206.
Stolzmann, W., Butz, M. V., Hoffmann, J., & Goldberg, D. E. (2000). First cognitive capabilities in the anticipatory classifier system (Illinois Genetic Algorithms Laboratory Report No. 2000008). University of Illinois, Urbana.
Stone, A. R. (1995). The war of desire and technology at the close of the mechanical age. Cambridge, MA: MIT Press.
Stone, B. (2006). Web of risks. Newsweek, 148(8/9), 76-77.
Stone, C. D. (1972). Should trees have standing? Southern California Law Review, 45, 450-501.
Stotz, K. (2004, November). With genes like that who needs an environment? Postgenomics' argument for the 'ontogeny of information'. Paper presented at the Symposium Advances in Genomics and Their Conceptual Implications for Development and Evolution, Austin, TX.
Stotz, K., Griffiths, P., & Knight, R. (2004). How biologists conceptualize genes: An empirical study. Studies in History and Philosophy of Biological and Biomedical Sciences, 35, 647-673.
Stover, D. (2006). Treating cyberbullying as a school violence issue. The Education Digest, 72(4), 40-42.
Stowell, F. (1995). Information systems provision: The contribution of soft systems methodology. London: McGraw-Hill.
Stratham, D., & Torell, C. (1996). Computers in the classroom: The impact of technology on student learning. Retrieved May 29, 2007, from http://www.temple.edu/lss/htmlpublications/spotlights/200/spot206.htm
Street, B. (1993). Cross-cultural approaches to literacy (Cambridge Studies in Oral and Literate Culture). New York: Cambridge University Press.
Street, R. L., & Rimal, R. N. (1997). Health promotion and interactive technology: A conceptual foundation. In R. L. Street, W. R. Gold, & T. Manning (Eds.), Health promotion and interactive technology: Theoretical applications and future directions. Mahwah, NJ: Lawrence Erlbaum.
Strijbos, S., & Basden, A. (Eds.). (2006). In search of an integrative vision for technology. Dordrecht: Springer.
Strike, K. A., & Ternasky, P. L. (1993). Ethics for professionals in education: Perspectives for preparation and practice. New York: Teachers College Press, Columbia University.
Stringer, C. (1996). African exodus: The origins of modern humanity. New York: Henry Holt.
Stringer, C. (2002). New perspectives on the Neanderthals. Evolutionary Anthropology Supplement, 1, 58-59.
Strömholm, S. (1967). Right of privacy and rights of the personality. Stockholm: Norstedts.
Stubblefield, H. W., & Keane, P. (1994). Adult education in the American experience: From the colonial period to the present. San Francisco: Jossey-Bass Publishers.
Stultz, T. (2004). Model transformation: From illness management to illness management and recovery. ACT Center of Indiana, 3(4), 2.
Subrahmanyam, K., & Smahel, D. (2006). Connecting developmental constructions to the Internet: Identity presentation and sexual exploration in online teen chat rooms. Developmental Psychology, 42(3), 395-406.
Subrahmanyam, K., Greenfield, P. M., & Tynes, B. (2004). Constructing sexuality and identity in an online teen chat room. Journal of Applied Developmental Psychology, 25(6), 651-666.
Subramanian, S. (2004). Positive discipline. Retrieved October 6, 2004, from http://www.aapssindia.org/articles/vp2/vp2k.html
Sudweeks, F., & Ess, C. (Eds.). (2004). Cultural attitudes towards technology and communication. Perth: Murdoch University.
Suliman, A. M., & Abdulla, M. H. (2005). Towards a high-performance workplace: Managing corporate climate and conflict. Management Decision, 43(5), 720-733.
Sullins, J. P. (2005). Artificial intelligence. In C. Mitcham (Ed.), Encyclopedia of science, technology and ethics (rev. ed.). MacMillan Reference Books.
Sullins, J. P. (2006). Ethics and artificial life: From modeling to moral agents. Ethics and Information Technology, 7, 139-148.
Sullins, J. P. (2006, December). When is a robot a moral agent? International Review of Information Ethics, 6, 23-30. Retrieved from http://www.i-r-i-e.net/
Sullins, J. P. (2008). Friends by design: A design philosophy for personal robotics technology. In P. E. Vermaas, P. Kroes, A. Light, & S. A. Moore (Eds.), Philosophy and design: From engineering to architecture. Dordrecht: Springer.
Sullivan, B. (2006, October 19). 'La difference' is stark in EU, U.S. privacy laws. MSNBC. Retrieved from http://www.msnbc.msn.com/id/15221111/
Sullivan, L. (2005, July 18). Apparel maker tags RFID for kid's pajamas. Information Week, p. 26.
Sumner, M., & Hostetler, D. (2002). A comparative study of computer conferencing and face-to-face communications in systems design. Journal of Interactive Learning Research, 13(3), 277-291.
Sunstein, C. (2001). Republic.com. Princeton: Princeton University Press.
Sunstein, C. R. (2003). República.com. Internet, democracia y libertad. Barcelona: Paidós.
Surowiecki, J. (2004). The wisdom of crowds. New York: Doubleday.
Sutter, S. M. (2001). The allure and peril of genetic exceptionalism: Do we need special genetics legislation? Washington University Law Quarterly, 79(3).
Swain, C., & Gilmore, E. (2001). Repackaging for the 21st century: Teaching copyright and computer ethics in teacher education courses. Contemporary Issues in Technology and Teacher Education [Online serial], 1(4). Retrieved May 17, 2007, from http://www.citejournal.org/vol1/iss4/currentpractice/article1.htm
Swanson, S. (2001, August 20). Beware: Employee monitoring is on the rise. InformationWeek, (851), 57-58.
Sweeney, G. (2001). "Fighting for the good cause": Reflections on Francis Galton's legacy to American hereditarian psychology. Independence Square, PA: American Philosophical Society.
Sweeney, L. (2006). Protecting job seekers from identity theft. IEEE Internet Computing, 10(2), 74-78.
Swierczek, F. W., & Onishi, J. (2002). Culture and conflict: Japanese managers and Thai subordinates. Personnel Review, 32(2), 187-210.
Swierstra, T. (1997). From critique to responsibility: The ethical turn in the technology debate. Society for Philosophy and Technology, 3(1). Retrieved January 12, 2007, from http://scholar.lib.vt.edu/ejournal/SPT/v3n1/swierstra.html
Swinyard, W. R., Rinne, H., & Kau, A. K. (1990). The morality of software piracy: A cross-cultural analysis. Journal of Business Ethics, 9(8), 655-664.
Swire, P. P. (1997). Markets, self-regulation, and government enforcement in the protection of personal information. In Privacy and self-regulation in the information age (pp. 3-20). Washington, DC: US Department of Commerce.
Symons, V., & Walsham, G. (1988). The evaluation of information systems: A critique. Journal of Applied Systems Analysis, 15, 119-132.
Synovate. (2003). Federal Trade Commission identity theft survey report. Retrieved May 20, 2007, from http://www.ftc.gov/os/2003/09/synovatereport.pdf
Takahashi, D. (2007, January 30). Demo: Aggregate Knowledge knows what you want to buy. San Jose Mercury News. Retrieved from www.mercextra.com/blogs/takahashi/2007/01/30/demo-aggregate-knowledge-knows-what-you-want-to-buy/
Takala, T., & Häyry, M. (2000). Genetic ignorance, moral obligations and social duties. Journal of Medicine and Philosophy, 25(1), 107-113.
Takashi, T. (2000). Why Japanese doctors performed human experiments in China 1933-1945. Eubios Journal of Asian and International Bioethics, 10, 179-180.
Tally, G., Sames, D., Chen, T., Colleran, C., Jevans, D., Omiliak, K., & Rasmussen, R. (2006). The Phisherman Project: Creating a comprehensive data collection to combat phishing attacks. Journal of Digital Forensic Practise, 1(2), 115-129.
Tamburrini, C. M. (2000). What's wrong with doping? In T. Tännsjö & C. Tamburrini (Eds.), Values in sport: Elitism, nationalism, gender equality, and the scientific manufacture of winners. London: E & FN Spon.
Tang, J., & Fam, C. (2005). The effect of interpersonal influence on softlifting intention and behavior. Journal of Business Ethics, 56, 149-161.
Tan-Ud, P. (2002, July 6). Community involvement in the Thai phase III trial. Thai Network of People Living with HIV/AIDS. 2002 International AIDS Vaccine Initiative satellite meeting prior to the XIV International AIDS Conference, Barcelona.
Tassicker, R., Savulescu, J., Skene, L., Marshall, P., Fitzgerald, L., & Delatycki, M. B. (2003). Prenatal diagnosis requests for Huntington's disease when the father is at risk and does not want to know his genetic status: Clinical, legal, and ethical viewpoints. BMJ, 326, 331.
Tate, T. (1990). Child pornography: An investigation. London: Methuen.
Tavales, S., & Skevoulis, S. (2006). Podcasts: Changing the face of e-learning. Retrieved April 29, 2007, from http://ww1.ucmss.com/books/LFS/CSREA2006/SER4351.pdf
Tavani, H. T. (2004). Ethics and technology: Ethical issues in an age of information and communication technology. Hoboken, NJ: John Wiley & Sons.
Tavani, H. T. (2005). Locke, intellectual property rights, and the information commons. Ethics and Information Technology, 7, 87-97.
Taylor, C. (1975). Hegel. Cambridge University Press.
Taylor, C. (1984). Foucault on freedom and truth. Political Theory, 12(2), 152-183.
Taylor, C. (1989). Sources of the self: The making of the modern identity. Cambridge, MA: Harvard University Press.
Taylor, G. (2004). The Council of Europe cybercrime convention: A civil liberties perspective. Electronic Frontier Australia. Retrieved from http://www.crimeresearch.org/library/CoE_Cybercrime.html
Taylor, H. A. (1999). Barriers to informed consent. Seminars in Oncology Nursing, 15, 89-95.
Taylor, J. V. (1963). The primal vision: Christian presence amid African religion. London: S.C.M. Press.
Taylor, M., Holland, G., & Quayle, E. (2001). Typology of paedophile picture collections. Police Journal, 74(2), 97-107.
Te Kulve, H. (2006). Evolving repertoires: Nanotechnology in daily newspapers in the Netherlands. Science as Culture, 15(4), 367-382.
TechWeb. (2004). Dictionary main page. Retrieved November 15, 2004, from http://www.techweb.com
Tedeschi, B. (2003, July 8). Pop-up ads provoke a turf battle over Web rights. International Herald Tribune, p. 15.
Teicher, S. (2003, December 22). It's 2 a.m. Do you know where your workers are? The Christian Science Monitor, p. 14.
Teichler-Zallen, D. (1992). Les nouveaux tests génétiques et leurs conséquences morales. In F. Hubert & G. Gros (Eds.), Vers un anti-destin? Patrimoine génétique et droits de l'humanité. O. Jacob.
Templeton, A. R. (1999). Human races: A genetic and evolutionary perspective. American Anthropologist, 100(3), 632-650.
Templeton, A. R. (2002). Out of Africa again and again. Nature, 416(7), 45-51.
Templeton, A. R. (2005). Haplotype trees and modern human origins. Yearbook of Physical Anthropology, 48, 33-59.
Templeton, S.-K. (2003, September 21). Spare embryos 'should be donated to infertile couples'. The Sunday Herald. Retrieved May 25, 2007, from http://findarticles.com/p/articles/mi_qn4156/is_20030921/ai_n12584815
ten Have, H. (2001). Genetics and culture: The geneticization thesis. Medicine, Health Care and Philosophy, 4(3), 294-304.
ten Have, H. (2006). The activities of UNESCO in the area of ethics. Kennedy Institute of Ethics Journal, 16(4), 333-352.
Teschler-Nicola, M. (2004). The diagnostic eye: On the history of genetic and racial assessment in pre-1938 Austria. Collegium Anthropologicum, 28(Supplement 2), 7-29.
Teston, G. (2002). A developmental perspective of computer and information technology ethics: Piracy of software and digital music by young adolescents. Dissertation Abstracts, 62, 5815.
Thailand introduces national ID with biometric technology. (2007). Retrieved May 9, 2007, from http://www.secureidnews.com/weblog/2005/04/15/thailand-introduces-national-id-with-biometric-technology/
The Australian. (2007). YouTube banned from schools in bullying crackdown. The Australian online. Retrieved May 17, 2007, from www.theaustralian.news.com.au/story/0,20867,21306297-5006785,00.html
The Belmont Report, OHSR. (1979). Ethical principles and guidelines for protection of human subjects. Department of Health, Education, and Welfare, USA.
The Centre for Missing and Exploited Children. (2007). http://www.cybertipline.com/
The Copyright Ordinance, 1962 (of Pakistan). Retrieved November 11, 2005, from http://www.pakistanlaw.com/Copyright_Ordinance_1962.php
The Council for International Organizations of Medical Sciences (CIOMS) in collaboration with the World Health Organization. (2002). International ethical guidelines for biomedical research involving human subjects. Retrieved March 10, 2007, from http://www.cioms.ch/frame_guidelines_nov_2002.htm
The Dalai Lama. (1996). The Buddha nature: Death and eternal soul in Buddhism. Woodside, CA: Bluestar Communications.
The Economist. (2005). Dodgy software piracy data. Retrieved March 21, 2006, from http://www.economist.com/research/articlesBySubject/displayStory.cfm?story_ID=3993427&subjectid=1198563
The Honeynet Project. (2004). Know your enemy (2nd ed.). Boston: Addison-Wesley.
The Lancet. (1997). The ethics industry. The Lancet (editorial), 350(9082).
The Lectric Law Library. Lexicon on intellectual property. Retrieved May 17, 2007, from http://www.lectlaw.com/def/i051.htm
The Linux Information Project (LINFO). (2006). The "software piracy" controversy. Retrieved February 27, 2006, from http://www.bellevuelinux.org/software_piracy.html
The National Bioethics Committee. (1998). UNESCO, Division of the Ethics of Science and Technology, Paris, 20 May.
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved from http://ohsr.od.nih.gov/guidelines/belmont.html
The Nuremberg Code. (1949). Trials of war criminals before the Nuremberg military tribunals under Control Council Law No. 10. US Government Printing Office.
The Ontario Safe Schools Act: School discipline and discrimination. (2003). Ontario Human Rights Commission.
The participants in the 2001 Conference on Ethical Aspects of Research in Developing Countries. (2004). Moral standards for research in developing countries: From reasonable availability to fair benefits. The Hastings Center Report, 34(3), 17-27.
The President's Council on Bioethics. (2004). Reproduction and responsibility: The regulation of reproductive technologies. Washington, DC: President's Council on Bioethics.
The U.S. President's Council on Bioethics. (2002). Session 4: Enhancement 2: Potential for genetic enhancements in sports. Retrieved from http://www.bioethics.gov/200207/Session4.html
The Word Spy. (2001). Entry for "parasitic computing" (posted Dec. 6, 2001). Retrieved September 9, 2003, from http://www.wordspy.com/words/parasiticcomputing.asp
The World Medical Association. (2004). Declaration of Helsinki: Ethical principles for medical research involving human subjects. Tokyo: WMA.
The World Medical Association. (2005). Medical ethics manual: Ethics in medical research (ch. 5). WMA.
Thiam, T. (1999, July). Neural networks. International Joint Conference on Neural Networks, Vol. 6 (pp. 4428-4431).
Thomas, D. H. (2000). The skull wars: Kennewick Man, archaeology, and the battle for Native American identity. New York: Basic Books.
Thomas, K. (1976). Conflict and conflict management. In Handbook of industrial and organizational psychology. Chicago: Rand McNally College Publishing Company.
Thomasma, D., & Pellegrino, E. (1981). Philosophy of medicine as the source for medical ethics. Theoretical Medicine and Bioethics, 2(1), 5-11.
Thompson, P. (2001). Privacy, secrecy and security. Ethics and Information Technology, 3, 13-19.
Thompson, R. (2005). Why spyware poses multiple threats to security. Communications of the ACM, 48(8), 41-43.
Thomson, J. A., Itskovitz-Eldor, J., Shapiro, S. S., Waknitz, M. A., Swiergiel, J. J., Marshall, V. S., & Jones, J. M. (1998). Embryonic stem cell lines derived from human blastocysts. Science, 282, 1145-1147.
Thomson, J. J. (1975). The right to privacy. Philosophy & Public Affairs, 4(4), 295-314.
Thorseth, M. (2006). Worldwide deliberation and public reason online. Ethics and Information Technology, 8, 243-252.
Tinker, R. (2000, September 11-12). Ice machines, steamboats, and education: Structural change and educational technologies. Paper presented at The Secretary's Conference on Educational Technology. Retrieved May 17, 2007, from http://www.ed.gov/rschstat/eval/tech/techconf00/tinkerpaper.pdf
Tobar-Arbulu, J. F. (1984). Plumbers, technologists, and scientists. Research in Philosophy and Technology, 7, 5-17.
Tobar-Arbulu, J. F. (n.d.). Technoethics. Retrieved September 23, 2007, from http://www.euskomedia.org/PDFAnlt/riev/3110811103.pdf
Toffler, A. (1970). Future shock. New York: Random House.
Toffler, B. (1991). Doing ethics: An approach to business ethics consulting. Moral Education Forum, 16(4), 14-20.
Toulmin, S. (1958). The uses of argument. Cambridge, UK: Cambridge University Press.
Toulmin, S., Rieke, R., & Janik, A. (1984). An introduction to reasoning (2nd ed.). New York: Macmillan.
Touriñan, J. M. (2004). La educación electrónica: un reto de la sociedad digital en la escuela. Revista Española de Pedagogía, 227, 5-30.
Towell, E., Thompson, J. B., & McFadden, K. L. (2004). Introducing and developing professional standards in the information systems curriculum. Ethics and Information Technology, 6, 291-299.
Towns, C. R., & Jones, D. G. (2004). Stem cells, embryos, and the environment: A context for both science and ethics. Journal of Medical Ethics, 30(4), 410-413.
Towns, C. R., & Jones, D. G. (2004). Stem cells: Public policy and ethics. New Zealand Bioethics Journal, 5, 22-28.
Tri-Council of Canada. (1998). Tri-Council policy statement: Ethical conduct for research involving humans. Ottawa: Interagency Secretariat on Research Ethics on behalf of the Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and the Social Science and Humanities Research Council of Canada.
Trotter, A. (2006). Social-networking web sites pose growing challenge for educators. Education Week, 25(23), 8-9.
Trout, J. (2005). Paternalism and cognitive bias. Law and Philosophy, 24, 393-434.
Tuchman, G. (1978). Making news. New York: Free Press.
Tucker, A. B. (Ed. and Co-chair), Barnes, B. H. (Co-chair), Aiken, R. M., Barker, K., Bruce, K. B., Cain, J. T., Conry, S. E., Engel, G. L., Epstein, R. G., Lidtke, D. K., Mulder, M. C., Rogers, J. B., Spafford, E. H., & Turner, A. J. (1990, December 17). Computing Curricula 1991: Report of the ACM/IEEE-CS Joint Curriculum Task Force. Published in 1991 by New York: ACM Press; Los Alamitos, CA: IEEE Computer Society Press. Web: http://www.acm.org/education/curr91/homepage.html
Turkle, S. (1984). The second self: Computers and the human spirit. New York: Simon & Schuster, Inc.
Turnbull, P. W., & Wilson, D. (1989). Developing and protecting profitable customer relationships. Industrial Marketing Management, 18(1), 1-6.
Turnbull, P. W., Ford, D., & Cunningham, M. (1996). Interaction, relationships of networks in business markets: An evolving perspective. Journal of Business & Industrial Marketing, 11(3/4), 4-62.
Turnitin. (2007). Turn it in. Retrieved January 12, 2007, from http://www.turnitin.com/static/home.html
Tversky, A., & Kahneman, D. (1984). Choices, values and frames. American Psychologist, 39, 341-350.
Twenge, J. M. (2001). Birth cohort changes in extraversion: A cross-temporal meta-analysis, 1966-1993. Personality and Individual Differences, 30(5), 735-748.
Twenge, J. M. (2006). Generation me: Why today's young Americans are more confident, assertive, entitled-and more miserable than ever before. New York: Free Press.
Tyack, D. (1967). Turning points in American educational history. Waltham, MA.
U.S. Department of Justice, National Institute of Justice. (1989). In J. T. McEwen (Ed.), Dedicated computer crime units. Washington, DC: U.S. Government Printing Office.
U.S. Department of Justice, National Institute of Justice. (1989). In D. B. Parker (Ed.), Computer crime: Criminal justice resource manual (2nd ed.). Washington, DC: U.S. Government Printing Office.
U.S. Congress. (2004). Location Privacy Protection Act of 2001. Accessed October 12, 2004, through title search from http://thomas.loc.gov
U.S. Department of State International Information Programs. Frequently asked questions about biotechnology. USIS Online. Available from http://usinfo.state.govt/
Ubuntu. Umuntu ngumuntu ngabantu: A South African peace lesson. Social Studies, Grades 7-12. http://www.ivow.net/ubuntu.html
Ulrich, W. (1983). Critical heuristics of social planning: A new approach to practical philosophy. Berne: Haupt.
Ulrich, W. (2003). Beyond methodology choice: Critical systems thinking as critically systemic discourse. Journal of the Operational Research Society, 54(4), 325-342.
UNAIDS. (1999). Gender and HIV/AIDS: Taking stock of research programmes. Joint United Nations Programme on HIV/AIDS, Geneva, Switzerland. http://www.unaids.org
UNAIDS. (2000). Ethical considerations in HIV preventive vaccine research. Retrieved June 12, 2007, from http://data.unaids.org/Publications/IRC-pub01/JC072EthicalCons_en.pdf
UNAIDS. (2006). Report on global AIDS epidemic, executive summary. A UNAIDS 10th anniversary special edition. http://data.unaids.org
UNAIDS/WHO. (2002, December). AIDS epidemic update. Geneva: UNAIDS/WHO. ISBN 92-9173-253-2.
Underwood, J., & Szabo, A. (2003). Academic offences and e-learning: Individual propensities in cheating. British Journal of Educational Technology, 34(4), 467-477.
UNESCO. (1997). Universal declaration on the protection of the human genome and human rights. UNESCO.
UNESCO. (2005). Universal declaration on bioethics and human rights. UNESCO.
UNESCO. (2005). Retrieved May 5, 2005, from http://www.unesco.org/webworld/observatory/in_focus/010702_telemedicine.shtml
UNESCO. (2006). The ethics and politics of nanotechnology. Paris, France: UNESCO. http://unesdoc.unesco.org/images/0014/001459/145951e.pdf
UNESCO. Ethics home page. http://www.unesco.org/ethics
Unger, S. (1982). Controlling technology: Ethics and the responsible engineer. NY: Holt, Rinehart and Winston.
United Kingdom. (2006). Terrorism Act 2006. Retrieved May 17, 2007, from http://www.homeoffice.gov.uk/security/terrorism-and-the-law/
United Nations. (1948). Universal declaration of human rights. Retrieved May 17, 2007, from http://www.un.org/Overview/rights.html
United Nations. (1999). Handbook on justice for victims: On the use and application of the Declaration of Basic Principles of Justice for Victims of Crime and Abuse of Power. New York: Centre for International Crime Prevention.
United Nations. (2003). The Global Compact: Corporate citizenship in the world economy. New York: Global Compact Office.
United Nations. (2006, November 14). Draft UN convention on justice and support for victims of crime and abuse of power. Retrieved June 6, 2007, from http://www.tilburguniversity.nl/intervict/undeclaration/convention.pdf
United States Department of Health and Human Services. (2005). Code of Federal Regulations. Title 45: Public Welfare, Part 46: Protection of Human Subjects. www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm
United States Department of Treasury. (2005). The use of technology to combat identity theft. Report on the study conducted pursuant to Section 157 of the Fair and Accurate Credit Transactions Act of 2003. Retrieved May 20, 2007, from http://www.ustreas.gov/offices/domestic-finance/financial-institution/cip/biometrics_study.pdf
United States Government Federal Trade Commission. (2000). Children's Online Privacy Protection Act (COPPA). Retrieved May 17, 2007, from http://www.coppa.org/
United States of America. (2001). The USA Patriot Act. Retrieved May 17, 2007, from http://www.lifeandliberty.gov/highlights.htm
Universal Declaration on Bioethics and Human Rights. (2005). Retrieved March 10, 2007, from http://portal.unesco.org/en
University of Idaho. (Accessed 2002, October 2). http://www.uidaho.edu/evo
University of North Dakota. (Accessed 2002, October 2). http://gocubs.conted.und.nodak.edu/cedp/
University of Tasmania. (2007). Faculty of Education unit outline. Launceston: University of Tasmania.
University of Washington. (1996). Recommended core ethical values. Retrieved from http://www.engr.washington.edu/~uw-epp/Pepl/Ethics/ethics3.html
Unschuld, P. U. (1979). Medical ethics in Imperial China. Berkeley: University of California Press.
UPI. (2006, May 11). Report: NSA tracking all U.S. phone calls. GOPUSA.com. Retrieved May 11, 2006, from http://www.gopusa.com/news/2006/May/0511_nsa_phonesp.shtm
Urbach, R. R., & Kibel, G. A. (2004). Adware/spyware: An update regarding pending litigation and legislation. Intellectual Property and Technology Law Journal, 16(7), 12-17.
Urban, W., & Wagoner, J. (2000). American education: A history (2nd ed.). Boston, MA: McGraw-Hill Higher Education.
Urbas, G., & Krone, T. (2006). Mobile and wireless technologies: Security and risk factors. Trends & Issues in Crime and Criminal Justice, No. 329. Canberra: Australian Institute of Criminology.
US Department of Justice. (1998). New directions from the field: Victims' rights and services for the 21st century. Retrieved May 29, 2007, from http://www.ojp.usdoj.gov/ovc/new/directions/
US National Bioethics Advisory Commission. Ethical and policy issues in international research: Clinical trials in developing countries (Vol. I, Report and recommendations: Executive summary, pp. i-xv).
Utsumi, T. (2005). Tougoushityousyou no seishinryouhou kanousei ni tsuite [About the possibility of psychiatric treatment for schizophrenia]. Seishin-Ryouhou, 31(1), 9-18.
Vaas, L. (2006, January 25). Government sticks its fingers deeper into your data pie. eWeek Magazine. Retrieved from http://www.eweek.com/print_article2/0,1217,a=170019,00.asp
Valero-Silva, N. (1996). A Foucauldian reflection on critical systems thinking. In R. L. Flood & N. Romm (Eds.), Critical systems thinking: Current research and practice (pp. 63-79). London: Plenum.
Valkenburg, P. M., Jochen, P., & Schouten, A. P. (2006). Friend networking sites and their relationship to adolescents' well-being and social self-esteem. CyberPsychology & Behaviour, 9(5), 584-590.
Van Den Hoven, J. (1995). Equal access and social justice: Information as a primary good. In ETHICOMP95: An International Conference on the Ethical Issues of Using Information Technology. Leicester, UK: De Montfort University.
Van der Burg, S., & van de Poel. (2005). Teaching ethics and technology with Agora, an electronic tool. Science and Engineering Ethics, 11, 277-297.
van der Meulen, N. (2006). The challenge of countering identity theft: Recent developments in the United States, the United Kingdom, and the European Union. Report commissioned by the National Infrastructure Cyber Crime program (NICC). Retrieved May 20, 2007, from http://www.tilburguniversity.nl/intervict/publications/NicolevanderMeulen.pdf
Van der Sluijs, J. (2006). Uncertainty, assumptions, and value commitments in the knowledge base of complex environmental problems. In A. Guimaraes, S. Guedes, & S. Tognetti (Eds.), Interfaces between science and society (pp. 67-84). Sheffield, UK: Greenleaf Publishing.
Van der Weele, C. (1995). Images of development: Environmental causes in ontogeny. Albany: State University of New York Press.
van Dijk, J. A. G. M. (2005). The deepening divide: Inequality in the information society. Thousand Oaks, CA: Sage.
Van Gigch, J. (1974). Applied general systems theory. NY: Harper and Row.
van Gundy, A. (1992). Idea power. NY: AMACOM.
Van Rooy, W., & Pollard, I. (2002). Teaching and learning about bioscience ethics with undergraduates. Education & Health, 15(3), 381-385.
Van Rooy, W., & Pollard, I. (2002). Stories from the bioscience ethics classroom: Exploring undergraduate students' perceptions of their learning. Eubios Journal of Asian & International Bioethics, 12, 26-30.
Varela, F. J., & Poerksen, B. (2006). "Truth is what works": Francisco J. Varela on cognitive science, Buddhism, the inseparability of subject and object, and the exaggerations of constructivism - a conversation. The Journal of Aesthetic Education, 40(1), 35-53.
Vazquez, R. (1999). A discussion of community concerns. National Cooperative Vaccine Development Groups for AIDS Meeting, August 1992. Original: AIDS Research and Human Retroviruses, 9(1). Ann Liebert Inc., Publishers.
Vedder, A., & Wachbroit, R. (2003). Reliability of information on the Internet: Some distinctions. Ethics and Information Technology, 5, 211-215.
Vega-Romero, R. (1999). Care and social justice evaluation: A critical and pluralist approach. Hull: University of Hull.
Velasco, V. (2000). Introduction to IP spoofing. Retrieved September 9, 2003, from http://www.giac.org/practical/gsec/Victor_Velasco_GSEC.pdf
Venable, B., & Venable, H. (1997, July). Workplace labor update. New York: Venable.
Veregin, H. (1995). Computer innovation and adoption in geography: A critique of conventional technological models. In J. Pickles (Ed.), Ground truth (pp. 88-112). London: The Guilford Press.
Vidarsdottir, U. S., O'Higgins, P., & Stringer, C. (2002). A geometric morphometric study of regional differences in the ontogeny of the modern human facial skeleton. Journal of Anatomy, 201(3), 211-229.
Virilio, P. (1995). La vitesse de libération. Paris: Galilée.
Virilio, P. (1995, August). Speed and information: Cyberspace alarm! CTHEORY.
Virilio, P. (2002). Desert screen: War at the speed of light. London and New York: Continuum.
Visala, S. (1993). An exploration into the rationality of argumentation in information systems research and development. Unpublished doctoral dissertation, University of Oulu.
Visala, S. (1996). Interests and rationality of information systems development. Computers and Society, 26(3), 19-22.
Viseu, A. (2003). Simulation and augmentation: Issues of wearable computers. Ethics and Information Technology, 5, 17-26.
Vlek, C. (1987). Risk assessment, risk perception and decision making about courses of action involving genetic risks. Birth Defects Original Articles Series, 23(2), 171-207.
Volman, M., & van Eck, E. (2001). Gender equity and information technology in education: The second decade. Review of Educational Research, 71(4), 613-634.
von Neumann, J., & Morgenstern, O. (1947). Theory of games and economic behaviour (2nd ed.). Princeton University Press.
von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. Cambridge: Cambridge University Press.
Vries, M. J. de (2005). Teaching about technology: An introduction to the philosophy of technology for non-philosophers. Dordrecht: Springer.
Vries, M. J. de (2005). 80 years of research at the Philips Natuurkundig Laboratorium, 1914-1994. Amsterdam: Amsterdam University Press.
Vries, M. J. de (2006). Technological knowledge and artifacts: An analytical view. In J. R. Dakers (Ed.), Defining technological literacy: Towards an epistemological framework (pp. 17-30). New York: MacMillan.
Vrij, A. (2000). Detecting lies and deceit: The psychology of lying and the implications for professional practice. Chichester, UK: Wiley.
Wachbroit, R. S. (1998). The question not asked: The challenge of pleiotropic genetic tests. Kennedy Institute of Ethics Journal, 8(2), 131-144.
Wahlsten, D. (1997). Leilani Muir versus the philosopher king: Eugenics on trial in Alberta. Genetica, 99(2/3), 185-198.
Waite, M., Hawker, S., & Soanes, C. (2001). Oxford dictionary, thesaurus and wordpower guide. Oxford, UK: Oxford University Press.
Wakefield, J., & Dreyfus, H. (1990). Intentionality and the phenomenology of action. In E. Lepore & R. van Gulick (Eds.), John Searle and his critics. Oxford: Blackwell.
Waks, L. J., & Barchi, B. A. (1992). STS in U.S. school science: Perceptions of selected leaders and their implications for STS education. Science Education, 76, 79-90.
Walch, R., & Lafferty, M. (2006). Tricks of the podcasting masters. Que.
Walch, Sr. M. R. (1952). Pestalozzi and the Pestalozzian theory of education. Washington, DC: Catholic University Press.
Waldby, C. (2002). Stem cells, tissue cultures and the production of biovalue. Health, 6(3), 305-323. Waldron, A. M., Spencer, D., & Batt, C. A. (2006). The current state of public understanding of nanotechnology. Journal of Nanoparticle Research, online publication. Walker, W., Harremoës, P., Rotmans, J., Van der Sluijs, J.P., Van Asselt, M., Janssen, P., & Krayer Von Krauss, M. (2003). Defining uncertainty. A conceptual basis for uncertainty management in model-based decision support. Integrated Assessment, 4(1), 5-17. Wall, D. S. (2004). Digital realism and the governance of spam as cybercrime. European Journal on Criminal Policy and Research, 10, 309-335. Wall, D. S. (2005). The Internet as a conduit for criminal activity. In A. Pattavina (Ed.), Information technology and the criminal justice system (pp. 78-94), Thousand Oaks, CA: Sage. Wall, D. S. (2007). Policing cybercrimes: Situating the public police in networks of security within cyberspace. Police Practice and Research, 8, 183-205. Wallace, W.A. & Fleischmann, K.R. 2005. Models and modeling. In Carl Mitcham (ed.), Encyclopedia of science, technology, and ethics. Macmillan Reference. Wallace, W.A. (Ed.) (1994). Ethics in modeling. New York: Elsevier Science. Waller, J.M. (2002, December 24). Fears mount over ‘total’ spy system: civil libertarians and privacy-rights advocates are fearful of a new federal database aimed at storing vast quantities of personal data to identify terrorist threats—Nation: homeland security. Insight Magazine. Retrieved from http://www.findarticles. com/p/articles/mi_m1571/is_1_19/ai_95914215 Walsham, G. (1993). Interpreting information systems in organisations. Chichester, UK: John Wiley and Sons. Walsham, G. (1999). Interpretive evaluation design for information systems. In L. Willcocks & S. Lester (Eds.), Beyond the IT productivity paradox (pp. 363-380). Chichester, UK: John Wiley and Sons.
Walsham, G., & Waema, T. (1994). Information systems strategy and implementation: A case study of a building society. ACM Transactions on Information Systems, 12(2), 150-173.
Walters, M. (1978). The nude male: A new perspective. New York: Paddington.
Walther, J. (1995). Relational aspects of computer-mediated communication: Experimental observations over time. Organization Science, 6(2), 186-203.
Walton, D. (1985). Physician-patient decision-making: A study in medical ethics. Westport, CT: Greenwood Press.
Walton, M., & Vukovic, V. (2003). Cultures, literacy, and the Web: Dimensions of information “scent.” Interactions, 10, 64-71.
Wang, Q., Štrkalj, G., & Sun, L. (2002). On the concept of race in Chinese biological anthropology: Alive and well. American Anthropologist, 43(2), 403.
Wang, Q., Štrkalj, G., & Sun, L. (2002). The status of the race concept in Chinese biological anthropology. Anthropologie, 40(1), 95-98.
Wang, F., Zhang, H., Zang, H., & Ouyang, M. (2005). Purchasing pirated software: An initial examination of Chinese consumers. Journal of Consumer Marketing, 22(6), 340-351.
Ward, J., & Peppard, J. (1996). Reconciling the IT/business relationship: A troubled marriage in need of guidance. Journal of Strategic Information Systems, 5, 37-65.
Ward, L. M. (2003). Understanding the role of the entertainment media in the sexual socialization of American youth: A review of empirical research. Developmental Review, 23, 347-388.
Warne, L. (1997). Organizational politics and project failure: A case study of a large public sector project. Failures and Lessons Learned in Information Technology Management, 1(1), 57-65.
Warneke, B., Last, M., Liebowitz, B., & Pister, K.S.J. (2001). Smart dust: Communicating with a cubic-millimeter computer. Computer, 34(1), 44-51.
Warneke, B.A., & Pister, K.J.S. (2004, February 16-18). An ultra-low energy microcontroller for smart dust wireless sensor networks. In Proceedings of the Int’l Solid-State Circuits Conf. 2004 (ISSCC 2004), San Francisco. Retrieved June 26, 2007, from www-bsac.eecs.berkeley.edu/archive/users/warneke-brett/pubs/17_4_slides4.pdf
Warren, J., & Vara, V. (2006). New Facebook features have members in an uproar. Wall Street Journal (Eastern Edition), B1-35.
Warren, S., & Brandeis, L. D. (1890). The right to privacy. Harvard Law Review, 4, 193-220.
Warschauer, M. (2003). Technology and social inclusion: Rethinking the digital divide. Cambridge, MA: MIT Press.
Warwick, K., Gasson, M., Hutt, B., Goodhew, I., Kyberd, P., Andrews, B., Teddy, P., & Shad, A. (2003). The application of implant technology for cybernetic systems. Archives of Neurology, 60(10), 1369-1373.
Warwick, K., Gasson, M., Hutt, B., Goodhew, I., Kyberd, P., Schulzrinne, H., & Wu, X. (2004). Thought communication and control: A first step using radiotelegraphy. IEE Proceedings on Communications, 151(3), 185-189.
Washburn, S.L. (1963). The study of race. American Anthropologist, 65, 521-531.
Watson, M., & McMahon, M. (2006). My system of career influences: Responding to challenges facing career education. International Journal for Educational and Vocational Guidance, 6, 159-166.
WCED (1987). Our common future. Report of the World Commission on Environment and Development (WCED) (pp. 323-333). New York: Oxford University Press.
Weber, E. (1969). The kindergarten: Its encounter with educational thought in America. New York: Teachers College Press.
Weber, J.A. (2007). Business ethics training: Insights from learning theory. Journal of Business Ethics, 70, 61-85.
Weber, M. (1994). Objectivity and understanding in economics. In D. M. Hausman (Ed.), The philosophy of economics: An anthology (2nd ed.) (pp. 69-82). Cambridge: Cambridge University Press.
Website for the Massachusetts Institute of Technology Institute for Soldier Nanotechnologies. Found at www.mit.edu/isn
Weckert, J. (1997). Intellectual property rights and computer software. Journal of Business Ethics, 6(2), 101-109.
Weckert, J. (Ed.). (2005). Electronic monitoring in the workplace: Controversies and solutions. Hershey, PA: Idea Group Publishing.
Wei, J., & Hemmings, G. P. (2000). The NOTCH4 locus is associated with susceptibility to schizophrenia. Nature Genetics, 25, 376-377.
Weidenreich, F. (1943). The skull of Sinanthropus pekinensis: A comparative study on a primitive hominid skull. Acta Anthropologica Sinica, 5, 243-258.
Weijer, C., Goldsand, G., & Emanuel, E.J. (1999). Protecting communities in research: Current guidelines and limits of extrapolation. Nature America Inc. (Commentary). http://www.nhrmrc.gov.au
Weijer, C., & Anderson, J.A. (2001). The ethics wars: Disputes over international research. The Hastings Center Report, 31(3), 18-20.
Weil, V. (2001). Ethical issues in nanotechnology. In M.C. Roco & W.S. Bainbridge (Eds.), Societal implications of nanoscience and nanotechnology (pp. 244-251). Dordrecht, Netherlands: Kluwer.
Weimann, G. (2005). Cyberterrorism: The sum of all fears? Studies in Conflict & Terrorism, 28(2), 129-149.
Wiener, N. (1954). The human use of human beings: Cybernetics and society (2nd ed., revised). New York: Doubleday Anchor.
Weinman, J., & Haag, P. (1999). Gender equity in cyberspace. Educational Leadership, 56(5), 44-49.
Weiser, M. (1993). Some computer-science issues in ubiquitous computing. Communications of the ACM, 36(7), 75-84.
Weiser, M. (1999). The spirit of the engineering quest. Technology in Society, 21, 355-361.
Weiss, C. (1970). The politicization of evaluation research. Journal of Social Issues, 26(4), 57-68.
Weiss, C. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21-34.
Weiss, R. (2003, May 8). 400,000 human embryos frozen in U.S. The Washington Post. Retrieved May 25, 2007, from http://www.washingtonpost.com/ac2/wp-dyn?pagename=article&contentId=A27495-2003May7
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. New York: Freeman.
Wernick, A.S. (2006, December). Data theft and state law. Journal of American Health Information Management Association, 40-44.
Wertz, D.C., Sorenson, J.R., & Heeren, T.C. (1986). Clients’ interpretation of risks provided in genetic counseling. American Journal of Human Genetics, 39, 253-264.
Wertz, S. K. (1981). The varieties of cheating. Journal of the Philosophy of Sport, VIII, 19-40.
West, A. C., Smith, C. D., Detwiler, D. J., & Kaman, P. (2005, November). Trends in technology (AGA CPAG Research Series, Report #3).
West, D. (1998). Radiation experiments on children at the Fernald and Wrentham schools: Lessons for protocols in human subject research. Accountability in Research, 6(1-2), 103-125.
Westervelt, R. (2007, February 23). Data breach law could put financial burden on retailers. SearchSecurity.com.
Westin, A. (1967). Privacy and freedom. London: Bodley Head.
Weston, A. (1997). A practical companion to ethics. New York: Oxford University Press.
WhatIs. (2004). Dictionary main page. Retrieved March 5, 2004, from http://www.whatis.com
Wheeler, D. (2005, November 19-24). Digital politics meets authoritarianism in the Arab world: Results still emerging from Internet cafes and beyond. Paper presented at the Middle Eastern Studies Association Annual Meeting, Marriott Wardman Park Hotel, Washington, DC.
Wheeler, J. L., Miller, T. M., Halff, H. M., Fernandez, R., Halff, L. A., Gibson, E. G., & Meyer, T. N. (1999, November 1). Web places: Project-based activities for at-risk youth. Current Issues in Education, 2(6). Available from http://cie.ed.asu.edu/volume2/number6/
White, M. (n.d.). People or representations? Ethics and Information Technology, 4, 249-266.
Whitehead, A. N. (1929). The aims of education and other essays. New York: Macmillan Publishing Company.
Whitehead, B. (2005). Online porn: How do we keep it from our kids? Commonweal, 132, 18.
Whitfield, J. (2001, August 30). Parasite corrals computer power. Nature. Retrieved September 9, 2003, from http://www.nature.com/nsu/010830/010830-8.html
Wichman, A., Kalyan, D. N., Abbott, L. J., Wasley, R., & Sandler, A. L. (2006). Protecting human subjects in the NIH’s Intramural Research Program: A draft instrument to evaluate convened meetings of its IRBs. IRB: Ethics and Human Research, 28(3), 7-10.
Wiener, N. (1948). Cybernetics: Or control and communication in the animal and the machine. Cambridge: The Technology Press.
Wiener, N. (1954). The human use of human beings (2nd ed.). Houghton Mifflin/Doubleday Anchor.
Wiggins, G. (2001). Living with malware. SANS Institute. Retrieved September 9, 2003, from http://www.sans.org/rr/paper.php?id=48
Wikipedia - The free encyclopaedia. (2006). Intellectual property. Retrieved March 16, 2006, from http://en.wikipedia.org/wiki/Intellectual_property
Wikipedia. (2004). Definition of terrorism. Retrieved July 26, 2004, from http://en.wikipedia.org/wiki/Definition_of_terrorism
Wikipedia. (2006). blog. Retrieved 23 Sept, 2006, from http://en.wikipedia.org/wiki/Blog
Wikipedia. (2006). Web 2.0. Retrieved 23 Sept, 2006, from http://en.wikipedia.org/wiki/Web_2
Wikipedia. (2006). wiki. Retrieved 23 Sept, 2006, from http://en.wikipedia.org/wiki/Wiki
Wikipedia. (2007). Wikipedia home page. Retrieved 3 May, 2006, from http://en.wikipedia.org
Wild, R. H., Griggs, K. A., & Downing, T. (2002). A framework for e-learning as a tool for knowledge management. Industrial Management + Data Systems, 102(7), 371-381.
Wilkie, A. (2001). Genetic prediction: What are the limits? Studies in History and Philosophy of Biological and Biomedical Sciences, 32(4), 619-633.
Wilkinson, C., Allan, S., Anderson, A., & Petersen, A. (2007). From uncertainty to risk? Scientific and news media portrayals of nanoparticle safety. Health, Risk and Society, 9(2), 145-157.
William, D., Howell, S., & Hricko, M. (2006). Online assessment, measurement and evaluation: Emerging practices. Hershey, PA: Information Science Publishing.
Williams, M. (2006, March/April). The knowledge: Biotechnology’s advance could give malefactors the ability to manipulate life processes—and even affect human behavior. MIT Technology Review. Retrieved from http://www.technologyreview.com/Biotech/16485/
Williams, N. (1991). False images: Telling the truth about pornography. London: Kingsway Publications.
Wilmut, I., Schnieke, A.E., McWhir, J., Kind, A.J., & Campbell, K.H.S. (1997). Viable offspring derived from fetal and adult mammalian cells. Nature, 385, 810-813.
Wilsdon, J., & Willis, R. (2004). See-through science: Why public engagement needs to move upstream. London: Demos.
Wilson, B. (1984). Systems: Concepts, methodologies, and applications. Chichester, UK: John Wiley and Sons.
Wilson, B. (2002). Soft systems methodology: Conceptual model and its contribution. Chichester, UK: John Wiley and Sons.
Wilson, E.O., & Brown, W.L. (1953). The subspecies concept and its taxonomic application. Systematic Zoology, 2, 97-111.
Winker, M. A., Flanagin, A., Chi-Lum, B., White, J., Andrews, K., Kennett, R. L., et al. (2000). Guidelines for medical and health information sites on the Internet: Principles governing AMA web sites. American Medical Association. JAMA, 283(12), 1600-1606.
Winner, L. (1977). Autonomous technology: Technics-out-of-control as a theme in political thought. Cambridge, MA: MIT Press.
Winner, L. (1992). Technology and democracy. Dordrecht and Boston: Reidel/Kluwer.
Winner, L. (1997). Cyberlibertarian myths and the prospect for community. ACM Computers and Society, 27, 14-19.
Winograd, T., & Flores, F. (1986). Understanding computers and cognition. Norwood, NJ: Ablex Publishing Corporation.
Winschiers, H., & Paterson, B. (2004). Sustainable software development. Proceedings of the 2004 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries (SAICSIT 2004)—Fulfilling the promise of ICT, Stellenbosch, South Africa (pp. 111-115).
Winston, R., & Saunders, S. (1998). Professional ethics in a risky world. In D. Cooper & J. Lancaster (Eds.), Beyond law and policy: Reaffirming the role of student affairs. New Directions for Student Services, 82 (pp. 77-94). San Francisco: Jossey-Bass.
Wired News Staff (2003, May 1). Students fork it over to RIAA. Retrieved May 1, 2003, from http://www.wired.com/news/digiwood/0,1412,58707,00.html
Wiredu, K. (1980). Philosophy and an African culture. London: Cambridge University Press.
Wittgenstein, L. (1979). On certainty. Oxford: Basil Blackwell.
Wolak, J., Finkelhor, D., & Mitchell, K. (2004). Internet-initiated sex crimes against minors: Implications for prevention based on findings from a national study. Journal of Adolescent Health, 35(424), e11-e20.
Wolak, J., Mitchell, K., & Finkelhor, D. (2006). Online victimization of youth: Five years later. Durham, NH: National Center for Missing and Exploited Children.
Wolf, S. (1995). Beyond genetic discrimination: The broader harm of geneticism. American Journal of Law and Medicine, 23, 345-353.
Wolf, S. (1987). Sanity and the metaphysics of responsibility. In F. Schoeman (Ed.), Responsibility, character, and the emotions: New essays in moral psychology (pp. 46-62). Cambridge: Cambridge University Press.
Wolpoff, M., & Caspari, R. (1997). Race and human evolution. New York: Simon & Schuster.
Wolpoff, M., & Caspari, R. (2000). Multiregional, not multiple origins. American Journal of Physical Anthropology, 112(1), 129-136.
Wolpoff, M., Xinzhi, W., & Thorne, A. (1984). Modern Homo sapiens origins: A general theory of hominid evolution involving the fossil evidence from East Asia. In F.H. Smith & F. Spencer (Eds.), The origins of modern humans: A world survey of the fossil evidence (pp. 441-483). New York: Alan R. Liss.
Wong, A., Tjosvold, D., Wong, W. Y. L., & Liu, C. K. (1999). Cooperative and competitive conflict for quality supply partnerships between China and Hong Kong. International Journal of Physical Distribution & Logistics Management, 29(1), 7-21.
Wong, R. (2005). Privacy: Charting its developments and prospects. In M. Klang & A. Murray (Eds.), Human rights in the digital age. London: Cavendish Publishing.
Woo, R. (2006, July 6). Privacy crises in Hong Kong and how the privacy commissioner is dealing with them. Paper presented at the Privacy Laws and Business Conference, Cambridge, England. Retrieved from http://www.pcpd.org.hk/english/files/infocentre/speech_20060705.pdf
Wood, S., Jones, R., & Geldart, A. (2007). Nanotechnology: From the science to the social. A report for the ESRC. http://www.esrc.ac.uk/ESRCInfoCentre/Images/ESRC_Nano07_tcm6-18918.pdf
Wootton, R., & Darkins, A. (1997). Telemedicine and the doctor-patient relationship. Journal of the Royal College of Physicians, 31(6), 598-599.
World Anti-Doping Agency (2005). The Stockholm Declaration. World Anti-Doping Agency.
World Anti-Doping Agency (2003). Prohibited classes of substances and prohibited methods.
World Bank (2004). World development report 2004: Making services work for poor people. New York: Oxford University Press.
World Futures Studies Federation (2003). Home page. Retrieved December 30, 2003, from www.wfsf.org
World Health Organization (1995). Guidelines for good clinical practice for trials on pharmaceutical products. Annex 3 of The use of essential drugs: Sixth report of the WHO Expert Committee. Geneva: WHO.
World Health Organization (2002). Guidance for implementation. In Handbook for good clinical practice (GCP). Geneva: WHO.
World Health Organization (2005). Genetics, genomics and the patenting of DNA: Review of potential implications for health in developing countries. Human Genetics Programme, Chronic Diseases and Health Promotion.
World Internet Usage Statistics (2005). Data on Internet usage and population. Retrieved November 2005, from http://www.internetworldstats.com/stats1.htm
World Medical Association (2000). The World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. Retrieved 2003 from http://www.wma.net/e/policy/17c.pdf
World Wide Web Consortium (1999). Web accessibility initiative page author guidelines. Retrieved May 17, 2007, from http://www.w3.org/TR/WD-WAI-PAGEAUTH
Wray-Bliss, E. (2003). Research subjects/research subjections: Exploring the ethics and politics of critical research. Organization, 10(2), 307-325.
Wright, D., Ahonen, P., Alahuhta, P., Daskala, B., De Hert, P., Delaitre, S., Friedewald, M., Gutwirth, S., Lindner, R., Maghiros, I., Moscibroda, A., Punie, Y., Schreurs, W., Verlinden, M., & Vildjiounaite, E. (2006, August 30). Safeguards in a world of ambient intelligence. Fraunhofer Institute Systems and Innovation Research.
WSIS (2004). The world summit on the information society: Declaration of principles. Retrieved November 2005, from http://www.itu.int/wsis
WTO (2007). Understanding the WTO – Intellectual property: Protection and enforcement. Retrieved May 30, 2007, from http://www.wto.org/english/thewto_e/whatis_e/tif_e/agrm7_e.htm
Wurman, R. S. (2001). Information anxiety 2. Indianapolis, IN: QUE.
Wynne, B. (1995). Public understanding of science. In S. Jasanoff, G. Markle, J. Petersen & T. Pinch (Eds.), Handbook of science and technology studies (pp. 361-380). Thousand Oaks, CA: Sage Publications (in cooperation with the Society for Social Studies of Science).
Wynne, B. (2001, April). Managing scientific uncertainty in public policy. Paper presented at Global Conference: Crisis and Opportunity, Cambridge, MA.
Yager, R. (1990). Science/technology/society movement in the United States: Its origin, evolution, and rationale. Social Education, 54, 198-201.
Yao, C. (1939). Methods of nursing the sick [Bing ren kan hu fa]. Shanghai: Shang wu yin shu guan. In Chinese.
Young, K., & Case, C. (2004). Internet abuse in the workplace: New trends in risk management. CyberPsychology & Behavior, 7(1), 105-111. Retrieved May 17, 2007, from http://www.netaddiction.com/articles/eia_new_trends.pdf
Yar, M. (2006). Cybercrime and society. London: Sage Publications.
Young, K.S. (2005). An empirical examination of client attitudes towards online counselling. Cyberpsychology & Behavior, 8(2), 172-177.
Ybarra, M. L., & Mitchell, K. J. (2004). Youth engaging in online harassment: Associations with caregiver-child relationships, Internet use, and personal characteristics. Journal of Adolescence, 27, 319-336.
Yourdon, E. (2001). Just enough structured analysis. Retrieved December 30, 2003, from www.yourdon.com/books/msa2e/
Ybarra, M. L., Mitchell, K. J., Wolak, J., & Finkelhor, D. (2006). Examining characteristics and associated distress related to Internet harassment: Findings from the Second Youth Internet Safety Survey. Pediatrics, 118(4), e1169-e1177.
Ybarra, M. L., Mitchell, K. J., Finkelhor, D., & Wolak, J. (2007). Internet prevention messages: Targeting the right online behaviours. Archives of Pediatrics & Adolescent Medicine, 161(2), 138-145.
Ybarra, M. (2004). Linkages between depressive symptomatology and Internet harassment among young regular Internet users. Cyberpsychology & Behavior, 7(2), 247-257.
Ybarra, M., & Mitchell, K. (2004). Online aggressor/targets, aggressors, and targets: A comparison of associated youth characteristics. Journal of Child Psychology and Psychiatry, 45(7), 1308-1316.
Yeaman, A. R. J. (2005, March/April). The origins of educational technology’s professional ethics: Part two. TechTrends, 49(2), 14-16.
Yevics, P. (1999). Netiquette – what it is and why should you care? The Maryland State Bar Association, Inc. Retrieved September 28, 2005, from http://www.msba.org/Departments/Ioma/articles/officemngmt/netiquette.htm
Young, I. M. (2000). Inclusion and democracy. Oxford: Oxford University Press.
Youth Voices Research (2007). Cyberhealthliteracy home page. Retrieved July 2008, from http://www.ehealthliteracy.ca
Zack, A. (1989). Grievance arbitration. New York: American Arbitration Association.
Zaikowski, L. A., & Garrett, J. M. (2004). A three-tiered approach to enhance undergraduate education in bioethics. BioScience, 54, 942-949.
Zeldin, S. (2004). Youth as agents of adult and community development: Mapping the process and outcomes of youth engaged in organizational governance. Applied Developmental Science, 8(2), 75-90.
Zeldin, S., Camino, L. A., & Mook, C. (2005). The adoption of innovation in youth organizations: Creating the conditions for youth-adult partnerships. Journal of Community Psychology, 33(1), 121-135.
Zelizer, V. A. (1978). Human values and the market: The case of life insurance and death in 19th-century America. The American Journal of Sociology, 84(3), 591-610.
Zeller, T., Jr. (2005, May 18). Personal data for the taking. The New York Times.
Zhang, B.T., & Seo, Y.W. (2001). Personalized Web-document filtering using reinforcement learning. Applied Artificial Intelligence, 15(7), 665-685.
Zhang, X. (2005). What do consumers really know about spyware? Communications of the ACM, 48(8), 44-48.
Zimmern, R., Emery, J., & Richards, T. (2001). Putting genetics in perspective. British Medical Journal, 322, 1005-1006.
Ziv, O. (1996). Writing to work: How using e-mail can reflect technological and organizational change. In S. C. Herring (Ed.), Computer-mediated communication: Linguistic, social and cross-cultural perspectives. Philadelphia: John Benjamins Publishing.
Zone Alarm (2000, January 26). Essential security for DSL and cable modem users now available with new Internet security utility - ZoneAlarm 2.0. Press release.
Zonghao, B., & Kun, X. (2006). Digitalization and global ethics. Ethics and Information Technology, 8, 41-47.
Zubrin, R. (1996). The case for Mars: The plan to settle the red planet and why we must (pp. 248-249). NY: Simon & Schuster/Touchstone.
Zwicky, F. (1969). Discovery, invention, research – through the morphological approach. Toronto: Macmillan.
About the Contributors
Rocci Luppicini, PhD, is an assistant professor in the Department of Communication at the University of Ottawa in Canada. He has published in a number of areas including virtual learning communities and practice (Quarterly Review of Distance Education), research methodology on online instruction (Journal of Distance Education), issues in higher education, instructional design (Canadian Journal of Learning and Technology), and design research (International Journal of Technology and Design Education). His most recent edited books include Online Learning Communities in Education (2007) and the Handbook of Conversation Design for Instructional Applications (2008).

Rebecca Adell, PhD, is the former coordinator of the Academic Writing Help Centre at the University of Ottawa, Canada, and currently works as the business manager of Eck MacNeely Architects in Boston, Massachusetts. Her dissertation on the history of smoke pollution regulation in nineteenth-century England was rooted in science and technology studies, with particular focus on the connections between technological advance, evolving perceptions of pollution, and the response of government and the courts to the new technological realities of the Industrial Revolution.

***

M. Akbari is a PhD candidate at Amirkabir University of Technology (Tehran, Iran). He received his MSc in polymer blends and polymer recycling from the Isfahan University of Technology (Isfahan, Iran), and his BSc in textiles, with work on the morphology of synthetic fibers. He has published papers on polymer blending and the electrospinning of nanofibers.

Stuart Allan is professor of journalism at Bournemouth University, UK. Previous publications include the authored book Media, Risk and Science (2002) and the co-edited collection Environmental Risks and the Media (2000). Currently, he is co-writing with Alison Anderson, Alan Petersen and Clare Wilkinson the book Nanotechnology, Risk and Communication for Palgrave.

Alison Anderson is reader in sociology at the University of Plymouth, UK. Prior publications include Media, Culture and the Environment (University College London and Rutgers University Press, 1997). Her recent articles on journalistic portrayals of environmental risks, nanotechnologies, genetics and terrorism have appeared in Science Communication, Knowledge, Technology and Society, Genetics and Society and Health, Risk and Society.
Alireza Bagheri received a doctor of medicine degree from Jondi Shapour University of Medical Sciences, Iran and a PhD in medical ethics from the School of Medicine, Tsukuba University in Japan. Dr. Bagheri worked at the Center for Medical Ethics in Tehran, where he served as the executive director of a national project to compile the Iranian National Ethical Codes in medical research. He is a clinical ethics fellow at the Joint Centre for Bioethics, University of Toronto and a visiting scholar at the Center for Clinical Bioethics, Georgetown University.

Keith Bauer is an assistant professor of philosophy at Marquette University. He earned his PhD in philosophy/bioethics in 2002 from the University of Tennessee. From 2000-2001, he served as a fellow at the Institute for Ethics at the American Medical Association in Chicago. He has published in journals such as Ethics and Information Technology, The Cambridge Quarterly for Healthcare Ethics, The American Journal of Bioethics, and Theoretical Medicine and Bioethics. Dr. Bauer also holds a Master of Social Work degree. Present areas of research include electronically mediated communication, the distributive justice implications of ICT, online education, and transhumanism.

Michael Billinger received his PhD in physical anthropology from the University of Alberta in 2006. His research interests include cranial morphology, regional variation, ethnogenesis, forensic anthropology, biological taxonomy and the philosophy of biology. His dissertation, ‘Beyond the Racial Paradigm: New Perspectives on Human Biological Variation’, provides a comprehensive analysis of the ‘race problem’ in anthropology, incorporating these perspectives both theoretically and methodologically. He is currently employed by the Edmonton Police Service in Alberta, Canada. [email: [email protected]]

Arsalan Butt is a doctoral student in Simon Fraser University’s School of Communication. In 2006, he completed an MSc in applied sciences, prior to which he did a Master’s in computer science. He also holds a data manager position at the British Columbia Children’s Hospital. Arsalan’s research focuses on socio-political aspects of technology. He is particularly interested in issues related to the use of ICTs in political communication, intellectual property protection (particularly in developing countries), and health technology. Along with several years of university teaching, he has published in the fields of medicine, health technology and information technology ethics.

Adeel I. Butt is in the last year of his information technology undergraduate program at Simon Fraser University. He has been actively engaged in teaching several college-level computer science and IT courses. Currently he is working in the IT industry with a focus on client support and relationship building.

Jennifer Candor is the special education technology consultant for the Gahanna-Jefferson public school district in Gahanna, Ohio (USA). She has more than 15 years of experience in public education, with the last 11 years in special education. She has been the special education technology consultant for the last five of these years. She is also an adjunct professor for Ashland University, in Ashland, Ohio. In 2007, she presented at the Council for Children with Behavior Disorders (CCBD) Conference in Dallas, Texas on the behavioral benefits of assistive technology – http://www.ccbd.net. Candor holds an MA in education from The Ohio State University, as well as a BA in political science from Capital University.
Rafael Capurro is professor of information management and information ethics at the Hochschule der Medien (HdM) - Stuttgart Media University, Germany. He is the founder of the ICIE (International Center of Information Ethics) and editor-in-chief of IRIE (International Review of Information Ethics). He is a member of the European Group on Ethics in Science and New Technologies (EGE) of the European Commission and Information Ethics Senior Fellow, 2007-2008, at the Center for Information Policy Research, School of Information Studies, UW-Milwaukee, USA. Homepage: http://www.capurro.de

Daniela Cerqui, PhD, is a social and cultural anthropologist involved in the study of the relationship between technology and society and, more fundamentally, humankind. Her research focuses on the development of the new information technologies and the ‘information society’ these technologies are supposed to create.

Helen Yue-lai Chan is a PhD candidate in the School of Nursing, Hong Kong Polytechnic University. She has been involved in various studies since working as a research assistant. Her research interest is in the area of end-of-life care and older people. Her current study investigates how to implement advance care planning among frail elderly people living in long-term care facilities, and its impact on their quality of life.

Matthew Charlesworth, SJ, BBusSc, MCom (Rhodes), was born in 1979 in the UK and grew up in Johannesburg, South Africa. After matriculating from St Stithians College and studying information systems and strategic management at Rhodes University in Grahamstown, South Africa, he joined the Society of Jesus in 2005. His academic interests are in the field of computer ethics and professional/curriculum issues within information systems. He is currently an intern at the Jesuit Institute - South Africa in Johannesburg. You can email Matthew at [email protected]. This work is based on his master’s thesis, which can be found online at http://eprints.ru.ac.za/199/

Alejandra Cortés-Pascual has a PhD in psycho-pedagogy and is a professor of education at the University of Zaragoza, Spain, within the field of research methods and diagnosis in education. Her research work, interests and publications are linked to education in values, ecological theory, educational technology and values, professional guidance and life histories. She is currently coordinating a national project on the media, technologies, ethics and adult education. She belongs to a research group at the University of the Basque Country (research project on values in television), as well as to the Applied Educational Research Group (EtnoEdu) of the University of Zaragoza. She teaches subjects and courses on professional guidance, diagnosis in special education, and evaluation.

J. José Cortez is an adjunct assistant professor in the Computer Science Department at the State University of New York (SUNY) College at Oswego and a PhD student in the Instructional Design, Development, and Evaluation (IDD&E) Department in the School of Education at Syracuse University. Previously, he taught in the College of Education and Human Services at Wright State University, in Fairborn, Ohio. Before entering the field of education, he worked in information technology for the Fortune 500 companies IBM Corporation and Digital Equipment Corporation.

Marc J. de Vries studied physics at the Vrije Universiteit Amsterdam. He is affiliate professor for reformational philosophy of technology at the Delft University of Technology and assistant professor for philosophy and ethics of technology at the Eindhoven University of Technology, both in the Netherlands. Currently, he is the editor-in-chief of the International Journal of Technology and Design Education (published by Springer).

Stefano Fait is 32 and lives in Trento, Italy. He has a degree in political science (Bologna University, 2000) and a PhD in social anthropology (St. Andrews University, Scotland, 2004). He is a member of the Centre for the Anthropological Study of Knowledge and Ethics, a research institute attached to the Department of Social Anthropology at St. Andrews, of which he is also an honorary fellow. He works for “il Trentino”, a local newspaper which is part of one of the leading media companies in Italy. He is a peer reviewer for “Anthropology Matters.” His research interests include political anthropology; human rights and bioethics; racism; immigration; social and historical studies of the biomedical sciences; and disability studies.

Kenneth R. Fleischmann is an assistant professor in the College of Information Studies of the University of Maryland, College Park. He holds degrees in computer science, anthropology, and science and technology studies. His research and teaching interests include information ethics, social informatics, and human-computer interaction. He has published articles in journals such as Communications of the ACM, Journal of the American Society for Information Science and Technology, Library Quarterly, and The Information Society. His ongoing research on the ethical implications of values embedded in information technologies is supported by the National Science Foundation.

Sarah Flicker is an assistant professor in the Faculty of Environmental Studies at York University in Toronto. Previously Sarah was the director of research at the Wellesley Institute. She has a doctorate in social science and health from the University of Toronto’s Department of Public Health Sciences. She was an active member of the TeenNet Research Group throughout her doctoral studies. Her research interests are in the areas of youth health, health promotion, HIV and community-based participatory research. She holds an MPH in maternal and child health from UC Berkeley and an honours degree in anthropology from Brown University. Sarah sits on a number of community boards and believes strongly in community partnerships for research and action.

Gerhard Fortwengel is an associated researcher at the Institute for Medical Law, Human Resources and Health Politics at UMIT. He is currently working on his PhD thesis in the field of medical law and ethics. He received a Master’s degree in public health from Bielefeld University, Germany, as well as a Master’s degree in clinical research from Liverpool John Moores University, UK.

Deb Gearhart recently became the director of eCampus for Troy University. Previously Deb served as the founding director of E-Education Services at Dakota State University in Madison, South Dakota, where she worked for 11 years. Before joining Dakota State she spent 10 years with the Department of Distance Education at Penn State. Deb is an associate professor of educational technology at Dakota State University, teaching at both the undergraduate and graduate levels. She has co-authored a textbook entitled Designing and Developing Web-Based Instruction. Dr. Gearhart earned a BA in sociology from Indiana University of Pennsylvania, and an MEd in adult education with a distance education emphasis and an MPA in public administration, both from Penn State. Deb completed her PhD program in education, with a certificate in distance education, at Capella University.
Adrian Guta is currently pursuing a PhD at the University of Toronto Department of Public Health Sciences. He recently completed a Master of Social Work at the University of Toronto, specializing in diversity and social justice. His research interests are in the areas of HIV, health promotion, sexual diversity, research ethics, and community-based participatory research. Adrian is currently doing a practicum at the Ontario HIV Treatment Network, as well as sitting on a number of community boards.

A. K. Haghi is professor and vice chancellor for research at the University of Guilan (Iran). He has also served as visiting professor of polymer science at Mahatma Gandhi University (India). He is the author of more than 250 papers and has contributed more than 30 chapters to books published in the USA. He obtained his GCE in pure mathematics in England, was then educated in the USA, where he was awarded his BSc and MSc in engineering, and obtained his DEA and PhD in France.

Soraj Hongladarom is an associate professor of philosophy at Chulalongkorn University in Bangkok, Thailand. He has published books and articles on such diverse issues as bioethics, computer ethics, and the roles that science and technology play in the culture of developing countries. His main concern is how science and technology can be integrated into the life-world of the people in the so-called Third World countries, and what kind of ethical considerations can be obtained from such a relation. A large part of this question concerns how information technology is integrated in the lifeworld of the Thai people, and especially how such integration is expressed in the use of information technology in education. He is the editor, together with Charles Ess, of Information Technology Ethics: Cultural Perspectives, published by IGI.

A. Pablo Iannone, professor of philosophy at Central Connecticut State University, studied engineering, mathematics, philosophy and literature at the Universidad Nacional de Buenos Aires, received a BA in philosophy from U.C.L.A. and an MA and a PhD in philosophy from the University of Wisconsin-Madison, and pursued graduate studies in business and economics at the University of Wisconsin-Madison and Iowa State. He taught at Canada’s Dalhousie University, Perú’s Universidad Inca Garcilaso de la Vega, and the U.S. universities of Wisconsin-Madison, Texas at Dallas, Iowa State, and Florida. His publications include nine philosophy books, two literature books, and articles and reviews.

Yasmin Ibrahim is a senior lecturer in the division of Information and Media Studies at the University of Brighton, where she lectures in globalisation and the media and political communication. Her main research interests include the use of the Internet for empowerment and political communication in repressed polities and diasporic communities, global governance, and the development of alternative media theories in non-Western contexts.

Gareth Jones is deputy vice-chancellor (Academic and International) and professor of anatomy and structural biology at the University of Otago, Dunedin, New Zealand. Recent books include Valuing People (1999), Speaking for the Dead (2000), Clones (2001), Stem Cell Research and Cloning (2004), and Designers of the Future (2005). He is coauthor of Medical Ethics (2005, 4th edition).

Mathias Klang currently holds positions at both the University of Lund and the University of Göteborg. In Lund he is conducting a copyright research project aimed at developing the state of open access at university libraries in Sweden. In Göteborg Mathias conducts research in the field of legal informatics, with particular interest in copyright, democracy, human rights, free expression, censorship, open access and ethics. He has published several articles on these topics.

Joyce Yi-Hui Lee is a research student in information systems at the School of Management, University of Bath. Her research interests concern the relationship between information systems and organizational behavior. She has wide-ranging experience in industry related to international business collaboration. Her research has been presented at several international conferences. Address: School of Management, University of Bath, Bath BA2 7AY, UK. [email: [email protected]]

Darryl Macer, PhD, is regional adviser on social and human sciences in Asia and the Pacific, Regional Unit for Social and Human Sciences in Asia and the Pacific (RUSHSAP), UNESCO Bangkok, 920 Sukhumvit Road, Prakanong, Bangkok, Thailand 10110. Email: [email protected]

After studying analytical philosophy in Göttingen, Germany, A. Matthias worked for fifteen years as a computer programmer, consultant, and head of the information systems group of the Computing Centre of the University of Kassel. At the same time, he received his PhD in philosophy from the Humboldt University in Berlin with a thesis on “machines as holders of rights”. He presently teaches courses in both philosophy and programming languages at the University of Kassel, Germany.

Andy Miah is reader in new media & bioethics at the University of the West of Scotland and fellow in visions of utopia and dystopia for the Institute for Ethics and Emerging Technologies. He is author of ‘Genetically Modified Athletes: Biomedical Ethics, Gene Doping & Sport’ (2004, Routledge) and co-author of ‘The Medicalization of Cyberspace’ (2007, Routledge). He has published over 70 articles and is regularly invited to speak about the ethics of human enhancement technologies. He has been a regular participant in projects related to human enhancement and sport based at the Hastings Center, New York. He is also a member of the European Union programme NanoBio-RAISE.

Carl Mitcham, PhD, is professor of liberal arts and international studies, Colorado School of Mines, Golden, Colorado, USA. He also has associate appointments at the Center for Science and Technology Policy Research, University of Colorado, Boulder, and the European Graduate School, Saas Fee, Switzerland. His research in the philosophy of science and technology has led to a number of publications, including Thinking through Technology: The Path between Engineering and Philosophy (1994) and a 4-volume Encyclopedia of Science, Technology, and Ethics (2005). One forthcoming publication is the Oxford Handbook on Interdisciplinarity, for which he serves as a co-editor.

V. Mottaghitalab is assistant professor in the Faculty of Engineering at the University of Guilan (Iran). He graduated from the ARC Centre for Nanostructured Electromaterials at the University of Wollongong, Australia, with a PhD degree in polymer engineering in 2006. He also received his BSc and MSc in chemical engineering from recognized universities in Iran. He is the author of more than 20 papers and several book chapters on diverse engineering topics.

Tim Murphy is professor of philosophy in the Biomedical Sciences at the University of Illinois College of Medicine at Chicago. He has written on ethical aspects of genetics, assisted reproductive technology, and research ethics. He is the author of Case Studies in Biomedical Research Ethics (The MIT Press, 2004).
Makoto Nakada is professor at the Graduate School of Humanities and Social Sciences, the University of Tsukuba, Japan. His main topics of research and teaching are studies on the information society, studies on the transmission of information, and modern thought (1999-2005). Home page: http://www.logos.tsukuba.ac.jp/~nakada/regis/

Busi Nkala holds a diploma in general nursing and midwifery, obtained a BA in social work from the University of Witwatersrand, and received a Fogarty International Scholarship to study towards a Master’s of health science in bioethics at the University of Toronto. She has been involved in a number of HIV prevention studies. She is currently involved in methods for improving reproductive health in Africa and the MRKAd5 HIV-1-Gag/Pol/Nef vaccine trial. Other studies include a study in collaboration with the CDC, which looked at HIV acquisition among hormonal contraceptive users, and the MDP feasibility study for Phase II microbicides trials. She spent 10 months serving as a member of the ethics board for human research at the University of Toronto.

Cameron Norman is an assistant professor in the Department of Public Health Sciences at the University of Toronto, where he is a faculty member in the health promotion program, and the director of evaluation for the Peter A. Silverman Global eHealth Program. His research interests focus on knowledge networks, collaborative learning and action, and engaging the public in health promotion through the use of new technologies and participatory action research. Dr. Norman completed a post-doctoral fellowship in systems thinking and knowledge translation at the University of British Columbia and holds a PhD in public health from the University of Toronto, an MA in community psychology from Wilfrid Laurier University, and an honours degree in psychology from the University of Regina.

Herwig Ostermann is a senior researcher at the Institute for Medical Law, Human Resources and Health Politics. His research interests encompass the field of health politics as well as public administration and management. He received a Master’s degree in international business administration from Innsbruck University, Austria and a PhD in health sciences from UMIT, Hall/Tyrol, Austria.

Samantha Mei Che Pang is a nurse by professional training, and a researcher in the field of health care and nursing ethics by scholarly endeavor. She started her career as a nurse academic in 1990, when she took up a teaching post at the Hong Kong Polytechnic University, where she now serves as head and professor in the School of Nursing. Over more than ten years, she has taught, lectured, researched, and published in the areas of nursing ethics, caring practices, and ethics in end-of-life care. As an editorial board member of Nursing Ethics and the International Centre for Nursing Ethics, she is actively involved in studies of nursing ethics internationally.

Niki Panteli is a senior lecturer in information systems at the School of Management, University of Bath. She holds a PhD from Warwick Business School, University of Warwick. Broadly defined, her research lies in the field of information and communication technologies and emergent organizational arrangements. She is currently involved in research on virtuality and interactivity within computer-mediated and mobile communication systems. She has published widely, including book chapters and articles in European and international journals such as Behavior and Information Technology, European Journal of Information Systems, Futures, New Technology, Work and Employment, and Journal of Business Ethics.
Alan Petersen is professor of sociology, School of Politics and Law, Monash University, Melbourne, Australia. His recent research projects include an ESRC-funded study of news production in relation to nanotechnologies, undertaken with Alison Anderson and Stuart Allan (with Clare Wilkinson as the research fellow).

Darren Pullen is a lecturer in ICT, health science and professional studies in the Faculty of Education at the University of Tasmania, Australia. He has a diverse background, with previous employment as a research fellow in the health sector, ICT consultant and classroom teacher. His research interest is in the management of change processes, with a particular interest in the micro-meso-macro level relationships between technology innovations and human-machine (humachine) relationships and interactions.

Mike Ribble has worked at several different levels of education and technology. He has served as a classroom biology teacher, secondary school administrator, a network manager for a community college, a university instructor and college instructional support. He received a doctorate in educational leadership at Kansas State University. His interests include technology leadership, professional development, and working with teachers to help enhance teaching and learning with technology. He has written several articles and has presented at regional and national conferences on digital citizenship and its effect on changes within education.

Russell W. Robbins is an assistant professor in the School of Computer Science and Mathematics at Marist College in Poughkeepsie, NY. Prior to Marist, he taught at Rensselaer Polytechnic Institute in Troy, NY. He holds degrees in business, accounting, information technology, and engineering science. His research surrounds studying, simulating, and supporting ethical problem solving. His teaching interests are software engineering and project management. He has published in Decision Support Systems and ACM Proceedings. Dr. Robbins was a finalist in the 2004 Excellence in Ethics Dissertation Proposal competition at the Institute for Ethical Business Worldwide within the University of Notre Dame.

Lynne Roberts is a research fellow at the Crime Research Centre at the University of Western Australia. Lynne has conducted both quantitative and qualitative research in and about virtual communities. Based on this research she has published 13 journal articles and book chapters on social interaction, relationship formation, sense of community and gender-switching online, and on ethical and methodological issues in conducting research in virtual environments.

Antoinette Rouvroy is doctor of laws of the European University Institute (Florence) and author of Human Genes and Neoliberal Governance: A Foucauldian Critique (Routledge-Cavendish, 2007), looking at the knowledge-power relations in the post-genomic era and addressing the issues of genetic privacy and discrimination in the context of neoliberal governance. She is particularly interested in the mechanisms of mutual production between biotechnology and cultural, political, economic and legal frameworks. Besides her interdisciplinary research interests in the law, ethics and politics of human genetics, she is also involved in research projects about the societal challenges raised by ambient intelligence and ubiquitous computing as senior researcher in the Research Centre on Information Technology Law of the University of Namur in Belgium.
Neil C. Rowe is professor and coordinator of research in computer science at the U.S. Naval Postgraduate School, where he has been since 1983. He has a PhD in computer science from Stanford University (1983), and EE (1978), SM (1978), and SB (1975) degrees from the Massachusetts Institute of Technology. His main research interest is the role of deception in information processing, and he has also done research on intelligent access to multimedia databases, image processing, robotic path planning, and intelligent tutoring systems. He is the author of over 100 technical papers and a book.

Eduardo A. Rueda is professor of bioethics and social studies of science and technology at the Universidad Javeriana in Bogotá. He has been a researcher and lecturer at the Basque Country University, a guest visiting researcher at the University of Oslo, and a research fellow at the Latin-American Council of Social Sciences. He has authored numerous journal articles and book chapters on social and ethical implications of emergent technologies. Email: [email protected]

Martin Ryder, MA, is a development engineer for Sun Microsystems. He also serves on the adjunct faculty at the University of Colorado at Denver, teaching research methods in information and learning technologies.

Heidi L. Schnackenberg is an associate professor of educational technology in the Education Departments at SUNY Plattsburgh in Plattsburgh, NY. She currently teaches both undergraduate and graduate classes on the use of technology to enhance teaching and learning in the P-12 classroom, social issues in education, and ethical issues in educational technology. Her various research interests include the integration of technology into pedagogical practices and the legal and ethical implications of western technologies in non-western and third world cultures.

David Sewry, MSc, PhD (Rhodes), is HOD of information systems at Rhodes University in Grahamstown, South Africa. He is involved in teaching information systems at all levels. After studying for his MSc and PhD at Rhodes in computer science, he went to work at South Africa’s CSIR before returning to Rhodes. He has been visiting professor at the University of Bristol in England, and Massey University in New Zealand. His research interests are: information systems curricula; data warehousing; ICT in the organisation; ICT for development; e-learning; and service level management (SLM) and agreements (SLA). You can email David at [email protected]

Roland Staudinger is full professor and chair of the Institute for Medical Law, Human Resources and Health Politics. He received a Master’s degree in law from Salzburg University, Austria, a Master’s degree in health care management from Innsbruck University, Austria, a PhD in law from Innsbruck University, Austria and a PhD in theoretical medicine from Halle/Wittenberg University, Germany.

Verena Stuehlinger is a research assistant at the Institute for Medical Law, Human Resources and Health Politics. Her research is focused on international and national health law. She received a Master’s degree in law from Innsbruck University as well as an LLM degree from Golden Gate University, San Francisco.

John P. Sullins is an assistant professor of philosophy at Sonoma State University. He received his PhD from Binghamton University (SUNY) in 2002. He specializes in philosophy of technology, artificial intelligence/robotics, machine morality, cognitive science, engineering and computer ethics, fencing, fight directing for the stage, and the philosophy and history of swordplay. His current research interests are the technology of robotics, AI and ALife, how they inform traditional philosophical topics, their impacts on society, and the ethical design of successful autonomous machines.

May Thorseth, PhD, is an associate professor at the Department of Philosophy, NTNU Norwegian University of Science and Technology, and director of the Programme for Applied Ethics, NTNU. Her main research fields are political philosophy, ethics and argumentation theory in the context of globalisation and multicultural conflicts. Her ongoing research project is “Worldwide Deliberation and Public Use of Reason Online”. She is a current member of the Regional Research Ethical Committee in Medicine in Mid-Norway and the National Research Ethical Committee in Medicine in Norway, and a previous member of the Clinical Ethics Committee at St. Olavs Hospital, Trondheim.

Eddie Vega has been in the field of communications for over 19 years. During that time, he has been a cameraman, video editor, graphic designer, web developer, comic book and storyboard illustrator, and an instructor. In 2001, Eddie launched VISIONES, which in Spanish means “visions”. The primary objective of the company is to meet the needs of individuals and create projects in the realm of digital media (or multimedia). He is currently an adjunct professor in the Education Departments at SUNY Plattsburgh in Plattsburgh, NY.

Seppo Visala has worked as CIO at the University of Tampere since 2002. His earlier academic career consists of research and teaching positions at the Finnish universities of Oulu, Vaasa and Tampere in the fields of information systems and software engineering. He received his MA degree in mathematics and philosophy in 1977 at the University of Tampere, and his PhD degree in information systems at the University of Oulu in 1993. His later research has focused on ethical issues of technology.

William A. Wallace is a professor in the Department of Decision Sciences and Engineering Systems and holds joint appointments in cognitive science and civil engineering at Rensselaer Polytechnic Institute in Troy, NY. His research interests include applying artificial intelligence to incident management and emergency response, understanding issues in trust and ethical decision making, and studying emergent and improvisational organizational behavior in disaster management. Dr. Wallace has received the International Emergency Management and Engineering Conference Award for Outstanding Long-Term Dedication, the IEEE Third Millennium Medal, and the 2004 INFORMS President’s Award for work that advances the welfare of society.

Zachary B. Warner is a graduate student pursuing his MST in adolescence education at SUNY Plattsburgh in Plattsburgh, NY. He completed his undergraduate degree in mathematics at the same institution. Zachary recently completed a thesis project that required him to design, implement and serve as an instructor for a seminar course in the university’s honors program. He plans to continue his study of education in areas such as educational technology and the social implications of education.

Kevin Warwick is professor of cybernetics at the University of Reading, England, where he carries out research in artificial intelligence, control, robotics and cyborgs. He is also director of the University KTP Centre, which links the university with small to medium enterprises and raises over £2.5 million each year in research income. He completed a PhD and research post at Imperial College, London. He subsequently held positions at Oxford, Newcastle and Warwick universities before being offered the chair at Reading, at the age of 33. As well as publishing over 500 research papers, Kevin’s experiments into implant technology led to him being featured as the cover story of the US magazine ‘Wired’. Kevin has been awarded higher doctorates both by Imperial College and the Czech Academy of Sciences, Prague. He was presented with The Future of Health Technology Award at MIT, was made an honorary member of the Academy of Sciences, St. Petersburg, and in 2004 received The IEE Achievement Medal. In 2000 Kevin presented the Royal Institution Christmas Lectures, entitled “The Rise of the Robots”.

Maja Whitaker is an assistant research fellow in the Department of Anatomy and Structural Biology at the University of Otago.

Clare Wilkinson is a lecturer in science communication at the Science Communication Unit, University of the West of England, Bristol, UK. Clare has published articles in journals including Science Communication, Social Science and Medicine and Qualitative Health Research. She is co-writing the book Nanotechnology, Risk and Communication with Alison Anderson, Alan Petersen and Stuart Allan for Palgrave.
Index
A
advance care planning (ACP) 318, 322, 323, 324, 327
advising problem 731
allele frequencies 49, 66
alter ego 224, 230
application services 796
applied e-business ethics 854
applied ethics 3, 4, 6, 7, 9, 19, 29, 30, 102, 127, 186, 189, 190, 191, 194, 195, 199, 200, 201, 264, 267, 268, 270, 275
artificial autonomous agent 208, 215, 216, 218, 220
artificial life 135, 205, 220, 572
artificial moral agent (AMA) 167, 205, 208, 218, 220, 314
artificial neural network 640, 650
assistive technology (AT) 409, 410, 412, 415, 416, 419, 420, 421, 422, 424
Attention Deficit Hyperactivity Disorder (ADHD) 160, 417
authentic learning 413, 416, 424
authoritarianism 743
B
base pair 152, 160
belief revision 809
belief systems 806
biometrics 12, 550, 553, 554, 557
biosensors 180, 181, 182, 185
biotechnology 755
British culture 760
C
Categorical Imperative (I. Kant) 104, 105, 110, 397, 398, 401
Caux Round Table 864
chimera 152, 160
circle of mediocrity 413, 424
cladistic 49, 50, 51, 53, 59, 61, 65, 67, 68
clinical trial 115, 116, 118, 119, 120, 121, 124, 125
code of ethics 858
collectivism 341, 342, 343, 344, 345, 346, 350, 353, 371
communication technoethics 10, 14
communitarianism 504, 508
complex systems 807
computer-based culture 744
computer-mediated communication 705
computer-mediated messages 705
computer ethics 3, 7, 8, 14, 16, 17, 186, 187, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 203, 204, 213, 219, 229, 233, 252, 260, 340, 351, 352, 360, 369, 396, 397, 403, 405, 406, 539, 540, 696, 746
computerization 740
connectionism 572, 574
constructivism 246, 508
control group 113, 117, 124
cooperative learning 419, 446, 447, 451
CopyLeft 695, 698
course integrity 730
covert channel 797
covert exploitation 795
cyber-bullying 304, 309, 314, 578, 580, 587
cyber-stalking 576, 578, 579, 582, 587, 592
cyberethics 7, 9, 10, 14, 16, 407, 530
cyber identity theft 9, 542, 543, 545, 547, 551, 552, 553, 554, 557, 577, 578
cyber porn 508, 513
cyborg 36, 41, 42, 213
D
daguerreotype 657, 666
declarative programming 637, 638, 639, 643, 650
deliberative polling 280, 281, 282, 289, 290, 291, 292
Democracy Unbound 280, 281, 292
denial of service 797
developmental matrix 484, 485, 493
digital divide 6, 12, 15, 172, 173, 232, 233, 234, 242, 244, 246, 247, 252, 261, 275, 289, 311, 350, 395, 429, 693, 696
digital imagery 660
digital ontology 349, 516, 517, 524, 525
digital property 700
Digital World 707
disinformation 226, 268, 541
distributive justice 170, 172, 173, 182, 184
doping 69, 70, 71, 72, 73, 74, 75, 77, 78, 79, 80, 81, 82, 83, 84
E
e-book 237, 240, 443, 451
e-business outsourcing 847
e-learning 12, 18, 224, 442, 443, 449, 450, 451, 697
Earth Charter 736, 745
EBP (see evidence-based practice) 756
educational apparatuses 655
educational technology 655, 659, 660
eHealth 297, 307, 308, 312, 315
embryonic stem cells 366, 609, 613, 620, 621, 622
emoticons 704
engineering ethics 9, 14, 30, 85, 92, 99, 127, 194, 200, 233
enlightenment 654
Enterprise Reconfiguration Dynamics 752
epistemological issues of computerization 740
ergonomics 256, 261
ethical action development 710
ethical judgment development 709
ethical motivation development 709
ethical problem 729
ethical sensitivity development 709
ethnogenesis 50, 51, 67
evidence-based practice (EBP) 756
extra sensory input 42
F
Foucault, Michel 458
Four Component Model 700
framing uncertainties 480, 481, 488, 493
free and open source software (FOSS) 247, 687, 698
G
gene doping 69, 70, 71, 73, 79, 81, 83, 84
genetic programming 637, 643, 650
genetic risk 455, 459, 469, 471, 479, 480, 482, 485, 486, 490, 491, 492
germline engineering 146, 148, 149, 160
germplasm 146, 160
Global Compact 865
Global Delivery (and Service) Model 850
global positioning system 773
Global Reporting Initiative 866
H
healthcare informatics 754
hegemony 232, 233, 240, 241, 247
heterotopic 515
homology 59, 64
honeynet 537
honeypot 537, 538, 540, 541
human cloning 40, 93, 149, 160, 366
hyperparenting 160
hypoxic training 69, 76, 77, 83
I
identity crime 543, 544, 550, 554, 557
identity fraud 9, 542, 543, 544, 545, 546, 547, 553, 557, 577
Ikai 339, 340, 343, 344, 345, 353
imperative programming 636, 638, 650
Independent Ethics Committee 142, 143
information age 701
informational capitalism 458, 470
information colonialism 747
information ownership 392, 395, 396, 408
information warfare 536, 539, 540, 541
informed consent 87, 118, 119, 123, 125, 132, 133, 134, 136, 137, 138, 139, 141, 142, 143, 148, 153, 166, 167, 184, 296, 300, 301, 330, 331, 334, 337, 474, 475, 486, 487, 490, 492, 493, 532, 595, 602, 615, 683, 684
intellectual property 3, 83, 121, 154, 188, 194, 202, 238, 244, 247, 255, 268, 269, 276, 350, 355, 356, 357, 358, 360, 361, 365, 367, 368, 369, 376, 392, 393, 395, 396, 408, 436, 652, 661, 669, 684, 687, 688, 689, 697, 698, 731
intellectual property rights 730
inter-organizational collaboration 624, 629, 630, 631, 632, 634
intercultural information ethics (IIE) 339, 340, 341, 346, 347, 349, 350, 353
Internet ethics 799
In Vitro Fertilization (IVF) 12, 160, 161, 609, 611, 612, 614, 615, 616, 617, 618, 619, 620, 621, 622
IP-related vulnerabilities 796
ISO standard on social responsibility 867
K
keyloggers 545, 546, 557, 577
L
Law of Technoethics 16, 19
learning machine 239, 650
level of abstraction (LoA) 213, 214, 220
link layer 795
M
machine intelligence 40, 43
macroethics 196, 197, 198, 202
malware 207, 221, 545, 557, 577, 592
manipulation of objects 740
Marshall plan 223, 230
media ethics 14
medical diagnostics 165, 169
metaethics 142, 193, 202
microbicides 329, 330, 338
microethics 197, 202
Mitochondrial DNA (mtDNA) 48, 65, 68, 161
modernity 109, 147, 176, 185, 658
modularity 572, 574
moral development 85, 87, 88, 90, 95, 98, 102, 267
morphometrics 44, 59, 61, 62, 64, 65, 66
multidisciplinarity 25, 27, 30
N
nano-divide 376, 389
nanoethics 13, 14, 27, 28, 30, 374
nanoparticle 383, 389
nanoscale technology 164, 167, 169
nanotechnology 3, 12, 13, 15, 16, 27, 28, 29, 30, 31, 169, 223, 229, 374, 375, 376, 377, 378, 379, 380, 381, 383, 384, 385, 386, 387, 388, 389, 433, 755
netiquette 9, 10, 252, 261
network layer 795
neural networks 482, 572, 636, 639, 640, 641, 643, 650
non-reductionist approach 23
normativity 20, 510
NP-complete satisfiability (SAT) 798
nutrigenomics 79, 83
O
Observational Laboratory on Technoethics for Adults (OLTA) 225, 228, 428, 431, 433, 434, 437
ontogeny 59, 61, 66, 67, 476, 491, 492, 493
P
parasite computer 794
parasitic computing 794
participating computer 795
patenting, gene 469
paternalism 87, 90, 474, 475, 485, 486, 487, 488, 493
permeation stage 756
personal information privacy (PIP) 771
Personal Interests Schema 702
phenetic 57, 67
philosophy of technology 5, 6, 7, 19, 22, 30, 31, 70, 246
phishing 9, 254, 529, 541, 545, 546, 547, 551, 555, 556, 557, 576, 577
phylogenetic 49, 50, 59, 60, 62, 67, 208
placebo 113, 117, 124, 125
podcast 418, 668, 670, 671, 672, 673, 674, 675, 676, 677, 679
postnormal approach 489, 494
POWER Planning Framework 819
predictive genetic testing 455, 466, 474, 476, 478, 480, 481, 482, 483, 485, 486, 487, 488, 489, 490, 493, 494
preimplantation embryo 610, 615, 622
preimplantation genetic diagnosis (PGD) 149, 154, 161, 612, 613, 614, 616, 618, 619, 620, 621, 622
primitive streak 610, 611, 615, 622
privacy, genetic 455, 460, 463
privacy paradox 304, 315
privacy strategy 782
pro-social 702
professional technoethics 9, 11, 14
Progressive Era 655, 658
prosumers 296, 306, 315
pseudo-photograph 509, 519
psychological distance 708
R
radio frequency identification 771
rationalistic tradition 740
reflective judgment 284, 285, 289, 290, 293
reformational philosophy 24
reprogenetics 12, 148, 161
research protocol 115, 121, 124, 131, 133, 134, 135, 138, 142, 143
rhizotic 50, 67
rootkit 537, 541
Rosanvallon, Pierre 468
S
screencasting 677, 679
Seken 339, 340, 343, 344, 345, 350, 352, 353
Shakai 339, 340, 343, 344, 345, 353
skeletal morphology 45, 46, 49, 60, 67
smart homes 180, 181, 182, 185
smart motes 774
social amplification of risk (SARF) 380, 390
social network analysis 807
softlifting 357, 369, 370
software piracy 187, 189, 201, 202, 260, 354, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 587
spam 529, 531, 532, 535, 564, 577, 583, 584, 585, 591, 592, 689
spoofing 541, 545, 551
spyware 9, 529, 532, 535, 538, 539, 541, 546, 547, 593
stigma 333, 386, 390, 657, 674
subversive rationalization 241, 244, 247
surplus embryos 616, 617, 622
surrogate 317, 325, 327
synthetic biological constructs 205, 207, 221
systems theory 811
T
taxonomic 44, 47, 48, 51, 55, 57, 59, 62, 64, 66, 67
technoethics 652, 660, 661, 663, 664
technological determinism 738, 747
technology 651, 652, 653, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 875
technology-mediated communication 704
telemedicine 171, 172, 182, 183, 184, 185
tetrahydrogestrinone (THG) 79, 83
theory of morality 753
thought communication 41, 43
transport layer 795
Triple A-Engine 513
TRIPS 188, 202, 244
Trojan Horse 796
U
UNESCO 48, 63, 64, 78, 85, 86, 89, 91, 92, 97, 99, 101, 102, 113, 114, 116, 124, 222, 230, 377, 386, 389
upstream public engagement 390
utilitarianism 760
V
value-based practice (VBP) 756
value-concept 755
VBP (see value-based practice) 756
veil of ignorance 103, 104, 106, 107, 108, 110, 208
virtual alliance 624, 625, 630, 632, 634
visual instruction 657, 658, 666
visual instruction movement 657
vodcast 668, 671, 672, 673, 674, 675, 676, 679
vulnerable subjects 135, 136, 143
W
Web 2.0 253, 306, 307, 308, 309, 314, 315