To Cameron, Seth, and Lorraine
Psychology Editor: Marianne Taflinger
Assistant Editor: Jennifer Keever
Editorial Assistant: Justin Courts
Technology Project Manager: Darin Derstine
Marketing Manager: Chris Caldeira
Marketing Assistant: Laurel Anderson
Advertising Project Manager: Brian Chaffee
Senior Project Manager, Editorial Production: Paul Wells
Art Director: Vernon Boes
Print/Media Buyer: Karen Hunt
Permissions Editor: Joohee Lee
Production Service: Melanie Field, Strawberry Field Publishing
Photo Researcher: Myrna Engler
Copy Editor: Steve Summerlight
Cover Designer: Ross Carron
Cover Image: Mark Tomalty/Masterfile
Compositor: ATLIS Graphics
Text and Cover Printer: Phoenix Color Corporation
COPYRIGHT © 2005 Wadsworth, a division of Thomson Learning, Inc. Thomson Learning™ is a trademark used herein under license.
Thomson Wadsworth 10 Davis Drive Belmont, CA 94002-3098 USA
ALL RIGHTS RESERVED. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including but not limited to photocopying, recording, taping, Web distribution, information networks, or information storage and retrieval systems—without the written permission of the publisher.
Asia Thomson Learning 5 Shenton Way #01-01 UIC Building Singapore 068808
Printed in Canada 1 2 3 4 5 6 7 08 07 06 05 04
Australia/New Zealand Thomson Learning 102 Dodds Street Southbank, Victoria 3006 Australia
For more information about our products, contact us at: Thomson Learning Academic Resource Center 1-800-423-0563 For permission to use material from this text or product, submit a request online at http://www.thomsonrights.com. Any additional questions about permissions can be submitted by email to
[email protected].
ExamView® and ExamView Pro® are registered trademarks of FSCreations, Inc. Windows is a registered trademark of the Microsoft Corporation used herein under license. Macintosh and Power Macintosh are registered trademarks of Apple Computer, Inc. Used herein under license. Library of Congress Control Number: 2004106398 ISBN 0-534-63306-4
Canada Nelson 1120 Birchmount Road Toronto, Ontario M1K 5G4 Canada Europe/Middle East/Africa Thomson Learning High Holborn House 50/51 Bedford Row London WC1R 4LR United Kingdom Latin America Thomson Learning Seneca, 53 Colonia Polanco 11560 Mexico D.F. Mexico Spain/Portugal Paraninfo Calle Magallanes, 25 28015 Madrid, Spain
Brief Contents

Part I Principles
CHAPTER 1 Introduction 1
CHAPTER 2 Norms and Basic Statistics for Testing 25
CHAPTER 3 Correlation and Regression 62
CHAPTER 4 Reliability 99
CHAPTER 5 Validity 132
CHAPTER 6 Writing and Evaluating Test Items 157
CHAPTER 7 Test Administration 183

Part II Applications
CHAPTER 8 Interviewing Techniques 201
CHAPTER 9 Theories of Intelligence and the Binet Scales 230
CHAPTER 10 The Wechsler Intelligence Scales: WAIS-III, WISC-IV, and WPPSI-III 252
CHAPTER 11 Other Individual Tests of Ability in Education and Special Education 278
CHAPTER 12 Standardized Tests in Education, Civil Service, and the Military 311
CHAPTER 13 Applications in Clinical and Counseling Settings 347
CHAPTER 14 Projective Personality Tests 390
CHAPTER 15 Tests Based on Psychological Science and the New Age of Computers 421
CHAPTER 16 Testing in Counseling Psychology 452
CHAPTER 17 Testing in Health Psychology and Health Care 475
CHAPTER 18 Testing in Industrial and Business Settings 509

Part III Issues
CHAPTER 19 Test Bias 538
CHAPTER 20 Testing in Forensic Settings 573
CHAPTER 21 Ethics and the Future of Psychological Testing 611
Contents

Part I Principles

CHAPTER 1
Introduction 1 Basic Concepts 6 What a Test Is 6 Types of Tests 7 Overview of the Book 10 Principles of Psychological Testing 10 Applications of Psychological Testing 10 Issues of Psychological Testing 11 Historical Perspective 11 Early Antecedents 12 Charles Darwin and Individual Differences 12 Experimental Psychology and Psychophysical Measurement 13 The Evolution of Intelligence and Standardized Achievement Tests 14 Personality Tests: 1920–1940 17 The Emergence of New Approaches to Personality Testing 20 The Period of Rapid Changes in the Status of Testing 20 The Current Environment 22 Summary 23 Web Activity 24
CHAPTER 2
Norms and Basic Statistics for Testing 25 Why We Need Statistics 26 Scales of Measurement 27 Properties of Scales 27 Types of Scales 29 Permissible Operations 31 Frequency Distributions 31 Percentile Ranks 34 Percentiles 38 Describing Distributions 39 Mean 39 Standard Deviation 40 Z Score 42 Standard Normal Distribution 45 McCall’s T 50 Quartiles and Deciles 51
Norms 53 Age-Related Norms 54 Tracking 54 Criterion-Referenced Tests 59 Summary 60 Web Activity 61
CHAPTER 3
Correlation and Regression 62 The Scatter Diagram 63 Correlation 65 Regression 66 The Regression Line 66 The Best-Fitting Line 68 Testing the Statistical Significance of a Correlation Coefficient 70 How to Interpret a Regression Plot 75 Other Correlation Coefficients 79 Terms and Issues in the Use of Correlation 80 Residual 80 Standard Error of Estimate 82 Coefficient of Determination 82 Coefficient of Alienation 83 Shrinkage 83 Cross Validation 84 The Correlation-Causation Problem 84 Third Variable Explanation 84 Restricted Range 84 Multivariate Analysis (Optional) 86 General Approach 87 An Example Using Multiple Regression 87 Discriminant Analysis 88 Factor Analysis 89 Summary 92 Appendix 3-1: Calculation of a Regression Equation and a Correlation Coefficient 93 Calculation of a Regression Equation (Data from Table 3-2) 94 Calculation of a Correlation Coefficient (Data from Table 3-5) 96 Web Activity 98
CHAPTER 4
Reliability 99 History and Theory of Reliability 100 Conceptualization of Error 100 Spearman’s Early Studies 101 Basics of Test Score Theory 101
The Domain Sampling Model 103 Models of Reliability 105 Sources of Error 106 Time Sampling: The Test–Retest Method 107 Item Sampling: Parallel Forms Method 108 Split-Half Method 109 KR20 Formula 111 Coefficient Alpha 113 Reliability of a Difference Score 114 Reliability in Behavioral Observation Studies 117 Connecting Sources of Error with Reliability Assessment Method 119 Using Reliability Information 120 Standard Errors of Measurement and the Rubber Yardstick 120 How Reliable Is Reliable? 123 What to Do About Low Reliability 124 Summary 128 Appendix 4-1: Using Coefficient Alpha to Estimate Split-Half Reliability When the Variances for the Two Halves of the Test Are Unequal 129 Appendix 4-2: The Calculation of Reliability Using KR20 129 Web Activity 131
CHAPTER 5
Validity 132 Defining Validity 134 Aspects of Validity 134 Face Validity 135 Content-Related Evidence for Validity 135 Criterion-Related Evidence for Validity 137 Construct-Related Evidence for Validity 147 Relationship Between Reliability and Validity 154 Summary 155 Web Activity 156
CHAPTER 6
Writing and Evaluating Test Items 157 Item Writing 158 Item Formats 159 Other Possibilities 167 Item Analysis 168 Item Difficulty 168 Discriminability 170
Pictures of Item Characteristics 172 Linking Uncommon Measures 178 Items for Criterion-Referenced Tests 179 Limitations of Item Analysis 180 Summary 181 Web Activity 182
CHAPTER 7
Test Administration 183 The Examiner and the Subject 184 The Relationship Between Examiner and Test Taker 184 The Race of the Tester 185 Language of Test Taker 188 Training of Test Administrators 188 Expectancy Effects 188 Effects of Reinforcing Responses 190 Computer-Assisted Test Administration 193 Subject Variables 195 Behavioral Assessment Methodology 195 Reactivity 196 Drift 197 Expectancies 197 Deception 198 Statistical Control of Rating Errors 199 Summary 199 Web Activity 200
Part II Applications

CHAPTER 8
Interviewing Techniques 201 The Interview as a Test 205 Reciprocal Nature of Interviewing 206 Principles of Effective Interviewing 207 The Proper Attitudes 207 Responses to Avoid 207 Effective Responses 209 Responses to Keep the Interaction Flowing 210 Measuring Understanding 214 Types of Interviews 216 Evaluation Interview 216 Structured Clinical Interviews 217 Case History Interview 221
Mental Status Examination 222 Developing Interviewing Skills 223 Sources of Error in the Interview 224 Interview Validity 224 Interview Reliability 227 Summary 227 Web Activity 229
CHAPTER 9
Theories of Intelligence and the Binet Scales 230 The Problem of Defining Intelligence 231 Binet’s Principles of Test Construction 233 Principle 1: Age Differentiation 233 Principle 2: General Mental Ability 234 Spearman’s Model of General Mental Ability 234 Implications of General Mental Intelligence (g) 235 The gf-gc Theory of Intelligence 236 The Early Binet Scales 236 The 1905 Binet-Simon Scale 236 The 1908 Scale 237 Terman’s Stanford-Binet Intelligence Scale 239 The 1916 Stanford-Binet Intelligence Scale 239 The Intelligence Quotient (IQ) 240 The 1937 Scale 241 The 1960 Stanford-Binet Revision and Deviation IQ (SB-LM) 243 The Modern Binet Scale 244 Model for the Fourth and Fifth Editions of the Binet Scale 244 Characteristics of the 1986 Revision 246 Characteristics of the 2003 Fifth Edition 247 Psychometric Properties of the 2003 Fifth Edition 249 Median Validity 250 Summary 250 Web Activity 251
CHAPTER 10
The Wechsler Intelligence Scales: WAIS-III, WISC-IV, and WPPSI-III 252 The Wechsler Intelligence Scales 254 Point and Performance Scale Concepts 254 From the Wechsler-Bellevue Intelligence Scale to the WAIS-III 256 Scales, Subtests, and Indexes of the WAIS-III 256 The Verbal Subtests 258 Raw Scores, Scaled Scores, and the Verbal IQ 261 The Performance Subtests 262
Performance IQs 264 Full-Scale IQs 264 Index Scores 264 Interpretive Features of the Wechsler Tests 265 Verbal-Performance IQ Comparisons 265 Pattern Analysis 266 Hypothetical Case Studies 266 Psychometric Properties of the Wechsler Adult Scale 268 Standardization 268 Reliability 269 Validity 270 Evaluation of the Wechsler Adult Scales 270 Downward Extensions of the WAIS-III: The WISC-IV and the WPPSI-III 270 The WISC-IV 271 The WPPSI-III 274 Summary 275 Web Activity 277
CHAPTER 11
Other Individual Tests of Ability in Education and Special Education 278 Alternative Individual Ability Tests Compared with the Binet and Wechsler Scales 279 Alternatives Compared with One Another 281 Specific Individual Ability Tests 283 Infant Scales 283 Major Tests for Young Children 289 General Individual Ability Tests for Handicapped and Special Populations 295 Testing Learning Disabilities 299 Visiographic Tests 303 Creativity: Torrance Tests of Creative Thinking (TTCT) 306 Individual Achievement Tests: Wide Range Achievement Test–3 (WRAT-3) 307 Legal Issues in Special Education 308 Schools Are Required by Law to Identify Students with Disabilities 308 Enforcing a Child’s Right Under IDEA 308 Summary 309 Web Activity 310
CHAPTER 12
Standardized Tests in Education, Civil Service, and the Military 311 Comparison of Group and Individual Ability Tests 313 Advantages of Individual Tests 313 Advantages of Group Tests 314
Overview of Group Tests 315 Characteristics of Group Tests 315 Selecting Group Tests 315 Using Group Tests 316 Group Tests in the Schools: Kindergarten Through 12th Grade 317 Achievement Tests Versus Aptitude Tests 317 Group Achievement Tests 318 Group Tests of Mental Abilities (Intelligence) 320 College Entrance Tests 323 The Scholastic Assessment Test 323 Cooperative School and College Ability Tests 328 The American College Test 328 Graduate and Professional School Entrance Tests 330 Graduate Record Examination Aptitude Test 330 Miller Analogies Test 336 The Law School Admission Test 337 Nonverbal Group Ability Tests 339 Raven Progressive Matrices 339 Goodenough-Harris Drawing Test 342 IPAT Culture Fair Intelligence Test 343 Standardized Tests Used in the U.S. Civil Service System 344 Standardized Tests in the U.S. Military: The Armed Services Vocational Aptitude Battery 344 Summary 345 Web Activity 346
CHAPTER 13
Applications in Clinical and Counseling Settings 347 Strategies of Structured Personality-Test Construction 349 Deductive Strategies 350 Empirical Strategies 351 Criteria Used in Selecting Tests for Discussion 352 The Logical-Content Strategy 353 Woodworth Personal Data Sheet 353 Early Multidimensional Logical-Content Scales 354 Mooney Problem Checklist 354 Criticisms of the Logical-Content Approach 354 The Criterion-Group Strategy 355 Minnesota Multiphasic Personality Inventory 355 California Psychological Inventory–Third Edition 366 The Factor Analytic Strategy 367 Guilford’s Pioneer Efforts 368
Cattell’s Contribution 368 Problems with the Factor Analytic Strategy 371 The Theoretical Strategy 371 Edwards Personal Preference Schedule 372 Personality Research Form and Jackson Personality Inventory 375 Self-Concept 375 Combination Strategies 378 Positive Personality Measurement and the NEO-PI-R 378 The NEO Personality Inventory (NEO-PI-R) 378 Frequently Used Measures of Positive Personality Traits 382 Rosenberg Self-Esteem Scale 382 General Self-Efficacy Scale 382 Ego Resiliency Scale 382 Dispositional Resilience Scale 383 Hope Scale 383 Life Orientation Test–Revised (LOT-R) 383 Satisfaction with Life Scale 384 Positive and Negative Affect Schedule 385 Coping Intervention for Stressful Situations 385 Core Self-Evaluations 385 Future of Positive Personality Research 386 Summary 387 Web Activity 389
CHAPTER 14
Projective Personality Tests 390 The Projective Hypothesis 392 The Rorschach Inkblot Test 393 Historical Antecedents 393 Stimuli, Administration, and Interpretation 395 Psychometric Properties 401 An Alternative Inkblot Test: The Holtzman 409 The Thematic Apperception Test 410 Stimuli, Administration, and Interpretation 411 Psychometric Properties 414 Alternative Apperception Procedures 415 Nonpictorial Projective Procedures 416 Word Association Test 416 Sentence Completion Tasks 417 Figure Drawing Tests 418 Summary 419 Web Activity 420
CHAPTER 15
Tests Based on Psychological Science and the New Age of Computers 421 Cognitive-Behavioral Assessment Procedures 423 The Rationale for Cognitive-Behavioral Assessment 423 Procedures Based on Operant Conditioning 424 Self-Report Techniques 427 Kanfer and Saslow’s Functional Approach 431 The Dysfunctional Attitude Scale 432 Irrational Beliefs Test 433 Cognitive Functional Analysis 434 Psychophysiological Procedures 435 Physiological Variables with Treatment Implications 436 Evaluation of Psychophysiological Techniques 436 Computers and Psychological Testing 437 Computer-Assisted Interview 438 Computer-Administered Tests 439 Computer Diagnosis, Scoring, and Reporting of Results 440 Internet Usage for Psychological Testing 442 The Computerization of Cognitive-Behavioral Assessment 443 Tests Possible Only by Computer 444 Computer-Adaptive Testing 445 Psychophysical and Signal-Detection Procedures 446 Summary 449 Web Activity 450
CHAPTER 16
Testing in Counseling Psychology 452 Measuring Interests 453 The Strong Vocational Interest Blank 454 The Strong-Campbell Interest Inventory 455 The Campbell Interest and Skill Survey 457 The Kuder Occupational Interest Survey 462 The Jackson Vocational Interest Survey 466 The Minnesota Vocational Interest Inventory 466 The Career Assessment Inventory 466 The Self-Directed Search 467 Eliminating Gender Bias in Interest Measurement 468 Aptitudes and Interests 469 Measuring Personal Characteristics for Job Placement 469 Trait Factor Approach: Osipow’s Vocational Dimensions 469 The Career Maturity Inventory: Super’s Development Theory 470 The California Occupational Preference Survey: Roe’s Career-Choice Theory 471 Are There Stable Personality Traits? 472
Summary 473 Web Activity 474
CHAPTER 17
Testing in Health Psychology and Health Care 475 Neuropsychological Assessment 476 Clinical Neuropsychology 476 Developmental Neuropsychology 481 Adult Neuropsychology 484 California Verbal Learning Test 490 Anxiety and Stress Assessment 493 Stress and Anxiety 493 The State-Trait Anxiety Inventory 494 Measures of Test Anxiety 495 Measures of Coping 499 Ecological Momentary Assessment 500 Measures of Social Support 501 Quality-of-Life Assessment 502 What Is Health-Related Quality of Life? 503 Common Methods for Measuring Quality of Life 504 Summary 506 Web Activity 507
CHAPTER 18
Testing in Industrial and Business Settings 509 Personnel Psychology—The Selection of Employees 510 Employment Interview 510 Base Rates and Hit Rates 512 Taylor-Russell Tables 516 Utility Theory and Decision Analysis 521 Incremental Validity 523 Personnel Psychology from the Employee’s Perspective: Fitting People to Jobs 525 The Myers-Briggs Type Indicator 525 Tests for Use in Industry: Wonderlic Personnel Test 526 Measuring Characteristics of the Work Setting 527 The Social-Ecology Approach 528 Classifying Environments 529 Job Analysis 531 Measuring the Person–Situation Interaction 533 Summary 537 Web Activity 537
Part III Issues

CHAPTER 19
Test Bias 538 Why Is Test Bias Controversial? 539 Test Fairness and the Law 540 The Traditional Defense of Testing 544 Content-Related Evidence for Validity 545 Criterion-Related Sources of Bias 548 Other Approaches to Testing Minority Group Members 553 Ignorance Versus Stupidity 553 The Chitling Test 554 The Black Intelligence Test of Cultural Homogeneity 555 The System of Multicultural Pluralistic Assessment 556 Suggestions for Solutions 560 Ethical Concerns and the Definition of Test Bias 560 Thinking Differently: Finding New Interpretations of Data 563 Developing Different Criteria 565 Changing the Social Environment 568 Summary 571 Web Activity 572
CHAPTER 20
Testing in Forensic Settings 573 Laws Governing the Use of Tests 574 Federal Authorities 574 Specific Laws 583 Major Lawsuits That Have Affected Psychological Testing 586 Early Desegregation Cases 586 Stell v. Savannah-Chatham County Board of Education 587 Hobson v. Hansen 587 Diana v. State Board of Education 588 Larry P. v. Wilson Riles 589 Parents in Action on Special Education v. Hannon 591 Crawford et al. v. Honig et al. 595 Marshall v. Georgia 596 Debra P. v. Turlington 596 Regents of the University of California v. Bakke 598 Golden Rule Insurance Company et al. v. Washburn et al. 599 Adarand Constructors, Inc. v. Pena, Secretary of Transportation et al. 600 Affirmative Action in Higher Education 600 Grutter v. Bollinger and Gratz v. Bollinger 601 Personnel Cases 602 Cases Relevant to the Americans with Disabilities Act 607 A Critical Look at Lawsuits 609
Summary 609 Web Activity 610
CHAPTER 21
Ethics and the Future of Psychological Testing 611 Issues Shaping the Field of Testing 612 Professional Issues 612 Moral Issues 617 Social Issues 621 Current Trends 624 The Proliferation of New Tests 624 Higher Standards, Improved Technology, and Increasing Objectivity 625 Greater Public Awareness and Influence 626 The Computerization of Tests 627 Testing on the Internet 627 Future Trends 628 Future Prospects for Testing Are Promising 628 The Proliferation of New and Improved Tests Will Continue 629 Revolutionary Changes: “Perestroika” in School Testing? 630 Controversy, Disagreement, and Change Will Continue 632 The Integration of Cognitive Science and Computer Science Will Lead to Several Innovations in Testing 632 Summary 633 Web Activity 633
APPENDIX 1
Areas of a Standard Normal Distribution 634
APPENDIX 2
Publishers of Major Tests 637
APPENDIX 3
Critical Values of r for .05 and .01 (Two-Tailed Test) 641
APPENDIX 4
Critical Values of t 642
APPENDIX 5
Code of Fair Testing Practices in Education 644
Glossary 649 References 655 Author Index 724 Subject Index 734
List of Sample Test Profiles

Figure 9-7
Cover page of the Stanford-Binet Intelligence Scale 245
Figure 12-1
Example of a score report for the Stanford Achievement Test 319
Figure 12-2 & Figure 12-3
Sample items from the verbal and mathematical sections of the Scholastic Aptitude Test 325–326
Figure 12-4
A sample student profile from the ACT 329
Figure 12-5
GRE verbal ability sample items 331
Figure 12-6
GRE quantitative ability sample items 332
Figure 12-7
GRE analytical ability sample items 333
Figure 12-9
MAT sample items 336
Figure 13-2
An MMPI profile sheet 356
Figure 13-3
An MMPI-2 profile sheet 362
Figure 13-4
Jackson Personality Inventory profile 376
Figure 13-5
NEO Personality Inventory profile sheet 379
Table 14-1
Summary of Rorschach Scoring 400
Focused Example 14-2
The danger of basing Rorschach interpretations on insufficient evidence 406–407
Sentence completion tasks 417
Figure 17-4
Profile of a patient tested with the Luria-Nebraska battery 489
Table 17-4
Some of the questions used in the Test Anxiety Questionnaire 497
Figure 18-2
Sample questions from the Wonderlic 527
Figure 19-8
Sample SOMPA profile 559
Table 20-2
Examples of items from a minimum competence test 597
Table 21-1
Performance testing 631
Preface
Psychology is a broad, exciting field. Psychologists work in settings ranging from schools and clinics to biochemical laboratories and private international companies. Despite this diversity, all psychologists have at least two things in common: They all study behavior, and they all depend to some extent on its measurement. This book concerns a particular type of measurement, psychological tests, which measure characteristics that pertain to all aspects of behavior in human beings.

Psychological Testing is the result of a long-standing partnership between the authors. As active participants in the development and use of psychological tests, we became disheartened because far too many undergraduate college students view psychological testing courses as boring and unrelated to their goals or career interests. In contrast, we view psychological testing as an exciting field. It has a solid place in the history of psychology, yet it is constantly in flux because of challenges, new developments, and controversies. A book on testing should encourage, not dampen, a student’s interest. Thus, we provide an overview of the many facets of psychological tests and measurement principles in a style that will appeal to the contemporary college student.

To understand the applications and issues in psychological testing, the student must learn some basic principles, which requires some knowledge of introductory statistics. Therefore, some reviewing and a careful reading of Part I will pave the way for an understanding of the applications of tests discussed in Part II. Part III examines the issues now shaping the future of testing. Such issues include test anxiety, test bias, and the interface between testing and the law. The future of applied psychology may depend on the ability of psychologists to face these challenging issues.

Throughout the book, we present a series of focused discussions and focused examples.
These sections illustrate the material in the book through examples or provide a more detailed discussion of a particular issue. We also use technical boxes to demonstrate material such as statistical calculations.
Increased Emphasis on Application

Students today often favor informal discussions and personally relevant examples. Consequently, we decided to use models from various fields and to write in an informal style. However, because testing is a serious and complicated field in which major disagreements exist even among scholars and experts, we have treated the controversial aspects of testing with more formal discussion and detailed referencing.
The first edition of Psychological Testing: Principles, Applications, and Issues was published in 1982. In the nearly one-quarter century since the text was first introduced, the world has changed in many ways. For example, personal computers were new in 1982. Most students and professors had never heard of e-mail or the Internet. There were many fewer applications of psychological testing than there are today. On the other hand, principles of psychological testing have remained relatively constant. Thus, newer editions have included improvements and refinements in the Principles chapters. The later chapters on Applications and Issues have evolved considerably.

Not only has the field of psychological testing changed, but so have the authors. One of us (RMK) has spent most of his career as a professor in a school of medicine and is now in a school of public health. The other (DPS) completed law school and works as both a psychology professor and an attorney. While maintaining our central identities as psychologists, we have also had the opportunity to explore cutting-edge practice in medicine, public health, education, and law. The sixth edition goes further than any previous edition in spelling out the applications of psychological testing in a wide variety of applied fields.

In developing the sixth edition, we have organized topics around the application areas. Chapter 11 considers psychological testing in education and special education. Chapter 12 looks at the use of standardized tests in education, civil service, and the military. Chapters 13 and 14 consider the use of psychological tests in clinical and counseling settings. The age of computers has completely revolutionized psychological testing. We deal with some of these issues in the Principles chapters by discussing computer-adaptive testing and item response theory. In Chapter 15, we discuss new applications of psychological science in the computer age.
Chapter 16 discusses the use of psychological testing in the field of counseling psychology and focuses primarily on interest inventories. Chapter 17 explores the rapidly developing fields of psychological assessment in health psychology, medicine, and health care. Chapter 18, which is new to the sixth edition, reviews psychological testing in industry and business settings. The final chapters on issues in psychological testing retain the previous titles but have been extensively updated to reflect new developments in these areas.

The first edition of Psychological Testing was produced on typewriters, before word processors were commonly used. At the time, few professors or students had access to private computers. The early editions of the book offered instruction for preparing statistical analyses for submission to mainframe computers. As recently as the production of the third edition, the Internet was largely unused by university students. Today, nearly all students have ready access to the Internet and World Wide Web, and we now commonly provide references to Web sites. Furthermore, we provide greater discussion of computer-administered tests.
Changes in the Sixth Edition

Producing six editions of Psychological Testing over nearly a quarter of a century has been challenging and rewarding. We are honored that hundreds of professors have adopted our text and that it is now used in hundreds of colleges and universities all over the world. However, some professors have suggested that we reorganize the book to facilitate their approach to the class. To accommodate the large variety of approaches, we have tried to keep the chapters independent enough for professors to teach them in whatever order they choose.

For example, one approach to the course is to go through the book in the sequence that we present. Professors who wish to emphasize psychometric issues, however, might assign Chapters 1 through 7, followed by Chapters 19 and 20. Then, they might return to certain chapters from the Applications section. On campuses that require a strong statistics course as a prerequisite, Chapters 2 and 3 might be dropped. Professors who emphasize applications might assign Chapters 1 through 5 and then proceed directly to Part II, with some professors assigning only some of its chapters. Though Chapters 9 through 13 are the ones most likely to be used in a basic course, we have found sufficient interest in Chapters 14 through 18 to retain them. Chapters 17 and 18 represent newer areas into which psychological testing is expanding. Finally, Chapters 19 and 20 were written so that they could be assigned either at the end of the course or near the beginning. For example, some professors prefer to assign Chapters 19 and 20 after Chapter 5.
Supplements Beyond Compare

As with the previous editions, a student workbook is available. Professors have access to an instructor’s manual and a bank of electronic test items.
Book Companion Web Site

The Web site contains several components that will be invaluable to instructors. First, a data set consisting of 25 examinees’ scores on several measures can be downloaded and used with accompanying reliability and validity exercises. Second, several integrative assignments—including a report on a battery of psychological tests, an evaluation of a mock test manual, and a test critique—and associated grading rubrics will be posted on the Web site. The integrative assignment files and grading rubrics are modifiable, allowing you to make changes so they better fit your specific course objectives.
Student Workbook (ISBN 0-534-63308-0)

More than a traditional study guide, the Student Workbook—written by Katherine Nicolai of Rockhurst University—truly helps students understand
the connections between abstract measurement concepts and the development, evaluation, selection, and use of psychological tests in the real world. The Student Workbook contains interesting hands-on exercises and assignments, including case studies to critique, test profiles to interpret, and studies on the psychometric properties of tests to evaluate. Of course, the Student Workbook also contains traditional features such as chapter outlines and practice multiple-choice quizzes. Best of all, the workbook is presented in a three-ring binder in which students can keep other course notes and handouts. Students will discover that the Student Workbook will help them organize their study of Kaplan and Saccuzzo’s text and excel on course exams, assignments, and projects!
Instructor’s Resource Manual/Test Bank (ISBN: 0-534-63307-2)

The Instructor’s Resource Manual (IRM) was written by Katherine Nicolai of Rockhurst University, and the Test Bank by Ira Bernstein and Kimberly McConnell of the University of Texas at Arlington. In an easy-to-use three-ring binder, the IRM contains a bevy of resources, including guides on designing your course, the use of psychological tests in the classroom, the use of student test data to teach measurement, suggested use of class time, and demonstrations, activities, and activity-based lectures. The IRM provides a description of integrative assignments found on the book companion Web site, gives instructors unique mock projectives, and much more. The test bank contains more than 750 multiple-choice questions in addition to many “thought” essay questions.
Acknowledgments

We are highly indebted to the many reviewers and professors who provided feedback on the fourth edition or reviewed drafts of the fifth edition. Special thanks go to reviewers of this edition, including: Virginia Allen, Idaho State University; David Bush, Utah State University; Maureen Hannah, Siena College; Ronald McLaughlin, Juniata College; Michael Mills, Loyola Marymount University; Philip Moberg, University of Akron; Jennifer Neemann, University of Baltimore; Karen Obremski Brandon, University of South Florida; Frederick Oswald, Michigan State University; Stefan Schulenberg, University of Mississippi; Chockalingam Viswesvaran, Florida International University; and Mark Wagner, Wagner College.

The six editions of this book have been developed under five different editors at Wadsworth. The earlier editions benefited from the patient and inspired supervision of Todd Lueders, C. Deborah Laughton, Phil Curson, Marianne Taflinger, and Jim Brace-Thompson. We are pleased that the editor for the third edition, Marianne Taflinger, has returned to help us complete the sixth edition. We are most appreciative of her patience, wisdom, and support in the
development of the current edition. Although we have had many editors, we have learned from each of them. We also want to thank Paul Wells, production project manager; Vernon Boes for coordinating the cover; Joohee Lee for permissions; and Jennifer Keever, assistant editor, for coordinating supplements. We want to give particular thanks to Kate Nicolai for authoring the exciting new Student Workbook and the much expanded Instructor’s Resource Manual. Special thanks go to Wendy Koen and Nancy E. Johnson. Wendy conducted much of the research for the updating of about half the chapters, drafted several new sections, and attended to numerous details. Nancy also assisted in numerous ways, including research, editing, and locating difficult-to-find sources. Without these two individuals, publication of this edition would have been much delayed. Robert M. Kaplan Dennis P. Saccuzzo May 2004
About the Authors Robert M. Kaplan is the chair of the Department of Health Services and Professor of Medicine at UCLA. Previously, he was professor and chair of the Department of Family and Preventive Medicine at the University of California, San Diego. He is also a past president of several organizations, including the American Psychological Association Division of Health Psychology, Section J of the American Association for the Advancement of Science (Pacific), the International Society for Quality of Life Research, and the Society for Behavioral Medicine. Dr. Kaplan is the editor-in-chief of the Annals of Behavioral Medicine and an associate or consulting editor for several other academic journals. Selected additional honors include the APA Division of Health Psychology Annual Award for Outstanding Scientific Contribution in 1987, Distinguished Research Lecturer, 1988, and Health Net Distinguished Lecturer in 1991; University of California 125th Anniversary Award for Most Distinguished Alumnus, University of California, Riverside; American Psychological Association Distinguished Lecturer; and the Distinguished Scientific Contribution Award from the American Association of Medical School Psychologists. His public service contributions include various National Institutes of Health (NIH), Agency for Healthcare Research and Quality, and VA grant review groups, as well as service on the local American Lung Association Board of Directors. He served as co-chair of the Behavioral Committee for the NIH Women’s Health Initiative and a member of both the National Heart, Lung, and Blood Institute (NHLBI) Behavioral Medicine Task Force and the Institute of Medicine National Academy of Sciences Committee on Health and Behavior. In addition, he is the chair of the Cost/Effectiveness Committee for the NHLBI National Emphysema Treatment Trial (NETT). Dr. Kaplan is the author of 14 books and over 390 publications. Dennis P. 
Saccuzzo is a professor of psychology at San Diego State University, an adjunct professor of psychiatry at the University of California, San Diego, and an adjunct professor of law at the California Western School of Law. He has been a scholar and practitioner of psychological testing for over 27 years and has numerous peer-reviewed publications and professional presentations in the field. Dr. Saccuzzo’s research has been supported by the National Science Foundation, the National Institute of Mental Health, the National Institutes of Health, the U.S. Department of Education, the Scottish Rite Foundation, and the U.S. armed services. He is also a California licensed psychologist and a California licensed attorney. He is board certified in clinical psychology by the American Board of Professional Psychology (ABPP). In addition, he is a Diplomate of the American Board of Assessment Psychology, the American Board of Forensic Medicine, the American Board of Forensic Examiners, and the American Board of Psychological Specialties (in forensic psychology). He is a fellow of the American Psychological Association, the American Psychological Society, and the Western Psychological Association for outstanding and unusual contributions to the field of psychology. Dr. Saccuzzo is the author or co-author of over 250 peer-reviewed papers and publications, including eight textbooks.
CHAPTER 1
Introduction
LEARNING OBJECTIVES
When you have completed this chapter, you should be able to:
Define the basic terms pertaining to psychological and educational tests
Distinguish between an individual test and a group test
Define the terms achievement, aptitude, and intelligence and identify a concept that can encompass all three terms
Distinguish between ability tests and personality tests
Define the term structured personality test
Explain how structured personality tests differ from projective personality tests
Explain what a normative or standardization sample is and why such a sample is important
Identify the major developments in the history of psychological testing
Explain the relevance of psychological tests in contemporary society
2 Chapter 1 Introduction
You are sitting at a table. You have just been fingerprinted and have shown a picture ID. You look around and see 40 nervous people. A stern-looking test proctor with a stopwatch passes out booklets. You are warned not to open the booklet until told to do so; you face possible disciplinary action if you disobey. This is not a nightmare or some futuristic fantasy—this is real. Finally, after what seems like an eternity, you are told to open your booklet to page 3 and begin working. Your mouth is dry; your palms are soaking wet. You open to page 3. You have 10 minutes to solve a five-part problem based on the following information.1
A car drives into the center ring of a circus and exactly eight clowns—Q, R, S, T, V, W, Y, and Z—get out of the car, one clown at a time. The order in which the clowns get out of the car is consistent with the following conditions:
V gets out at some time before both Y and Q.
Q gets out at some time after Z.
T gets out at some time before V but at some time after R.
S gets out at some time after V.
R gets out at some time before W.
Question 1. If Q is the fifth clown to get out of the car, then each of the following could be true except:
A. Z is the first clown to get out of the car.
B. T is the second clown to get out of the car.
C. V is the third clown to get out of the car.
D. W is the fourth clown to get out of the car.
E. Y is the sixth clown to get out of the car.
Not quite sure how to proceed, you look at the next question.
Question 2. If R is the second clown to get out of the car, which of the following must be true?
A. S gets out of the car at some time before T does.
B. T gets out of the car at some time before W does.
C. W gets out of the car at some time before V does.
D. Y gets out of the car at some time before Q does.
E. Z gets out of the car at some time before W does.
Your heart beats a little faster and your mind starts to freeze up like an overloaded computer with too little working memory. You glance at your watch and notice that 2 minutes have elapsed and you still don’t have your bearings. The person sitting next to you looks a bit faint. Three rows up, another test taker storms up to the test proctor and complains frantically that he cannot do
1. Used by permission from the Law School Admission Test, October 2002. The answer to question 1 is D; the answer to question 2 is E.
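For readers who want to convince themselves of these two answers, the puzzle is small enough to check by brute force. The following sketch (not part of the original text; the variable and function names are ours) enumerates every ordering of the eight clowns that satisfies the stated conditions:

```python
from itertools import permutations

clowns = "QRSTVWYZ"

def valid(order):
    """True if this ordering satisfies all of the puzzle's conditions."""
    p = {c: i for i, c in enumerate(order)}
    return (p["V"] < p["Y"] and p["V"] < p["Q"]      # V before Y and Q
            and p["Z"] < p["Q"]                      # Z before Q
            and p["R"] < p["T"] < p["V"]             # R before T before V
            and p["V"] < p["S"]                      # V before S
            and p["R"] < p["W"])                     # R before W

# Question 1: with Q fifth, can W ever be fourth?
q1 = [o for o in permutations(clowns) if valid(o) and o[4] == "Q"]
print(any(o[3] == "W" for o in q1))   # False: choice D can never be true

# Question 2: with R second, does Z always precede W?
q2 = [o for o in permutations(clowns) if valid(o) and o[1] == "R"]
print(all(o.index("Z") < o.index("W") for o in q2))  # True: choice E must be true
```

The reasoning behind the output: if Q is fifth, then V, Z, and (because of V) R and T must all come earlier, filling the first four positions and leaving no room for W; if R is second, Z is the only clown with no required predecessor left for the first position, so Z always exits before W.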
this type of problem. While the proctor struggles to calm this person down, another makes a mad dash for the restroom. Welcome to the world of competitive, “high stakes,” standardized psychological tests in the 21st century. The questions you just faced were actual problems from a past version of the LSAT—the Law School Admission Test. Whether a student is admitted into law school in the United States is determined almost entirely by that person’s score on the LSAT and undergraduate college grade point average. Thus, one’s future can depend to a tremendous extent on a single score from a single test given in a tension-packed morning or afternoon. Similar problems appear on the GRE—the Graduate Record Exam, a test that plays a major role in determining who gets to study at the graduate level in the United States. (Later in this book we shall discuss how to prepare for such tests and what their significance, or predictive validity, is.) Tests such as the LSAT and GRE are the most difficult modern psychological tests. The scenes we’ve described are real; some careers do ride on a single test. Perhaps you have already taken the GRE or LSAT. Or perhaps you have not graduated yet but are thinking about applying for an advanced degree or professional program and will soon be facing the GRE, LSAT, or MCAT (Medical College Admission Test). Clearly, it will help you to have a basic understanding of the multitude of psychological tests people are asked to take throughout their lives. From birth, tests have a major influence on our lives. When the pediatrician strokes the palms of our hands and the soles of our feet, she is performing a test. When we enter school, tests decide whether we pass or fail classes. Testing may determine if we need special education. There is a movement to use competency tests to determine whether students will graduate from high school (Gutloff, 1999; Jacob, 2001; Liu, Spicuzza, & Erickson, 1999; Mehrens, 2000; Shimmel & Langer, 2001). 
More tests determine which college we may attend. And, of course, when we get into college we face still more tests. After graduation, those who choose to avoid tests such as the GRE may need to take tests to determine where they will work. In the modern world, a large part of everyone’s life and success depends on test results. Indeed, tests even have international significance. For example, 15-year-old children in 32 nations were given problems such as the following from the Organization for Economic Co-operation and Development (OECD) and the Programme for International Student Assessment (PISA) (Schleicher & Tamassia, 2000): A result of global warming is that ice of some glaciers is melting. Twelve years after the ice disappears, tiny plants, called lichen, start to grow on the rocks. Each lichen grows approximately in the shape of a circle. The relationship between the diameter of the circles and the age of the lichen can be approximated with the formula: d = 7.0 × √(t − 12) for any t greater than or equal to 12, where d represents the diameter of the lichen in millimeters, and t represents the number of years after the ice has disappeared.
FIGURE 1-1
Approximate average scores of 15-year-old students on the PISA mathematical literacy test. (Statistics used by permission of the OECD and PISA. Figure courtesy of W. J. Koen.)
[Figure 1-1 data: International Mathematical Literacy Scores, national averages on a 300–600-point scale, ordered from lowest to highest: Brazil, Mexico, Luxembourg, Greece, Portugal, Italy, Latvia, Poland, Spain, Russia, Hungary, Germany, USA, Czech Republic, Norway, Ireland, Sweden, Liechtenstein, Iceland, Denmark, Austria, France, Belgium, U.K., Switzerland, Canada, Australia, Finland, New Zealand, Japan, Korea.]
Calculate the diameter of the lichen 16 years after the ice disappeared. The complete and correct answer is:
d = 7.0 × √(16 − 12) mm
d = 7.0 × √4 mm
d = 14 mm
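The arithmetic can be double-checked in a few lines. This is a sketch for illustration only (the function name is ours, not part of the PISA item):

```python
import math

def lichen_diameter(t: float) -> float:
    """Approximate lichen diameter in mm, t years after the ice disappeared.

    Implements d = 7.0 * sqrt(t - 12), which is defined only for t >= 12.
    """
    if t < 12:
        raise ValueError("formula applies only for t >= 12")
    return 7.0 * math.sqrt(t - 12)

print(lichen_diameter(16))  # 14.0
```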
Eighteen countries ranked above the United States in the percentage of 15-year-olds who had mastered such concepts (see Figure 1-1). The results were similar for an OECD science literacy test (see Figure 1-2), which had questions such as the following: A bus is moving along a straight stretch of road. The bus driver, named Ray, has a cup of water resting in a holder on the dashboard. Suddenly Ray has to
FIGURE 1-2
Approximate average scores of 15-year-old students on the PISA scientific literacy test. (Statistics used by permission of the OECD and PISA. Figure courtesy of W. J. Koen.)
[Figure 1-2 data: International Scientific Literacy Scores, national averages on a 300–600-point scale, ordered from lowest to highest: Brazil, Mexico, Luxembourg, Portugal, Latvia, Russia, Greece, Liechtenstein, Italy, Denmark, Poland, Germany, Spain, Switzerland, Belgium, Iceland, Hungary, USA, Norway, France, Czech Republic, Sweden, Ireland, Austria, Australia, New Zealand, Canada, U.K., Finland, Japan, Korea.]
slam on the brakes. What is most likely to happen to the water in the cup immediately after Ray slams on the brakes?
A. The water will stay horizontal.
B. The water will spill over side 1.
C. The water will spill over side 2.
D. The water will spill but you cannot tell if it will spill over side 1 or side 2.
The correct answer is C.
How useful are tests such as these? Do they measure anything meaningful? How accurate are they? Such questions concern not only every U.S. citizen but also all members of the highly competitive international community. To answer
them, you must understand the principles of psychological testing that you are about to learn. To answer questions about tests, you must understand the concepts presented in this book, such as reliability, validity, item analysis, and test construction. A full understanding of these concepts will require careful study and a knowledge of basic statistics, but your efforts will be richly rewarded. When you finish this book, you will be a better consumer of tests.
Basic Concepts You are probably already familiar with some of the elementary concepts of psychological testing. For the sake of clarity, however, we shall begin with definitions of the most basic terms so that you will know how they are used in this textbook.
What a Test Is Everyone has had experience with tests. A test is a measurement device or technique used to quantify behavior or aid in the understanding and prediction of behavior. A spelling test, for example, measures how well someone spells or the extent to which someone has learned to spell a specific list of words. At some time during the next few weeks, your instructor will likely want to measure how well you have learned the material in this book. To accomplish this, your instructor may give you a test. As you well know, the test your instructor gives may not measure your full understanding of the material. This is because a test measures only a sample of behavior, and error is always associated with a sampling process. Test scores are not perfect measures of a behavior or characteristic, but they do add significantly to the prediction process, as you will see. An item is a specific stimulus to which a person responds overtly; this response can be scored or evaluated (for example, classified, graded on a scale, or counted). Because psychological and educational tests are made up of items, the data they produce are explicit and hence subject to scientific inquiry. In simple terms, items are the specific questions or problems that make up a test. The problems presented at the beginning of this chapter are examples of test items. The overt response would be to fill in or blacken one of the spaces:
A   B   C   D   E   F   G
A psychological test or educational test is a set of items that are designed to measure characteristics of human beings that pertain to behavior. There are
many types of behavior. Overt behavior is an individual’s observable activity. Some psychological tests attempt to measure the extent to which someone might engage in or “emit” a particular overt behavior. Other tests measure how much a person has previously engaged in some overt behavior. Behavior can also be covert—that is, it takes place within an individual and cannot be directly observed. For example, your feelings and thoughts are types of covert behavior. Some tests attempt to measure such behavior. Psychological and educational tests thus measure past or current behavior. Some also attempt to predict future behavior, such as success in college or in an advanced degree program. What does it mean when someone gets 75 items correct on a 100-item test? One thing it means, of course, is that 75% of the items were answered correctly. In many situations, however, knowing the percentage of correct items a person obtained can be misleading. Consider two extreme examples. In one case, out of 100 students who took the exam, 99 had 90% correct or higher, and 1 had 75% correct. In another case, 99 of the 100 students had scores of 25% or lower, while 1 had 75% correct. The meaning of the scores can change dramatically, depending on how a well-defined sample of individuals scores on a test. In the first case, a score of 75% is poor because it is in the bottom of the distribution; in the second case, 75% is actually a top score. To deal with such problems of interpretation, psychologists make use of scales, which relate raw scores on test items to some defined theoretical or empirical distribution. Later in the book you will learn about such distributions. Scores on tests may be related to traits, which are enduring characteristics or tendencies to respond in a certain manner. “Determination,” sometimes seen as “stubbornness,” is an example of a trait; “shyness” is another. 
Test scores may also be related to the state, or the specific condition or status, of an individual. A determined individual after many setbacks may, for instance, be in a weakened state and therefore be less inclined than usual to manifest determination. Tests measure many types of behavior.
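The earlier point about interpreting raw scores — that the same 75% can be the bottom score in one group and the top score in another — can be made concrete with a small sketch (the helper function and the invented class data are ours, for illustration only):

```python
def percentile_rank(score, scores):
    """Percentage of scores in the group that fall strictly below the given score."""
    below = sum(1 for s in scores if s < score)
    return 100.0 * below / len(scores)

class_a = [90] * 99 + [75]   # 99 students scored 90% or higher, one scored 75%
class_b = [25] * 99 + [75]   # 99 students scored 25% or lower, one scored 75%

print(percentile_rank(75, class_a))  # 0.0  -> 75% is at the bottom of this group
print(percentile_rank(75, class_b))  # 99.0 -> 75% is at the top of this group
```

The raw score is identical in both cases; only its position in the distribution changes, which is exactly why scales relating raw scores to a reference distribution are needed.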
Types of Tests Just as there are many types of behavior, so there are many types of tests. Those that can be given to only one person at a time are known as individual tests (see Figure 1-3). The examiner or test administrator (the person giving the test) gives the test to only one person at a time, the same way that psychotherapists see only one person at a time. A group test, by contrast, can be administered to more than one person at a time by a single examiner, such as when an instructor gives everyone in the class a test at the same time. One can also categorize tests according to the type of behavior they measure. Ability tests contain items that can be scored in terms of speed, accuracy, or both. On an ability test, the faster or the more accurate your responses, the better your scores on a particular characteristic. The more algebra problems you can correctly solve in a given amount of time, the higher you score in ability to solve such problems.
FIGURE 1-3
An individual test administration. (Ann Chwatsky/ Jeroboam.)
Historically, experts have distinguished among achievement, aptitude, and intelligence as different types of ability. Achievement refers to previous learning. A test that measures or evaluates how many words you can spell correctly is called a spelling achievement test. Aptitude, by contrast, refers to the potential for learning or acquiring a specific skill. A spelling aptitude test measures how many words you might be able to spell given a certain amount of training, education, and experience. Your musical aptitude refers in part to how well you might be able to learn to play a musical instrument given a certain number of lessons. Traditionally distinguished from achievement and aptitude, intelligence refers to a person’s general potential to solve problems, adapt to changing circumstances, think abstractly, and profit from experience. When we say a person is “smart,” we are usually referring to intelligence. When a father scolds his daughter because she has not done as well in school as she can, he most likely believes that she has not used her intelligence (general potential) to achieve (acquire new knowledge). The distinctions among achievement, aptitude, and intelligence are not always so cut-and-dried because all three are highly interrelated. Attempts to separate prior learning from potential for learning, for example, have not succeeded. In view of the considerable overlap of achievement, aptitude, and intelligence tests, all three concepts are encompassed by the term human ability. There is a clear-cut distinction between ability tests and personality tests. Whereas ability tests are related to capacity or potential, personality tests are related to the overt and covert dispositions of the individual—for example, the
FIGURE 1-4
Self-report test items. Each statement is answered “True” or “False”:
1. I like heavy metal music.
2. I believe that honesty is the best policy.
3. I am in good health.
4. I am easily fatigued.
5. I sleep well at night.
tendency of a person to show a particular behavior or response in a given situation. Remaining isolated from others, for instance, does not require any special skill or ability, but some people typically prefer or tend to remain thus isolated. Personality tests measure typical behavior. There are several types of personality tests. In Chapter 13, you will learn about structured, or objective, personality tests. Structured personality tests provide a statement, usually of the “self-report” variety, and require the subject to choose between two or more alternative responses such as “True” or “False” (see Figure 1-4). In contrast to structured personality tests, projective personality tests are unstructured. In a projective personality test, either the stimulus (test materials) or the required response—or both—are ambiguous. For example, in the highly controversial Rorschach test, the stimulus is an inkblot. Furthermore, rather than being asked to choose among alternative responses, as in structured personality tests, the individual is asked to provide a spontaneous response. The inkblot is presented to the subject, who is asked, “What might this be?” Projective tests assume that a person’s interpretation of an ambiguous stimulus will reflect his or her unique characteristics (see Chapter 14). See Table 1-1 for a brief overview of ability and personality tests. Psychological testing refers to all the possible uses, applications, and underlying concepts of psychological and educational tests. The main use of these tests, though, is to evaluate individual differences or variations among individuals. Such tests measure individual differences in ability and personality and
TABLE 1-1
Types of Tests
I. Ability tests: Measure skills in terms of speed, accuracy, or both.
   A. Achievement: Measures previous learning.
   B. Aptitude: Measures potential for acquiring a specific skill.
   C. Intelligence: Measures potential to solve problems, adapt to changing circumstances, and profit from experiences.
II. Personality tests: Measure typical behavior—traits, temperaments, and dispositions.
   A. Structured (objective): Provides a self-report statement to which the person responds “True” or “False,” “Yes” or “No.”
   B. Projective: Provides an ambiguous test stimulus; response requirements are unclear.
assume that the differences shown on the test reflect actual differences among individuals. For instance, individuals who score high on an IQ test are assumed to have a higher degree of intelligence than those who obtain low scores. Thus, the most important purpose of testing is to differentiate among those taking the tests. We shall discuss the idea of individual differences later in this chapter.
Overview of the Book This book is divided into three parts: Principles, Applications, and Issues. Together, these parts cover psychological testing from the most basic ideas to the most complex. Basic ideas and events are introduced early and stressed throughout to reinforce what you have just learned. In covering principles, applications, and issues, we intend to provide not only the who’s of psychological testing but also the how’s and why’s of major developments in the field. We also address an important concern of many students—relevance—by examining the diverse uses of tests and the resulting data.
Principles of Psychological Testing By principles of psychological testing we mean the basic concepts and fundamental ideas that underlie all psychological and educational tests. Chapters 2 and 3 present statistical concepts that provide the foundation for understanding tests. Chapters 4 and 5 cover two of the most fundamental concepts in testing: reliability and validity. Reliability refers to the accuracy, dependability, consistency, or repeatability of test results. In more technical terms, reliability refers to the degree to which test scores are free of measurement errors. As you will learn, there are many ways a test can be reliable. For example, test results may be reliable over time, which means that when the same test is given twice within any given time interval, the results tend to be the same or highly similar. Validity refers to the meaning and usefulness of test results. More specifically, validity refers to the degree to which a certain inference or interpretation based on a test is appropriate. When one asks the question, “What does this psychological test measure?” one is essentially asking “For what inference is this test valid?” Another principle of psychological testing concerns how a test is created or constructed. In Chapter 6, we present the principles of test construction. The act of giving a test is known as test administration, which is the main topic of Chapter 7. Though some tests are easy to administer, others must be administered in a highly specific way. The final chapter of Part I covers the fundamentals of administering a psychological test.
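As a preview of the reliability idea just introduced — that scores from two administrations of the same test should agree — test-retest reliability is conventionally summarized by the correlation between the two sets of scores. This sketch uses invented scores and our own helper name; the full treatment appears in Chapter 4:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by the two standard deviations."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Five people tested twice; scores keep nearly the same rank order,
# so the test-retest correlation (reliability estimate) is close to 1.
first  = [82, 75, 90, 60, 70]
second = [80, 78, 92, 58, 72]
print(round(pearson_r(first, second), 3))
```

A value near 1 indicates that the test orders people almost identically on both occasions; a value near 0 would mean the two administrations barely agree, i.e., the scores are dominated by measurement error.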
Applications of Psychological Testing Part II, on applications, provides a detailed analysis of many of the most popular tests and how they are used or applied. It begins with an overview of the essential terms and concepts that relate to the application of tests. Chapter 8
discusses interviewing techniques. An interview is a method of gathering information through verbal interaction, such as direct questions. Not only has the interview traditionally served as a major technique of gathering psychological information in general, but also data from interviews provide an important complement to test results. Chapters 9 and 10 cover individual tests of human ability. In these chapters, you will learn not only about tests but also about the theories of intelligence that underlie them. In Chapter 11, we cover testing in education with an emphasis on special education. In Chapter 12, we present group tests of human ability. Chapter 13 covers structured personality tests, and Chapter 14 covers projective personality tests. In Chapter 15, we discuss the important role of computers in the testing field. We also consider the influence of cognitive psychology, which today is the most prominent of the various schools of thought within psychology (Kellogg, 2003; Leahy & Dowd, 2002; Weinstein & Way, 2003). These chapters not only provide descriptive information but also delve into the ideas underlying the various tests. Chapter 16 reviews the relatively new area of medical testing for brain damage and health status. It also covers important recent advancements in developmental neuropsychology. Chapter 17 examines interest tests, which measure behavior relevant to such factors as occupational preferences. Finally, Chapter 18 covers tests for industrial and organizational psychology and business.
Issues of Psychological Testing Many social and theoretical issues, such as the controversial topic of racial differences in ability, accompany testing. Part III covers many of these issues. As a compromise between breadth and depth of coverage, we focus on a comprehensive discussion of those issues that have particular importance in the current professional, social, and political environment. Chapter 19 examines test bias, one of the most volatile issues in the field today (Fox, 1999; Geisinger, 2003; Reynolds & Ramsay, 2003 Ryan & DeMark, 2002). Because psychological tests have been accused of being discriminatory or biased against certain groups, this chapter takes a careful look at both sides of the argument. Because of charges of bias and other problems, psychological testing is increasingly coming under the scrutiny of the law (Phillips, 2002; Saccuzzo, 1999). Chapter 20 examines test bias as related to legal issues and discusses testing in forensic settings. Chapter 21 presents a general overview of other major issues currently shaping the future of psychological testing in the United States with an emphasis on ethics. From our review of the issues, we also speculate on what the future holds for psychological testing.
Historical Perspective We shall now briefly provide the historical context of psychological testing. This discussion will touch on some of the material presented earlier in this chapter.
Early Antecedents Most of the major developments in testing have occurred over the last century, many of them in the United States. The origins of testing, however, are neither recent nor American. Evidence suggests that the Chinese had a relatively sophisticated civil service testing program more than 4000 years ago (DuBois, 1970, 1972). Every third year in China, oral examinations were given to help determine work evaluations and promotion decisions. By the Han Dynasty (206 B.C.E. to 220 C.E.), the use of test batteries (two or more tests used in conjunction) was quite common. These early tests related to such diverse topics as civil law, military affairs, agriculture, revenue, and geography. Tests had become quite well developed by the Ming Dynasty (1368–1644 C.E.). During this period, a national multistage testing program involved local and regional testing centers equipped with special testing booths. Those who did well on the tests at the local level went on to provincial capitals for more extensive essay examinations. After this second testing, those with the highest test scores went on to the nation’s capital for a final round. Only those who passed this third set of tests were eligible for public office. The Western world most likely learned about testing programs through the Chinese. Reports by British missionaries and diplomats encouraged the English East India Company in 1832 to copy the Chinese system as a method of selecting employees for overseas duty. Because testing programs worked well for the company, the British government adopted a similar system of testing for its civil service in 1855. After the British endorsement of a civil service testing system, the French and German governments followed suit. In 1883, the U.S. government established the American Civil Service Commission, which developed and administered competitive examinations for certain government jobs. 
The testing movement in the Western world gained momentum rapidly at that time (Wiggins, 1973).
Charles Darwin and Individual Differences Perhaps the most basic concept underlying psychological and educational testing pertains to individual differences. No two snowflakes are identical, no two fingerprints the same. Similarly, no two people are exactly alike in ability and typical behavior. As we have noted, tests are specifically designed to measure these individual differences in ability and personality among people. Although human beings realized long ago that individuals differ, developing tools for measuring such differences was no easy matter. To develop a measuring device, we must understand what we want to measure. An important step toward understanding individual differences came with the publication of Charles Darwin’s highly influential book, The Origin of Species, in 1859. According to Darwin’s theory, higher forms of life evolved partially because of differences among individual forms of life within a species. Given that individual members of a species differ, some possess characteristics that are more adaptive or successful in a given environment than are those of other members. Darwin also believed that those with the best or most adaptive characteristics survive at the expense of those who are less fit and that the survivors pass their characteristics on to the next generation. Through this process, he argued, life has evolved to its currently complex and intelligent levels. Sir Francis Galton, a relative of Darwin’s, soon began applying Darwin’s theories to the study of human beings (see Figure 1-5). Given the concepts of survival of the fittest and individual differences, Galton set out to show that some people possessed characteristics that made them more fit than others, a theory he articulated in his book Hereditary Genius, published in 1869. Galton (1883) subsequently began a series of experimental studies to document the validity of his position. He concentrated on demonstrating that individual differences exist in human sensory and motor functioning, such as reaction time, visual acuity, and physical strength. In doing so, Galton initiated a search for knowledge concerning human individual differences, which is now one of the most important domains of scientific psychology. Galton’s work was extended by the U.S. psychologist James McKeen Cattell, who coined the term mental test (Cattell, 1890). Cattell’s doctoral dissertation was based on Galton’s work on individual differences in reaction time. As such, Cattell perpetuated and stimulated the forces that ultimately led to the development of modern tests.
FIGURE 1-5
Sir Francis Galton. (From the National Library of Medicine.)
Experimental Psychology and Psychophysical Measurement

A second major foundation of testing can be found in experimental psychology and early attempts to unlock the mysteries of human consciousness through the scientific method. Before psychology was practiced as a science, mathematical models of the mind were developed, in particular those of J. F. Herbart. Herbart eventually used these models as the basis for educational theories that strongly influenced 19th-century educational practices. Following Herbart, E. H. Weber attempted to demonstrate the existence of a psychological
threshold, the minimum stimulus necessary to activate a sensory system. Then, following Weber, G. T. Fechner devised the law that the strength of a sensation grows as the logarithm of the stimulus intensity. Wilhelm Wundt, who set up a laboratory at the University of Leipzig in 1879, is credited with founding the science of psychology, following in the tradition of Weber and Fechner (Hearst, 1979). Wundt was succeeded by E. B. Titchener, whose student, G. Whipple, recruited L. L. Thurstone. Whipple provided the basis for immense changes in the field of testing by conducting a seminar at the Carnegie Institute in 1919 attended by Thurstone, E. K. Strong, and other early prominent U.S. psychologists. From this seminar came the Carnegie Interest Inventory and later the Strong Vocational Interest Blank. Later in this book we discuss in greater detail the work of these pioneers and the tests they helped to develop. Thus, psychological testing developed from at least two lines of inquiry: one based on the work of Darwin, Galton, and Cattell on the measurement of individual differences, and the other (more theoretically relevant and probably stronger) based on the work of the German psychophysicists Herbart, Weber, Fechner, and Wundt. Experimental psychology developed from the latter. From this work also came the idea that testing, like an experiment, requires rigorous experimental control. Such control, as you will see, comes from administering tests under highly standardized conditions. The efforts of these researchers, however necessary, did not by themselves lead to the creation of modern psychological tests. Such tests also arose in response to important needs such as classifying and identifying the mentally and emotionally handicapped. One of the earliest tests resembling current procedures, the Seguin Form Board Test (Seguin, 1866/1907), was developed in an effort to educate and evaluate the mentally disabled.
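For reference, Weber’s threshold work and Fechner’s law described above are conventionally written as follows. These are the standard textbook formulations, not equations quoted from this chapter; the constants k and c and the threshold intensity I_0 are the usual symbols, not Weber’s or Fechner’s own notation.

```latex
% Weber's law: the just-noticeable change \Delta I in a stimulus
% is a constant proportion k of the stimulus intensity I.
\frac{\Delta I}{I} = k

% Fechner's law: sensation magnitude S grows as the logarithm of
% stimulus intensity I relative to the absolute threshold I_0.
S = c \log \frac{I}{I_0}
```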
Similarly, Kraepelin (1912) devised a series of examinations for evaluating emotionally impaired people. An important breakthrough in the creation of modern tests came at the turn of the 20th century. The French minister of public instruction appointed a commission to study ways of identifying intellectually subnormal individuals in order to provide them with appropriate educational experiences. One member of that commission was Alfred Binet. Working in conjunction with the French physician T. Simon, Binet developed the first major general intelligence test. Binet’s early effort launched the first systematic attempt to evaluate individual differences in human intelligence (see Chapter 9).
The Evolution of Intelligence and Standardized Achievement Tests

The history and evolution of Binet’s intelligence test are instructive. The first version of the test, known as the Binet-Simon Scale, was published in 1905. This instrument contained 30 items of increasing difficulty and was designed to identify intellectually subnormal individuals. Like all well-constructed tests, the Binet-Simon Scale of 1905 was augmented by a comparison or standardization sample. Binet’s standardization sample consisted of 50 children who had been given the test under standard conditions—that is, with precisely the same instructions and format. In obtaining this standardization sample, the authors of the Binet test had norms with which they could compare the results from any new subject. Without such norms, the meaning of scores would have been difficult, if not impossible, to evaluate. However, by knowing such things as the average number of correct responses found in the standardization sample, one could at least state whether a new subject scored below or above that average. It is easy to understand the importance of a standardization sample. However, the importance of obtaining a standardization sample that represents the population for which a test will be used has sometimes been ignored or overlooked by test users (Malreaux, 1999). For example, if a standardization sample consists of 50 white men from wealthy families, then one cannot easily or fairly evaluate the score of an African American girl from a poverty-stricken family. Nevertheless, comparisons of this kind are sometimes made. Clearly, it is not appropriate to compare an individual with a group that does not have the same characteristics as the individual (Garcia & Fleming, 1998). Binet was aware of the importance of a standardization sample. Further development of the Binet test involved attempts to increase the size and representativeness of the standardization sample. A representative sample is one that comprises individuals similar to those for whom the test is to be used. When the test is used for the general population, a representative sample must reflect all segments of the population in proportion to their actual numbers. By 1908, the Binet-Simon Scale had been substantially improved. It was revised to include nearly twice as many items as the 1905 scale. Even more significantly, the size of the standardization sample was increased to more than 200.
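The logic of comparing a new examinee with a standardization sample can be made concrete with a few lines of Python. The 50 scores below are invented for illustration; Binet’s actual 1905 norms are not reproduced in this chapter.

```python
import statistics

# A hypothetical standardization sample of 50 raw scores (items passed);
# these numbers are illustrative, not Binet's norms.
norm_sample = [12, 15, 14, 18, 11, 16, 13, 17, 15, 14] * 5
mean = statistics.mean(norm_sample)

def compare_to_norms(score):
    # Without norms a raw score is uninterpretable; with them we can at least
    # say whether an examinee falls below or above the norm-group average.
    direction = "above" if score > mean else "below"
    return f"a score of {score} is {direction} the standardization-sample mean of {mean:.1f}"

print(compare_to_norms(19))
```

The point of the sketch is the same as Binet’s: the norm group turns a bare number into a statement about relative standing.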
The 1908 Binet-Simon Scale also determined a child’s mental age, thereby introducing a historically significant concept. In simplified terms, you might think of mental age as a measurement of a child’s performance on the test relative to other children of that particular age group. If a child’s test performance equals that of the average 8-year-old, for example, then his or her mental age is 8. In other words, in terms of the abilities measured by the test, this child can be viewed as having a similar level of ability as the average 8-year-old. The chronological age of the child may be 4 or 12, but in terms of test performance, the child functions at the same level as the average 8-year-old. The mental age concept was one of the most important contributions of the revised 1908 Binet-Simon Scale. In 1911, the Binet-Simon Scale received a minor revision. By this time, the idea of intelligence testing had swept across the world. By 1916, L. M. Terman of Stanford University had revised the Binet test for use in the United States. Terman’s revision, known as the Stanford-Binet Intelligence Scale (Terman, 1916), was the only American version of the Binet test that flourished. It also exemplifies one of the most important trends in testing—the drive toward better tests. Terman’s 1916 revision of the Binet-Simon Scale contained many improvements. The standardization sample was increased to include 1000
people, original items were revised, and many new items were added. Terman’s 1916 Stanford-Binet Intelligence Scale added respectability and momentum to the newly developing testing movement. World War I. The testing movement grew enormously in the United States because of the demand for a quick, efficient way of evaluating the emotional and intellectual functioning of thousands of military recruits in World War I. The war created a demand for large-scale group testing because relatively few trained personnel could evaluate the huge influx of military recruits. However, the Binet test was an individual test. Shortly after the United States became actively involved in World War I, the army requested the assistance of Robert Yerkes, who was then the president of the American Psychological Association (see Yerkes, 1921). Yerkes headed a committee of distinguished psychologists who soon developed two structured group tests of human abilities: the Army Alpha and the Army Beta. The Army Alpha required reading ability, whereas the Army Beta measured the intelligence of illiterate adults. World War I fueled the widespread development of group tests. About this time, the scope of testing also broadened to include tests of achievement, aptitude, interest, and personality. Because achievement, aptitude, and intelligence tests overlapped considerably, the distinctions proved to be more illusory than real. Even so, the 1916 Stanford-Binet Intelligence Scale had appeared at a time of strong demand and high optimism for the potential of measuring human behavior through tests. World War I and the creation of group tests had then added momentum to the testing movement. Shortly after the appearance of the 1916 Stanford-Binet Intelligence Scale and the Army Alpha test, schools, colleges, and industry began using tests. It appeared to many that this new phenomenon, the psychological test, held the key to solving the problems emerging from the rapid growth of population and technology. 
Achievement tests. Among the most important developments following World War I was the rise of standardized achievement tests. In contrast to essay tests, standardized achievement tests provide multiple-choice questions that are standardized on a large sample to produce norms against which the results of new examinees can be compared. Standardized achievement tests caught on quickly because of the relative ease of administration and scoring and the lack of subjectivity or favoritism that can occur in essay or other written tests. In school settings, standardized achievement tests allowed one to maintain identical testing conditions and scoring standards for a large number of children. Such tests also allowed a broader coverage of content and were less expensive and more efficient than essays. In 1923, the development of standardized achievement tests culminated in the publication of the Stanford Achievement Test by T. L. Kelley, G. M. Ruch, and L. M. Terman. By the 1930s, it was widely held that the objectivity and reliability of these new standardized tests made them superior to essay tests. Their use proliferated widely. It is interesting, as we shall discuss later in the book, that teachers of today appear to have come full circle. Currently, many people favor written tests and work samples (portfolios) over standardized achievement tests as the best way to evaluate children (Boerum, 2000; Harris, 2002; Muir & Tracy, 1999; Potter, 1999; Russo & Warren, 1999).

Rising to the challenge. For every movement there is a countermovement, and the testing movement in the United States in the 1930s was no exception. Critics soon became vocal enough to dampen enthusiasm and to make even the most optimistic advocates of tests defensive. Researchers, who demanded nothing short of the highest standards, noted the limitations and weaknesses of existing tests. Not even the Stanford-Binet, a landmark in the testing field, was safe from criticism. Although tests were used between the two world wars and many new tests were developed, their accuracy and utility remained under heavy fire. Near the end of the 1930s, developers began to reestablish the respectability of tests. New, improved tests reflected the knowledge and experience of the previous two decades. By 1937, the Stanford-Binet had been revised again. Among the many improvements was the inclusion of a standardization sample of more than 3000 individuals. A mere 2 years after the 1937 revision of the Stanford-Binet test, David Wechsler published the first version of the Wechsler intelligence scales (see Chapter 10), the Wechsler-Bellevue Intelligence Scale (W-B) (Wechsler, 1939). The Wechsler-Bellevue scale contained several interesting innovations in intelligence testing. Unlike the Stanford-Binet test, which produced only a single score (the so-called IQ, or intelligence quotient), Wechsler’s test yielded several scores, permitting an analysis of an individual’s pattern or combination of abilities. Among the various scores produced by the Wechsler test was the performance IQ.
Performance tests do not require a verbal response; one can use them to evaluate intelligence in people who have few verbal or language skills. The Stanford-Binet test had long been criticized because of its emphasis on language and verbal skills, making it inappropriate for many individuals, such as those who cannot speak or who cannot read. In addition, few people believed that language or verbal skills play an exclusive role in human intelligence. Wechsler’s inclusion of a nonverbal scale thus helped overcome some of the practical and theoretical weaknesses of the Binet test. In 1986, the Binet test was drastically revised to include performance subtests. More recently, it was overhauled again in 2003, as we shall see in Chapter 9. (Other important concepts in intelligence testing will be formally defined in Chapter 10, which covers the various forms of the Wechsler intelligence scales.)
Personality Tests: 1920–1940

Just before and after World War I, personality tests began to blossom. Whereas intelligence tests measured ability or potential, personality tests measured presumably stable characteristics or traits that theoretically underlie behavior.
FIGURE 1-6
The Woodworth Personal Data Sheet represented an attempt to standardize the psychiatric interview. It contains questions such as those shown here.
Each item is answered “Yes” or “No”:

1. I wet the bed.
2. I drink a quart of whiskey each day.
3. I am afraid of closed spaces.
4. I believe I am being followed.
5. People are out to get me.
6. Sometimes I see or hear things that other people do not hear or see.
Traits are relatively enduring dispositions (tendencies to act, think, or feel in a certain manner in any given circumstance) that distinguish one individual from another. For example, we say that some people are optimistic and some pessimistic. Optimistic people tend to remain so regardless of whether or not things are going well. A pessimist, by contrast, tends to look at the negative side of things. Optimism and pessimism can thus be viewed as traits. One of the basic goals of traditional personality tests is to measure traits. As you will learn, however, the notion of traits has important limitations. The earliest personality tests were structured paper-and-pencil group tests. These tests provided multiple-choice and true–false questions that could be administered to a large group. Because it provides a high degree of structure— that is, a definite stimulus and specific alternative responses that can be unequivocally scored—this sort of test is a type of structured personality test. The first structured personality test, the Woodworth Personal Data Sheet, was developed during World War I and was published in final form just after the war (see Figure 1-6). As indicated earlier, the motivation underlying the development of the first personality test was the need to screen military recruits. History indicates that tests such as the Binet and the Woodworth were created by necessity to meet unique challenges. Like the early ability tests, however, the first structured personality test was simple by today’s standards. Interpretation of the Woodworth test depended on the now-discredited assumption that the content of an item could be accepted at face value. If the person marked “False” for the statement “I wet the bed,” then it was assumed that he or she did not “wet the bed.” As logical as this assumption seems, experience has shown that it is often false. 
Even a person responding honestly may not interpret the meaning of “wet the bed” the same way as the test administrator does. (Other problems with tests such as the Woodworth are discussed in Chapter 13.) The introduction of the Woodworth test was enthusiastically followed by the creation of a variety of structured personality tests, all of which assumed that a subject’s response could be taken at face value. However, researchers
FIGURE 1-7
Card 1 of the Rorschach inkblot test, a projective personality test. Such tests provide an ambiguous stimulus to which a subject is asked to make some response.
scrutinized, analyzed, and criticized the early structured personality tests, just as they had done with the ability tests. Indeed, the criticism of tests that relied on face value alone became so intense that structured personality tests were nearly driven out of existence. The development of new tests based on more modern concepts followed, revitalizing the use of structured personality tests. Thus, after an initial surge of interest and optimism during most of the 1920s, structured personality tests declined by the late 1930s and early 1940s. Following World War II, however, personality tests based on fewer or different assumptions were introduced, thereby rescuing the structured personality test. During the brief but dramatic rise and fall of the first structured personality tests, interest in projective tests began to grow. In contrast to structured personality tests, which in general provide a relatively unambiguous test stimulus and specific alternative responses, projective personality tests provide an ambiguous stimulus and unclear response requirements. Furthermore, the scoring of projective tests is often subjective. Unlike the early structured personality tests, interest in the projective Rorschach inkblot test grew slowly (see Figure 1-7). The Rorschach test was first published by Hermann Rorschach of Switzerland in 1921. However, several years passed before the Rorschach came to the United States, where David Levy introduced it. The first Rorschach doctoral dissertation written in a U.S. university was not completed until 1932, when Sam Beck, Levy’s student, decided to investigate the properties of the Rorschach test scientifically. Although initial interest in the Rorschach test was lukewarm at best, its popularity grew rapidly after Beck’s work, despite suspicion, doubt, and criticism from the scientific community. Today, however, the Rorschach is under a dark cloud (see Chapter 14).
Adding to the momentum for the acceptance and use of projective tests was the development of the Thematic Apperception Test (TAT) by Henry Murray and Christiana Morgan in 1935. Whereas the Rorschach test contained completely ambiguous inkblot stimuli, the TAT was more structured. Its stimuli consisted of ambiguous pictures depicting a variety of scenes and situations, such as a boy sitting in front of a table with a violin on it. Unlike the Rorschach
test, which asked the subject to explain what the inkblot might be, the TAT required the subject to make up a story about the ambiguous scene. The TAT purported to measure human needs and thus to ascertain individual differences in motivation.
The Emergence of New Approaches to Personality Testing

The popularity of the two most important projective personality tests, the Rorschach and TAT, grew rapidly by the late 1930s and early 1940s, perhaps because of disillusionment with structured personality tests (Dahlstrom, 1969a). However, as we shall see in Chapter 14, projective tests, particularly the Rorschach, have not withstood a vigorous examination of their psychometric properties (Wood, Nezworski, Lilienfeld, & Garb, 2003). In 1943, the Minnesota Multiphasic Personality Inventory (MMPI) began a new era for structured personality tests. The idea behind the MMPI—to use empirical methods to determine the meaning of a test response—helped revolutionize structured personality tests. The problem with early structured personality tests such as the Woodworth was that they made far too many assumptions that subsequent scientific investigations failed to substantiate. The authors of the MMPI, by contrast, argued that the meaning of a test response could be determined only by empirical research. The MMPI, along with its updated companion the MMPI-2 (Butcher, 1989, 1990), is currently the most widely used and referenced personality test. Its emphasis on the need for empirical data has stimulated the development of tens of thousands of studies. Just about the time the MMPI appeared, personality tests based on the statistical procedure called factor analysis began to emerge. Factor analysis is a method of finding the minimum number of dimensions (characteristics, attributes), called factors, to account for a large number of variables. We may say a person is outgoing, is gregarious, seeks company, is talkative, and enjoys relating to others. However, these descriptions contain a certain amount of redundancy. A factor analysis can identify how much they overlap and whether they can all be accounted for or subsumed under a single dimension (or factor) such as extroversion. In the early 1940s, J. P. Guilford made the first serious attempt to use factor analytic techniques in the development of a structured personality test. By the end of that decade, R. B. Cattell had introduced the Sixteen Personality Factor Questionnaire (16PF); despite its declining popularity, it remains one of the most well-constructed structured personality tests and an important example of a test developed with the aid of factor analysis. Today, factor analysis is a tool used in the design or validation of just about all major tests. (Factor analytic personality tests will be discussed in Chapter 13.) See Table 1-2 for a brief overview of personality tests.
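To give a feel for what factor analysis does, here is a toy sketch in pure Python under stated assumptions: the data are simulated, not from any real questionnaire, and the dominant eigenvalue is found by simple power iteration rather than a full factor extraction. Four questionnaire items are generated as noisy copies of a single latent “extroversion” trait, and the largest eigenvalue of their correlation matrix shows that one factor accounts for most of the variance.

```python
import random
import statistics

def correlation_matrix(variables):
    """Pearson correlation matrix for a list of equal-length score lists."""
    k, n = len(variables), len(variables[0])
    means = [statistics.mean(v) for v in variables]
    sds = [statistics.pstdev(v) for v in variables]
    return [[sum((variables[i][t] - means[i]) * (variables[j][t] - means[j])
                 for t in range(n)) / (n * sds[i] * sds[j])
             for j in range(k)] for i in range(k)]

def dominant_eigenvalue(matrix, iterations=200):
    """Largest eigenvalue of a symmetric matrix, found by power iteration."""
    k = len(matrix)
    v = [1.0] * k
    value = 0.0
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(k)) for i in range(k)]
        value = sum(x * x for x in w) ** 0.5  # converges to the eigenvalue
        v = [x / value for x in w]            # normalized eigenvector estimate
    return value

random.seed(0)
n = 500
trait = [random.gauss(0, 1) for _ in range(n)]  # hypothetical latent trait
# Four items that are noisy reflections of the single underlying trait
items = [[t + random.gauss(0, 0.5) for t in trait] for _ in range(4)]

R = correlation_matrix(items)
first_factor = dominant_eigenvalue(R)
# With four standardized variables the total variance is 4;
# here a single factor carries most of it.
print(f"proportion of variance on first factor: {first_factor / 4:.2f}")
```

Because the four items overlap so heavily, one dimension subsumes most of what they measure, which is exactly the kind of redundancy factor analysis is designed to expose.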
The Period of Rapid Changes in the Status of Testing

The 1940s saw not only the emergence of a whole new technology in psychological testing but also the growth of applied aspects of psychology. The role and significance of tests used in World War I were reaffirmed in World War II.
TABLE 1-2
Summary of Personality Tests
Woodworth Personal Data Sheet: An early structured personality test that assumed that a test response can be taken at face value.
The Rorschach Inkblot Test: A highly controversial projective test that provided an ambiguous stimulus (an inkblot) and asked the subject what it might be.
The Thematic Apperception Test (TAT): A projective test that provided ambiguous pictures and asked subjects to make up a story.
The Minnesota Multiphasic Personality Inventory (MMPI): A structured personality test that made no assumptions about the meaning of a test response. Such meaning was to be determined by empirical research.
The California Psychological Inventory (CPI): A structured personality test developed according to the same principles as the MMPI.
The Sixteen Personality Factor Questionnaire (16PF): A structured personality test based on the statistical procedure of factor analysis.
By this time, the U.S. government had begun to encourage the continued development of applied psychological technology. As a result, considerable federal funding provided paid, supervised training for clinically oriented psychologists. By 1949, formal university training standards had been developed and accepted, and clinical psychology was born. Other applied branches of psychology—such as industrial, counseling, educational, and school psychology—soon began to blossom. One of the major functions of the applied psychologist was providing psychological testing. The Shakow, Hilgard, Kelly, Sanford, and Shaffer (1947) report, which was the foundation of the formal training standards in clinical psychology, specified that psychological testing was a unique function of the clinical psychologist and recommended that testing methods be taught only to doctoral psychology students. A position paper of the American Psychological Association published 7 years later (APA, 1954) affirmed that the domain of the clinical psychologist included testing. It formally declared, however, that the psychologist would conduct psychotherapy only in “true” collaboration with physicians. Thus, psychologists could conduct testing independently, but not psychotherapy. Indeed, as long as psychologists assumed the role of testers, they played a complementary but often secondary role vis-à-vis medical practitioners. Though the medical profession could have hindered the emergence of clinical psychology, it did not, because as tester the psychologist aided the physician. Therefore, in the late 1940s and early 1950s, testing was the major function of the clinical psychologist (Shaffer, 1953). For better or worse, depending on one’s perspective, the government’s efforts to stimulate the development of applied aspects of psychology, especially clinical psychology, were extremely successful. 
Hundreds of highly talented and creative young people were attracted to clinical and other applied areas of psychology. These individuals, who would use tests and other psychological techniques to solve practical human problems, were uniquely trained as practitioners of the principles, empirical foundations, and applications of the science of psychology. Armed with powerful knowledge from scientific psychology, many of these early clinical practitioners must have felt frustrated by their relationship
to physicians (see Saccuzzo & Kaplan, 1984). Unable to engage independently in the practice of psychotherapy, some psychologists felt like technicians serving the medical profession. The highly talented group of post–World War II psychologists quickly began to reject this secondary role. Further, because many psychologists associated tests with this secondary relationship, they rejected testing (Lewandowski & Saccuzzo, 1976). At the same time, the potentially intrusive nature of tests and fears of misuse began to create public suspicion, distrust, and contempt for tests. Attacks on testing came from within and without the profession. These attacks intensified and multiplied so fast that many psychologists jettisoned all ties to the traditional tests developed during the first half of the 20th century. Testing therefore underwent another sharp decline in status in the late 1950s that persisted into the 1970s (see Holt, 1967).
The Current Environment

During the 1980s, 1990s, and 2000s, several major branches of applied psychology emerged and flourished: neuropsychology, health psychology, forensic psychology, and child psychology. Because each of these important areas of psychology makes extensive use of psychological tests, psychological testing again grew in status and use. Neuropsychologists use tests in hospitals and other clinical settings to assess brain injury. Health psychologists use tests and surveys in a variety of medical settings. Forensic psychologists use tests in the legal system to assess mental state as it relates to an insanity defense, competency to stand trial or to be executed, and emotional damages. Child psychologists use tests to assess childhood disorders. As in the past, psychological testing in the first decade of the 21st century remains one of the most important yet controversial topics in psychology. As a student, no matter what your occupational or professional goals, you will find the material in this text invaluable. If you are among those who are interested in using psychological techniques in an applied setting, then this information will be particularly significant. From the roots of psychology to the present, psychological tests have remained among the most important instruments of the psychologist in general and of those who apply psychology in particular. Testing is indeed one of the essential elements of psychology. Though not all psychologists use tests and some psychologists are opposed to them, all areas of psychology depend on knowledge gained in research studies that rely on measurements. The meaning and dependability of these measurements are essential to psychological research. To study any area of human behavior effectively, one must understand the basic principles of measurement. In today’s complex society, the relevance of the principles, applications, and issues of psychological testing extends far beyond the field of psychology.
Even if you do not plan to become a psychologist, you will likely encounter psychological tests. Attorneys, physicians, social workers, business managers,
educators, and many other professionals must frequently deal with reports based on such tests. Even as a parent, you are likely to encounter tests (taken by your children). To interpret such information adequately, you need the information presented in this book. The more you know about psychological tests, the more confident you can be in your encounters with them. Given the attacks on tests and threats to prohibit or greatly limit their use, you have a responsibility to yourself and to society to know as much as you can about psychological tests. The future of testing may well depend on you and people like you. A thorough knowledge of testing will allow you to base your decisions on facts and to ensure that tests are used for the most beneficial and constructive purposes. Tests have probably never been as important as they are today. For example, consider just one type of testing—academic aptitude. Every year more than 2.5 million students take tests that are designed to measure academic progress or suitability, and the testing process begins early in students’ lives. Some presecondary schools require certain tests, and thousands of children take them each year. When these students become adolescents and want to get into college preparatory schools, tens of thousands will take a screening examination. Few students who want to go to a 4-year college can avoid taking a college entrance test. The SAT alone is given to some 2 million high-school students each year. Another 100,000 high-school seniors take other tests in order to gain advanced placement in college. These figures do not include the 75,000 people who take a special test for admission to business school or the 148,000 who take a Law School Admission Test—or tests for graduate school, medical school, dental school, the military, professional licenses, and others. In fact, the Educational Testing Service alone administers more than 11 million tests annually in 181 countries (Gonzalez, 2001). 
As sources of information about human characteristics, the results of these tests affect critical life decisions.
SUMMARY
The history of psychological testing in the United States has been brief but intense. Although these sorts of tests have long been available, psychological testing is very much a product of modern society with its unprecedented technology and population growth and unique problems. Conversely, by helping to solve the challenges posed by modern developments, tests have played an important role in recent U.S. and world history. You should realize, however, that despite advances in the theory and technique of psychological testing, many unsolved technical problems and hotly debated social, political, and economic issues remain. Nevertheless, the prevalence of tests despite strong opposition indicates that, although they are far from perfect, psychological tests must fulfill some important need in the decision-making processes permeating all facets of society. Because decisions must be made, such tests will probably flourish until a better or more objective way of making decisions emerges. Modern history shows that psychological tests have evolved in a complicated environment in which hostile and friendly forces have produced a
balance characterized by innovation and a continuous quest for better methods. One interesting thing about tests is that people never seem to remain neutral about them. If you are not in favor of tests, then we ask that you maintain a flexible, open mind while studying them. Our goal is to give you enough information to assess psychological tests intelligently throughout your life.
WEB ACTIVITY
For some interesting and relevant Web sites, you might want to check the following:

www.aclu.org/FreeSpeech/FreeSpeechMain.cfm
  Officials silence critic of high-stakes testing
www.apa.org/pi/psych.html
  Psychological testing of language minority and culturally different children
www.apa.org/science/fairtestcode.html
  Code of fair testing practices in education
www.bccla.org/positions/privacy/87psytest.html
  Privacy in psychological testing
www.romingerlegal.com/expert/
  Psychological assessment by expert witnesses in legal cases
CHAPTER 2
Norms and Basic Statistics for Testing
LEARNING OBJECTIVES
When you have completed this chapter, you should be able to:
Discuss three properties of scales of measurement
Determine why properties of scales are important in the field of measurement
Identify the methods available for displaying distributions of scores
Describe the mean and the standard deviation
Define a Z score and explain how it is used
Relate the concepts of mean, standard deviation, and Z score to the concept of a standard normal distribution
Define quartiles, deciles, and stanines and explain how they are used
Tell how norms are created
Relate the notion of tracking to the establishment of norms
26 Chapter 2 Norms and Basic Statistics for Testing
We all use numbers as a basic way of communicating: Our money system requires us to understand and manipulate numbers, we estimate how long it will take to do things, we count, we express evaluations on scales, and so on. Think about how many times you use numbers in an average day. There is no way to avoid them. One advantage of number systems is that they allow us to manipulate information. Through sets of well-defined rules, we can use numbers to learn more about the world. Tests are devices used to translate observations into numbers. Because the outcome of a test is almost always represented as a score, much of this book is about what scores mean. This chapter reviews some of the basic rules used to evaluate number systems. These rules and number systems are the psychologist’s partners in learning about human behavior. If you have had a course in psychological statistics, then this chapter will reinforce the basic concepts you have already learned. If you need additional review, reread your introductory statistics book. Most such books cover the information in this chapter. If you have not had a course in statistics, then this chapter will provide some of the information needed for understanding other chapters in this book.
Why We Need Statistics Through its commitment to the scientific method, modern psychology has advanced beyond centuries of speculation about human nature. Scientific study requires systematic observations and an estimation of the extent to which observations could have been influenced by chance alone (Collett, 2003). Statistical methods serve two important purposes in the quest for scientific understanding. First, statistics are used for purposes of description. Numbers provide convenient summaries and allow us to evaluate some observations relative to others (Cohen & Lea, 2004; Pagano, 2004). For example, if you get a score of 54 on a psychology examination, you probably want to know what the 54 means. Is it lower than the average score, or is it about the same? Knowing the answer can make the feedback you get from your examination more meaningful. If you discover that the 54 puts you in the top 5% of the class, then you might assume you have a good chance for an A. If it puts you in the bottom 5%, then you will feel differently. Second, we can use statistics to make inferences, which are logical deductions about events that cannot be observed directly. For example, you do not know how many people watched a particular television movie unless you ask everyone. However, by using scientific sample surveys, you can infer the percentage of people who saw the film. Data gathering and analysis might be considered analogous to criminal investigation and prosecution (Nathanson, Higgins, Giglio, Munshi, & Steingrub, 2003; Tukey, 1977). First comes the detective work of gathering and displaying clues, or what the statistician John Tukey calls exploratory data analysis. Then comes a period of confirmatory data
analysis, when the clues are evaluated against rigid statistical rules. This latter phase is like the work done by judges and juries. Some students have an aversion to numbers and anything mathematical. If you find yourself among them, you are not alone. Not only students but also professional psychologists can feel uneasy about statistics. However, statistics and the basic principles of measurement lie at the center of the modern science of psychology. Scientific statements are usually based on careful study, and such systematic study requires some numerical analysis. This chapter will review both descriptive and inferential statistics. Descriptive statistics are methods used to provide a concise description of a collection of quantitative information. Inferential statistics are methods used to make inferences from observations of a small group of people known as a sample to a larger group of individuals known as a population. Typically, the psychologist wants to make statements about the larger group but cannot possibly make all the necessary observations. Instead, he or she observes a relatively small group of subjects (sample) and uses inferential statistics to estimate the characteristics of the larger group.
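The television-movie example can be sketched in a few lines of Python. The population numbers below are hypothetical, chosen only to illustrate how a random sample lets us infer a population proportion we cannot observe directly.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population of 1,000,000 households: 30% watched the film (1),
# 70% did not (0). In practice this is exactly what we cannot observe in full.
population = [1] * 300_000 + [0] * 700_000

# A sample survey observes only a small random subset.
sample = random.sample(population, 1_000)
estimate = sum(sample) / len(sample)
print(round(estimate, 2))  # close to the true proportion, 0.30
```

The estimate will not equal 0.30 exactly, which is why inferential statistics also quantify how far off a sample estimate is likely to be.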
Scales of Measurement One may define measurement as the application of rules for assigning numbers to objects. The rules are the specific procedures used to transform qualities or attributes into numbers (Camilli, Cizek, & Lugg, 2001; Nunnally & Bernstein, 1994; Yanai, 2003). For example, to rate the quality of wines, wine tasters must use a specific set of rules. They might rate the wine on a 10-point scale where 1 means extremely bad and 10 means extremely good. For a taster to assign the numbers, the system of rules must be clearly defined. The basic feature of these types of systems is the scale of measurement. For example, to measure the height of your classmates, you might use the scale of inches; to measure their weight, you might use the scale of pounds. There are numerous systems by which we assign numbers in psychology. Indeed, the study of measurement systems is what this book is about. Before we consider any specific scale of measurement, however, we should consider the general properties of measurement scales.
Properties of Scales Three important properties make scales of measurement different from one another: magnitude, equal intervals, and an absolute 0. Magnitude. Magnitude is the property of “moreness.” A scale has the property
of magnitude if we can say that a particular instance of the attribute represents more, less, or equal amounts of the given quantity than does another instance (Aron & Aron, 2003; Hurlburt, 2003; McCall, 2001). On a scale of height, for
example, if we can say that John is taller than Fred, then the scale has the property of magnitude. A scale that does not have this property arises, for example, when a gym coach assigns identification numbers to teams in a league (team 1, team 2, and so forth). Because the numbers only label the teams, they do not have the property of magnitude. If the coach were to rank the teams by the number of games they have won, then the new numbering system (games won) would have the property of magnitude. Equal intervals. The concept of equal intervals is a little more complex than that of magnitude. A scale has the property of equal intervals if the difference between two points at any place on the scale has the same meaning as the difference between two other points that differ by the same number of scale units. For example, the difference between inch 2 and inch 4 on a ruler represents the same quantity as the difference between inch 10 and inch 12: exactly 2 inches. As simple as this concept seems, a psychological test rarely has the property of equal intervals. For example, the difference between IQs of 45 and 50 does not mean the same thing as the difference between IQs of 105 and 110. Although each of these differences is 5 points (50 − 45 = 5 and 110 − 105 = 5), the 5 points at the first level do not mean the same thing as 5 points at the second. We know that IQ predicts classroom performance. However, the difference in classroom performance associated with differences between IQ scores of 45 and 50 is not the same as the differences in classroom performance associated with IQ score differences of 105 and 110. In later chapters we will discuss this problem in more detail. When a scale has the property of equal intervals, the relationship between the measured units and some outcome can be described by a straight line or a linear equation in the form Y = a + bX.
This equation shows that an increase in equal units on a given scale reflects equal increases in the meaningful correlates of units. For example, Figure 2-1 shows the hypothetical relationship between scores on a test of manual dexterity and ratings of artwork. Notice that the relationship is not a straight line. By examining the points on the figure, you can see that at first the relationship is nearly linear: Increases in manual dexterity are associated with increases in ratings of artwork. Then the relationship becomes nonlinear. The figure shows that after a manual dexterity score of approximately 5, increases in dexterity produce relatively smaller increases in quality of artwork. Absolute 0. An absolute 0 is obtained when nothing of the property being measured exists. For example, if you are measuring heart rate and observe that your patient has a rate of 0 and has died, then you would conclude that there is no heart rate at all. For many psychological qualities, it is extremely difficult, if not impossible, to define an absolute 0 point. For example, if one measures shyness on a scale from 0 through 10, then it is hard to define what it means for a person to have absolutely no shyness (McCall, 2001).
FIGURE 2-1 Hypothetical relationship between ratings of artwork (vertical axis, 0 to 10) and scores on a manual dexterity test (horizontal axis, 0 to 10). In some ranges of the scale, the relationship is more direct than it is in others.
Types of Scales Table 2-1 defines four scales of measurement based on the properties we have just discussed. You can see that a nominal scale does not have the property of magnitude, equal intervals, or an absolute 0. Nominal scales are really not scales at all; their only purpose is to name objects. For example, the numbers on the backs of football players’ uniforms are nominal. Nominal scales are used when the information is qualitative rather than quantitative. Social science researchers commonly label groups in sample surveys with numbers (such as 1 African American, 2 white, and 3 Mexican American). When these numbers have been attached to categories, most statistical procedures are not meaningful. On the scale for ethnic groups, for instance, what would a mean of 1.87 signify? This is not to say that the sophisticated statistical analysis of nominal data is impossible. Indeed, several new and exciting developments in data analysis allow extensive and detailed use of nominal data (Chen, 2002; Miller, Scurfield, Drga, Galvin, & Whitmore, 2002; Stout, 2002). A scale with the property of magnitude but not equal intervals or an absolute 0 is an ordinal scale. This scale allows you to rank individuals or objects but not to say anything about the meaning of the differences between the ranks. If you were to rank the members of your class by height, then you would
TABLE 2-1 Scales of Measurement and Their Properties

                                Property
Type of scale    Magnitude    Equal intervals    Absolute 0
Nominal          No           No                 No
Ordinal          Yes          No                 No
Interval         Yes          Yes                No
Ratio            Yes          Yes                Yes
have an ordinal scale. For example, if Fred was the tallest, Susan the second tallest, and George the third tallest, you would assign them the ranks 1, 2, and 3, respectively. You would not give any consideration to the fact that Fred is 8 inches taller than Susan, but Susan is only 2 inches taller than George. For most problems in psychology, the precision to measure the exact differences between intervals does not exist. So, most often one must use ordinal scales of measurement. For example, IQ tests do not have the property of equal intervals or an absolute 0, but they do have the property of magnitude. If they had the property of equal intervals, then the difference between an IQ of 70 and one of 90 should have the same meaning as the difference between an IQ of 125 and one of 145. Because it does not, the scale can only be considered ordinal. Furthermore, there is no point on the scale that represents no intelligence at all—that is, the scale does not have an absolute 0. When a scale has the properties of magnitude and equal intervals but not absolute 0, we refer to it as an interval scale. The most common example of an interval scale is the measurement of temperature in degrees Fahrenheit. This temperature scale clearly has the property of magnitude, because 35°F is warmer than 32°F, 65°F is warmer than 64°F, and so on. Also, the difference between 90°F and 80°F is equal to a similar difference of 10 degrees at any point on the scale. However, on the Fahrenheit scale, temperature does not have the property of absolute 0. If it did, then the 0 point would be more meaningful. As it is, 0 on the Fahrenheit scale does not have a particular meaning. Water freezes at 32°F and boils at 212°F. Because the scale does not have an absolute 0, we cannot make statements in terms of ratios. A temperature of 22°F is not twice as hot as 11°F, and 70°F is not twice as hot as 35°F. The Celsius scale of temperature is also an interval rather than a ratio scale. 
Although 0 represents freezing on the Celsius scale, it is not an absolute 0. Remember that an absolute 0 is a point at which nothing of the property being measured exists. Even on the Celsius scale of temperature, there is still plenty of room on the thermometer below 0. When the temperature goes below freezing, some aspect of heat is still being measured. A scale that has all three properties (magnitude, equal intervals, and an absolute 0) is called a ratio scale. To continue our example, a ratio scale of temperature would have the properties of the Fahrenheit and Celsius scales but also include a meaningful 0 point. There is a point at which all molecular activity ceases, a point of absolute 0 on a temperature scale. Because the Kelvin scale is based on the absolute 0 point, it is a ratio scale: 22°K is twice as cold as 44°K. Examples of ratio scales also appear in the numbers we see on a regular basis. For example, consider the number of yards gained by running backs on football teams. Zero yards actually means that the player has gained no yards at all. If one player has gained 1000 yards and another has gained only 500, then we can say that the first athlete has gained twice as many yards as the second. Another example is the speed of travel. For instance, 0 miles per hour (mph) is the point at which there is no speed at all. If you are driving onto a highway at 30 mph and increase your speed to 60 when you merge, then you have doubled your speed.
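The property grid of Table 2-1 can be written down as a small lookup table. The structure and names below are our own sketch, not part of the text, but the yes/no entries follow the table exactly.

```python
# Table 2-1 rendered as a lookup table (names are ours; values follow the text).
SCALE_PROPERTIES = {
    "nominal":  {"magnitude": False, "equal_intervals": False, "absolute_zero": False},
    "ordinal":  {"magnitude": True,  "equal_intervals": False, "absolute_zero": False},
    "interval": {"magnitude": True,  "equal_intervals": True,  "absolute_zero": False},
    "ratio":    {"magnitude": True,  "equal_intervals": True,  "absolute_zero": True},
}

def can_form_ratios(scale: str) -> bool:
    """Ratio statements ('twice as much') require an absolute 0."""
    return SCALE_PROPERTIES[scale]["absolute_zero"]

print(can_form_ratios("ratio"))     # True: 1000 yards is twice 500 yards
print(can_form_ratios("interval"))  # False: 70 degrees F is not twice as hot as 35 degrees F
```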
Permissible Operations Level of measurement is important because it defines which mathematical operations we can apply to numerical data. For nominal data, each observation can be placed in only one mutually exclusive category. For example, you are a member of only one gender. One can use nominal data to create frequency distributions (see the next section), but no mathematical manipulations of the data are permissible. Ordinal measurements can be manipulated using arithmetic; however, the result is often difficult to interpret because it reflects neither the magnitudes of the manipulated observations nor the true amounts of the property that have been measured. For example, if the heights of 15 children are rank ordered, knowing a given child’s rank does not reveal how tall he or she stands. Averages of these ranks are equally uninformative about height. With interval data, one can apply any arithmetic operation to the differences between scores. The results can be interpreted in relation to the magnitudes of the underlying property. However, interval data cannot be used to make statements about ratios. For example, if IQ is measured on an interval scale, one cannot say that an IQ of 160 is twice as high as an IQ of 80. This mathematical operation is reserved for ratio scales, for which any mathematical operation is permissible.
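The point about ordinal ranks can be illustrated with a quick sketch; the heights below are invented for the example, not taken from the text.

```python
# A sketch of why arithmetic on ordinal ranks can mislead (hypothetical data).
heights = [36, 37, 38, 39, 58]  # inches; the tallest child is far taller than the rest
ranks = [1, 2, 3, 4, 5]         # rank order preserves only "moreness"

mean_height = sum(heights) / len(heights)
mean_rank = sum(ranks) / len(ranks)

print(mean_height)  # 41.6 -- reflects the actual magnitudes
print(mean_rank)    # 3.0  -- reveals nothing about how tall anyone stands
```

The mean rank is 3.0 no matter whether the children differ by an inch or by two feet, which is why averages of ranks are uninformative about the underlying property.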
Frequency Distributions A single test score means more if one relates it to other test scores. A distribution of scores summarizes the scores for a group of individuals. In testing, there are many ways to record a distribution of scores. The frequency distribution displays scores on a variable or a measure to reflect how frequently each value was obtained. With a frequency distribution, one defines all the possible scores and determines how many people obtained each of those scores. Usually, scores are arranged on the horizontal axis from the lowest to the highest value. The vertical axis reflects how many times each of the values on the horizontal axis was observed. For most distributions of test scores, the frequency distribution is bell-shaped, with the greatest frequency of scores toward the center of the distribution and decreasing scores as the values become greater or less than the value in the center of the distribution. Figure 2-2 shows a frequency distribution of 1000 observations that takes on values between 61 and 90. Notice that the most frequent observations fall toward the center of the distribution, around 75 and 76. As you look toward the extremes of the distribution, you will find a systematic decline in the frequency with which the scores occur. For example, the score of 71 is observed less frequently than 72, which is observed less frequently than 73, and so on. Similarly, 78 is observed more frequently than 79, which is noted more often than 80, and so forth. Though this neat symmetric relationship does not characterize all sets of scores, it occurs frequently enough in practice for us to devote special attention
FIGURE 2-2 Frequency distribution approximating a normal distribution of 1000 observations. Scores (horizontal axis) range from 61 to 90; the vertical axis shows frequency of occurrence, from 0 to 100.
to it. In the section on the normal distribution, we explain this concept in greater detail. Table 2-2 lists the rainfall amounts in San Diego, California, between 1970 and 2003. Figure 2-3 is a histogram based on the observations. The distribution is slightly skewed, or asymmetrical. We say that Figure 2-3 has a positive skew because the tail goes off toward the higher or positive side of the X axis. There is a slight skew in Figures 2-3 and 2-4, but the asymmetry in these figures is relatively hard to detect. Figure 2-5 gives an example of a distribution that is clearly skewed. The figure summarizes annual household income in the United States in 2002. Very few people make high incomes, while the great bulk of the population is bunched toward the low end of the income distribution. Of particular interest is that this figure only includes household incomes less than $100,000. For household incomes greater than $100,000, the government only reports incomes using class intervals of $50,000. In 2002, about 14% of the U.S. households had incomes greater than $100,000. Since some households have extremely high incomes, you can imagine that the tail of this distribution would go very far to the right. Thus, income is an example of a variable that has positive skew. One can also present this same set of data as a frequency polygon (see Figure 2-4). Here the amount of rainfall is placed on the graph as a point that represents the frequencies with which each interval occurs. Lines are then drawn to connect these points. Whenever you draw a frequency distribution or a frequency polygon, you must decide on the width of the class interval. The class interval for inches of rainfall is the unit on the horizontal axis. For example, in Figures 2-3 and 2-4, the class interval is 3 inches—that is, the demarcations along the X axis increase in 3-inch intervals. This interval is used here for convenience; the choice of 3 inches is otherwise arbitrary.
TABLE 2-2 Inches of Rainfall in San Diego, 1970–2003

Year    Inches
1970     6.48
1971     8.20
1972     6.24
1973    11.16
1974     6.68
1975    10.80
1976     9.24
1977     9.32
1978    17.56
1979    15.52
1980    15.72
1981     7.48
1982    12.04
1983    18.76
1984     5.44
1985     9.76
1986    15.20
1987     9.44
1988    12.64
1989     5.96
1990     7.76
1991    12.20
1992    12.48
1993    18.23
1994     9.92
1995    17.08
1996     5.91
1997     7.75
1998    16.9
1999     6.49
2000     6.92
2001     8.52
2002     4.23
2003     7.97

Sum                    356
Mean                    10.4705882
Variance                17.5875633
Standard deviation       4.19256047
N                       34
Data for year 2003 are estimates based on projections at the time the book went to press. Full data going back to 1850 can be found at http://cdec.water.ca.gov/ cgi-progs/queryMonthly?SDG&d=29-Aug-2001+11:15&span=2years.
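As a sketch, the Table 2-2 rainfall values can be grouped into the same 3-inch class intervals used in Figures 2-3 and 2-4; this is the tallying step behind a histogram.

```python
# Grouping the San Diego rainfall data (Table 2-2) into 3-inch class intervals.
rainfall = [6.48, 8.20, 6.24, 11.16, 6.68, 10.80, 9.24, 9.32, 17.56, 15.52,
            15.72, 7.48, 12.04, 18.76, 5.44, 9.76, 15.20, 9.44, 12.64, 5.96,
            7.76, 12.20, 12.48, 18.23, 9.92, 17.08, 5.91, 7.75, 16.9, 6.49,
            6.92, 8.52, 4.23, 7.97]

bins = {"0-6": 0, "6.01-9": 0, "9.01-12": 0, "12.01-15": 0, "15.01-18": 0, ">18": 0}
for x in rainfall:
    if x <= 6:
        bins["0-6"] += 1
    elif x <= 9:
        bins["6.01-9"] += 1
    elif x <= 12:
        bins["9.01-12"] += 1
    elif x <= 15:
        bins["12.01-15"] += 1
    elif x <= 18:
        bins["15.01-18"] += 1
    else:
        bins[">18"] += 1

print(bins)  # every one of the 34 years falls into exactly one interval
```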
FIGURE 2-3 Histogram for San Diego rainfall, 1970–2003. The horizontal axis groups rainfall into 3-inch class intervals (0–6, 6.01–9, 9.01–12, 12.01–15, 15.01–18, and >18 inches); the vertical axis shows frequency.

FIGURE 2-4 Frequency polygon for rainfall in San Diego, 1970–2003, plotted over the same class intervals as Figure 2-3.

FIGURE 2-5 Household income up to $100,000 in the United States for 2002, in $5,000 class intervals. This is an example of positive skew. (Data from the United States Department of Labor Statistics and the Bureau of the Census. http://ferret.bls.census.gov/macro/032003/hhinc/new06_000.htm.)
Percentile Ranks Percentile ranks replace simple ranks when we want to adjust for the number of scores in a group. A percentile rank answers the question “What percent of the scores fall below a particular score (Xi)?” To calculate a percentile rank, you
need only follow these simple steps: (1) determine how many cases fall below the score of interest, (2) determine how many cases are in the group, (3) divide the number of cases below the score of interest (Step 1) by the total number of cases in the group (Step 2), and (4) multiply the result of Step 3 by 100. The formula is

Pr = (B / N) × 100 = percentile rank of Xi

where

Pr = percentile rank
Xi = the score of interest
B = the number of scores below Xi
N = the total number of scores
This means that you form a ratio of the number of cases below the score of interest and the total number of scores. Because there will always be either the same or fewer cases in the numerator (top half) of the equation than there are in the denominator (bottom half), this ratio will always be less than or equal to 1. To get rid of the decimal points, you multiply by 100. As an example, consider the runner who finishes 62nd out of 63 racers in a gym class. To obtain the percentile rank, divide 1 (the number of people who finish behind the person of interest) by 63 (the number of scores in the group). This gives you 1/63, or .016. Then multiply this result by 100 to obtain the percentile rank, which is 1.6. This rank tells you the runner is below the 2nd percentile. Now consider the Bay to Breakers race, which attracts 50,000 runners to San Francisco. If you had finished 62nd out of 50,000, then the number of people who were behind you would be 49,938. Dividing this by the number of entrants gives .9988. When you multiply by 100, you get a percentile rank of 99.88. This tells you that finishing 62nd in the Bay to Breakers race is exceptionally good because it places you in the 99.88th percentile. Technical Box 2-1 presents the calculation of percentile ranks of the infant mortality rates of selected countries as reported by the World Health Organization in 2003. Infant mortality is defined as the number of babies out of 1000 who are born alive but die before their first birthday. Before proceeding, we should point out that the meaning of this calculation depends on which countries are used in the comparison. In this example, the calculation of the percentile rank is broken into five steps and uses the raw data in the table. In Step 1, we arrange the data points in ascending order. Sweden has the lowest infant mortality rate (2.4), Japan is next (3.4), and Zambia has the highest rate (168.1).
In Step 2, we determine the number of cases with worse rates than that of the case of interest. In this example, the case of interest is the United States. Therefore, we count the number of cases with a worse rate than that of the United States. Ten countries—Colombia, Saudi Arabia, Turkey, China, Morocco, Bolivia, Laos, Zambia, Ethiopia, and Mozambique—have infant mortality rates greater than 7.5.
TECHNICAL BOX 2-1
Infant Mortality in Selected Countries, 2003

Country           Infant mortality per 1000 live births
Australia             5.0
Bolivia              66.4
Colombia             20.4
China                37.9
Ethiopia            142.6
France                4.5
Israel                6.1
Italy                 4.8
Japan                 3.4
Laos                105.8
Morocco              56.3
Mozambique          148.6
Saudi Arabia         23.7
Spain                 3.9
Sweden                2.4
Turkey               34.3
United States         7.5
Zambia              168.1

Mean                 46.7611111
SD                   56.2937792
To calculate the percentile rank of infant mortality in the United States in comparison to that in selected countries, use the following formula:

Pr = (B / N) × 100

where

Pr = the percentile rank
B = the number of cases with worse rates than the case of interest
N = the total number of cases
Country           Infant mortality per 1000 live births
Sweden                2.4
Japan                 3.4
Spain                 3.9
France                4.5
Italy                 4.8
Australia             5.0
Israel                6.1
United States         7.5
Colombia             20.4
Saudi Arabia         23.7
Turkey               34.3
China                37.9
Morocco              56.3
Bolivia              66.4
Laos                105.8
Ethiopia            142.6
Mozambique          148.6
Zambia              168.1
Steps

1. Arrange the data in ascending order—that is, the lowest score first, the second lowest score second, and so on. Here N = 18, mean = 46.76, standard deviation = 56.29.
2. Determine the number of cases with worse rates than the score of interest. There are 10 countries in this sample with infant mortality rates greater than that in the United States.
3. Determine the number of cases in the sample (18).
4. Divide the number of scores worse than the score of interest (Step 2) by the total number of scores (Step 3): 10/18 = .56
5. Multiply by 100: .56 × 100 = 56th percentile rank
In Step 3, we determine the total number of cases (18). In Step 4, we divide the number of scores worse than the score of interest by the total number of scores:

10/18 = .56

Technically, the percentile rank is a percentage. Step 4 gives a proportion. Therefore, in Step 5 you transform this into a whole number by multiplying by 100:

.56 × 100 = 56

Thus, the United States is in the 56th percentile. The percentile rank depends absolutely on the cases used for comparison. In this example, you calculated that the United States is in the 56th percentile for infant mortality within this group of countries. If all countries in the world had been included, then the ranking of the United States might have been different. Using this procedure, try to calculate the percentile rank for Bolivia. The calculation is the same except that there are four countries with worse rates than Bolivia (as opposed to 10 worse than the United States). Thus, the percentile rank for Bolivia is

4/18 = .22    .22 × 100 = 22

or the 22nd percentile. Now try France. You should get a percentile rank of 78.
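The five steps can be expressed as a short function. The rates below are the Technical Box 2-1 values (per 1000 live births), and the rounding follows the box's arithmetic; the function name is our own.

```python
# Percentile ranks for the Technical Box 2-1 infant-mortality rates.
# Higher rates are "worse," so B counts countries with higher rates.
rates = {
    "Sweden": 2.4, "Japan": 3.4, "Spain": 3.9, "France": 4.5, "Italy": 4.8,
    "Australia": 5.0, "Israel": 6.1, "United States": 7.5, "Colombia": 20.4,
    "Saudi Arabia": 23.7, "Turkey": 34.3, "China": 37.9, "Morocco": 56.3,
    "Bolivia": 66.4, "Laos": 105.8, "Ethiopia": 142.6, "Mozambique": 148.6,
    "Zambia": 168.1,
}

def mortality_percentile_rank(country: str) -> int:
    """Pr = (B / N) * 100, rounded to a whole percentile rank."""
    worse = sum(1 for r in rates.values() if r > rates[country])
    return round(100 * worse / len(rates))

print(mortality_percentile_rank("United States"))  # 56
print(mortality_percentile_rank("Bolivia"))        # 22
print(mortality_percentile_rank("France"))         # 78
```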
Percentiles Percentiles are the specific scores or points within a distribution. Percentiles divide the total frequency for a set of observations into hundredths. Instead of indicating what percentage of scores fall below a particular score, as percentile ranks do, percentiles indicate the particular score, below which a defined percentage of scores falls. Try to calculate the percentile and percentile rank for some of the data in Technical Box 2-1. As an example, look at Italy. The infant mortality rate in Italy is 4.8/1000. When calculating the percentile rank, you exclude the score of interest and count those below (in other words, Italy is not included in the count). There are 13 countries in this sample with infant mortality rates worse than Italy’s. To calculate the percentile rank, divide this number of countries by the total number of cases and multiply by 100: B 13 Pr 100 100 .72 100 72 N 18
Thus, Italy is in the 72nd percentile rank; that is, the 72nd percentile in this example is 4.8/1000, or 4.8 deaths per 1000 live births. Now take the example of Israel. The calculation of percentile rank requires looking at the number of cases below the case of interest. In this example, 11 countries in this group have infant mortality rates worse than Israel’s. Thus, the percentile rank for Israel is 11/18 × 100 ≈ 61. The 61st percentile corresponds with the point or score of 6.1 (6.1/1000 live births). In summary, the percentile and the percentile rank are similar. The percentile gives the point in a distribution below which a specified percentage of cases fall (4.8/1000 for Italy). The percentile is in raw score units. The percentile rank gives the percentage of cases below the percentile; in this example, the percentile rank is 72. When reporting percentiles and percentile ranks, you must carefully specify the population you are working with. Remember that a percentile rank is a measure of relative performance. When interpreting a percentile rank, you should always ask the question “Relative to what?” Suppose, for instance, that you finished in the 17th percentile in a swimming race (or fifth in a heat of six competitors). Does this mean that you are a slow swimmer? Not necessarily. It may be that this was a heat in the Olympic games, and the participants were the fastest swimmers in the world. An Olympic swimmer competing against a random sample of all people in the world would probably finish in the 99.99th percentile. The example for infant mortality rates depends on which countries in the world were selected for comparison. The United States actually does quite poorly when compared with European countries. However, the U.S. infant mortality rate looks much better compared with countries in the developing world.
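For completeness, here is the percentile-rank formula in generic form, re-checked against the two race examples given earlier in the section. The function is a sketch of ours, not part of the text.

```python
# Pr = (B / N) * 100: the percent of scores falling below the score of interest.
def percentile_rank(num_below: int, total: int) -> float:
    return 100 * num_below / total

# Gym class: 62nd of 63 finishers, so 1 person finished behind you.
print(round(percentile_rank(1, 63), 1))           # 1.6
# Bay to Breakers: 62nd of 50,000, so 49,938 finished behind you.
print(round(percentile_rank(49_938, 50_000), 2))  # 99.88
```

The same finishing position yields wildly different percentile ranks, which is the "Relative to what?" point in concrete form.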
Describing Distributions Mean Statistics are used to summarize data. If you consider a set of scores, the mass of information may be too much to interpret all at once. That is why we need numerical conveniences to help summarize the information. An example of a set of scores that can be summarized is shown in Table 2-2 (see page 33), amounts of rainfall in San Diego. We signify the variable as X. A variable is a score that can have different values. The amount of rain is a variable because different amounts of rain fell in different years. The arithmetic average score in a distribution is called the mean. To calculate the mean, we total the scores and divide the sum by the number of cases, or N. The capital Greek letter sigma (Σ) means summation. Thus, the formula for the mean, which we signify as X̄, is

X̄ = ΣX / N
40 Chapter 2 Norms and Basic Statistics for Testing
In words, this formula says to total the scores and divide the sum by the number of cases. Using the information in Table 2-2, we find the mean by following these steps:
1. Obtain ΣX, or the sum of the scores: 6.48 + 8.20 + 6.24 + 11.16 + ··· + 6.68 + 7.97 = 356.00
2. Find N, or the number of scores: N = 34
3. Divide ΣX by N: 356/34 = 10.47
Technical Box 2-2 summarizes common symbols used in basic statistics.
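The three steps for the mean reduce to one line of code. A minimal sketch (the three scores shown are an illustrative subset, not the full 34-year rainfall record from Table 2-2):

```python
def mean(scores):
    """X-bar = (sum of the scores, sigma X) / (number of scores, N)."""
    return sum(scores) / len(scores)

# With all 34 rainfall values from Table 2-2, this would return 356.00 / 34 = 10.47.
print(mean([6.48, 8.20, 6.24]))  # mean of an illustrative three-score subset
```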
Standard Deviation The standard deviation is an approximation of the average deviation around the mean. The standard deviation for the amount of rainfall in San Diego is 4.19. To understand rainfall in San Diego, you need to consider at least two dimensions: first, the amount of rain that falls in a particular year; second, the degree of variation from year to year in the amount of rain that falls. The calculation suggests that, on the average, the variation around the mean is approximately 4.19 inches. However informative, knowing the mean of a group of scores does not give you that much information. As an illustration, look at the following sets of numbers.

Set 1: 4 4 4 4 4 4
Set 2: 5 5 4 4 3 3
Set 3: 8 8 6 2 0 0
TECHNICAL BOX 2-2
Common Symbols
You need to understand and recognize the symbols used throughout this book.
X̄ is the mean; it is pronounced "X bar."
Σ is the summation sign. It means sum, or add, scores together and is the capital Greek letter sigma.
X is a variable that takes on different values. Each value of X represents a raw score, also called an obtained score.
Calculate the mean of the first set. You should get 4. What is the mean of the second set? If you calculate correctly, you should get 4 again. Next find the mean for Set 3. It is also 4. The three distributions of scores appear quite different but have the same mean, so it is important to consider other characteristics of the distribution of scores besides the mean. The difference between the three sets lies in variability. There is no variability in Set 1, a small amount in Set 2, and a lot in Set 3. Measuring this variation is similar to finding the average deviation around the mean. One way to measure variability is to subtract the mean from each score (X − X̄) and then total the deviations. Statisticians often signify this with a lowercase x, as in x = (X − X̄). Try this for the data in Table 2-2. Did you get 0? You should have, and this is not an unusual example. In fact, the sum of the deviations around the mean will always equal 0. However, you do have an alternative: You can square all the deviations around the mean in order to get rid of any negative signs. Then you can obtain the average squared deviation around the mean, known as the variance. The formula for the variance is

σ² = Σ(X − X̄)² / N

where (X − X̄) is the deviation of a score from the mean. The symbol σ is the lowercase Greek sigma; σ² is used as a standard description of the variance. Though the variance is a useful statistic commonly used in data analysis, it shows the variable in squared deviations around the mean rather than in deviations around the mean. In other words, the variance is the average squared deviation around the mean. To get it back into units that make sense to us, we need to take the square root of the variance. The square root of the variance is the standard deviation (σ), and it is represented by the following formula:

σ = √[ Σ(X − X̄)² / N ]

The standard deviation is thus the square root of the average squared deviation around the mean. Although the standard deviation is not an average deviation, it gives a useful approximation of how much a typical score is above or below the average score. Because of their mathematical properties, the variance and the standard deviation have many advantages. For example, knowing the standard deviation of a normally distributed batch of data allows us to make precise statements about the distribution. The formulas just presented are for computing the variance and the standard deviation of a population. That is why we use the lowercase Greek sigma (σ and σ²). Technical Box 2-3 summarizes when you should use Greek and Roman letters. Most often we use the standard deviation for a sample to estimate the standard deviation for a population. When we talk about a sample, we replace the Greek σ with a Roman letter S. Also, we divide
TECHNICAL BOX 2-3
Terms and Symbols Used to Describe Populations and Samples

                                Population                    Sample
Definition                      All elements with the         A subset of the population,
                                same definition               usually drawn to represent it
                                                              in an unbiased fashion
Descriptive characteristics     Parameters                    Statistics
Symbols used to describe        Greek                         Roman
Symbol for mean                 μ                             X̄
Symbol for standard deviation   σ                             S
by N − 1 rather than N to recognize that S of a sample is only an estimate of the variance of the population:

S = √[ Σ(X − X̄)² / (N − 1) ]

In calculating the standard deviation, it is often easier to use the raw score equivalent formula, which is

S = √[ (ΣX² − (ΣX)²/N) / (N − 1) ]

This calculation can also be done automatically by some minicalculators. In reading the formula, you may be confused by a few points. In particular, be careful not to confuse ΣX² and (ΣX)². To get ΣX², each individual score is squared and the values are summed. For the scores 3, 5, 7, and 8, ΣX² would be 3² + 5² + 7² + 8² = 9 + 25 + 49 + 64 = 147. To obtain (ΣX)², the scores are first summed and the total is squared. Using the example, (ΣX)² = (3 + 5 + 7 + 8)² = 23² = 529.
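Both forms of the sample standard deviation can be sketched directly from the formulas; the two functions below (hypothetical names) should agree to within floating-point error, which is a useful check on the ΣX² versus (ΣX)² distinction:

```python
import math

def sample_sd(scores):
    """Deviation formula: S = sqrt( sum((X - mean)^2) / (N - 1) )."""
    n = len(scores)
    m = sum(scores) / n
    return math.sqrt(sum((x - m) ** 2 for x in scores) / (n - 1))

def sample_sd_raw(scores):
    """Raw score equivalent: S = sqrt( (sum(X^2) - (sum(X))^2 / N) / (N - 1) )."""
    n = len(scores)
    sum_sq = sum(x ** 2 for x in scores)  # sigma X^2: square first, then sum
    sq_sum = sum(scores) ** 2             # (sigma X)^2: sum first, then square
    return math.sqrt((sum_sq - sq_sum / n) / (n - 1))
```

For the scores 3, 5, 7, and 8, the raw score formula gives √[(147 − 529/4)/3] ≈ 2.22, and the deviation formula gives the same value.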
Z Score One problem with means and standard deviations is that they do not convey enough information for us to make meaningful assessments or accurate interpretations of data. Other metrics are designed for more exact interpretations. The Z score transforms data into standardized units that are easier to interpret.
A Z score is the difference between a score and the mean, divided by the standard deviation:

Z = (X − X̄) / S

In other words, a Z score is the deviation of a score X from the mean X̄ in standard deviation units. If a score is equal to the mean, then its Z score is 0. For example, suppose the score and the mean are both 6; then 6 − 6 = 0. Zero divided by anything is still 0. If the score is greater than the mean, then the Z score is positive; if the score is less than the mean, then the Z score is negative. Let's try an example. Suppose that X = 6, the mean X̄ = 3, and the standard deviation S = 3. Plugging these values into the formula, we get

Z = (6 − 3)/3 = 3/3 = 1

Let's try another example. Suppose X = 4, X̄ = 5.75, and S = 2.11. What is the Z score? It is −.83:

Z = (4 − 5.75)/2.11 = −1.75/2.11 = −.83

This means that the score we observed (4) is .83 standard deviation below the average score, or that the score is below the mean but its difference from the mean is slightly less than the average deviation. Example of depression in medical students: Center for Epidemiologic Studies Depression Scale (CES-D). The CES-D is a general measure of depression that has
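The Z score formula is a one-line computation. A minimal sketch (the function name is illustrative), reproducing both worked examples:

```python
def z_score(x, mean, sd):
    """Z = (X - mean) / S: the deviation from the mean in standard deviation units."""
    return (x - mean) / sd

print(z_score(6, 3, 3))              # first example: Z = 1.0
print(round(z_score(4, 5.75, 2.11), 2))  # second example: Z = -0.83
```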
been used extensively in epidemiological studies. The scale includes 20 items and taps dimensions of depressed mood, hopelessness, appetite loss, sleep disturbance, and energy level. Each year, students at the University of California, San Diego, School of Medicine are asked to report how often they experienced a particular symptom during the first week of school on a 4-point scale ranging from rarely or none of the time [0 to 1 days (0)] to most or all of the time [5 to 7 days (3)]. Items 4, 8, 12, and 16 on the CES-D are reverse scored. For these items, 0 is scored as 3, 1 is scored as 2, 2 as 1, and 3 as 0. The CES-D score is obtained by summing the circled numbers. Scores on the CES-D range from 0 to 60, with scores greater than 16 indicating clinically significant levels of depressive symptomatology in adults. Feel free to take the CES-D measure yourself. Calculate your score by summing the numbers you have circled. However, you must first reverse the scores on items 4, 8, 12, and 16. As you will see in Chapter 5, the CES-D does not have high validity for determining clinical depression. If your score is less than 16, the evidence suggests that you are not clinically depressed. If your score is high, it raises suspicions about depression—though this does not mean you have a problem. (Of course, you may want to talk with your college counselor if you are feeling depressed.)
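The scoring rule just described (sum the 20 items, reverse scoring items 4, 8, 12, and 16 so that 0 becomes 3, 1 becomes 2, 2 becomes 1, and 3 becomes 0) can be sketched as follows; the function name is hypothetical:

```python
REVERSE_ITEMS = {4, 8, 12, 16}  # per the text: scored 0->3, 1->2, 2->1, 3->0

def score_cesd(responses):
    """Total CES-D score from 20 item responses, each rated 0-3.

    Items 4, 8, 12, and 16 are reverse scored before summing;
    totals range from 0 to 60.
    """
    assert len(responses) == 20
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (3 - r) if item in REVERSE_ITEMS else r
    return total
```

Note that a respondent who circles 0 for every item still scores 12, because the four reverse-scored items each contribute 3.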
Center for Epidemiologic Studies Depression Scale (CES-D)
Instructions: Circle the number for each statement that best describes how often you felt or behaved this way DURING THE PAST WEEK.

Response scale for each item: 0 = Rarely or none of the time (less than 1 day); 1 = Some or a little of the time (1–2 days); 2 = Occasionally or a moderate amount of the time (3–4 days); 3 = Most or all of the time (5–7 days). Items marked R are reverse scored.

1. I was bothered by things that usually don't bother me.
2. I did not feel like eating.
3. I felt that I could not shake off the blues even with help from my family or friends.
R 4. I felt that I was just as good as other people.
5. I had trouble keeping my mind on what I was doing.
6. I felt depressed.
7. I felt that everything I did was an effort.
R 8. I felt hopeful about the future.
9. I thought my life had been a failure.
10. I felt fearful.
11. My sleep was restless.
R 12. I was happy.
13. I talked less than usual.
14. I felt lonely.
15. People were unfriendly.
R 16. I enjoyed life.
17. I had crying spells.
18. I felt sad.
19. I felt that people disliked me.
20. I could not get "going."
Table 2-3 shows CES-D scores for a selected sample of medical students. You can use these data to practice calculating means, standard deviations, and Z scores. In creating the frequency distribution for the CES-D scores of medical students we used an arbitrary class interval of 5.
TABLE 2-3
The Calculation of Mean, Standard Deviation, and Z Scores for CES-D Scores

Name        Test score (X)    X²      Z score
John             14           196       .42
Carla            10           100      −.15
Fred              8            64      −.44
Monica            8            64      −.44
Eng              26           676      2.13
Fritz             0             0     −1.58
Mary             14           196       .42
Susan             3             9     −1.15
Debbie            9            81      −.29
Elizabeth        10           100      −.15
Sarah             7            49      −.58
Marcel           12           144       .14
Robin            10           100      −.15
Mike             25           625      1.99
Carl              9            81      −.29
Phyllis          12           144       .14
Jennie           23           529      1.70
Richard           7            49      −.58
Tyler            13           169       .28
Frank             1             1     −1.43
            ΣX = 221     ΣX² = 3377

X̄ = ΣX/N = 221/20 = 11.05

S = √[ (ΣX² − (ΣX)²/N) / (N − 1) ] = √[ (3377 − 221²/20) / (20 − 1) ] = 7.01

Monica's Z score = (X − X̄)/S = (8 − 11.05)/7.01 = −.44
Marcel's Z score = (X − X̄)/S = (12 − 11.05)/7.01 = .14
Jennie's Z score = (X − X̄)/S = (23 − 11.05)/7.01 = 1.70
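The whole of Table 2-3 can be reproduced in a few lines, using the raw score formula for S and the Z score formula from earlier in the chapter:

```python
import math

# CES-D scores from Table 2-3, in the order the students are listed
scores = [14, 10, 8, 8, 26, 0, 14, 3, 9, 10, 7, 12, 10, 25,
          9, 12, 23, 7, 13, 1]

n = len(scores)
mean = sum(scores) / n  # 221 / 20 = 11.05
s = math.sqrt((sum(x ** 2 for x in scores) - sum(scores) ** 2 / n) / (n - 1))  # ~7.01
z = [(x - mean) / s for x in scores]  # e.g. Jennie (index 16): 1.70
```

Small discrepancies in the last decimal place are possible because the table's Z scores were computed with S already rounded to 7.01.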
Standard Normal Distribution Now we consider the standard normal distribution because it is central to statistics and psychological testing. First, though, you should participate in a short exercise. Take any coin and flip it 10 times. Now repeat this exercise of 10 coin flips 25 times. Record the number of heads you observe in each group of 10 flips. When you are done, make a frequency distribution showing how many times you observed 1 head in your 10 flips, 2 heads, 3 heads, and so on. Your frequency distribution might look like the example shown in Figure 2-6. The most frequently observed events are approximately equal numbers of
[Figure 2-6. Frequency distribution of the number of heads in 25 sets of 10 flips. X axis: Number of Heads (0–10); Y axis: Frequency of Occurrence (0–6).]

[Figure 2-7. The theoretical distribution of the number of heads in an infinite number of coin flips.]
heads and tails. Toward the extremes of 10 heads and 0 tails or 10 tails and 0 heads, events are observed with decreasing frequency. For example, there were no occasions in which fewer than 2 heads were observed and only one occasion in which more than 8 heads were observed. This is what we would expect from the laws of probability. On the average, we would expect half of the flips to show heads and half to show tails if heads and tails are equally probable events. Although observing a long string of heads or tails is possible, it is improbable. In other words, we sometimes see the coin come up heads in 9 out of 10 flips. The likelihood that this will happen, however, is quite small. Figure 2-7 shows the theoretical distribution of heads in an infinite number of flips of the coin. This figure might look a little like the distribution from your coin-flipping exercise or the distribution shown in Figure 2-6. Actually, this is a normal distribution, or what is known as a symmetrical binomial probability distribution. On most occasions, we refer to units on the X (or horizontal) axis of the normal distribution in Z score units. Any variable transformed into Z score units takes on special properties. First, Z scores have a mean of 0 and a standard deviation of 1.0. If you think about this for a minute, you should be able
[Figure 2-7 labels the areas under the normal curve in Z-score units: .3413 between the mean and ±1, .1359 between ±1 and ±2, .0214 between ±2 and ±3, and .00135 beyond ±3.]
to figure out why this is true. Recall that the sum of the deviations around the mean is always equal to 0. The numerator of the Z score equation is the deviation around the mean, while the denominator is a constant. Thus, the mean of Z scores can be expressed as

Σ[(X − X̄)/S] / N   or   ΣZ / N

Because Σ(X − X̄) will always equal 0, the mean of Z scores will always be 0. In Figure 2-7, the standardized, or Z score, units are marked on the X axis. The numbers under the curve are the proportions of cases (in decimal form) that we would expect to observe in each area. Multiplying these proportions by 100 yields percentages. For example, we see that 34.13% or .3413 of the cases fall between the mean and one standard deviation above the mean. Do not forget that 50% of the cases fall below the mean. Putting these two bits of information together, we can conclude that if a score is one standard deviation above the mean, then it is at about the 84th percentile rank (50 + 34.13 = 84.13 to be exact). A score that is one standard deviation below the mean would be at about the 16th percentile rank (50 − 34.13 = 15.87). Thus, you can use what you have learned about means, standard deviations, Z scores, and the normal curve to transform raw scores, which have little meaning, into percentile scores, which are easier to interpret. These methods can be used only when the distribution of scores is normal or approximately normal. Methods for nonnormal distributions are discussed in most statistics books under "nonparametric statistics." Percentiles and Z scores. These percentile ranks are the percentage of scores that
fall below the observed Z score. For example, the Z score −1.6 is associated with the percentile rank of 5.48. The Z score 1.0 (third column) is associated with the percentile rank of 84.13. Part I of Appendix 1 is a simplified version of Part II, which you need for more advanced use of Z scores. Part II gives the areas between the mean and various Z scores. Standard score values are listed in the "Z" column. To find the proportion of the distribution between the mean of the distribution and a given Z score, you must locate the entry indicated by a specific Z score. Z scores are carried to a second decimal place in the columns that go across the table. First, consider the second column of the table because it is similar to Part I of Appendix 1. Take the Z score of 1.0. The second column is labeled .00, which means that the second decimal place is also 0. The number listed in the table is .3413. Because this is a positive number, it is above the mean. Because the area below the mean is .5, the total area below a Z score of 1.0 is .5 + .3413 = .8413. To make this into a percentile (as shown in Part I of the appendix), multiply by 100 to get 84.13. Now try the example of a Z score of 1.64. To locate this value, find 1.6 in the first column. Then move your hand across the row until you get to the number below the heading .04. The number is .4495. Again, this is a positive Z score, so you must add the observed proportion to
the .5 that falls below the mean. The proportion below 1.64 is .9495. Stated another way, 94.95% of the cases fall below a Z score of 1.64. Now try to find the percentile rank of cases that fall below a Z score of 1.10. If you are using the table correctly, you should obtain 86.43. Now try −.75. Because this is a negative Z score, the percentage of cases falling below the mean should be less than 50. But there are no negative values in Part II of Appendix 1. For a negative Z score, there are several ways to obtain the appropriate area under the curve. The tables in Appendix 1 give the area from the mean to the Z score. For a Z score of −.75, the area between the mean and the Z score is .2734. You can find this by entering the table in the row labeled .7 and then moving across the row until you get to the figure in that row below the heading .05. There you should find the number .2734. We know that .5 of the cases fall below the mean. Thus, for a negative Z score, we can obtain the proportion of cases falling below the score by subtracting .2734, the tabled value listed in the appendix, from .5. In this case, the result is .5 − .2734 = .2266. Because finding the percentile ranks associated with negative Z scores can be tricky, you might want to use Part I of Appendix 1 to see if you are in the right ballpark. This table gives both negative and positive Z scores but does not give the detail associated with the second decimal place. Look up −.7 in Part I. The percentile rank is 24.20. Now consider a Z score of −.8. That percentile rank is 21.19. Thus, you know that a Z score of −.75 should be associated with a percentile rank between 21.19 and 24.20. In fact, we have calculated that the actual percentile rank is 22.66. Practice with Appendix 1 until you are confident you understand how it works. Do not hesitate to ask your professor or teaching assistant if you are confused. This is an important concept that you will need throughout the rest of the book.
Look at one more example from Table 2-2 (rainfall in San Diego, page 33). California had a dry year in 1999. The newscasters frequently commented that this was highly unusual. They described it as the "La Niña" effect, and some even claimed that it signaled global warming. The question is whether or not the amount of rainfall received in 1999 was unusual given what we know about rainfall in general. To evaluate this, calculate the Z score for rainfall. According to Table 2-2, there were 6.49 inches of rainfall in 1999. The mean for rainfall is 10.47 inches and the standard deviation is 4.19. Thus, the Z score is

Z = (6.49 − 10.47)/4.19 = −0.95

Next determine where a Z score of −0.95 falls within the Z distribution. According to Appendix 1, a Z score of −0.95 is equal to the 17.11th percentile (50 − 32.89 = 17.11). Thus, the low rainfall year in 1999 was unusual: given all years, it was in about the 17th percentile. However, it was not that unusual. You can estimate that there would be less rainfall in approximately 17% of all years.
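The table lookups in Appendix 1 can be replaced by a direct computation of the normal cumulative distribution function, which Python's standard library supports through `math.erf`. A sketch, using the 1999 rainfall figures from the text:

```python
import math

def normal_percentile(z):
    """Percentile rank of a Z score under the standard normal curve.

    Uses the exact CDF, Phi(z) = (1 + erf(z / sqrt(2))) / 2, rather than
    looking up areas in a table such as Appendix 1.
    """
    return 100 * (1 + math.erf(z / math.sqrt(2))) / 2

# 1999 San Diego rainfall: Z = (6.49 - 10.47) / 4.19, about -0.95
z = (6.49 - 10.47) / 4.19
print(round(normal_percentile(z), 1))  # ~17.1, matching the Appendix 1 value
```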
You can also turn the process around. Instead of using Z scores to find the percentile ranks, you can use the percentile ranks to find the corresponding Z scores. To do this, look in Part II of Appendix 1 under percentiles and find the corresponding Z score. For example, suppose you wish to find the Z score associated with the 90th percentile. When you enter Part II of Appendix 1, look for the value closest to the 90th percentile. This can be a little tricky because of the way the table is structured. Because the 90th percentile is associated with a positive Z score, you are actually looking for the area above the 50th percentile. So you should look for the entry closest to .4000 (.5000 + .4000 = .9000). The closest value to .4000 is .3997, which is found in the row labeled 1.2 and the column labeled .08. This tells you that a person who obtains a Z score of 1.28 is at approximately the 90th percentile in the distribution. Now return to the example of CES-D scores for medical students (Table 2-3). Monica had a Z score on the CES-D of −.44. Using Appendix 1, you can see that she was in the 33rd percentile (obtained as .50 − .1700 = .33, and .33 × 100 = 33). Marcel, with his Z score of .14, was in the 56th percentile; and Jennie, with a Z score of 1.70, was in the 96th percentile. You might have few worries about Monica and Marcel. However, it appears that Jennie is more depressed than 96% of her classmates and may need to talk to someone. An example close to home. One of the difficulties in grading students is that performance is usually rated in terms of raw scores, such as the number of items a person correctly answers on an examination. You are probably familiar with the experience of having a test returned to you with some number that makes little sense to you. For instance, the professor comes into class and hands you your test with a 72 on it. You must then wait patiently while he or she draws the distribution on the board and tries to put your 72 into some category that you understand, such as B. An alternative way of doing things would be to give you a Z score as feedback on your performance. To do this, your professor would subtract the average score (mean) from your score and divide by the standard deviation. If your Z score was positive, you would immediately know that your score was above average; if it was negative, you would know your performance was below average. Suppose your professor tells you in advance that you will be graded on a curve according to the following rigid criteria: If you are in the top 15% of the class, you will get an A (85th percentile or above); between the 60th and the 84th percentiles, a B; between the 20th and the 59th percentiles, a C; between the 6th and the 19th percentiles, a D; and in the 5th percentile or below, an F. Using Appendix 1, you should be able to find the Z scores associated with each of these cutoff points for normal distributions of scores. Try it on your own and then consult Table 2-4 to see if you are correct. Looking at Table 2-4, you should be able to determine what your grade would be in this class on the basis of your Z score. If your Z score is 1.04 or greater, you would receive an A; if it were greater than .25 but less than 1.04, you would get a B; and so on. This system assumes that the scores are distributed normally.
TABLE 2-4
Z Score Cutoffs for a Grading System

Grade    Percentiles    Z score cutoff
A        85–100         1.04
B        60–84          .25
C        20–59          −.84
D        6–19           −1.56
F        0–5            below −1.56
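The reverse lookup (percentile to Z score) can also be done numerically instead of reading Appendix 1 backward. A sketch using bisection on the normal CDF (the function name and the iteration count are choices of this example, not part of the text); the computed cutoffs agree with Table 2-4 to two decimal places, apart from tiny rounding differences at the D cutoff:

```python
import math

def z_for_percentile(p):
    """Z score whose percentile rank is p, i.e. the inverse normal CDF.

    Found by bisection on Phi(z) = (1 + erf(z / sqrt(2))) / 2, which is
    monotonically increasing in z.
    """
    target = p / 100
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if (1 + math.erf(mid / math.sqrt(2))) / 2 < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Lower-bound cutoffs for the grading scheme described in the text
cutoffs = {grade: z_for_percentile(p)
           for grade, p in [("A", 85), ("B", 60), ("C", 20), ("D", 6)]}
```

For example, `z_for_percentile(90)` returns about 1.28, matching the Appendix 1 lookup earlier in the chapter.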