Handbook of Research on Human Cognition and Assistive Technology: Design, Accessibility and Transdisciplinary Perspectives
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA
Edward L. Meyen, Kansas University, USA
Boaventura DaCosta, Solers Research Group, USA
Medical Information Science Reference
Hershey • New York
Director of Editorial Content: Kristin Klinger
Director of Book Publications: Julia Mosemann
Acquisitions Editor: Lindsay Johnston
Development Editor: Julia Mosemann
Typesetter: Michael Brehm
Production Editor: Jamie Snavely
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Published in the United States of America by
Medical Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com/reference

Copyright © 2010 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Handbook of research on human cognition and assistive technology : design, accessibility and transdisciplinary perspectives / Soonhwa Seok, Edward L. Meyen and Boaventura DaCosta, editors.
p. cm.
Includes bibliographical references and index.
Summary: "The intent of this book is to assist researchers, practitioners, and the users of assistive technology to augment the accessibility of assistive technology by implementing human cognition into its design and practice"--Provided by publisher.
ISBN 978-1-61520-817-3 (hbk.) -- ISBN 978-1-61520-818-0 (ebook)
1. Self-help devices for people with disabilities. 2. Cognitive science. 3. Human engineering. I. Seok, Soonhwa, 1970- II. Meyen, Edward L. III. DaCosta, Boaventura.
HV1569.5.H364 2010
681'.761--dc22
2009054320

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
List of Reviewers Brian Bryant, University of Texas, USA Diana Bryant, University of Texas, USA Muhammet Demirbilek, Suleyman Demirel University, Turkey Joan B. Hodapp, Area Education Agency 267, USA Neha Khetrapal, University of Bielefeld, Germany Carolyn Kinsell, Solers Research Group, USA Angelique Nasah, Solers Research Group, USA Brian Newberry, California State University San Bernardino, USA
List of Contributors
Banerjee, Rashida / University of Northern Colorado, USA ... 339
Belfi, Marcie M. / University of Texas, USA ... 325
Brown, Monica R. / New Mexico State University, USA ... 374
Bryant, Brian / University of Texas, USA ... 264
Bryant, Diana / University of Texas, USA ... 264
DaCosta, Boaventura / Solers Research Group, USA ... 1, 21, 43
Delisi, Jennifer / Lifeworks Services, USA ... 121
Demirbilek, Muhammet / Suleyman Demirel University, Turkey ... 109
Dotterer, Gary / Oklahoma State University, USA ... 299, 306
Dunn, Michael W. / Washington State University Vancouver, USA ... 313
Estrada-Hernández, Noel / University of Iowa, USA ... 239, 286
Fitzpatrick, Michael / New Mexico State University, USA ... 179, 374
Hodapp, Joan B. / Area Education Agency 267, USA ... 199, 220
Horn, Eva / University of Kansas, USA ... 339
Johnson, Vivian / Hamline University, USA ... 192
Jones, Kristen E. / University of Texas, USA ... 325
Khetrapal, Neha / University of Bielefeld, Germany ... 96
Kinsell, Carolyn / Solers Research Group, USA ... 61
Kouba, Barbara J. / California State University, San Bernardino, USA ... 360
Laffey, James / University of Missouri, USA ... 76
Lowe, Mary Ann / Nova Southeastern University, USA ... 251
Nankee, Cindy / UTLL (Universal Technology for Learning & Living), USA ... 157
Newberry, Brian / California State University, San Bernardino, USA ... 360
Okrigwe, Blessing Nma / Rivers State College of Education, Nigeria ... 388
Pascoe, Jeffrey / Laureate Learning Systems, Inc., USA ... 132
Plunkett, Diane / University of Kansas, USA ... 339
Price, Carol / Hamline University, USA ... 192
Rachow, Cinda / Area Education Agency 13, USA ... 199, 220
Schmidt, Matthew / University of Missouri, USA ... 76
Seok, Soonhwa / Center for Research on Learning - eLearning Design Lab, University of Kansas, USA ... 1, 21, 43, 264
Slotznick, Benjamin / Point-and-Read, Inc., USA ... 169
Stachowiak, James R. / University of Iowa, USA ... 239, 286
Stichter, Janine / University of Missouri, USA ... 76
Theoharis, Raschelle / Gallaudet University, USA ... 179
Wagner, Cynthia L. / Lifeworks Services, USA ... 121
Wilson, Mary Sweig / Laureate Learning Systems, Inc., USA ... 132
Table of Contents
Foreword ... xx
Preface ... xxi
Acknowledgment ... xxiii

Section 1
Human Cognition and Assistive Technology Design

Chapter 1
Human Cognition in the Design of Assistive Technology for Those with Learning Disabilities ... 1
Boaventura DaCosta, Solers Research Group, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA

Chapter 2
Managing Cognitive Load in the Design of Assistive Technology for Those with Learning Disabilities ... 21
Boaventura DaCosta, Solers Research Group, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA

Chapter 3
Multimedia Design of Assistive Technology for Those with Learning Disabilities ... 43
Boaventura DaCosta, Solers Research Group, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA

Chapter 4
Investigating Assistive Technologies using Computers to Simulate Basic Curriculum for Individuals with Cognitive Impairments ... 61
Carolyn Kinsell, Solers Research Group, USA

Section 2
The Internet, Media, and Cognitive Loads

Chapter 5
Social Orthotics for Youth with ASD to Learn in a Collaborative 3D VLE ... 76
James Laffey, University of Missouri, USA
Janine Stichter, University of Missouri, USA
Matthew Schmidt, University of Missouri, USA

Chapter 6
Cognition Meets Assistive Technology: Insights from Load Theory of Selective Attention ... 96
Neha Khetrapal, University of Bielefeld, Germany

Chapter 7
Cognitive Load and Disorientation Issues in Hypermedia as Assistive Technology ... 109
Muhammet Demirbilek, Suleyman Demirel University, Turkey

Section 3
Software and Devices

Chapter 8
Multi-Sensory Environments and Augmentative Communication Tools ... 121
Cynthia L. Wagner, Lifeworks Services, USA
Jennifer Delisi, Lifeworks Services, USA

Chapter 9
Using Software to Deliver Language Intervention in Inclusionary Settings ... 132
Mary Sweig Wilson, Laureate Learning Systems, Inc., USA
Jeffrey Pascoe, Laureate Learning Systems, Inc., USA

Chapter 10
Switch Technologies ... 157
Cindy Nankee, UTLL (Universal Technology for Learning & Living), USA

Chapter 11
Point-and-Chat®: Instant Messaging for AAC Users ... 169
Benjamin Slotznick, Point-and-Read, Inc., USA

Chapter 12
Assistive Technology for Deaf and Hard of Hearing Students ... 179
Michael Fitzpatrick, New Mexico State University, USA
Raschelle Theoharis, Gallaudet University, USA

Chapter 13
A Longitudinal Case Study on the Use of Assistive Technology to Support Cognitive Processes across Formal and Informal Educational Settings ... 192
Vivian Johnson, Hamline University, USA
Carol Price, Hamline University, USA

Section 4
Evaluation and Assessment

Chapter 14
Impact of Text-to-Speech Software on Access to Print: A Longitudinal Study ... 199
Joan B. Hodapp, Area Education Agency 267, USA
Cinda Rachow, Area Education Agency 13, USA

Chapter 15
Measure It, Monitor It: Tools for Monitoring Implementation of Text-to-Speech Software ... 220
Joan B. Hodapp, Area Education Agency 267, USA
Cinda Rachow, Area Education Agency 13, USA

Chapter 16
Evaluating Systemic Assistive Technology Needs ... 239
Noel Estrada-Hernández, University of Iowa, USA
James R. Stachowiak, University of Iowa, USA

Chapter 17
Developing Electronic Portfolios ... 251
Mary Ann Lowe, Nova Southeastern University, USA

Chapter 18
Assistive Technology Solutions for Individuals with Learning Problems: Conducting Assessments Using the Functional Evaluation for Assistive Technology (FEAT) ... 264
Brian Bryant, University of Texas, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA
Diana Bryant, University of Texas, USA

Section 5
Teacher Education

Chapter 19
Improving Assistive Technology Training in Teacher Education Programs: The Iowa Model ... 286
James R. Stachowiak, University of Iowa, USA
Noel Estrada-Hernández, University of Iowa, USA

Chapter 20
Effects of Assistive Technologies Combined with Desktop Virtual Reality in Instructional Procedures (1) ... 299
Gary Dotterer, Oklahoma State University, USA

Chapter 21
Effects of Assistive Technologies Combined with Desktop Virtual Reality in Instructional Procedures (2) ... 306
Gary Dotterer, Oklahoma State University, USA

Chapter 22
Response to Intervention: Assistive Technologies which can Help Teachers with Intervention Programming and Assessment ... 313
Michael W. Dunn, Washington State University Vancouver, USA

Chapter 23
Assistive Technology for Teacher Education: From Research to Curriculum ... 325
Marcie M. Belfi, University of Texas, USA
Kristen E. Jones, University of Texas, USA

Chapter 24
Supporting Early Childhood Outcomes through Assistive Technology ... 339
Diane Plunkett, University of Kansas, USA
Rashida Banerjee, University of Northern Colorado, USA
Eva Horn, University of Kansas, USA

Section 6
Past, Present, and Future

Chapter 25
Assistive Technology’s Past, Present and Future ... 360
Barbara J. Kouba, California State University, San Bernardino, USA
Brian Newberry, California State University, San Bernardino, USA

Chapter 26
Digital Inequity: Understanding the Divide as it Relates to Culture and Disability ... 374
Monica R. Brown, New Mexico State University, USA
Michael Fitzpatrick, New Mexico State University, USA

Chapter 27
Cognition and Learning ... 388
Blessing Nma Okrigwe, Rivers State College of Education, Nigeria

Compilation of References ... 401
About the Contributors ... 443
Index ... 451
Detailed Table of Contents
Foreword ... xx
Preface ... xxi
Acknowledgment ... xxiii

Section 1
Human Cognition and Assistive Technology Design

Chapter 1
Human Cognition in the Design of Assistive Technology for Those with Learning Disabilities ... 1
Boaventura DaCosta, Solers Research Group, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA

This chapter is the first of three serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies and their design for individuals with learning disabilities. In this chapter the authors introduce the human information processing system, discuss the modal model of memory, and describe ways to increase learning. Altogether, the authors present the view that assistive technologies for individuals with learning disabilities should be created with an understanding of design principles empirically grounded in the study of how the human mind works.

Chapter 2
Managing Cognitive Load in the Design of Assistive Technology for Those with Learning Disabilities ... 21
Boaventura DaCosta, Solers Research Group, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA

This chapter is the second of three serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies and their design for individuals with learning disabilities. In this chapter the authors present strategies to manage cognitive load in the design of instructional materials for those with learning disabilities by introducing cognitive load theory. Altogether, the authors affirm the approach discussed in the previous chapter: assistive technologies for individuals with learning disabilities should be created with an understanding of design principles empirically grounded in the study of how the human mind works.

Chapter 3
Multimedia Design of Assistive Technology for Those with Learning Disabilities ... 43
Boaventura DaCosta, Solers Research Group, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA

This chapter is the last of three serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies and their design for individuals with learning disabilities. In this chapter the authors build upon the previous two chapters and focus specifically on research investigating the visual and auditory components of working memory by presenting the cognitive theory of multimedia learning (CTML). Altogether, the authors stress the common thread running through this three-chapter introduction: assistive technologies for individuals with learning disabilities should be created with an understanding of design principles empirically grounded in the study of how the human mind works. They argue that the principles emerging from the CTML may offer benefits in the design of assistive technologies for those with learning disabilities.

Chapter 4
Investigating Assistive Technologies using Computers to Simulate Basic Curriculum for Individuals with Cognitive Impairments ... 61
Carolyn Kinsell, Solers Research Group, USA

For middle and high school students, learning is often conducted in traditional classroom settings, where peer pressure is generally high and any lack of classroom participation or subject knowledge is quickly apparent to classmates. The cognitively impaired student who falls behind is not always properly identified, nor is a learning solution always readily available. The hope is that assistive technologies can become more commonplace for cognitively impaired students left hanging in the balance in traditional classrooms. This chapter addresses the use of computer-based simulations as an assistive technology solution for the cognitively impaired student who is having difficulties.

Section 2
The Internet, Media, and Cognitive Loads

Chapter 5
Social Orthotics for Youth with ASD to Learn in a Collaborative 3D VLE ... 76
James Laffey, University of Missouri, USA
Janine Stichter, University of Missouri, USA
Matthew Schmidt, University of Missouri, USA
This chapter describes the conceptualization, design, and development work underway to advance the use and study of social orthotics as an assistive technology in a 3-dimensional virtual learning environment for youth with autism spectrum disorders (ASD). The work to understand and develop social orthotics is part of a larger effort to build iSocial, an online learning system being developed to implement a curriculum for developing social competence in youth with ASD. The chapter describes the development of two forms of social orthotics, iTalk and iGroup. In their current forms, iTalk supports conversational turn taking by constraining interruptions, and iGroup supports conversational turn taking by supporting appropriate adjacency, distance, and orientation behavior. The chapter presents results from early tests of prototypes of the social orthotics and suggests directions for future research.

Chapter 6
Cognition Meets Assistive Technology: Insights from Load Theory of Selective Attention ... 96
Neha Khetrapal, University of Bielefeld, Germany

This chapter emphasizes transdiagnostic processes as a means for diagnosis and for cognitive intervention with modern technological tools. In this pursuit, it highlights the intimate links shared by cognitive and emotional processes and offers several examples to develop better understanding. The chapter also discusses efforts to encourage cooperative ties among disciplines, along with various contemporary concepts from cognition.

Chapter 7
Cognitive Load and Disorientation Issues in Hypermedia as Assistive Technology ... 109
Muhammet Demirbilek, Suleyman Demirel University, Turkey

Hypermedia as an assistive technology has the potential to teach and train individuals with disabilities. However, like every technology, hypermedia itself is not free from problems. Disorientation and cognitive load are two of the most challenging problems in hypermedia learning environments, where learners often face serious difficulties while navigating hypermedia systems. The purpose of this chapter is to highlight these two problems. The chapter includes a brief introduction to assistive technologies, hypermedia as a learning environment, human memory and hypermedia, usability issues, disorientation and cognitive load in hypermedia, and hypermedia in inclusive education.

Section 3
Software and Devices

Chapter 8
Multi-Sensory Environments and Augmentative Communication Tools ... 121
Cynthia L. Wagner, Lifeworks Services, USA
Jennifer Delisi, Lifeworks Services, USA

This chapter puts forward the idea that use of a multi-sensory environment to decrease defensiveness in the body can promote integration of the senses and lead a person to be in a better position to communicate their wants and needs. It has also been noted that adults with developmental disabilities or autism can sometimes be overlooked as emerging communicators. Shifting this view will increase their access to new tools and techniques for enhancing communication skills. People with severe disabilities can live, work, play, communicate, and form relationships with a wide variety of people in their communities, schools, and workplaces, and they deserve to be provided with opportunities to do so.

Chapter 9
Using Software to Deliver Language Intervention in Inclusionary Settings ... 132
Mary Sweig Wilson, Laureate Learning Systems, Inc., USA
Jeffrey Pascoe, Laureate Learning Systems, Inc., USA

Receptive language intervention with an emphasis on syntax is essential when serving the educational needs of children with language delays and disorders. Syntax mastery is necessary for sentence understanding and use as well as for reading comprehension and writing. Yet there are challenges to providing individualized syntax intervention on a daily basis in inclusionary settings. This chapter reviews the linguistic foundations and instructional approaches used in language intervention software designed for preschool and elementary school children. Also described are the results of classroom field-testing, where regular use of the software was found to be associated with accelerated language development.

Chapter 10
Switch Technologies ...
157
Cindy Nankee, UTLL (Universal Technology for Learning & Living), USA

This chapter provides information on best practices in the area of switch technologies, including: (a) a background on the what, why, when, and where of switch accessibility; (b) a summary of five popular assessment tools; and (c) an overview of types of switches and training strategies for the successful sustained use of switches. The information in this chapter will benefit assistive technology professionals, case managers, educators, physical therapists, occupational therapists, speech and language pathologists, and rehabilitation counselors, as well as students of these professions and consumers. The information applies to all age groups, from birth to six through all levels of primary and secondary education, adulthood, and senior services. The goal of this chapter is to compile information in a concise, step-by-step fashion, from assessment to implementation, with additional resources, readings, and references for further study.

Chapter 11
Point-and-Chat®: Instant Messaging for AAC Users ... 169
Benjamin Slotznick, Point-and-Read, Inc., USA

This chapter details the design choices and user interfaces employed by Point-and-Chat® software to make it easier to use by reducing cognitive load. Point-and-Chat® is instant messaging (IM) software especially designed to be used in conjunction with augmentative/alternative communication (AAC) devices. The software includes a built-in screen reader and special picture-based control and navigation for people who have difficulty reading or have cognitive limitations. This chapter also discusses the challenges and opportunities that the growing importance of IM presents to AAC users, as well as recent research pointing to the need for special IM vocabulary interfaces to overcome those challenges.

Chapter 12
Assistive Technology for Deaf and Hard of Hearing Students ... 179
Michael Fitzpatrick, New Mexico State University, USA
Raschelle Theoharis, Gallaudet University, USA

This chapter discusses the reality that the majority of deaf and hard of hearing (d/hh) students are educated in the public school setting. Unfortunately, educators are often ill prepared to address the unique technological needs of d/hh students. This chapter focuses on providing educators and other service providers with an overview of various educational technologies that they can employ to increase academic and social outcomes for d/hh students.

Chapter 13
A Longitudinal Case Study on the Use of Assistive Technology to Support Cognitive Processes across Formal and Informal Educational Settings ... 192
Vivian Johnson, Hamline University, USA
Carol Price, Hamline University, USA

This qualitative case study describes the challenges faced by one child with documented learning challenges and her parents in their ten-year struggle to include the use of assistive and repurposed technology in the learning environment. Understanding the context of this challenge, juxtaposed with the impact of federal legislation, can inform and encourage policy reform.

Section 4
Evaluation and Assessment

Chapter 14
Impact of Text-to-Speech Software on Access to Print: A Longitudinal Study ... 199
Joan B. Hodapp, Area Education Agency 267, USA
Cinda Rachow, Area Education Agency 13, USA

This chapter examines the outcome of extended use of text-to-speech software as an accommodation to improve student access to core content. Using the Time Series Concurrent and Differential Approach, the study examines the impact on student fluency and comprehension. In addition, multiple perceptual and objective data measures were collected from 20 middle school special education students and nine teachers during the 27-week study.

Chapter 15
Measure It, Monitor It: Tools for Monitoring Implementation of Text-to-Speech Software ... 220
Joan B. Hodapp, Area Education Agency 267, USA
Cinda Rachow, Area Education Agency 13, USA

This chapter examines a variety of innovative tools for monitoring successful implementation of assistive technology that were field tested in the Iowa Text Reader Project. Explanations of each tool and its use are provided. Strategies include measures to collect data from students, teachers, administrators, and assistive technology team members.

Chapter 16
Evaluating Systemic Assistive Technology Needs ... 239
Noel Estrada-Hernández, University of Iowa, USA
James R. Stachowiak, University of Iowa, USA

The purpose of this chapter is to introduce the concept and application of needs assessment, as well as the benefits of conducting this type of research to improve the quality of assistive technology (AT) services. The chapter begins with a discussion of what AT is and the role it plays in the life of a person with a disability, including the idea that the earlier AT is introduced to the individual, the more likely it is to continue to be used and the larger effect it will have on the individual's future education, employment, and independent living needs. The chapter also highlights the impact this type of research has on teacher preparation.

Chapter 17
Developing Electronic Portfolios ... 251
Mary Ann Lowe, Nova Southeastern University, USA

This chapter explores the use of electronic portfolios as a way of documenting the use of assistive technology / augmentative and alternative communication (AT/AAC) for individuals with severe physical needs and communication impairments. Documenting individual student characteristics, the strategies used for successful implementation of AT/AAC tools, and the progress of these individuals using technology via electronic portfolios can be useful for service providers.

Chapter 18
Assistive Technology Solutions for Individuals with Learning Problems: Conducting Assessments Using the Functional Evaluation for Assistive Technology (FEAT) ... 264
Brian Bryant, University of Texas, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA
Diana Bryant, University of Texas, USA

This chapter provides information about how an assistive technology (AT) assessment can be conducted using the Functional Evaluation for Assistive Technology (FEAT). Readers are provided with an overview of the importance of person-centered assessments and are then given a description of each of the FEAT components. A case study is also provided, wherein the process of an effective and efficient AT assessment is described.
Section 5 Teacher Education Chapter 19 Improving Assistive Technology Training in Teacher Education Programs: The Iowa Model .......... 286 James R. Stachowiak, University of Iowa, USA Noel Estrada-Hernández, University of Iowa, USA This chapter discusses the model that the College of Education at the University of Iowa is using to provide assistive technology training to their preservice teacher education students. The College’s Iowa Center for Assistive Technology Education and Research has developed an innovative hands-on project that revolves around their Mobile AT Lab. This chapter will focus on how the project was developed, is being carried out, and an evaluation of its success. Chapter 20 Effects of Assistive Technologies Combined with Desktop Virtual Reality in Instructional Procedures (1) ..................................................................................................................................... 299 Gary Dotterer, Oklahoma State University, USA With the advancements in technology and the ability to use different training techniques in industry, business, and educational environments, those who may need alternate methods of delivery of training materials should also be considered. Individuals with impairment who rely on assistive technologies could benefit from these alternative methods. Desktop virtual reality combined with assistive technologies could provide a safe, reliable, and productive opportunity while addressing the specific needs from the safety of their own personal computer. This chapter introduces the merging of these technologies and the opportunities that may be possible by providing viable training procedures and processes in the workforce. Chapter 21 Effects of Assistive Technologies Combined with Desktop Virtual Reality in Instructional Procedures (2) ..................................................................................................................................... 
306 Gary Dotterer, Oklahoma State University, USA This chapter frames the structure of the study introduced in the previous chapter. The framework of the chapter sets up the methodology (subjects, testing instruments, and procedures) and includes screenshots of the three web-based instruments. The results and findings section contains detailed information, with tables presenting the data generated by the statistical analysis program SPSS. The chapter concludes with the discussion, further research directions, and conclusions based on the outcomes of the findings.
Chapter 22 Response to Intervention: Assistive Technologies which can Help Teachers with Intervention Programming and Assessment ............................................................................................................ 313 Michael W. Dunn, Washington State University Vancouver, USA How to assess and provide intervention programming for students with characteristics of a learning disability has been a long-standing challenge in education. Traditionally, students with a possible learning disability completed assessments of IQ (i.e., intellectual potential) and academic achievement (i.e., demonstrated ability) around the end of third grade; a discrepancy of 15 points or more would typically qualify a student to be classified with a learning disability and receive special education services. Due to systemic bias in standardized assessments such as IQ tests, as well as a desire to address the needs of low-functioning students in the early/initial grades of school, educators have developed a new intervention and assessment process called response to intervention. If students do not make good progress with intervention programming, this curriculum-based data can be used to justify learning-disability classification. Assistive technologies such as intervention software can help teachers manage the provision of intervention programming and data collection. Chapter 23 Assistive Technology for Teacher Education: From Research to Curriculum .................................... 325 Marcie M. Belfi, University of Texas, USA Kristen E. Jones, University of Texas, USA This chapter provides teacher educators with research on assistive technology in K-12 schools from two different strands, and concludes with a model of teacher training to meet those needs. The first section describes assistive technology needs for culturally and linguistically diverse families of children with special needs. 
The second section highlights instructional and assistive technology for students with learning disabilities in the area of writing. Finally, the chapter concludes with a curriculum model for training preservice teachers to use assistive technology across the lifespan of students within all populations. Chapter 24 Supporting Early Childhood Outcomes through Assistive Technology ............................................. 339 Diane Plunkett, University of Kansas, USA Rashida Banerjee, University of Northern Colorado, USA Eva Horn, University of Kansas, USA This chapter focuses on the application of assistive technology to meet the Office of Special Education Programs early childhood outcomes, with attention given to evidence-based strategies and practices in the use of assistive technology. Through case vignettes, early childhood professionals may reflect upon their experiences and recognize the benefits that assistive technology has in meeting child developmental outcomes. With a firm understanding of assistive technology consideration, the early childhood professional can assist children with disabilities and their families in meeting early childhood outcomes.
Section 6 Past, Present, and Future Chapter 25 Assistive Technology’s Past, Present and Future ................................................................................ 360 Barbara J. Kouba, California State University, San Bernardino, USA Brian Newberry, California State University, San Bernardino, USA The term assistive technology is relatively new; however, the use of technology to help humans perform better is certainly not. Throughout history we have been creating devices to assist us by augmenting our senses, retaining information, accessing information, communicating, and finding our way in the world. Many technologies that were originally assistive have become mainstream. New technologies that are being developed as assistive will no doubt also become mainstream in the future. Chapter 26 Digital Inequity: Understanding the Divide as it Relates to Culture and Disability........................... 374 Monica R. Brown, New Mexico State University, USA Michael Fitzpatrick, New Mexico State University, USA This chapter presents literature and research related to the digital divide for students from culturally and linguistically diverse (CLD) backgrounds. The chapter discusses issues of access for CLD students with disabilities and factors that have led to inequitable access to technology for those groups, and provides solutions for increasing access to technology for students with disabilities and students from CLD backgrounds. Chapter 27 Cognition and Learning ...................................................................................................................... 388 Blessing Nma Okrigwe, Rivers State College of Education, Nigeria Cognitivist theories have identified the cognitive skills—perception, conception, memory, language, reasoning, and creativity—as underlying all academic learning; the absence of these skills will result in the child’s academic failure. 
Learners learn in different ways, which has implications for personalized learning; with the teacher playing the role of facilitator, students can be helped to construct knowledge rather than merely reproduce facts. This has led to a paradigm shift from the traditional behavioral approach, where learners are passive, to one where learners are active and empowered. Personalized learning is made possible through digital technology such as the Internet; hence, innovation in teaching and learning has been dominated by the computer. Despite the numerous advantages of digital technology, which can provide opportunities for individuals with disabilities to meet their full learning potential, these advantages have yet to be realized in most countries of the developing world. Compilation of References ............................................................................................................... 401 About the Contributors .................................................................................................................... 443 Index ................................................................................................................................................... 451
Foreword
This handbook is the realization of a year-long, unified endeavor of collaborative writing and thinking by educators and professionals from around the world dedicated to the field of assistive technology. The contributors to this handbook have been leaders and positive influences in the ever-changing, transdisciplinary assistive technology landscape. We are grateful to have been able to harvest such a wonderful collection of works into a single source. This handbook is a collection of 27 empirically supported chapters addressing the current issues of human cognition and assistive technology design, the Internet, media, cognitive load, software and devices, evaluation and assessment, teacher education, and the practices of assistive technology in the past, present, and future. This handbook was written specifically for families, practitioners, and others involved in aiding those with disabilities. It is our hope that this handbook is a step forward in bettering the practice of assistive technology, improving the lives of those with disabilities, and serving as a positive influence toward the mission behind the Individuals with Disabilities Education Act (IDEA). Gary M. Clark University of Kansas, USA
Gary M. Clark’s research focus is assessment for transition planning, including consultation and training on using and interpreting the Transition Planning Inventory. Teaching assignments over his 38 years at the University of Kansas have included courses at the undergraduate and graduate levels. Most consistently, the courses assigned have been the introductory courses (undergraduate and graduate) to the field of special education and two core courses in transition education and services. One other course taught periodically has been a course on counseling individuals with disabilities. Visiting professorship teaching assignments have focused on secondary special education and transition from school to adult living.
Preface
The Handbook of Research on Human Cognition and Assistive Technology: Design, Accessibility and Transdisciplinary Perspectives marks a critical milestone in the history of the implementation and practice of assistive technology in this country. We have come a long way since the term universal design was coined in the 1970s, when linear perspectives and technology were at the forefront of assisting those with disabilities. Over the years, numerous studies have been conducted on assistive technology from a special education perspective. With the unprecedented advancements in computing power, coupled with the societal movement towards inclusive settings, there is no better time than today to strive for assistive technology equity in terms of universal implementation within a transdisciplinary perspective. This edited book is born of this opportunity and attempts to consolidate the relationships between human cognition and assistive technology. The intent of this book is to assist researchers, practitioners, and the users of assistive technology to augment the accessibility of assistive technology by implementing human cognition into its design and practice. Consequently, this book presents assistive technology as an intervention for people with disabilities from a transdisciplinary perspective. This book is composed of 27 chapters organized into six sections. Section 1 serves as the scaffolding for the remainder of the book by laying the theoretical foundation of human cognition and its direct applicability to the design of assistive technology. The chapters in this part are intended to align assistive technology with the study of how the human mind works, discussing the importance of cognitive load and knowing when to avoid it, how to manage it, and, in some cases, how to promote it. The chapters also delve into the understanding of empirically supported instructional principles that can be leveraged to assist those with special needs. 
The use of simulation-based instruction is also introduced as a precursor to Section 2 of this book, presenting the significant contributions simulation technology can make toward assisting those with learning disabilities. Section 2 focuses specifically on the Internet and media and continues the line of thinking behind the management of cognitive load. The benefits of simulation-based instruction are expounded upon, and the utilization of 3D virtual environments is presented. There has been a surge in the popularity of such environments, given their potentially limitless possibilities beyond entertainment. The chapters in this part discuss how such environments may hold significant opportunities to assist those who are challenged by traditional classroom instruction and interaction. Section 3 looks at software and devices as tools to benefit interventions for individuals with disabilities. The chapters in this part discuss the role of assistive technology practice in the field as it aligns with research. That is, practice triggers motivation to conduct formative and experimental design research to enhance the quality of life of individuals with disabilities. Section 4 emphasizes the changing culture of evaluation and assessment in the area of assistive technology. In particular, the use of ecological evaluations and multi-modal assessments reflects the
trend toward transdisciplinary perspectives. The chapters in this part present evidence-based systematic research in the field. Section 5 stresses the practice of assistive technology as a strategy for teaching and learning. The chapters in this section initiate the development of formative instructional strategies using assistive technology to achieve effective learning outcomes. Finally, Section 6 summarizes and describes what assistive technology looked like in the past, how it looks now, and how it might look in the future. This includes a review of the history of assistive technology-related legislation, research, and practice from traditional to modernist theories, from Helen Keller to Stephen Hawking. It also addresses the “digital divide” and equity as major issues in the history of assistive technology. There are many books providing insights into assistive technology. What sets this book apart from other edited books is that it has been forged from a transdisciplinary perspective. The editors and contributing authors come from a number of disciplines, including computer science, instructional design, curriculum and special education, and psychology, to name only a few. This book is a collaboration between researchers and practitioners alike, and we hope that you enjoy reading it as much as we enjoyed the delightful journey it was in getting it published. Soonhwa Seok University of Wisconsin-Whitewater, USA Edward L. Meyen Kansas University, USA Boaventura DaCosta Solers Research Group, USA
Acknowledgment
First, special thanks go to the authors of the 27 chapters comprising this edited book. We especially thank them for their support and encouragement through the writing and publishing process and their interest in this effort. They were an inspiration to us as we carried out our research. This work is a reflection of their vision in guiding our collaboration. Sincere gratitude is extended to Julia Mosemann for her editorial efforts, her style, but most importantly, her unwavering sense of humor. Special appreciation goes to Brian Bryant for his support, encouragement, and inspiration. He taught us how to turn water into wine. Finally, we are grateful to all of our research collaborators who donated their time and expertise to this publication. “Thank You.” Soonhwa Seok University of Wisconsin-Whitewater, USA Edward L. Meyen Kansas University, USA Boaventura DaCosta Solers Research Group, USA
Section 1
Human Cognition and Assistive Technology Design To say that learning through the use of technology has had an intertwined past of failing to deliver on expectations is an understatement. History can provide a myriad of examples related to motion picture, radio, television, and computer-assisted instruction that serve as a reminder of what many of us already know—using technology for the sake of technology does not work. In fact, it can be detrimental to learning. Instead, the driving force behind technology should be the learner; namely, the manner in which technologies can be used to promote human cognition. Instructional designers, educators, practitioners, and others involved in the design of learning technologies know that the latest advancements in cutting-edge technology are simply not enough; rather, such knowledge must be coupled with an understanding of the human information processing system. Such an argument could not be any more applicable to the design of assistive technologies for those with learning disabilities. Firmly grounded in the field of cognitive psychology, the human information processing system provides a useful frame in which to understand the mental processes involved in higher-order thinking and, consequently, learning. Why is this important? Some theories in cognitive psychology suggest that working memory, a system that temporarily stores and manages information for performing complex cognitive tasks, is limited in capacity. When coupled with the deficits of those with learning disabilities, such as difficulty with attention, memory, and problem-solving, all of which are vital factors in learning, the human information processing system can prove invaluable to designers in understanding best practices in which to present information. In the first part of this handbook, we present the relationship between human cognition and assistive technologies and their design for individuals with learning disabilities. 
In the first chapter, we introduce the human information processing system, discuss the modal model of memory, and describe ways in which to increase learning. In the second chapter, we present strategies to manage cognitive load in the design of instructional materials for those with learning disabilities by introducing cognitive load theory—a learning theory that proposes a set of instructional principles grounded in human information processing research that can be leveraged in the creation of efficient and effective learning environments. In the third chapter, we focus specifically on research investigating the visual and auditory components of working memory by presenting the cognitive theory of multimedia learning—a learning theory proposing a set of instructional principles grounded in human information processing research that provide best practices in designing efficient multimedia learning environments. Finally, in the fourth chapter, we present the use of computer-based simulation as an assistive technology solution. Altogether, we present the approach that assistive technologies for individuals with learning disabilities should be created with an understanding of design principles empirically grounded in the study of how the human mind works.
Chapter 1
Human Cognition in the Design of Assistive Technology for Those with Learning Disabilities

Boaventura DaCosta, Solers Research Group, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA
ABSTRACT

This is the first of three chapters serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies and their design for individuals with cognitive disabilities. In this chapter the authors introduce the human information processing system. They discuss the modal model of memory, a basic framework offering the most popular explanations behind the active processes used in the construction of new knowledge. In doing so, the authors examine the three memory stores comprising the modal model, which are responsible for the acquisition, storage, and retrieval of information. The authors then discuss ways in which to increase learning. Altogether, they present the approach that technology for learning should be created with an understanding of design principles empirically grounded in the study of how the human mind works, particularly when it comes to the design of assistive technologies for individuals with learning disabilities.
DOI: 10.4018/978-1-61520-817-3.ch001

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

The Case for Human Cognition in the Design of Assistive Technology for Those with Learning Disabilities

First published in the Technology-Related Assistance for Individuals with Disabilities Act of 1988 and since amended and replaced with the Assistive Technology Act of 1998, assistive technology (AT) has been formally defined as “any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities” (United States Congress, 1998, Definitions and Rule section, para. 3). Although we typically think of technologies such as wheelchairs and prosthetics to help those with
physical impairments, ATs can also have a significant impact on the lives of those with cognitive disabilities. If created with the abilities and deficits of those with cognitive disabilities in mind, ATs can remove obstacles and offer individuals greater independence that they might not otherwise experience. Likewise, the opposite is also true. Assistive technologies created without an understanding of the cognitive disabilities of individuals can become a hindrance. Unlike AT, the term “cognitive disability” has proved troublesome to define.
The Broad Nature of Cognitive Disabilities and Our Focus on Learning Disabilities

Definitions for the term “cognitive disability” vary by source. Generally speaking, a cognitive disability is any disorder which affects mental processing. Cognitive disabilities vary in severity. Individuals with severe disorders may need uninterrupted assistance and supervision by caregivers in almost every aspect of daily life, whereas individuals with minor cognitive disabilities may require very little, if any, assistance. In fact, some cognitive disabilities may be so minor that they are never diagnosed (Rogers, 1979).
Our Focus on Learning Disabilities

In this three-chapter introduction we focus on cognitive disabilities which impair learning; specifically, children, adolescents, and adults diagnosed with learning disabilities (LDs). We refer to children, adolescents, and adults because LDs are considered to be lifelong disorders. Children with LDs will someday grow up to become adults with LDs. Their brains are not defective or damaged. Instead, they see, hear, and understand things differently. Learning disabilities are thought to be neurological in nature and are related to central
nervous system dysfunction (National Joint Committee on Learning Disabilities, 1991). This brings up an important point. Learning disabilities should not be used as a measure of intelligence. Individuals with LDs have average or above average intelligence, but have difficulty with rudimentary skills that those without LDs take for granted. Learning disabilities are typically considered to be less severe cognitive disorders which can manifest themselves in many different forms. Reading disabilities (dyslexia), writing disabilities (dysgraphia), and math disabilities (dyscalculia) are probably the most recognizable LDs, owing their mainstream familiarity to the media and other public channels.
BACKGROUND

What are Learning Disabilities?

There have been a myriad of efforts to define LDs. The need for a definition has stemmed from the fact that without one, LDs cannot be clearly recognized and measured for the purpose of diagnosis and remediation (Wong, Graham, Hoskyn, & Berman, 1996). As you might imagine, obtaining consensus among professionals and policy makers within the LDs community has been difficult. In fact, some efforts have ignited fierce debates, such as the 2004 reauthorization of the U.S. Individuals with Disabilities Education Act (Fletcher, Lyon, Fuchs, & Barnes, 2006). Although varying definitions continue to exist among professional organizations and government agencies (see Wong et al., 1996, for an in-depth review), a definition that appears to have captured the essence of LDs and received the broadest endorsement in the LDs community (Hammill, 1993, as cited in Wong et al., 1996) is the one by the National Joint Committee on Learning Disabilities (NJCLD) (1991), which reads:
Learning disabilities is a generic term that refers to a heterogeneous group of disorders manifested by significant difficulties in the acquisition and use of listening, speaking, reading, writing, reasoning, or mathematical abilities. These disorders are intrinsic to the individual and presumed to be due to central nervous system dysfunction. Even though a learning disability may occur concomitantly with other handicapping conditions (e.g., sensory impairment, mental retardation, social and emotional disturbance), or environmental influences (e.g., cultural differences, insufficient/ inappropriate instruction, psychogenic factors), it is not the direct result of those conditions or influences (p. 3). As seen in the definition, the NJCLD supports the idea that LDs are not the direct result of other disabilities, but instead may occur alongside other disorders. This is important in the context of this and the next two chapters because although we focus on children, adolescents, and adults diagnosed with LDs, we do not exclude more severe cognitive disabilities such as autism, Down syndrome, the most common form of dementia, Alzheimer’s disease, and disorders resulting from traumatic brain injury.
Viewing Learning Disabilities from a Functional Perspective

Given the broad nature of LDs and the potential degree of severity that can exist with these disorders, it goes without saying that individuals with LDs interact with technology in different ways. So, in addition to viewing LDs from a clinical standpoint, as we just have, when developing ATs, it might be more advantageous to view such disabilities from a functional standpoint. This perspective focuses strictly on the abilities and deficits facing individuals with LDs. As defined by the NJCLD (1991), deficits typically include difficulty with listening, speaking, reading, writing, reasoning, or mathematical abilities. Taking
a step back and examining these deficits further reveals that individuals diagnosed with LDs have difficulty with attention, memory, and problem-solving, all of which are vital factors in learning. We once again stress that the utmost care must be taken by designers involved in the creation of ATs. Designers must not only have a deep understanding of the technologies currently available, but an even deeper functional understanding of the abilities and deficits facing the individuals for whom they are designing. As commonsensical as this may sound, as we will discuss next, history has shown us quite the opposite.
The Troubled Past of Technology and Learning

The potential for the improvement of learning through the use of technology within education has not translated very well into everyday practice. For the most part, technology and learning have had an intertwined past of failing to deliver on expectations. In fact, a mere cursory examination of the prospect of new technology for learning in the 20th century would produce a myriad of examples relating to motion picture, radio, television, and computer-assisted instruction (see Cuban, 1986, for an in-depth review). The example commonly cited as evidence for this long-lasting, tempestuous relationship is the quote by Thomas Edison, who in 1922 prophesied “…the motion picture is destined to revolutionize our educational system and that in a few years it will supplant largely, if not entirely, the use of textbooks” (cited in Cuban, 1986, p. 9). This quote serves as a reminder of what many of us already know—using technology for the sake of technology does not work. In fact, there is a consequence when the technology-centered approach is adopted over that of the learner-centered one (Mayer, 2005b). Namely, the technology-centered approach does not generally lead to long-term advancements in education (Cuban, 1986, as cited in Mayer, 2005b). Hindsight has taught us
this lesson with the prediction made by Thomas Edison and the many other failed attempts at using technology for learning that have followed. Needless to say, the driving force behind technology design should not be the technology itself, but rather the learner. When the learner-centered approach is taken, focus is placed on the manner in which technologies can be used in the promotion of human cognition (Mayer, 2005b). This should come as no surprise to instructional designers, educators, practitioners, and others involved in the design of technology for the improvement of learning. Understanding the latest advancements in cutting-edge technology is simply not enough. Rather, such knowledge must be coupled with an understanding of the human information processing system. According to Mayer (2005b), “…designs that are consistent with the way the human mind works are more effective in fostering learning than those that are not” (p. 9). If the full potential of technology for learning is to be realized, there must be a clear understanding of the relationship between design principles and the means by which humans acquire, store, and retrieve information.
Assistive Technology, Learning Disability, and Human Information Processing

We argue that this approach could not be any more applicable to the design of ATs for those with LDs. Arising from work in cognitive psychology, the human information processing system and associated models provide a useful framework in which to understand the mental processes involved in higher-order thinking and, consequently, learning. This is important because some theories in cognitive psychology suggest that working memory, a system that temporarily stores and manages information for performing complex cognitive tasks, is limited in capacity (Baddeley, 1986, 1998, 2002). When coupled with the functional perspective of LDs, the human information processing system can prove invaluable to designers in understanding best practices in which to present information.

This is the first of three chapters serving as the introduction to this handbook, which addresses the relationship between human cognition and ATs and their design for individuals with LDs. In this chapter, we introduce the human information processing system. We describe the most popular explanations behind the acquisition, storage, and retrieval of information. We then discuss ways in which to increase learning. This chapter serves as scaffolding for the subsequent two chapters, both of which are grounded in learning theory. In the second chapter, we focus specifically on the cognitive limitations of working memory. In doing so, we introduce cognitive load theory and its applicability in the design of ATs. Finally, in the third and final chapter of this introduction, we discuss the importance of visual and auditory modality in the design of ATs. We go about this by offering an in-depth discussion of the cognitive theory of multimedia learning. In both chapters, we focus predominantly on those individuals who have been diagnosed with LDs but, as we have discussed, we do not exclude individuals diagnosed with more severe cognitive disabilities. We begin this chapter with a brief history of the field of cognitive psychology. To better understand the most accepted theories behind human cognition, we feel it is prudent to first understand how the field came to be.
A BRIEF HISTORY OF COGNITIVE PSYCHOLOGY

As you might imagine, the study of the human mind is centuries old. In fact, the study of human cognition can be traced back as early as the ancient Greeks, with Aristotelian notions of the beginnings of empiricism, the belief that knowledge originates from experience (Anderson, 2004). However, early study of the human mind was philosophical in nature, purely grounded in conjecture. According to Anderson (2004), “only in the last 125 years has it been realized that human cognition could be the subject of scientific study rather than philosophical speculation” (p. 6). It was not until the nineteenth century, in fact, that it was conceived that the study of the human mind could be grounded in scientific analysis (Anderson, 2004). This does not imply that cognitive psychology emerged right away as a major field of study in the 20th century. Instead, it would be half a century before the beginnings of such a movement would be seen. According to Brunning, Schraw, Norby, and Ronning (2004), from 1920 to 1970, associationism, the proposition that all consciousness can be explained by the association of sensory stimuli with responses, was the dominant theoretical psychological perspective in America. So it should come as no surprise that by the 1960s (Brunning et al., 2004), behaviorism, the theory that all behaviors are a result of conditioning, began to flourish. As a matter of fact, during the behaviorism movement, psychologists such as B. F. Skinner, who were involved in one branch of stimulus-response psychology called radical behaviorism, were successfully applying behavioral principles to a variety of settings, which included education (Brunning et al., 2004). It was right about this time that American psychologists, wanting to provide plausible explanations of human memory, were becoming increasingly disenchanted and bothered with the strict stimulus-response framework governing behaviorism (Brunning et al., 2004). The focus of behaviorism was external behavior. It was not concerned with the internal workings of human memory that cause this behavior. Behaviorism, for that reason, was ill-equipped to provide the necessary scaffolding needed to adequately describe human cognition. It is difficult to say exactly when the cognitive revolution overthrew behaviorism. 
We know that the cognitive psychology movement took form between 1950 and 1970, with significant markers, such as the journal Cognitive Psychology, first appearing in 1970 (Anderson, 2004). Another commonly cited milestone is the publication of Ulric Neisser’s Cognitive Psychology in 1967, which helped both legitimize the field and define it (Anderson, 2004; Brunning et al., 2004). Although cognitive psychology is still relatively young in American psychology, it has seen widespread growth. The field has far-reaching implications, touching many other disciplines, such as computer science and the study of artificial intelligence. The exception is education, as Brunning et al. (2004) point out, which has only recently become a target of exploration (e.g., Brandt, 2000; Bransford, Brown, & Cocking, 2000; Jonassen & Land, 2000; Kirschner, 2002; Marshall, 1996). Unlike earlier portrayals, cognitive psychology views humans as information processors. The field of cognitive psychology is dedicated to understanding how information is represented in the human mind. Thus, cognitive psychologists have developed numerous models aimed at adequately depicting the active nature of cognition. From this, cognitive learning models have been conceived, which attempt to explain the processes of learning (Brunning et al., 2004). In the next section, we begin our discussion of these models and their most common features.
The Human Information Processing Models

Human memory has traditionally been depicted in models composed of acquisition, storage, and retrieval stages, otherwise collectively referred to as information processing models (e.g., Atkinson & Shiffrin, 1968; Waugh & Norman, 1965). Although variations of these models have been abundant (e.g., Atkinson & Shiffrin, 1968; Norman, 1968; Shiffrin & Atkinson, 1969; Waugh & Norman, 1965), their common features have influenced a basic framework made popular over the past five decades called the modal model of memory (Healy & McNamara, 1996). The modal model offers a helpful way to describe the active processes that humans use to construct new knowledge. The model postulates that information is processed by means of sensory memory, short-term memory (STM) (or what is more commonly referred to as working memory), and long-term memory (LTM), all of which are discrete memory systems, each serving a specific function (Healy & McNamara, 1996). The model is typically compared metaphorically to the operations of a computer (Atkinson & Shiffrin, 1968; Shiffrin & Atkinson, 1969). Although variations exist as to how learning takes place with regard to the modal model, it is generally agreed that information is transferred between these memory stores using a variety of encoding and retrieval processes (see Brunning et al., 2004, for an in-depth discussion of these processes). A representation of the modal model of memory is depicted in Figure 1.

Figure 1. The modal model of memory (Adapted from Brunning et al., 2004, p. 16; Clark, 2003, pp. 54-55; and Mayer, 2005b, p. 37)

As the figure illustrates, for learning to take place, new information must first be brought into sensory memory. This is accomplished through what is seen and/or heard. While other senses obviously play a role, sensory memory is typically described in terms of the visual and auditory modalities, as these are the senses that have been studied the most with regard to cognition and
learning, for obvious reasons. Information found in sensory memory is then brought into STM, or working memory. Here, the information is further processed and potentially associated with relevant prior knowledge, until it is permanently encoded into LTM. As you may already have guessed, there is much more to the modal model than our rushed explanation. In fact, Figure 1 could well be argued to be a gross oversimplification of the actual mechanics behind human information processing. In the subsequent sections we introduce each of the memory stores comprising the modal model.
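As a rough illustration, the flow just described can be sketched in code. This is a toy sketch only: the class, method names, and capacity figures below are our own illustrative assumptions, not part of any formal model in the literature.

```python
# Toy sketch of the modal model: information flows from sensory memory
# into STM (working memory) and is then encoded into LTM. Capacity
# figures are illustrative assumptions.
from collections import deque

class ModalModel:
    STM_CAPACITY = 7  # Miller's "seven, plus or minus two" chunks

    def __init__(self):
        self.sensory = deque(maxlen=1)              # volatile buffer, latest stimulus only
        self.stm = deque(maxlen=self.STM_CAPACITY)  # oldest chunks are displaced when full
        self.ltm = set()                            # persistent store, no capacity limit

    def perceive(self, stimulus):
        """New information first enters sensory memory."""
        self.sensory.append(stimulus)

    def attend(self):
        """Attention moves the perceived stimulus into STM."""
        if self.sensory:
            self.stm.append(self.sensory.popleft())

    def encode(self):
        """Rehearsed STM contents are encoded into LTM."""
        self.ltm.update(self.stm)

m = ModalModel()
for item in ["eyes", "nose", "mouth"]:
    m.perceive(item)
    m.attend()
m.encode()
print(m.ltm)  # all three items reach LTM
```

The deliberate design choice here is that only the long-term store survives displacement: the sensory and short-term stores discard old items as new ones arrive, echoing the capacity and duration limits discussed below.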
An Introduction to Sensory Memory

We turn our attention to the first store of the modal model, sensory memory. We begin with a general discussion, describing sensory memory in terms of the visual and auditory modalities. We then present the significant role that perception and attention play in our ability to process information.
Visual and Auditory Modalities

Information first enters sensory memory by means of the senses. Sensory memory is considered to be a short-term buffer limited in both capacity and duration. Stimuli entering sensory memory are
stored in what are referred to as sensory registers (Brunning et al., 2004). It is in these registers that information is held until it can be processed. Sensory memory is typically described in terms of two modalities: iconic memory, which handles visual stimuli (see Sperling, 1960), and echoic memory, which handles auditory stimuli (see Darwin, Turvey, & Crowder, 1972). Overall, evidence in support of echoic memory is much less impressive than that for iconic memory (Pashler, 1998). Furthermore, echoic information is considered to be more persistent in sensory memory, lasting approximately one to two seconds (Pashler, 1998), than iconic information, which is believed to last approximately one to two tenths of a second (Ware, 2004). Practically speaking, however, this difference is negligible, and both are viewed as short-term buffers.
Perception and Attention

Perception is considered vital to the successful processing of incoming stimuli. Perceptual analysis occurs by means of attention and the recognition of patterns (Brunning et al., 2004). That is, we focus our attention on information that we consider meaningful. For example, we are constantly inundated with visual and auditory information. We are incapable of processing all the information that enters through our senses, so instead we focus only on information that we perceive as significant and useful (Brunning et al., 2004; Clark, 2003). Attention can, therefore, be defined as the cognitive process of selectively focusing on relevant information while, at the same time, ignoring information that is not relevant. We attend only to information that, in one way or another, stimulates us. Our level of attention can, therefore, vary depending on our state of arousal, which can be inadvertently affected by ailments such as fatigue and anxiety (Clark, 2003). What we already know, or what we are currently processing in STM, can also play a significant role in what sensory information we focus our attention on. Prior knowledge found in LTM has a direct influence on perception and pattern recognition (Adams, 1990), helping us make sense of what we see and/or hear. The significance of prior knowledge to sensory memory brings up an important point. Although the modal model is depicted as a system composed of discrete memory structures, each serving a specific function, these stores do not operate in isolation. Instead, it is believed that a substantial amount of interactivity occurs between the memory stores. What’s more, attention is considered a key contributor to this interactivity, helping us focus only on what is relevant and mitigating the extent to which STM becomes overwhelmed. (The importance of attention will become clearer as we progress through this chapter.) Unfortunately, research focused on attention has been controversial at best. In fact, research on the subject has yielded contradictory findings. Brunning et al. (2004) explain that conflicting findings frequently result from the highly sensitive nature of attention allocation to the type of task being carried out. Thus, a commonly accepted explanation of the mechanics behind attention has yet to be adopted (Clark, 2003). What is agreed upon, however, is that we are severely limited in the quantity of information that can be attended to at any given time (e.g., Friedman, Polson, & Dafoe, 1988; Spear & Riccio, 1994). Many of us have experienced this in our professional lives when trying to juggle multiple tasks at the same time, only to realize in the end that we carried out everything poorly. (The exception to this is the ability to perform tasks automatically, which we discuss shortly.) We think most effectively, and maximize our ability to learn, when we are selective in our attention, forwarding only pertinent information into STM. Doing so, as we will soon discuss, does not guarantee learning, however.
Short-term memory has its own share of limitations.
An Introduction to Short-Term Memory

In this section, we turn our attention to the second store of the modal model, short-term memory. We begin with a discussion of the capacity and duration limitations of this memory store. In doing so, we present chunking and automaticity, means by which we can increase the efficiency of STM. We then discuss working memory, a theoretical model explaining the information processing mechanics behind STM.
Capacity and Duration Limitations

Short-term memory has long been viewed as the store, limited in both capacity and duration, where information is processed for meaning. The earliest quantification of the capacity limitations of STM is the landmark article by Miller (1956), who proposed that an “informational bottleneck” (Summary section, ¶ 2) exists with regard to STM. Miller proposed that STM is limited to seven (plus or minus two) chunks at any one time. Think of a chunk as a meaningful grouping of information. This means that we can only handle five to nine units of information in memory at once. Fortunately, the capacity limit of STM can be stretched by increasing chunk size, thus providing an outlet by which to dramatically improve information processing (Miller, 1956). This is important because, although STM is limited in the number of chunks that can be held, these chunks are not restricted in their size. The example of chunking most commonly offered is probably that of telephone numbers. When initially learning a new local phone number, such as 5-6-8-0-8-5-4, each individual digit must be held as its own chunk because the number has not yet been committed to memory (i.e., encoded into LTM). Based on the format of telephone numbers, we know that we can chunk these digits into groups of three and four numbers. Using our example, we could consequently chunk the number as
568-0854. Furthermore, since it is a local number, we more than likely have already chunked the area code. By grouping individual units of information into larger blocks (as we did in our example, from seven chunks to two), we are capable of managing larger amounts of information in STM. There is yet another way to deal with the seven (plus or minus two) limitation. Automaticity is the belief that if a task is repeated enough times, it can be performed automatically, to the extent that it bypasses and no longer needs STM (Brunning et al., 2004; Clark, 2003). Performing such tasks with little or no conscious attention or thought alleviates the burden placed on STM. The freed resources can then be used for the processing of new information. An example is the decoding you are doing right now while reading this chapter. Another example commonly cited (e.g., Brunning et al., 2004) is that of driving. You might have, at some point in your life, witnessed a driver making a turn, which may have consisted of the driver shifting gears, speaking to passengers or on a cell phone, activating the turn signal, making the actual turn, and watching out for other drivers and pedestrians, all while obeying local traffic laws! Without the ability to perform tasks automatically, the capacity limitation of STM would severely cripple our ability to function. Much like attention, automaticity is considered essential to learning. Research quantifying the duration limit of STM also dates back half a century. Early studies found that information is incredibly volatile in STM. For example, Peterson and Peterson (1959) demonstrated in two experiments that information in STM is quickly forgotten, within about 20 seconds, if not rehearsed. (Take a moment to recall, if you can, the telephone number used in our chunking example. Do you remember it?)
Early psychologists believed this decay of information was due to the passage of time (Waugh & Norman, 1965). Subsequent studies (e.g., Waugh & Norman, 1965), however, have suggested that interference caused by later information (i.e., subsequent items in a series) is more likely the culprit of this information decay (Greene, 1992; Solso, 2001). In other words, information is easily forgotten if it is immediately followed by other information.
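The telephone-number chunking described above can be sketched in a few lines of code. This is an illustrative sketch only; the function name and the group sizes (three and four, following the format of local phone numbers) are our own.

```python
# Sketch of chunking: regrouping seven individual digits into two
# larger, meaningful chunks, as in the telephone-number example.
def chunk_digits(digits, sizes):
    """Group a digit string into consecutive chunks of the given sizes."""
    chunks, i = [], 0
    for size in sizes:
        chunks.append(digits[i:i + size])
        i += size
    return chunks

number = "5680854"                    # seven chunks if held digit by digit
grouped = chunk_digits(number, [3, 4])
print(grouped)  # ['568', '0854'] -- seven units become two chunks
```

Note that the total amount of information is unchanged; only the number of chunks competing for STM's limited slots drops from seven to two.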
Working Memory

As an assortment of operations were being attributed to STM, little was being offered as to how these operations occurred, and by the latter half of the 20th century, researchers were becoming dissatisfied with the idea of STM (Brunning et al., 2004). The complexities of STM eventually led cognitive theorists and psychologists to propose theoretical models explaining the information processing mechanics behind STM, or what would be called working memory. Although the distinction between STM and working memory varies, in the broadest sense, STM can be viewed as an abstract, theory-neutral premise explaining the temporary storage of information within behavioral psychology (e.g., Miller, 1956), whereas working memory is much more theoretical in nature, explaining the processing of information within cognitive psychology (e.g., Baddeley, 1986, 1998). Working memory is seen as the store responsible for active processing, unlike STM, which has been seen as a much more passive store responsible for the maintenance of information. Although various models have been created (e.g., MacDonald & Christiansen, 2002; Niaz & Logie, 1993), one of the most prominent contributors to the theory of working memory is Baddeley (1986, 1998, 2002), who proposed a three-system model of working memory composed of an executive control system, a phonological loop, and a visuospatial sketchpad. The executive control system manages the two subsystems, deciding what information to allow into working memory and what course of action to take to process information once it is in working memory. The
two subsystems, the visuospatial sketchpad and the phonological loop, hold and process information. Spatial information is handled by the visuospatial sketchpad, whereas acoustic and verbal information are handled by the phonological loop. According to Baddeley (1986, 1998, 2002), these three systems work collaboratively to process all information in working memory. Much like STM, working memory is believed to suffer from the same capacity and duration limitations. Cognitive psychologists see working memory as a limited-capacity information processing system that temporarily stores and processes information for encoding into LTM. It is limited to seven (plus or minus two) chunks of information at any one time. It is believed that as storage demands increase, available processing resources decrease (Niaz & Logie, 1993). This poses a significant challenge, as these limitations can seriously hamper learning. Although Baddeley’s model is seen as one of the most influential contributions to cognitive psychology, many feel that working memory is not a distinct store composed of separate components but is instead tightly integrated with and strongly influenced by LTM. As we discuss next, LTM plays a vital role in the processing of information within working memory.
An Introduction to Long-Term Memory

Thus far we have discussed the first two stores of the modal model: sensory memory and STM. (We will use the term “working memory” from this point forward.) In this section, we turn our attention to the third and final component of the model, long-term memory. We begin by presenting the different types of knowledge found in LTM. We then describe how this knowledge is stored and organized. We finish this section with a brief discussion of the connections between LTM and working memory.
Classifications of Knowledge

Long-term memory is the persistent store of all the information we have amassed over the course of a lifetime. We call it a persistent store because LTM is not susceptible to the capacity and duration limitations that plague sensory memory and working memory. Long-term memory is composed of three different types of knowledge: declarative, procedural (Anderson, 1983; Squire, 2008), and conditional (see Brunning et al., 2004). Declarative and procedural knowledge are the most commonly discussed in cognitive psychology, whereas conditional knowledge has not been as widely discussed (Brunning et al., 2004) but, as we will soon see, is just as important, if not more so. Declarative knowledge is best described as “what” we know about things (Brunning et al., 2004). It is factual knowledge that can be easily recalled and explicitly articulated. Declarative knowledge has been further categorized into semantic memory and episodic memory (Squire, 2008; Tulving, 2002). Semantic memory refers to generalized knowledge. It is composed of factual information and general knowledge that is unrelated to personal events in our lives. Examples include geographic knowledge we have about places we have visited (and not visited, for that matter). Episodic memory, on the other hand, is knowledge that is associated with personal events from our past. Episodic knowledge is autobiographical in nature (Tulving, 1983) and may date as far back as childhood or as recently as today. Retrieving episodic knowledge can help activate the reconstruction of personal events. The opposite is also true: the recollection of personal events can help in the retrieval of specific knowledge that might otherwise be inaccessible. Whereas declarative knowledge deals with factual information, procedural knowledge is the knowledge we have about “how” to do things (Brunning et al., 2004). Procedural knowledge is the implicit knowledge we have about the skills
we possess. Because it is implicit, procedural knowledge cannot be easily communicated. For example, if someone were to ask you to clearly explain how to ride a bicycle, could you do it? Could you provide an unambiguous explanation of how to play a musical instrument, such as the clarinet? When performing a task that requires procedural knowledge, we are not consciously aware of the individual steps that comprise it and, consequently, have difficulty putting into words how we go about performing such a task. Finally, conditional knowledge is best described as knowing “when” and “why” to use, and not use, declarative and procedural knowledge (Brunning et al., 2004). Conditional knowledge is knowledge about why we should use certain strategies, under what conditions to use them, and why we should use them over the other strategies we have. Conditional knowledge can easily be argued to be the most important of the three, for obvious reasons: it goes beyond facts and skills. Unfortunately, as you might suspect, it is also the one that individuals struggle with the most. That is, unlike learning factual information or a skill, learning to make decisions is incredibly difficult. Think about how you decide to use one strategy over another. How did you learn to do this?
Organization of Knowledge

A number of theories have been proposed that attempt to explain how knowledge is represented in LTM. These theories are best suited to the different classifications of knowledge we have discussed thus far. As Brunning et al. (2004) explain, the theories of concepts, propositions, and schemata best describe ways of representing declarative knowledge, most notably semantic memory, whereas the theories of productions and scripts best describe ways of representing procedural knowledge. While all of these theories have made significant contributions to the understanding of human memory and
cognition, we focus solely on schemata (sing., schema), mental frameworks explaining the means by which knowledge is organized in LTM. Schemata are cognitive structures we use to organize our knowledge so that we can understand the world around us. Schemata are representations of our prior knowledge and experience and are responsible for the encoding, storage, and retrieval of information (Brunning et al., 2004). Needless to say, schemata are viewed as essential to information processing. It is through schemata that our current knowledge influences new information we are trying to learn. Schemata are viewed as having relationships between variables called slots (Brunning et al., 2004). Think of these slots as “placeholders” that house information associated with the schema. As you might expect, a schema can be composed of an array of slots. The information stored in these slots controls what knowledge we encode, store, retrieve, and even attend to. For example, if you were to bring the schema of “human face” to mind, the schema’s slots would be instantiated with information that you have associated with the human face. More than likely, your array of slots would have the values “eyes”, “ears”, “nose”, “mouth”, and “hair”; you would also have slots for “eye color”, “hair color”, and so on. While this example is a gross oversimplification, it helps illustrate the concept. By bringing the schema of the human face into the foreground, you retrieve all the knowledge that you have associated with it. This also brings up an important point: schemata vary in complexity (Jonassen, Louise, & Grabowski, 1993). They can represent relatively simple concepts or very complex ones, to the extent of comprising a nearly infinite array of values and relationships. The complexity of a schema depends heavily on the personal experiences of the individual. The more an individual knows about something, the more complex his or her schema will be.
It is also believed that the more sophisticated the schema, the better the learning (Clark, 2003).
For example, a medical professional, such as a dermatologist, might have a much more sophisticated schema of the human face than most adults, while, at the same time, the average adult might have a much more complex schema of the human face than most young children.
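The slot structure just described can be sketched as a simple frame of named placeholders. This is an illustrative sketch only; the slot names and the three levels of expertise are hypothetical stand-ins for the child, adult, and dermatologist of the example above.

```python
# Sketch of a "human face" schema as a frame of slots (placeholders).
# Slots are uninstantiated (None) until experience fills them; a more
# experienced individual simply has more slots available.
child_face = {"eyes": None, "nose": None, "mouth": None}

# The average adult's schema adds further slots.
adult_face = dict(child_face,
                  **{"ears": None, "hair": None,
                     "eye color": None, "hair color": None})

# A dermatologist's schema is more elaborate still.
dermatologist_face = dict(adult_face,
                          **{"skin texture": None, "pigmentation": None})

# Complexity grows with experience, measured here as slot count.
for name, schema in [("child", child_face), ("adult", adult_face),
                     ("dermatologist", dermatologist_face)]:
    print(name, len(schema))
```

The point of the sketch is only the ordering: each more experienced schema strictly contains the slots of the less experienced one, mirroring how expertise elaborates, rather than replaces, prior structure.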
Working Memory and Long-Term Memory Interactions

The complexity of schemata also has an impact on working memory, since LTM and working memory interact with one another. Recall our discussion of the chunking of information. (Were you able to remember that telephone number? If so, can you recall it again?) Although working memory is limited in the number of chunks that can be held, chunks are not restricted in their size. This has direct applicability to schemata: the more sophisticated our schemata, the more we can chunk and temporarily store in working memory. Everything you know about the human face could be retrieved into working memory as a single chunk of information, making the best use of working memory’s limited resources. Automaticity also plays a significant role in the interactions between LTM and working memory. We discussed that if a task is repeated enough times, it can be performed automatically, requiring very little from working memory’s already limited resources. This is applicable to schemata as well, freeing resources that can be used in the processing of new information. All in all, schema theory is one of the more compelling explanations to have emerged. This popularity stems from the fact that the theory views the encoding of information as a constructive process (Brunning et al., 2004). Information is stored in LTM as representations of knowledge. For information to be retrieved, it must be re-created based on the schemata instantiated at the time (Brunning et al., 2004). In attempting to understand how knowledge is organized in LTM, as well as how LTM and working memory interact
with one another, we can come closer to understanding the complex nature of learning. In the next section, we briefly introduce metacognition, a concept just as important as attention to human information processing and learning.
An Introduction to Metacognition

Spawning a wealth of research since it was first proposed nearly four decades ago, metacognition refers to an individual’s awareness of his or her own cognitive processes and strategies (e.g., Flavell, 1979). Simply put, metacognition is “thinking about thinking” (Brunning et al., 2004; Clark, 2003; Reid & Lienemann, 2006). It is the means by which we know what we know and, more importantly, what we do not know. Metacognition makes us aware of our own cognitive processes and suggests strategies to help us with learning. Metacognition is best explained with an example. Imagine a college student studying for a final exam. During the final class lecture, she asks the professor if the final exam will be cumulative, composed of all the material taught in the course to date. She has done well during the semester and, in the days prior to the exam, chooses to study only the material that she has struggled with. Another example is the rate at which you are reading this chapter. Depending on your familiarity with the material, you may be reading each section carefully, highlighting and/or taking notes, or you may be quickly reading through each section, skipping passages you are comfortable with. Both of our examples show how metacognition is important in determining what we know and do not know. It allows us to maximize our time and focus only on what we believe is important for our own learning. Metacognition has traditionally been described as encompassing both knowledge and regulation (e.g., Flavell, 1979). Metacognitive knowledge is what we know about cognition (Brown, 1987). It is more than knowing factual information. It is knowledge about our own cognitive processes that
can be used to help us in learning. Metacognitive knowledge is knowledge about what we need to do to accomplish our goal of learning and what strategies we need to employ to meet that goal. It is knowledge and beliefs about ourselves. Not surprisingly, metacognitive knowledge is described in terms of three components: declarative, procedural, and conditional knowledge (Brown, 1987; Jacobs & Paris, 1987). In the case of our college student, she might decide to study at the local library because she knows that she will be better able to focus on the material and, consequently, learn more in a shorter period of time than if she were to go home and be tempted by the many distractions she has become accustomed to. Metacognitive regulation, on the other hand, is the means by which we control cognition (Brown, 1987). These are the strategies that we use to manage our cognitive activities. Metacognitive regulation allows us to plan appropriate strategies and allocate resources, monitor our cognitive activities, and reflect on the outcomes of those activities, making changes if necessary. Our college student might use a number of strategies to help her learn the material while at the library. These strategies will help her gauge whether or not she has learned the material well enough to earn a satisfactory score on the final exam. She may decide to self-test after reading each section by mentally asking herself questions about what she has just read. If she answers the questions to her own satisfaction, she may decide to continue to the next section; otherwise, she may instead decide to read the material a second time. Without metacognition, our ability to function, let alone learn, becomes crippled. We know from research on the subject that individuals of equal intellect can experience varying levels of success in learning based on their metacognitive skills (Clark, 2003).
As a result, individuals who have problems with metacognition, such as those with LDs, cannot actively engage in the task of learning (Reid & Lienemann, 2006), placing these individuals at a severe disadvantage. Further
compounding matters, research has told us that metacognition develops later in life (Alexander, Carr, & Schwanenflugel, 1995). We are simply not born with a repertoire of metacognitive skills; instead, these skills develop as we mature and gain experience. There is a great deal more to metacognition than we have been able to present in this chapter. However, we trust that our discussion has painted a picture of the undeniable importance metacognition serves in human information processing and learning. This section also completes our discussion of the modal model of memory. Next, we end this chapter with a discussion of the ways in which we can increase learning based on what we have presented thus far.
Conclusion

Ways to Increase Learning

As we have learned, human information processing is rather constrained. According to Brunning et al. (2004), there are three ways in which learning is hindered. First, working memory is something of a contradiction in terms: its limitations cause it to be a bottleneck (Miller, 1956), yet it is also the conduit for learning. This is a problem because the acquisition of new knowledge relies so heavily on the processing and storage capabilities of working memory (Low & Sweller, 2005; Sweller & Chandler, 1994). New information may potentially overload working memory capacity and subsequently encumber learning (Kalyuga, Chandler, & Sweller, 1999; Sweller, van Merrienboer, & Paas, 1998). Second, the organization of knowledge in LTM may also be problematic, depending on the construction of schemata. Prior knowledge retrieved into working memory that is not efficiently chunked can severely limit our ability to learn. Finally, a lack of metacognitive ability can prevent us from using
our memory as efficiently as possible, also seriously hampering learning. Despite these cognitive roadblocks, humans are obviously fully capable of learning. According to Brunning et al. (2004), there are three ways to improve learning: increase the amount of available attention, decrease the amount of attention consumed by each task, and limit attention to only the important and relevant information to be learned. It should come as no surprise that attention is the means by which we can increase our learning potential. Attention allows us to focus only on relevant information in our environment. It also helps us in the retrieval of relevant prior knowledge found in LTM. Although attention can be, and has been, viewed in a number of ways, from a learning standpoint attention is typically described in two forms: selective and divided. Selective attention is the process of selecting from among many potentially available stimuli while at the same time screening out other stimuli (Pashler, 1998). The most common example given for selective attention is that of a dinner party, in which focus is placed on a single voice, or a single conversation, among the many voices or conversations being held at the same time. Divided attention, on the other hand, is just the opposite; as the name implies, it is the selection of many stimuli all at once (Pashler, 1998). Needless to say, we can maximize our chances of learning by avoiding divided attention and using selective attention as much as possible. This is easier said than done, however. As Clark (2003) points out, when focused on new material, learners must be able to handle both selective and divided attention. She further explains that success at managing divided attention depends on a number of factors. First, the difficulty of the task plays a significant role in managing divided attention. We can handle more tasks at once if they are simple. The opposite is true as well.
The more complex the tasks, the less likely it is that we can perform more than one at a time. The exception to this is automaticity. We know that tasks which have been automated require very few resources from working memory. Automaticity can be acquired through extensive repetition and practice. If tasks have not been automated, however, complex tasks should be broken down into smaller chunks whenever possible. A complex task separated into smaller, simpler tasks can be more easily learned (Brunning et al., 2004). Second, experience level plays a critical role. Individuals with significant prior knowledge use far fewer attentional resources than those who have little knowledge of the instructional material. The same can be said of novice learners when the material is complex in nature. Fortunately, sound instructional practices can help. Clark (2003) notes that strategies (cues) such as topic headers, learning objectives, and support questions can be used to steer attention toward important instructional material, lessening divided attention and maximizing selective attention. (This is called the signaling principle, and we discuss it in the third chapter of this introduction.) Finally, the means by which the instructional material is presented is very important. There are known benefits to presenting instructional material both visually and auditorily. Presenting information in dual modalities makes the best use of the limited resources of working memory (Mayer & Moreno, 2003) and thus mitigates cognitive overload. Learning theories have emerged from the study of working memory. These theories offer best practices in the design of instructional material. In the next two chapters we expound upon this in significant detail.
Summary

This chapter is the first of three serving as the introduction to this handbook, which addresses the relationship between human cognition and ATs
and their design for individuals with LDs. In this chapter we introduced the most popular information processing model, the modal model of memory, a helpful way in which to explain the active processes involved in the construction of new knowledge. The modal model is composed of three memory stores: sensory memory, STM or working memory, and LTM, each of which serves a specific function but, as we have learned, is far from distinct from the others. For learning to take place, we know that new information must first be brought into sensory memory. Information, presented as words and pictures, enters sensory memory through the eyes and ears. These visual and auditory stimuli are held for a brief period of time in the sensory registers. Depending on where attention is focused, a subset of the information in the registers is selected and retrieved into working memory. Once again, we cannot overemphasize the importance of attention. By focusing attention on what is significant, only relevant information is moved into working memory. This not only helps increase the chances of learning new information but, at the same time, helps mitigate the cognitive load on the already taxed resources of working memory. Once in working memory, the information is temporarily stored for further processing. To help facilitate learning, prior knowledge in LTM that is relevant to the new information currently stored in working memory is stimulated. By stimulated we mean that schemata relevant to the new information are activated. This activated prior knowledge is temporarily retrieved into working memory, where it is integrated with the new information. How well the schemata have been chunked determines the degree to which prior knowledge taxes the limited resources of working memory. The more efficiently prior knowledge is organized, the more resources are left on hand in working memory for use in learning new information.
While in working memory, new information must be rehearsed if it is to have any chance of being encoded in LTM. Rehearsal helps in the integration of new information with prior knowledge. The more new information in working memory is rehearsed, the more likely it is to be encoded within LTM. However, information which makes its way into LTM must still be retrieved if it is to be of any use, and retrieval can still overburden working memory. Automaticity, which as we have already discussed is believed to use very little of working memory's limited resources, can aid in mitigating this dilemma. However, extensive practice is needed before a task becomes automatic. Meanwhile, amid all this activity, metacognition is at play, helping to differentiate between what is known and not known, all the while monitoring cognitive activities. While this explanation is much more detailed than our original, rushed presentation accompanying Figure 1, we have only begun to scratch the surface of research on information processing models, specifically the modal model of memory. For example, this chapter did not include a discussion of motivational factors in learning, nor of encoding and retrieval processes. Instead, our focus has been to strengthen the stance that we hope was made clear earlier in this chapter—understanding the latest advancements in cutting-edge technology is simply not enough. Rather, such knowledge must be coupled with an understanding of the human information processing system. There must be a clear connection between design principles and the means by which humans acquire, store, and retrieve information if the full potential of technology for learning is to be realized, particularly when it comes to the design of ATs for individuals with LDs.
References

Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: The MIT Press.

Alexander, J. M., Carr, M., & Schwanenflugel, P. J. (1995). Development of metacognition in gifted children: Directions for future research. Developmental Review, 15(1), 1–37. doi:10.1006/drev.1995.1001

Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Anderson, J. R. (2004). Cognitive psychology and its implications (6th ed.). New York: Worth Publishers.

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In Spence, K. W., & Spence, J. T. (Eds.), The psychology of learning and motivation: Advances in research and theory (Vol. 2, pp. 89–195). Oxford, UK: Academic Press. doi:10.1016/S0079-7421(08)60422-3

Baddeley, A. D. (1986). Working memory. New York: Oxford University Press.

Baddeley, A. D. (1998). Human memory: Theory and practice. Boston, MA: Allyn and Bacon.

Baddeley, A. D. (2002). Is working memory still working? European Psychologist, 7(2), 85–97. doi:10.1027//1016-9040.7.2.85

Brandt, R. S. (Ed.). (2000). Education in a new era. Alexandria, VA: Association for Supervision & Curriculum Development.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school (Expanded ed.). Washington, DC: National Academies Press.

Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In Weinert, F. E., & Kluwe, R. H. (Eds.), Metacognition, motivation, and understanding (pp. 65–116). Hillsdale, NJ: Lawrence Erlbaum Associates.
Brunning, R. H., Schraw, G. J., Norby, M. M., & Ronning, R. R. (2004). Cognitive psychology and instruction. Upper Saddle River, NJ: Pearson/Merrill/Prentice Hall.

Clark, R. (2003). Building expertise: Cognitive methods for training and performance improvement (2nd ed.). Washington, DC: International Society for Performance Improvement.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press.

Darwin, C. J., Turvey, M. T., & Crowder, R. G. (1972). An auditory analogue of the Sperling partial report procedure: Evidence for brief auditory storage. Cognitive Psychology, 3, 255–267. doi:10.1016/0010-0285(72)90007-2

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. The American Psychologist, 34, 906–911. doi:10.1037/0003-066X.34.10.906

Fletcher, J. M., Lyon, G. R., Fuchs, L. S., & Barnes, M. A. (2006). Learning disabilities: From identification to intervention. New York: The Guilford Press.

Friedman, A., Polson, M. C., & Dafoe, C. G. (1988). Dividing attention between the hands and the head: Performance trade-offs between rapid finger tapping and verbal memory. Journal of Experimental Psychology: Human Perception and Performance, 14, 60–68. doi:10.1037/0096-1523.14.1.60

Greene, R. L. (1992). Human memory: Paradigms and paradoxes. Hillsdale, NJ: Lawrence Erlbaum.

Healy, A. F., & McNamara, D. S. (1996). Verbal learning and memory: Does the modal model still work? Annual Review of Psychology, 47, 143–172. doi:10.1146/annurev.psych.47.1.143
Jacobs, J. E., & Paris, S. G. (1987). Children's metacognition about reading: Issues in definition, measurement, and instruction. Educational Psychologist, 22(3), 255–278. doi:10.1207/s15326985ep2203&4_4

Jonassen, D. H., & Land, S. (Eds.). (2000). Theoretical foundations of learning environments. Mahwah, NJ: Lawrence Erlbaum.

Jonassen, D. H., & Grabowski, B. L. (1993). Handbook of individual differences, learning, and instruction. Mahwah, NJ: Lawrence Erlbaum.

Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13, 351–371. doi:10.1002/(SICI)1099-0720(199908)13:43.0.CO;2-6

Kirschner, P. A. (2002). Cognitive load theory: Implications of cognitive load theory on the design of learning. Learning and Instruction, 12(1), 1–10. doi:10.1016/S0959-4752(01)00014-7

Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 147–158). New York: Cambridge University Press.

MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109(1), 35–54. doi:10.1037/0033-295X.109.1.35

Marshall, H. H. (1996). Recent and emerging theoretical frameworks for research on classroom learning: Contributions and limitations. Educational Psychologist, 31.

Mayer, R. E. (2005a). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 201–212). New York: Cambridge University Press.

Mayer, R. E. (Ed.). (2005b). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52. doi:10.1207/S15326985EP3801_6

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97. doi:10.1037/h0043158

National Joint Committee on Learning Disabilities. (1991). Learning disabilities: Issues on definition [Electronic version]. Asha, 33, 18–20. Retrieved April 29, 2009, from www.ldonline.org/?module=uploads&func=download&fileId=514

Niaz, M., & Logie, R. H. (1993). Working memory, mental capacity and science education: Towards an understanding of the 'working memory overload hypothesis'. Oxford Review of Education, 19(4), 511–525. doi:10.1080/0305498930190407

Pashler, H. E. (1998). The psychology of attention. Cambridge, MA: The MIT Press.

Peterson, L., & Peterson, M. J. (1959). Short-term retention of individual verbal items. Journal of Experimental Psychology, 58(3), 193–198. doi:10.1037/h0049234

Reid, R., & Lienemann, T. O. (2006). Strategy instruction for students with learning disabilities. New York: The Guilford Press.

Rogers, F. K. (1979). Parenting the difficult child. Radnor, PA: Chilton Book Co.

Solso, R. L. (2001). Cognitive psychology (6th ed.). Needham Heights, MA: Allyn & Bacon.

Spear, N. E., & Riccio, D. C. (1994). Memory: Phenomena and principles. Boston, MA: Allyn & Bacon.

Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs: General and Applied, 74(11), 1–29.

Squire, L. R. (2008). Memory and brain. New York: Oxford University Press.

Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12(3), 185–233. doi:10.1207/s1532690xci1203_1

Sweller, J., van Merrienboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296. doi:10.1023/A:1022193728205

Tulving, E. (1983). Elements of episodic memory. Oxford, UK: Oxford University Press.

Tulving, E. (2002). Episodic memory: From mind to brain. Annual Review of Psychology, 53, 1–25. doi:10.1146/annurev.psych.53.100901.135114

United States Congress. (1998). Assistive Technology Act of 1998. Retrieved from http://section508.gov/docs/AT1998.html

Ware, C. (2004). Information visualization: Perception for design (2nd ed.). San Francisco, CA: Morgan Kaufmann Publishers.

Waugh, N. C., & Norman, D. A. (1965). Primary memory. Psychological Review, 72(2), 89–102. doi:10.1037/h0021797

Wong, B. Y. L., Graham, L., Hoskyn, M., & Berman, J. (1996). The ABCs of learning disabilities (2nd ed.). New York: Elsevier/Academic Press.

Additional Reading

Anderson, J. R. (2004). Cognitive psychology and its implications (6th ed.). New York: Worth Publishers.
Baddeley, A. D. (1998). Human memory: Theory and practice. Boston, MA: Allyn and Bacon.

Brunning, R. H., Schraw, G. J., Norby, M. M., & Ronning, R. R. (2004). Cognitive psychology and instruction. Upper Saddle River, NJ: Pearson/Merrill/Prentice Hall.

Clark, R. (2003). Building expertise: Cognitive methods for training and performance improvement (2nd ed.). Washington, DC: International Society for Performance Improvement.

Greene, R. L. (1992). Human memory: Paradigms and paradoxes. Hillsdale, NJ: Lawrence Erlbaum.

Mayer, R. E. (Ed.). (2005). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.

Pashler, H. E. (1998). The psychology of attention. Cambridge, MA: The MIT Press.

Reid, R., & Lienemann, T. O. (2006). Strategy instruction for students with learning disabilities. New York: The Guilford Press.

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295–312. doi:10.1016/0959-4752(94)90003-5
Key Terms and Definitions

Associationism: The theory that all consciousness can be explained by the association of sensory stimuli with responses.

Assistive Technology: Any form of technology which can be used to aid individuals with disabilities.

Attention: The cognitive process involved in selectively focusing on relevant information while ignoring irrelevant information.
Automaticity: The automatic execution of cognitive skills requiring little conscious attention; lessens the need for resources from working memory.

Behaviorism: The theory that all objectively observable behaviors are the result of conditioning; the theory disregards mental activities.

Chunk: A meaningful grouping of information; originally proposed in the 1956 paper "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" by cognitive psychologist George A. Miller as a means to quantify the capacity limitations of working memory.

Cognitive Disability: Any disorder which affects mental processing.

Cognitive Load Theory: A theory proposed by John Sweller and his colleagues focused on the limitations of working memory during instruction.

Cognitive Psychology: A branch of psychology dedicated to the study of the human mind, focused on the acquisition, processing, and storage of information.

Cognitive Theory of Multimedia Learning: A theory credited to Richard E. Mayer and his colleagues focused on best practices in the use of visual and auditory information in multimedia-based instruction.

Conditional Knowledge: A classification of knowledge found in long-term memory; can best be described as knowing "when" and "why" to use declarative and procedural knowledge, and when not to (Brunning et al., 2004).

Declarative Knowledge: A classification of knowledge found in long-term memory (Anderson, 1983; Squire, 2008); deals specifically with factual information and can best be described as knowing "what" (Brunning et al., 2004).

Divided Attention: The selection of many stimuli all at once (Pashler, 1998).

Echoic Memory: The temporary storage of auditory stimuli in sensory memory.
Empiricism: The belief that knowledge originates from experience (Anderson, 2004).

Episodic Memory: A categorization of declarative knowledge; knowledge associated with personal, autobiographical events (Tulving, 1983).

Executive Control System: One of three components comprising the model of working memory; manages the visuospatial sketchpad and the phonological loop in addition to controlling the flow of information in working memory (Baddeley, 1986, 1998, 2002).

Iconic Memory: The temporary storage of visual stimuli in sensory memory.

Information Processing Models: Models of human memory portraying the acquisition, storage, and retrieval of information (e.g., Atkinson & Shiffrin, 1968; Waugh & Norman, 1965).

Learner-Centered: An instructional approach which places focus on the manner in which technologies can be used to promote human cognition (Mayer, 2005b).

Learning Disability: A heterogeneous group of disorders manifested by significant difficulties in the acquisition and use of listening, speaking, reading, writing, reasoning, or mathematical abilities (National Joint Committee on Learning Disabilities, 1991).

Long-Term Memory (LTM): A memory store responsible for the persistent storage of the personal experiences, general and factual knowledge, and skills accumulated over the course of a lifetime.

Metacognition: An individual's awareness of his or her own cognitive processes and strategies (e.g., Flavell, 1979); commonly referred to as "thinking about thinking" (Brunning et al., 2004; Clark, 2003; Reid & Lienemann, 2006).

Metacognitive Knowledge: A dimension of metacognition; refers to what we know about our own cognitive processes (Brown, 1987).
Metacognitive Regulation: A dimension of metacognition; the means by which we regulate our cognition (Brown, 1987).

Modal Model of Memory: A widely accepted representation of the information processing model, typically referred to as the classic Atkinson and Shiffrin (1968) model; depicted as three distinct memory stores: sensory memory, STM or working memory, and long-term memory.

Perception: The process by which we recognize, store, and derive meaning from sensory information.

Phonological Loop: One of three components comprising the model of working memory; responsible for the management of acoustic and verbal information (Baddeley, 1986, 1998, 2002).

Prior Knowledge: Information already learned and held in long-term memory.

Procedural Knowledge: A classification of knowledge found in long-term memory (Anderson, 1983; Squire, 2008); deals specifically with skills and can best be described as knowing "how" (Brunning et al., 2004).

Radical Behaviorism: A branch of stimulus-response psychology credited to American psychologist B. F. Skinner, which postulates that all measurable behavior can be predicted and, consequently, controlled.

Schemata (sing., Schema): A popular explanation of how knowledge is organized in long-term memory.

Selective Attention: The process of selecting from among many potentially available stimuli while at the same time screening out other stimuli (Pashler, 1998).

Semantic Memory: A categorization of declarative knowledge; refers to factual and general knowledge unrelated to personal, autobiographical events.

Sensory Memory: A temporary memory store responsible for the handling of visual and auditory sensory information.

Sensory Register: A temporary buffer responsible for the storage of sensory stimuli (Brunning et al., 2004).
Short-Term Memory (STM): A memory store responsible for the temporary, passive storage of information; cognitive psychologists have since adopted a much more active view: working memory.

Signaling Principle: An instructional principle proposing that learners learn more when cues are added to highlight the organization of the essential material (Mayer, 2005a).

Slot: The core component of schemata; slots are considered "placeholders" containing specific information about the concept represented by the schema (Brunning et al., 2004).
Technology-Centered: An approach which places focus on the capabilities of the technology and not necessarily on the capabilities of the learner (Mayer, 2005b).

Visuospatial Sketchpad: One of three components comprising the model of working memory; responsible for the handling of spatial information (Baddeley, 1986, 1998, 2002).

Working Memory: A temporary memory store responsible for the active processing of information; the most widely accepted model is that proposed by cognitive psychologist Alan D. Baddeley.
Chapter 2
Managing Cognitive Load in the Design of Assistive Technology for Those with Learning Disabilities Boaventura DaCosta Solers Research Group, USA Soonhwa Seok Center for Research on Learning - eLearning Design Lab, University of Kansas, USA
Abstract

This is the second of three chapters serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies and their design for individuals with cognitive disabilities. In this chapter the authors present strategies to manage cognitive load in the design of instructional materials for those with learning disabilities. The authors introduce cognitive load theory, which proposes a set of instructional principles grounded in human information processing research that can be leveraged in the creation of efficient and effective learning environments. They attempt to separate conjecture and speculation from empirically-based study and consolidate more than twenty-five years of research to highlight the best ways in which to increase learning. Altogether, the authors affirm the approach discussed in the last chapter—that technology for learning should be created with an understanding of design principles empirically supported by how the human mind works, particularly when it comes to the design of assistive technologies for individuals with learning disabilities.

DOI: 10.4018/978-1-61520-817-3.ch002
Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Introduction

Cognitive Load, Assistive Technology, and Those with Learning Disabilities

In the last chapter, we learned that human information processing is constrained in both capacity and duration. We explained how working memory, a system that temporarily stores and manages information for performing complex cognitive tasks, is a contradiction in terms. Its limitations cause it to be a bottleneck, restricted to seven (plus or minus two) chunks of information at any given time (Miller, 1956); yet, it is also the conduit for learning. This is a problem because the acquisition of new knowledge relies so heavily on the processing and storage capabilities of working memory (Low & Sweller, 2005; Sweller & Chandler, 1994). New information may potentially overload working memory capacity and subsequently encumber learning (Kalyuga, Chandler, & Sweller, 1999; Sweller, van Merrienboer, & Paas, 1998). While we are all confronted by these information processing roadblocks, individuals with cognitive disabilities are at particular risk. There has been considerable research focused on working memory and children with learning disabilities (LDs). Generally speaking, research on the matter suggests that children with LDs have difficulty with working memory in areas such as reading and mathematics (e.g., Bull, Johnston, & Roy, 1999; de Jong, 1998; Hitch & McLean, 1991; Keeler & Swanson, 2001; McLean & Hitch, 1999; Passolunghi & Siegel, 2004). For example, those with reading disabilities are not simply poor readers; they have less working memory capacity than more skilled readers (Swanson & Siegel, 2001). Fortunately, there has been considerable research in the study of cognitive load with regard to working memory. Even though some researchers
have examined cognitive load under the premise of the working memory overload hypothesis (e.g., Niaz & Logie, 1993), the most predominant work on cognitive load can be attributed to cognitive load theory (CLT) (e.g., Chandler & Sweller, 1991; Kalyuga, Chandler, Tuovinen, & Sweller, 2001; Mousavi, Low, & Sweller, 1995; Sweller, 1999; Sweller et al., 1998)—a learning theory focused on the limitations of working memory during instruction. This is the second of three chapters serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies (ATs) and their design for individuals with cognitive disabilities. In this chapter we present strategies to manage cognitive load in the design of instructional materials for those with LDs. We introduce CLT, which proposes a set of instructional principles grounded in human information processing research that can be leveraged in the creation of efficient and effective instructional material. We attempt to separate conjecture and speculation from empirically-based study and consolidate more than twenty-five years of research to highlight the best ways in which to increase learning. This chapter also serves as scaffolding for the next chapter, where we present the cognitive theory of multimedia learning (CTML), a learning theory which focuses on best practices in the use of visual and auditory information in multimedia-based instruction. Altogether, we affirm the approach discussed in the last chapter—that technology for learning should be created with an understanding of design principles empirically supported by how the human mind works, particularly when it comes to the design of ATs for individuals with LDs. Before we present these instructional principles, we begin this chapter with an in-depth discussion of CLT and its history.
Background

What Is Cognitive Load Theory?

Originating in the 1980s with John Sweller and undergoing substantial growth in later decades through the work of researchers from around the globe (Paas, Renkl, & Sweller, 2003) (see Clark, Nguyen, & Sweller, 2006, for an in-depth historical account), CLT draws on aspects of human cognitive architecture and information structure to provide instructional principles that best facilitate learning given the limitations of working memory (Pollock, Chandler, & Sweller, 2002). Put simply, CLT is a learning theory. It is grounded in the belief that instructional design needs to be driven by an understanding of human cognition, because improperly presented instructional material may impose too great a burden on working memory, subsequently leading to a higher information processing load on the already limited cognitive resources of working memory (Sweller et al., 1998). Without knowledge of the relevant aspects of human cognitive structures and their organization into a coherent cognitive architecture, it is thought that the efficiency of instructional material is likely to suffer. Consequently, information should be structured to reduce preventable load on working memory (Kalyuga, Chandler, & Sweller, 1998; Sweller, 1999; Sweller et al., 1998) by designing instructional material in such a way that it is processed more easily in working memory (Chandler & Sweller, 1991). Cognitive load theory has thereby been used to bridge the gap between instructional principles and knowledge of human cognition (Sweller, 2005a). This is what sets CLT apart from other theories. While it may be easy to find information on instructional design, such information may be based solely on conjecture and speculation. Cognitive load theory, on the other hand, is empirically grounded in research from the field of cognitive psychology. Research, we might add, that can be found in dozens of articles, most
of which have been published in peer-reviewed journals that have been available for scrutiny by researchers, educators, and practitioners for the last 25 years. Since the theory is firmly grounded in the study of human cognition, it rests on a number of assumptions: cognitive tasks are carried out in working memory (Shiffrin & Atkinson, 1969); working memory is limited in capacity (Baddeley, 1986, 1998) and is only capable of processing a finite amount of information (recall our discussion of chunking in the last chapter) at any one time (Miller, 1956); working memory is composed of both visual and auditory information processing channels (Paivio, 1990); the efficiency and unlimited capacity of long-term memory (LTM) to hold knowledge can be leveraged to overcome working memory capacity limitations (Pollock et al., 2002); schemas held in LTM, which allow multiple elements of information to be categorized as a single element (Sweller, 2005a), require less working memory capacity (Pollock et al., 2002); and cognitive load can be reduced through automation, which allows schemas to be processed automatically rather than consciously (Kotovsky, Hayes, & Simon, 1985; Schneider & Shiffrin, 1977; Shiffrin & Schneider, 1977). While the goal of CLT is the reduction of preventable load on working memory through the structuring of information (Kalyuga, Chandler, & Sweller, 1998; Sweller, 1999; Sweller et al., 1998), not all types of cognitive load are bad. As we will discuss next, some cognitive load can be beneficial (Clark et al., 2006).
The Three Types of Cognitive Load

Cognitive load theory distinguishes between three types of load: (a) extraneous, (b) intrinsic (Pollock et al., 2002; Sweller, 2005a; Sweller & Chandler, 1994; Sweller et al., 1998), and (c) germane (Sweller, 2005a; Sweller et al., 1998).
Each of these must be accounted for in the design of instructional material if the instruction is to be efficient and effective.
extraneous Load Extraneous load (also called irrelevant load) is caused in situations where instructional material is created using instructional design that ignores the limitations of working memory and consequently fails to focus working memory resources on schema construction and automation (Sweller, 2005a). As the name implies, extraneous load is irrelevant to the learning goals at hand (Clark et al., 2006). In fact, extraneous load is detrimental to learning. It can result in longer learning times, unsatisfactory learning outcomes, or both (Clark et al., 2006). Extraneous load is the worst of the three types of cognitive load because it wastefully consumes the already limited resources of working memory. Fortunately, extraneous load is considered to be under the control of the instructional designer (Pollock et al., 2002) and is therefore avoidable if proper steps are taken. There has been extensive research on extraneous load (van Merrienboer & Ayres, 2005). From this research, a number of effects have emerged to include worked examples (e.g., Cooper & Sweller, 1987; Kalyuga et al., 2001; Stark, Mandl, Gruber, & Renkl, 2002; Van Gerven, Paas, Van Merrienboer, & Schmidt, 2002), split-attention (e.g., Sweller, Chandler, Tierney, & Cooper, 1990), modality (e.g., Tindall-Ford, Chandler, & Sweller, 1997), and redundancy (e.g., Chandler & Sweller, 1991). These effects can yield better schema construction and a decrease in extraneous load (van Merrienboer & Ayres, 2005) when applied correctly as instructional principles. For example, the worked examples principle replaces conventional practice problems with worked-out examples, reducing extraneous load by focusing the learner’s attention on problem states and useful solution steps. The split-attention principle replaces multiple sources of information
with a single source, thus reducing extraneous load because learners do not need to mentally integrate multiple sources of information. The modality principle reduces extraneous load by using both the visual and auditory processors of working memory. Finally, the redundancy principle replaces multimodal sources of information that are self-contained (i.e., can be understood in isolation) with a single source of information, reducing extraneous load typically caused by the unnecessary processing of superfluous information (van Merrienboer & Ayres, 2005).
Intrinsic Load

Unlike extraneous load, intrinsic load is caused by the natural complexity of the information that must be processed. Intrinsic load is not under the control of the instructional designer, but instead is determined by levels of element interactivity (Sweller, 2005a). Think of an element as a single unit of information to be processed in working memory. To an extent, this is similar to the chunking concept that we discussed in the last chapter. These elements may interact with one another at different levels of complexity. For instance, some information can be learned individually, element by element (Pollock et al., 2002). Sweller (2005a) has provided the example of learning nouns of a foreign language to demonstrate this idea. Each noun translation can be learned independently of other translations. For example, the noun "cat" can be learned independently of the noun "dog." Element interactivity in this case is low because only a limited number of elements need to be processed in working memory at any given time to learn the information. As a result, cognitive load on working memory is also low (Pollock et al., 2002; Sweller, 2005a). Some information, however, cannot be learned in isolation, but instead must be learned in the context of other material. In other words, meaningful learning of an element cannot occur without simultaneously learning other elements (Sweller, 2005a). Take the construction of sentences, an example provided by Clark et al. (2006). The composition of sentences requires more than the mere understanding of words; there are grammar and syntax rules that must be taken into consideration as well. Unlike the learning of words, which may be done in isolation from one another, the composition of sentences is a much more complex task, requiring the juggling of multiple elements simultaneously. Pollock et al. (2002) provide the example of understanding an electric circuit to further demonstrate this idea. Components of a circuit may be learned in isolation from one another; however, an understanding of the entire electrical circuit cannot be achieved without simultaneously considering several components and their relations. Element interactivity in this case is high because many elements must be processed in working memory simultaneously. As a result, cognitive load on working memory is also high (Pollock et al., 2002; Sweller, 2005a). This is why complex instructional material is difficult to comprehend: the high element interactivity imposes a heavy cognitive load on working memory (Chandler & Sweller, 1996; Marcus, Cooper, & Sweller, 1996; Sweller & Chandler, 1994). Intrinsic load cannot be avoided or changed (Clark et al., 2006); however, it can be managed. Research has investigated the effects of various instructional methods on intrinsic load, although not to the extent found with extraneous load. From this research, the pre-training and segmenting effects have emerged. These effects can be used to manage intrinsic load on the limited resources of working memory when applied correctly as instructional principles. For example, the pre-training principle can help manage intrinsic load when complex information is decomposed into smaller, simpler, and more manageable named concepts. These concepts and their behaviors can then be taught
first in the lesson. This allows instructional designers to establish prerequisites, sequence content, and scaffold complex information (Clark et al., 2006). The segmenting principle, on the other hand, affords the learner greater control during the learning process by allowing the learner to decide what instructional material to receive and when.
Germane Load

Germane load (also called effective load) is caused by instructional design implementations that aid in meaningful learning. Clark et al. (2006) describe germane load "as relevant load imposed by instructional methods that lead to better learning outcome" (p. 11). Germane load supports meaningful learning through schema construction and automation (F. Paas et al., 2003; Sweller, 2005a). Like extraneous load, and unlike intrinsic load, germane load is considered to be under the control of the instructional designer. Whereas extraneous load interferes with learning, germane load enhances learning. Extraneous load taxes the limited resources of working memory, whereas with germane load those resources are devoted to schema acquisition and automation (F. Paas et al., 2003). If used properly, germane load can prove advantageous to learners in applying what they have learned to new tasks. Ironically, germane load is the least studied to date with regard to instructional methods. However, as we will soon learn, the effect that has shown promise thus far is worked examples. This effect can be used to promote germane load on the limited resources of working memory when applied correctly as an instructional principle. Overall, when implemented properly in instructional material, germane load allows learners to build a repertoire of skills and knowledge which they can apply to different situations (Clark et al., 2006).
Avoiding Extraneous, Managing Intrinsic, and Promoting Germane Load

When handling extraneous, intrinsic, and germane load, it is important to understand that these types of cognitive load are additive (Clark et al., 2006; Paas et al., 2003). We learned earlier in this chapter that intrinsic load is not under the control of the instructional designer, but is instead caused by the natural complexity of the information that must be processed. Although there are instructional principles that can be used to help with intrinsic load, such as the pre-training and segmenting principles, in general, if the information to be learned is complex and intrinsic load is high, actions must be taken to lower extraneous load (van Merrienboer & Ayres, 2005). If extraneous load is not lowered, the total intrinsic and extraneous load may leave little, if any, cognitive resources available within working memory for the incorporation of germane load. For the best learning outcome possible, and to avoid cognitive overload, it is vital to manage intrinsic load by incorporating instructional principles that help avoid extraneous load and that help promote germane load whenever possible (Clark et al., 2006; F. Paas et al., 2003; van Merrienboer & Ayres, 2005). This is easier said than done. The problematic nature of extraneous load on the learner depends on the level of intrinsic load, or the complexity of the information to be learned (van Merrienboer & Ayres, 2005); and herein lies the rub: the complexity of a task is relative to the expertise level or prior knowledge of the learner. In the last chapter we discussed the concept of schemata, mental frameworks explaining the means by which knowledge is organized in LTM. Depending on our experience level or prior knowledge, our schemata about a subject may be elementary or sophisticated. While some information may be relatively complex to a novice, the same information may be rudimentary for an expert.
The level of expertise must be taken
into consideration in the design of instructional material, because this helps in determining the level of intrinsic load (van Merrienboer & Ayres, 2005). It also aids in determining to what extent extraneous load should be avoided. What's more, research has shown that instructional principles used to avoid extraneous load only improve the learning of complex tasks (Clark et al., 2006). To compound matters further, extraneous load should only be reduced when doing so helps mitigate the complexity of the instructional material for learners who are novices. Employing instructional principles to avoid extraneous load when learners are experts in the subject matter may actually impede learning (Clark et al., 2006; Paas et al., 2003; van Merrienboer & Ayres, 2005). This is because experts have more sophisticated schemata as a result of their prior knowledge, requiring far fewer cognitive resources than those of novices (Clark et al., 2006). As can be seen from our discussion, though, the application of CLT is in no way "cookie-cutter," but instead must be tailored to the situation at hand (Clark et al., 2006). Avoiding extraneous, managing intrinsic, and promoting germane load depend on the learner's expertise level or prior knowledge, the complexity of the information to be learned, and the instructional environment (Clark et al., 2006). We dedicate the rest of this chapter to discussing each of these types of cognitive load in detail, outlining the instructional principles most applicable to these types of load in the design of instructional materials.
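The additive relationship among the three load types can be illustrated with a short sketch. This is not from the chapter; the numeric units and the capacity value are invented purely for illustration.

```python
# Illustrative sketch (hypothetical numbers): the three types of
# cognitive load are additive, and overload occurs when their sum
# exceeds the capacity of working memory.
WORKING_MEMORY_CAPACITY = 10  # invented capacity, in arbitrary units

def total_load(intrinsic, extraneous, germane):
    """Total cognitive load is the sum of the three types."""
    return intrinsic + extraneous + germane

def is_overloaded(intrinsic, extraneous, germane):
    """True when total load exceeds working memory capacity."""
    return total_load(intrinsic, extraneous, germane) > WORKING_MEMORY_CAPACITY

# Complex material (high intrinsic load) plus poor instructional design
# (high extraneous load) leaves no capacity for germane load.
print(is_overloaded(intrinsic=7, extraneous=4, germane=0))  # True
# Lowering extraneous load frees capacity that germane load can use.
print(is_overloaded(intrinsic=7, extraneous=1, germane=2))  # False
```

The point of the sketch is only this: with intrinsic load fixed by the material, extraneous load is the one term the designer can shrink to make room for germane load.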
Avoiding Extraneous Load

Extraneous load is the most detrimental of the three cognitive loads because it wastefully consumes limited resources in working memory. It is completely under the control of the instructional designer and consequently can be avoided with initial forethought in the design of instructional material. Research into extraneous load has resulted in the discovery of a number of effects that, if implemented as instructional principles, can be used to help alleviate cognitive load. We mentioned these principles briefly earlier in this chapter. These principles are: worked examples, split-attention, modality, and redundancy. We discuss each of these principles next in greater detail.
The Worked Examples Principle

The worked examples principle (also referred to as the worked-out examples principle) proposes that learners learn more deeply when studying worked examples than when studying practice problems (Sweller, 2005a). A worked example is a "step-by-step demonstration of how to perform a task or solve a problem" (Clark et al., 2006, p. 190). Firmly grounded in CLT, it is believed that individuals gain a better understanding when exposed to worked examples during initial cognitive skill acquisition (Renkl, 2005). This is because novice learners have the most to learn; they typically have little experience with or prior knowledge of the information to be learned. So, as you might expect, as experience and prior knowledge increase, the benefits of worked examples decrease. This should not be surprising. We learned earlier in the chapter that employing instructional principles to avoid extraneous load may actually impede learning when learners are experts in a subject matter (Clark et al., 2006; Paas et al., 2003; van Merrienboer & Ayres, 2005). Renkl (2005) explains that worked examples are normally composed of a problem formulation, solution steps, and a final solution. Typically applied in the mathematics and physics domains, worked examples are generally applied in the following way: first, the principle, rule, or theorem is introduced to the learner; next, a worked example is offered; and then one or more to-be-solved problems are given. Worked examples are most efficient when offered in a series or paired with
problems (Renkl, 2005). These are called worked example-problem pairs and are implemented by alternating a worked example with a similar practice problem (Clark et al., 2006). One of the drawbacks of worked examples is that they must be studied in depth to be of any value. This becomes a problem if learners do not make the effort to first study the presented worked example(s). This dilemma can be addressed with the use of completion examples. Clark et al. (2006) describe completion examples as a hybrid between worked examples and practice problems. The idea behind the completion example is simple: some steps are provided as a worked example, whereas others are presented as practice problems. Together, worked and completion examples can be used to help deal with the problem of the learner gaining experience and prior knowledge. Although this is the goal of instruction, as we have already discussed, worked examples can have negative effects on learning when learners have transitioned from novices to experts. Sweller and his colleagues call out the approach of backwards fading to help handle this situation. Backwards fading is a strategy in which worked examples are gradually replaced with practice problems in a lesson as the learner gains expertise in the subject matter (Clark et al., 2006). Clark et al. (2006) demonstrate this concept with an example lesson composed of four problems. The first problem may be a full worked example. The next two problems may be completion examples, in which the second of the two includes more practice than completion steps. Finally, the fourth problem is a full practice problem. Care should be taken when applying worked examples. Worked examples which are improperly formatted may cause more harm than good (Clark et al., 2006). To aid in the formatting of worked examples, other instructional principles can be applied, such as the two principles we will discuss next, the split-attention and modality principles.
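A backwards fading lesson of the kind Clark et al. describe can be sketched in code. The helper below is a hypothetical illustration, not an implementation from the literature; the step counts and labels are invented.

```python
# Hypothetical sketch of backwards fading: each successive problem in a
# lesson works out fewer steps and leaves more as practice.

def fade_lesson(steps_per_problem, num_problems):
    """Return a lesson (list of problems), where each problem is a list
    of step labels. Assumes at least two problems."""
    lesson = []
    for i in range(num_problems):
        # The fraction of worked-out steps shrinks linearly to zero.
        worked = round(steps_per_problem * (num_problems - 1 - i)
                       / (num_problems - 1))
        lesson.append(["worked"] * worked
                      + ["practice"] * (steps_per_problem - worked))
    return lesson

# Four problems of three steps each: a full worked example first, two
# completion examples in between, and a full practice problem last.
for problem in fade_lesson(steps_per_problem=3, num_problems=4):
    print(problem)
```

The design choice here is the linear fade; in practice the fading schedule would be tuned to how quickly learners gain expertise.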
Empirical Support for the Worked Examples Principle

There is a wealth of research investigating the effects of worked examples on learning dating back 25 years. We discuss three studies here, but you can find an in-depth discussion of these studies and others in Sweller (2005b) and Clark et al. (2006). The earliest investigation was that of Sweller and Cooper (1985), who used algebra to show that worked examples could be used more constructively than practice-based problems. In the study, one group was assigned eight practice-based problems, whereas the other group was assigned four worked example-problem pairs. Findings showed that those who received the worked examples completed a test of six new problems in significantly less time than those who had not. In a study conducted by Paas (1992), practice problems, worked example-practice pairs, and completion example-practice pairs were compared for their learning effectiveness in the study of statistical concepts. Completion of the practice problems took significantly more time than completion of the worked examples or completion examples. In fact, more was learned from the worked and completion examples than from the practice problems. Finally, Kalyuga et al. (2001) studied instances where practice problems were superior to worked examples. The researchers compared worked examples to practice problems over a period of time. Apprentices in a mechanical trade were presented with one of two lessons, composed of a series of worked examples or practice problems. Test findings showed that learners initially benefited most from the worked examples, showing lower cognitive load than those who solved the practice problems. However, as time passed and experience grew, worked examples proved less beneficial and practice problems became superior. All in all, Kalyuga et al. (2001) concluded that the benefits of worked examples and practice problems depended on prior knowledge and experience level.

The Split-Attention Principle

Examined in a number of CLT-related studies, the split-attention principle is derived from the worked examples principle (Sweller et al., 1998). Split-attention occurs when multiple sources of information must be mentally integrated simultaneously before meaningful learning can take place. Because multiple sources of information must be mentally integrated, extraneous cognitive load is increased, negatively impacting learning (Ayres & Sweller, 2005). Clark et al. (2006) use the example of having to look at an illustration on one page while reading the accompanying text on another. This causes split-attention because the illustration and text cannot be learned in isolation, but are dependent on one another, causing additional load on working memory as a result of having to integrate the disjointed sources of information. These multiple sources of information are frequently represented as pictures and accompanying text (Ayres & Sweller, 2005; van Merrienboer & Ayres, 2005), but can also be represented as text with text, or different forms of multimedia. Since there are always at least two sources of information involved in multimedia, it is very susceptible to split-attention (Sweller, 2005a). Split-attention can be avoided, however. If the instructional material is presented as a figure and text, split-attention can be circumvented by integrating the figure and text together (Sweller & Chandler, 1994). This is the premise behind the split-attention principle. It is important to understand that split-attention is only applicable when the sources of information are unintelligible in isolation from one another. If a figure and text can be learned in isolation from one another, meaning they are both self-explanatory, split-attention will not occur (Clark et al., 2006). In addition, split-attention is only applicable with
complex information. This should come as no surprise. We learned earlier in this chapter that extraneous load should only be avoided when it helps mitigate the complexity of instructional material for learners who are considered novices. The example commonly provided by Sweller and his colleagues (see Ayres & Sweller, 2005; Sweller & Chandler, 1994; Sweller et al., 1998) has been that of geometry instruction. The study of geometry typically requires the learner to examine a figure and associated text. Neither the figure nor the text is intelligible in isolation; instead, they need to be mentally integrated for meaningful learning to occur. This involves finding relationships between elements of the figure and the text. If these relationships are not formed, meaningful learning does not occur. Geometry instruction is considered inherently complex and, therefore, an amount of intrinsic cognitive load is unavoidable. However, by separating the figure and text, extraneous cognitive load is also imposed. If the split-attention principle is followed and the figure and text are integrated, it is believed that extraneous cognitive load can be greatly reduced, if not eliminated (Sweller & Chandler, 1994).
Empirical Support for the Split-Attention Principle

The earliest research on split-attention was conducted by Tarmizi and Sweller (1988), who examined the effectiveness of worked examples for learning geometry. Their findings showed that learners who studied worked examples did not have an advantage over those who did not, a finding that contradicted earlier investigations of the benefits of worked examples. Tarmizi and Sweller (1988) concluded that their findings were a result of participants having to integrate two sources of information, diagrams and text, which resulted in split-attention.
The study by Sweller et al. (1990) would soon follow, which reproduced the study by Tarmizi and Sweller (1988) but used coordinate geometry examples instead. They found that worked examples depicted in the traditional way did not provide learners with an advantage. Instead, what proved beneficial was an integrated worked example format, in which text was placed on the diagrams, reducing unnecessary searching, consequently reducing cognitive load and helping learners master the information. The findings of the Sweller et al. (1990) study showed that learners who received the integrated worked examples performed significantly better than those who received the traditionally formatted ones. More studies would follow, such as that by Ward and Sweller (1990), who showed findings similar to those of Sweller et al. (1990). Other studies include the CTML-related studies by Richard E. Mayer and his colleagues, most notably that by Mayer and Moreno (1998), which paved the way for the modality principle.
The Modality Principle

The modality principle proposes that presenting information in dual modalities (i.e., partly visual and partly auditory) spreads the total induced load across the visual and auditory channels of working memory, thereby reducing cognitive load (Low & Sweller, 2005; Sweller & Chandler, 1994; Sweller et al., 1998). In other words, a modality effect occurs when material, such as text, is presented in an auditory rather than written mode when integrated with other non-verbal material (Sweller et al., 1998; Tindall-Ford et al., 1997), such as illustrations, photos, animations, or video. This is important, as learning novel material can be impeded by the capacity limitations of working memory (Low & Sweller, 2005; Sweller & Chandler, 1994). Much like split-attention, the modality principle is only applicable when both sources of information are essential to learning. Both visual and auditory
sources must be unintelligible in isolation, requiring mental integration for meaningful learning to occur. If both sources are intelligible, other principles, such as the redundancy principle, should be leveraged instead (Low & Sweller, 2005). Furthermore, as you have probably already guessed, the use of audio is only beneficial for novice learners, or those with little prior knowledge (Clark et al., 2006). The modality principle has been thoroughly examined in numerous studies over the past decades. Some of the earliest research focused specifically on the notion of distinct, yet interrelated, information processing channels in working memory for visual and auditory information (see Penney, 1989, for an in-depth review). Much of the early research demonstrated that a dual mode of presenting information can result in increased performance, suggesting that there are modality-specific processing resources in working memory (Low & Sweller, 2005). This is consistent with Baddeley's (1986, 1998, 2002) model of working memory. (We will learn in the next chapter that the modality principle is also consistent with Paivio's (J. M. Clark & Paivio, 1991; Paivio, 1971, 1990) dual coding theory, the idea that cognition is composed of verbal and non-verbal subsystems.) Cognitive load theory leveraged this early work, which established the premise that performance can be increased by presenting information in dual rather than single modalities, to suggest that a modality effect can be obtained under conditions of split-attention (Low & Sweller, 2005). In fact, according to Clark et al. (2006), the most compelling finding of CLT research is the modality effect.
Empirical Support for the Modality Principle

Perhaps the most well-known study addressing split-attention and modality (using CLT as the theoretical foundation) is the research conducted by
Mousavi et al. (1995), who examined presentation sequence, modality, and split-attention effects using geometry instruction. Their findings showed that instructional material presented in visual and auditory modes is significantly better than the same instructional material presented in a visual manner only. Their research also reinforced the idea that the benefits of multimodal material occur irrespective of whether information is presented sequentially or simultaneously. Similar studies would follow, examining the modality effect in the context of CLT (e.g., Jeung & Chandler, 1997; Leahy, Chandler, & Sweller, 2003; Tindall-Ford et al., 1997). It should come as no surprise that the modality principle is extremely relevant in the context of learning through multimedia (Low & Sweller, 2005). Consequently, the modality principle is grounded in a wealth of research, thoroughly studied in a number of experiments (Jeung & Chandler, 1997, see experiments 1, 2, and 3; Kalyuga et al., 1999, see experiment 1; Mayer, Dow, & Mayer, 2003, see experiment 1; Mayer & Moreno, 1998, see experiments 1 and 2; Moreno & Mayer, 1999, see experiments 1 and 2; 2002, see experiments 1a, 1b, 1c, 2a, and 2b; Moreno, Mayer, Spires, & Lester, 2001, see experiments 4a, 4b, 5a, and 5b). These experiments have studied a wide variety of instructional topics, including math problems, the formation of lightning, a car brake system, electrical engineering, an aircraft simulation, an environmental science game, and the mechanics behind an electric motor (see Mayer (2005b) for a discussion of these experiments and Mayer (2005c) for an in-depth discussion of the modality principle). While we do not provide a discussion of each of the studies, we can tell you that across all experiments, learners who received animation with concurrent narration performed better on transfer tests than did learners who received the text-based presentations (Mayer, 2003).
The Redundancy Principle

The redundancy principle proposes that learners learn more deeply when identical information is not presented in more than one format (Mayer, 2005a). The redundancy principle is based on the premise that less is more. When it comes to instructional material, however, this practice can sometimes be difficult to accept, primarily because those who develop instruction sometimes want to include as much information as possible. According to Sweller (2005b), this is typically done as a way to enrich or elaborate upon information. However, research on the matter suggests that the use of redundant information in instructional material can interfere with learning (Sweller, 2005b). We have learned that working memory is limited in both capacity and duration. Redundant information places unnecessary load on working memory. Therefore, the redundancy principle should be applied in the design of instructional materials. The redundancy principle advocates the replacement of multiple sources of self-contained information with a single source of information, reducing extraneous load typically caused by the unnecessary processing of redundant information (van Merrienboer & Ayres, 2005). In other words, present only the minimum information required to meet the instructional goals at hand, distinguishing between "need-to-have" information and information that is "nice-to-have" (Clark et al., 2006). Add any more than is essential to understanding the information to be learned and you risk placing unnecessary load on the already taxed resources of working memory. Instructional material should be concise, clear, and to the point. This is particularly important for those with LDs in mathematics, for example, who may have difficulty inhibiting extraneous information due to a general working memory deficit (Passolunghi & Siegel, 2004).
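The "need-to-have" versus "nice-to-have" distinction amounts to a simple filter over a lesson's content. The sketch below is hypothetical; the content items and the tagging scheme are invented for illustration.

```python
# Hypothetical sketch: a designer tags each piece of instructional
# content, and the redundancy principle keeps only the essentials.
content = [
    {"text": "Diagram of blood flow through the heart", "need_to_have": True},
    {"text": "On-screen text repeating the narration verbatim", "need_to_have": False},
    {"text": "Narration explaining the diagram", "need_to_have": True},
    {"text": "Decorative background animation", "need_to_have": False},
]

# Keep only the minimum required to meet the instructional goal.
lean_lesson = [item["text"] for item in content if item["need_to_have"]]
print(lean_lesson)
```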
Empirical Support for the Redundancy Principle

Although signs of a redundancy effect can be seen in a number of studies spanning the last two decades, the redundancy principle did not formally emerge until recently. Sweller (2005b) attributes this to the lack of a theoretical explanation: the effect was thought to be a new discovery each time it appeared, but each subsequent discovery was not linked to the last. Sweller (2005b) points to a number of studies to substantiate this claim, among them the study by Chandler and Sweller (1991). Although the study focused on the effect of split-attention, Chandler and Sweller (1991) noted that with some types of instructional material (i.e., the flow of blood in the heart, lungs, and body), there was no difference between integrated material and material that imposed split-attention. The cause was the redundancy effect. According to Sweller (2005b), since the Chandler and Sweller (1991) study, the redundancy effect has been shown in a variety of contexts, including the experiments by Sweller and Chandler (1994), Chandler and Sweller (1996), and Kalyuga et al. (1999). There have also been a number of studies by Mayer and his colleagues examining the redundancy effect with regard to multimedia, conducted by Mayer, Heiser, and Lonn (2001) and Moreno and Mayer (2000a, 2000b). For an in-depth discussion of these studies and others in the context of the redundancy principle, see Sweller (2005b) and Clark et al. (2006).
Managing Intrinsic Load

Recall from our earlier discussion that intrinsic load is not under the control of the instructional designer because it is based on the complexity of the information to be learned. Further recall that
the pre-training and segmenting principles are two instructional principles that have emerged from the research and can be leveraged to help manage the amount of intrinsic load on working memory. One promotes deeper learning when named concepts are presented first, whereas the other promotes deeper learning when learners are allowed to pace themselves through instruction. We describe each of these principles next in greater detail.
The Pre-Training Principle

The pre-training principle is a load-reducing method typically described as an instructional principle in the context of CTML. The principle proposes that learners learn more deeply when they are aware of the names and behaviors of main concepts (Mayer, 2005a; Mayer & Moreno, 2003). According to Mayer (2005b), the theoretical foundation for the pre-training principle is that it allows learners to build schemata, or prior knowledge, about essential concepts or components that can be applied later in the learning process, thus decreasing the amount of cognitive load. This strategy may be particularly useful to individuals who have difficulty processing information as continuous units of information, such as those with LDs. Clark et al. (2006) describe this concept as segmenting. Furthermore, they indicate that it should be implemented differently depending on whether you are dealing with process or procedure knowledge. They define process knowledge as "a flow of events that summarize the operations of business, scientific, or mechanical systems" (p. 163) and procedure knowledge as "knowledge underpinning performance of a task that is completed more or less the same way each time" (p. 168). Examples of process knowledge are how a car brake system and a bicycle tire pump work, while examples of procedure knowledge are the steps you take to start your car, computer,
or perhaps even how you go about doing your grocery shopping.
Handling Process and Procedure Knowledge

To avoid cognitive overload when dealing with process knowledge, the individual components comprising a system should be introduced before the rest of the system. Clark et al. (2006) point to the study by Mayer, Mathias, and Wetzell (2002) as empirical support for this strategy, who recommend "providing pre-training aimed at clarifying the behavior of the components of the system" (p. 154). To accomplish this, they propose that three steps be followed: decompose the system into components, visually segregate and name the components, and represent the state change in each of the components (Mayer, Mathias et al., 2002). To avoid cognitive overload when dealing with procedure knowledge, Clark et al. (2006) present two alternative strategies based on the findings of Pollock et al. (2002, see experiments 2 and 3). In the first strategy, each step is taught first, the learner is allowed to practice each step, and then each step is taught again, this time accompanied by supporting information. In the second strategy, the supporting information is taught first and then each step (Clark, 1999, as cited in Clark et al., 2006). These strategies have their advantages and disadvantages. Both divide the complex information to be learned into two major segments: steps and supporting information (Clark et al., 2006). However, in implementing the first strategy, the learner may not fully grasp the steps because they are taught out of the context of the supporting information. On the other hand, in implementing the second strategy, hands-on experience is postponed because the steps are not taught until after the supporting information is presented (Clark et al., 2006). So, which strategy should be used? It is
Managing Cognitive Load in the Design of Assistive Technology for Those with Learning Disabilities
really up to you to decide. As Clark et al. (2006) indicate, there is insufficient research suggesting that one strategy is better than the other.
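The two sequencing strategies can be sketched as simple lesson planners. The sketch below is a hypothetical illustration of the orderings described above, not an implementation from Clark et al. (2006); the function names, tuple labels, and data layout are our own assumptions.

```python
# Hypothetical sketch of the two pre-training strategies for procedure
# knowledge. "steps" is an ordered list of step names; "support" maps
# each step to its supporting information.

def strategy_one(steps, support):
    """Teach each step, let the learner practice it, then reteach each
    step together with its supporting information."""
    plan = []
    for step in steps:
        plan.append(("teach", step))
        plan.append(("practice", step))
    for step in steps:
        plan.append(("teach_with_support", step, support[step]))
    return plan

def strategy_two(steps, support):
    """Teach all supporting information first, then each step."""
    plan = [("teach_support", support[step]) for step in steps]
    plan += [("teach", step) for step in steps]
    return plan
```

Either planner divides the lesson into the two major segments noted above, steps and supporting information; the trade-off is simply where the supporting segment falls in the sequence.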
Empirical Support for the Pre-Training Principle

The empirical basis for the pre-training principle lies in the studies conducted by Mayer, Mathias et al. (2002, see experiments 1, 2, and 3), Mayer, Mautone, and Prothero (2002, see experiments 2 and 3), and Pollock et al. (2002, see experiments 2 and 3). We have already discussed the Mayer, Mathias et al. (2002) study to some extent. In the study, learning outcomes were compared between groups who viewed a multimedia presentation on a car brake system or a bicycle pump system. In the car brake system experiment, those in the pre-training group were exposed to the names and states of the components comprising the brake system before viewing the presentation. In the bicycle pump system experiment, those in the pre-training group were exposed to a model of a bicycle pump and allowed to operate it before viewing the presentation. In both experiments, the pre-training groups outperformed the other groups on problem-solving tests. In the Mayer, Mautone, et al. (2002) study, learning outcomes were compared between two groups who played a simulation game to learn about geology. Both groups were asked to identify a geological feature found on the Earth’s surface. One group was afforded pre-training in the form of illustrations depicting geological features, while the other was not. The group who received the pre-training outperformed the group who did not on a problem-solving test. Finally, in the Pollock et al. (2002) study, learning outcomes were compared between two groups who viewed a two-phase multimedia lesson on how to conduct safety tests for electrical appliances. The first group received a lesson in which the first phase focused on how each individual component worked and the second phase focused on how the individual components worked together within the entire system. The second group received the same lesson; however, both phases focused on how the individual components worked together within the entire system. The group who received the pre-training lesson outperformed the group who did not on a problem-solving test. Although the findings presented in these seven experiments are promising, further research is needed into the conditions in which the pre-training principle is most effective.
The Segmenting Principle

The segmenting principle is also a load-reducing method typically described as an instructional principle in the context of CTML. The principle proposes that deeper learning can occur when a lesson is presented in learner-controlled segments rather than as a continuous unit (Mayer, 2005a; Mayer & Moreno, 2003). This strategy allows learners to pace themselves as they move through instruction. The premise behind the principle is to slow the pace of instruction so that learners have more time to process the information to be learned. This is especially useful in situations where the instructional material is presented at too fast a rate for the learner. The principle places learners in control of the learning process. Learners can decide when (i.e., at what speed) the instructional material is presented, but could also be given the capability to decide what instructional material should be presented. This makes the segmenting principle an attractive and commonsense concept, particularly for those with LDs, who may have difficulty keeping pace with instruction and, consequently, are unable to engage in the processing needed to learn new information. There is a potential pitfall with the principle, however. Clark et al. (2006) share the concern that, for a novice, deciding the order in which instructional units will be taught may impose too much of a cognitive load, because these novice learners may not understand the subject matter well enough to be able to make such decisions. However, they do agree with research on the matter in that allowing a novice to decide at what speed to proceed through instructional material may be advantageous to learning. Probably the most common way to implement self-pacing in instruction, at least from a computer-based standpoint, is with “Continue” or “Next” buttons. All of us have probably experienced this implementation at some point in our lives.
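The “Continue” button implementation of learner-controlled pacing can be modeled in a few lines. The class below is a minimal sketch of the idea with hypothetical names; it is not code from any of the studies cited.

```python
# Minimal sketch of learner-paced segmenting: the lesson advances only
# when the learner clicks "Continue," so pacing stays under their control.

class SegmentedLesson:
    def __init__(self, segments):
        self.segments = list(segments)
        self.index = 0

    def current(self):
        """Return the segment currently on screen."""
        return self.segments[self.index]

    def on_continue(self):
        """Advance one segment per click; stop at the final segment."""
        if self.index < len(self.segments) - 1:
            self.index += 1
        return self.current()
```

A presentation like the one in Mayer and Chandler (2001), discussed next, would simply be such a lesson with 16 roughly ten-second segments.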
Empirical Support for the Segmenting Principle

The empirical basis for the segmenting principle lies in the studies conducted by Mayer and Chandler (2001, see experiment 2) and Mayer, Dow, and Mayer (2003, see experiments 2a and 2b). In the Mayer and Chandler (2001) study, learning outcomes were compared between a group who viewed a 140-second narrated animation on lightning formation as a continuous presentation and a group who received the same presentation divided into 16 segments, each lasting approximately 10 seconds and sequenced by clicking a “Continue” button. The group who received the segmented presentation performed better on a problem-solving test than the group who viewed the continuous presentation. In the Mayer, Dow, and Mayer (2003) study, learning outcomes were compared between groups who learned about electric motors while interacting with an avatar within a simulation game. One group was offered a continuous version of the simulation game in which the avatar showed how the electric motor worked when clicked. The other group was offered a segmented version of the same game, which displayed questions that corresponded to the segments of the narrated animation. Students in the segmented group could control which segments to view based on the question clicked. Like the findings of the Mayer and Chandler (2001) study, those in the segmented group outperformed those in the continuous group across all experiments. Although these findings are promising, Mayer (2005b) is quick to point out that evidence for the segmenting principle is still preliminary and further research is warranted.
The Modality Principle

We end our discussion of instructional principles that help manage intrinsic load with the modality principle. We have already discussed this principle in the context of avoiding extraneous load, and so we do not repeat the effort here. We know that the modality principle, under certain conditions, can effectively mitigate certain loads, leaving more resources for other processing in working memory. By presenting material in dual modalities, the total induced load is spread across the visual and auditory components of working memory, thereby reducing cognitive load. The modality principle can, therefore, prove advantageous in managing intrinsic load.
Promoting Germane Load

We end our discussion of CLT with the most beneficial of the three cognitive loads, germane load. As we have discussed, if used properly, germane load can prove advantageous to learners in applying what they have learned to new tasks, or what is called the transfer of learning. This is essentially the ability to apply what has been learned to new settings or situations. This may include the transfer of skills, knowledge, and/or attitudes. Transfer of learning can be decomposed into two types, near and far. In near transfer, skills and knowledge are typically applied the same way each and every time they are used. Near transfer is procedure-based, and consequently, order is significant. Far transfer, on the other hand, is applied under conditions of change. Learners must be able to apply the skills and knowledge that they have learned to new situations. As you might expect, far transfer is the harder of the two to teach, but the more advantageous. To foster transfer of learning, specifically far transfer, Clark et al. (2006) propose the use of diverse worked examples.
Diverse Worked Examples

Far transfer requires the forming of new schemata. We learned in the last chapter that the formation of new schemata imposes additional cognitive load on working memory. Although we typically want to avoid any kind of unnecessary impact on working memory, the load in question is germane load, which is both helpful and necessary for learning. Diverse worked examples can be used to help minimize extraneous load and, in the process, offset the additional germane load imposed by the formation of new schemata (Clark et al., 2006). As the name implies, diverse worked examples are varied worked examples and practice problems that help in the application of skills and knowledge to varied scenarios. Because of their variety, diverse worked examples impose more of a cognitive load; however, they can lead to greater transfer of learning than examples which are all similar in nature (Clark et al., 2006). All in all, when learners are expected to transfer the skills and knowledge they have learned to new situations, a series of diverse worked examples and practice problems should be used to promote germane load and, at the same time, help mitigate extraneous load (Clark et al., 2006).
Additional Empirical Support for the Worked Examples Principle

Unfortunately, germane load is the least empirically supported of the three loads (Clark et al., 2006). The reason for this is quite simple: although more and more CLT-related studies are now investigating the effects of instructional methods on intrinsic and germane load, CLT was once used predominantly to study instructional methods intended to decrease extraneous load (van Merrienboer & Ayres, 2005). Consequently, we revisit the study conducted by Paas (1992), who showed that practice problems took significantly more time to complete than worked examples or completion examples, and that participants learned more from the worked and completion examples than from the practice problems. Paas (1992) also investigated near and far transfer with regard to worked and completion examples. Test findings showed that scores for problems dealing with near transfer did not vary. However, scores did vary significantly on problems dealing with far transfer, in favor of those participants who were exposed to the worked and completion examples. The rationale behind this finding was that the examples required fewer resources from working memory, leaving more resources for learning the information.
Conclusion

Applying What We Have Learned

In this chapter, we learned that CLT can be used to bridge the gap between instructional principles and knowledge of human cognition (Sweller, 2005a). We discussed a number of instructional principles coming out of the research on CLT, principles that, if used properly, can help manage intrinsic load, avoid extraneous load, and promote germane load. Although considerations for cognitive load should be part of the design of all instructional material, we argue that such principles hold even more weight in the design of instruction to assist those with learning disabilities. For example, a possible explanation for the inability of some children to meet mathematical literacy standards is that the cognitive load of the mathematics curriculum may be too high, and these children may not be able to keep up with instructional activities that would otherwise be optimal for learning (Woodward, 2006; Woodward & Montague, 2002, as cited in Wong, Graham, Hoskyn, & Berman, 1996). In such a case, it is important to ensure that the redundancy principle is followed: only include instruction that is essential to the understanding of the information to be learned. Additionally, worked examples should be leveraged in the delivery of the instruction. Initially, only worked examples should be used, and as the learner grows in their skills and knowledge of the subject, these worked examples can be slowly replaced with traditional practice problems, essentially implementing the approach of backwards fading. Moreover, the split-attention and modality principles should be considered in the design of the worked examples to ensure that, for example, a split-attention effect is not inadvertently created. The pre-training and segmenting principles can also be evaluated for their potential benefits. For instance, it may be prudent to allow the learner to control the rate at which the instruction is presented. While our example is simple, we understand that the design of instructional materials to assist those with LDs is far from simple; it is instead a daunting, complex, and challenging task. However, we believe that it is important that technology for learning be created with an understanding of design principles empirically supported by how the human mind works. Thus, we invite those involved in the creation of instruction for those with learning disabilities to learn more about the principles presented in this chapter and add them to their repertoire of instructional design knowledge. Furthermore, we add that there are many more instructional principles stemming from the research on CLT that could be leveraged to assist those with LDs.
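The backwards-fading approach lends itself to a simple sketch: each successive problem leaves one more step, counted from the end, for the learner to complete on their own. The function below is a hypothetical illustration of the idea under our own naming assumptions, not a published implementation.

```python
# Hypothetical sketch of backwards fading: problem i presents the first
# steps as a worked example and leaves the last i steps as practice, so
# worked examples gradually become ordinary practice problems.

def backwards_fading(steps, n_problems):
    problems = []
    for i in range(n_problems):
        faded = min(i, len(steps))  # how many trailing steps to fade out
        problems.append({
            "worked": steps[: len(steps) - faded],
            "practice": steps[len(steps) - faded :],
        })
    return problems
```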
In the next and final chapter of this introduction, we present a number of these additional instructional principles developed from the body of research focused specifically on multimedia learning.
References

Ayres, P., & Sweller, J. (2005). The split-attention principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 135–146). New York: Cambridge University Press.
Baddeley, A. D. (1986). Working memory. New York, NY: Oxford University Press.
Baddeley, A. D. (1998). Human memory: Theory and practice. Boston, MA: Allyn and Bacon.
Baddeley, A. D. (2002). Is working memory still working? European Psychologist, 7(2), 85–97. doi:10.1027//1016-9040.7.2.85
Bull, R., Johnston, R. S., & Roy, J. A. (1999). Exploring the roles of the visual-spatial sketch pad and central executive in children’s arithmetical skills: Views from cognition and developmental neuropsychology. Developmental Neuropsychology, 15, 421–442. doi:10.1080/87565649909540759
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. doi:10.1207/s1532690xci0804_2
Chandler, P., & Sweller, J. (1996). Cognitive load while learning to use a computer program. Applied Cognitive Psychology, 10, 151–170. doi:10.1002/(SICI)1099-0720(199604)10:23.0.CO;2-U
Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3(3), 149–210. doi:10.1007/BF01320076
Clark, R., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco, CA: Pfeiffer.
de Jong, P. F. (1998). Working memory deficits of reading disabled children. Journal of Experimental Child Psychology, 70(2), 75–96. doi:10.1006/jecp.1998.2451
Hitch, G. J., & McLean, J. F. (1991). Working memory in children with specific arithmetical learning difficulties. The British Journal of Psychology, 82, 375–386.
Jeung, H. J., & Chandler, P. (1997). The role of visual indicators in dual sensory mode instruction. Educational Psychology, 17(3), 329. doi:10.1080/0144341970170307
Kalyuga, S., Chandler, P., & Sweller, J. (1998). Levels of expertise and instructional design. Human Factors, 40(1), 1–17. doi:10.1518/001872098779480587
Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13, 351–371. doi:10.1002/(SICI)1099-0720(199908)13:43.0.CO;2-6
Kalyuga, S., Chandler, P., Tuovinen, J., & Sweller, J. (2001). When problem solving is superior to studying worked examples. Journal of Educational Psychology, 93(3), 579–588. doi:10.1037/0022-0663.93.3.579
Keeler, M. L., & Swanson, H. L. (2001). Does strategy knowledge influence working memory in children with mathematical disabilities? Journal of Learning Disabilities, 34(5), 418–434. doi:10.1177/002221940103400504
Leahy, W., Chandler, P., & Sweller, J. (2003). When auditory presentations should and should not be a component of multimedia instruction. Applied Cognitive Psychology, 17, 401–418. doi:10.1002/acp.877
Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 147–158). New York: Cambridge University Press.
Marcus, N., Cooper, M., & Sweller, J. (1996). Understanding instructions. Journal of Educational Psychology, 88(1), 49–63. doi:10.1037/0022-0663.88.1.49
Mayer, R. E. (2003). Elements of science in E-learning. Journal of Educational Computing Research, 29(3), 297–313. doi:10.2190/YJLG-09F9-XKAX-753D
Mayer, R. E. (2005a). Introduction to multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 1–16). New York: Cambridge University Press.
Mayer, R. E. (2005b). Principles for managing essential processing in multimedia learning: Segmenting, pretraining, and modality principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 169–182). New York: Cambridge University Press.
Mayer, R. E. (Ed.). (2005c). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.
Mayer, R. E., & Chandler, P. (2001). When learning is just a click away: Does simple user interaction foster deeper understanding of multimedia messages? Journal of Educational Psychology, 93(2), 390–397. doi:10.1037/0022-0663.93.2.390
Mayer, R. E., Dow, G. T., & Mayer, S. (2003). Multimedia learning in an interactive self-explaining environment: What works in the design of agent-based microworlds? Journal of Educational Psychology, 95(4), 806–812. doi:10.1037/0022-0663.95.4.806
Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning: When presenting more material results in less understanding. Journal of Educational Psychology, 93(1), 187–198. doi:10.1037/0022-0663.93.1.187
Mayer, R. E., Mathias, A., & Wetzell, K. (2002). Fostering understanding of multimedia messages through pre-training: Evidence for a two-stage theory of mental model construction. Journal of Experimental Psychology: Applied, 8(3), 147–154. doi:10.1037/1076-898X.8.3.147
Mayer, R. E., Mautone, P., & Prothero, W. (2002). Pictorial aids for learning by doing in a multimedia geology simulation game. Journal of Educational Psychology, 94(1), 171–185. doi:10.1037/0022-0663.94.1.171
Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. Journal of Educational Psychology, 90(2), 312–320. doi:10.1037/0022-0663.90.2.312
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52. doi:10.1207/S15326985EP3801_6
McLean, J. F., & Hitch, G. J. (1999). Working memory impairments in children with specific arithmetic learning difficulties. Journal of Experimental Child Psychology, 74(3), 240–260. doi:10.1006/jecp.1999.2516
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97. doi:10.1037/h0043158
Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91(2), 358–368. doi:10.1037/0022-0663.91.2.358
Moreno, R., & Mayer, R. E. (2000a). A coherence effect in multimedia learning: The case for minimizing irrelevant sounds in the design of multimedia instructional messages. Journal of Educational Psychology, 92(1), 117–125. doi:10.1037/0022-0663.92.1.117
Moreno, R., & Mayer, R. E. (2000b). Engaging students in active learning: The case for personalized multimedia messages. Journal of Educational Psychology, 92(4), 724–733. doi:10.1037/0022-0663.92.4.724
Moreno, R., & Mayer, R. E. (2002). Learning science in virtual reality multimedia environments: Role of methods and media. Journal of Educational Psychology, 94(3), 598–610. doi:10.1037/0022-0663.94.3.598
Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19(2), 177–213. doi:10.1207/S1532690XCI1902_02
Mousavi, S. Y., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology, 87(2), 319–334. doi:10.1037/0022-0663.87.2.319
Niaz, M., & Logie, R. H. (1993). Working memory, mental capacity and science education: Towards an understanding of the ‘working memory overload hypothesis’. Oxford Review of Education, 19(4), 511–525. doi:10.1080/0305498930190407
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1–4. doi:10.1207/S15326985EP3801_1
Paas, F. G. W. C. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology, 84(4), 429–434. doi:10.1037/0022-0663.84.4.429
Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart and Winston.
Paivio, A. (1990). Mental representations: A dual coding approach. New York: Oxford University Press.
Passolunghi, M. C., & Siegel, L. S. (2004). Working memory and access to numerical information in children with disability in mathematics. Journal of Experimental Child Psychology, 88(4), 348–367. doi:10.1016/j.jecp.2004.04.002
Penney, C. G. (1989). Modality effects and the structure of short-term verbal memory. Memory & Cognition, 17, 398–422.
Pollock, E., Chandler, P., & Sweller, J. (2002). Assimilating complex information. Learning and Instruction, 12, 61–86. doi:10.1016/S0959-4752(01)00016-0
Renkl, A. (2005). The worked-out examples principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 229–245). New York: Cambridge University Press.
Shiffrin, R. M., & Atkinson, R. C. (1969). Storage and retrieval processes in long-term memory. Psychological Review, 76(2), 179–193. doi:10.1037/h0027277
Swanson, H. L., & Siegel, L. (2001). Learning disabilities as a working memory deficit. Issues in Education: Contributions of Educational Psychology, 7(1), 1–48.
Sweller, J. (1999). Instructional design in technical areas. Australia: ACER Press.
Sweller, J. (2005a). Implications of cognitive load theory for multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 19–30). New York: Cambridge University Press.
Sweller, J. (2005b). The redundancy principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 159–167). New York: Cambridge University Press.
Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12(3), 185–233. doi:10.1207/s1532690xci1203_1
Sweller, J., Chandler, P., Tierney, P., & Cooper, M. (1990). Cognitive load as a factor in the structuring of technical material. Journal of Experimental Psychology: General, 119(2), 176–192. doi:10.1037/0096-3445.119.2.176
Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2(1), 59–89. doi:10.1207/s1532690xci0201_3
Sweller, J., van Merrienboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296. doi:10.1023/A:1022193728205
Tarmizi, R. A., & Sweller, J. (1988). Guidance during mathematical problem solving. Journal of Educational Psychology, 80(4), 424–436. doi:10.1037/0022-0663.80.4.424
Tindall-Ford, S., Chandler, P., & Sweller, J. (1997). When two sensory modes are better than one. Journal of Experimental Psychology: Applied, 3(4), 257–287. doi:10.1037/1076-898X.3.4.257
van Merrienboer, J. J. G., & Ayres, P. (2005). Research on cognitive load theory and its design implications for e-learning. Educational Technology Research and Development, 53(3), 5–13. doi:10.1007/BF02504793
Ward, M., & Sweller, J. (1990). Structuring effective worked examples. Cognition and Instruction, 7(1), 1–39. doi:10.1207/s1532690xci0701_1
Wong, B. Y. L., Graham, L., Hoskyn, M., & Berman, J. (1996). The ABCs of learning disabilities (2nd ed.). New York: Elsevier/Academic Press.
Additional Reading

Ayres, P., & Sweller, J. (2005). The split-attention principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 135–146). New York: Cambridge University Press.
Clark, R., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco, CA: Pfeiffer.
Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 147–158). New York: Cambridge University Press.
Mayer, R. E. (2005a). Introduction to multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 1–16). New York: Cambridge University Press.
Mayer, R. E. (2005b). Principles for managing essential processing in multimedia learning: Segmenting, pretraining, and modality principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 169–182). New York: Cambridge University Press.
Mayer, R. E. (Ed.). (2005c). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52. doi:10.1207/S15326985EP3801_6
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1–4. doi:10.1207/S15326985EP3801_1
Renkl, A. (2005). The worked-out examples principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 229–245). New York: Cambridge University Press.
Sweller, J. (2005a). Implications of cognitive load theory for multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 19–30). New York: Cambridge University Press.
Sweller, J. (2005b). The redundancy principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 159–167). New York: Cambridge University Press.
Sweller, J., van Merrienboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296. doi:10.1023/A:1022193728205
Key Terms and Definitions

Backwards Fading: A strategy in which worked examples are gradually replaced with practice problems in a lesson as the learner gains expertise in the subject matter (Clark et al., 2006).
Cognitive Load: Refers to the amount of cognitive resources imposed on working memory.
Cognitive Load Theory (CLT): A theory proposed by John Sweller and his colleagues focused on the limitations of working memory during instruction.
Cognitive Theory of Multimedia Learning: A theory credited to Richard E. Mayer and his colleagues focused on best practices in the use of visual and auditory information in multimedia-based instruction.
Completion Examples: A hybrid approach between worked examples and practice problems where some steps are provided as a worked example and others are presented as practice problems.
Diverse Worked Examples: Varied worked examples and practice problems that help in the application of skills and knowledge to varied scenarios.
Dual-Coding Theory: The theory proposed by Allan Paivio that cognition is composed of verbal and non-verbal subsystems.
Element Interactivity: Used to measure intrinsic load; think of an element as a single unit of information to be processed in working memory.
Extraneous (Irrelevant) Load: One of three types of cognitive load; it is caused in situations where instructional material is created using instructional design that ignores the limitations of working memory and consequently fails to focus working memory resources on schema construction and automation (Sweller, 2005a). This load is irrelevant to the learning goals at hand (Clark et al., 2006), is considered to be under the control of the instructional designer (Pollock et al., 2002), and, consequently, is avoidable if proper instructional methods are applied.
Far Transfer: Transfer of skills and knowledge that is applied under conditions of change; learners must be able to apply the skills and knowledge that they have learned to new situations.
Germane (Effective) Load: One of three types of cognitive load that can prove advantageous to learners in applying what they have learned to new tasks; it is caused by instructional design implementations that aid in meaningful learning and is under the control of the instructional designer.
Intrinsic Load: One of three types of cognitive load that is caused by the natural complexity of the information that must be processed or the amount of element interactivity involved; this load is not under the control of the instructional designer.
Modality Principle: An instructional principle proposing that presenting information in dual modalities spreads total induced load across the visual and auditory channels of working memory, thereby reducing cognitive load (Low & Sweller, 2005; Sweller & Chandler, 1994; Sweller et al., 1998).
Near Transfer: The transfer of skills and knowledge that are typically applied the same way each and every time the skills and knowledge are used.
Pre-Training Principle: An instructional principle proposing that learners learn more deeply when they are aware of the names and behaviors of main concepts (Mayer, 2005a; Mayer & Moreno, 2003).
Procedure Knowledge: “[K]nowledge underpinning performance of a task that is completed more or less the same way each time” (Clark et al., 2006, p. 168).
Process Knowledge: “[A] flow of events that summarize the operations of business, scientific, or mechanical systems” (Clark et al., 2006, p. 163).
Redundancy Principle: An instructional principle proposing that learners learn more deeply when identical information is not presented in more than one format (Mayer, 2005a).
Segmenting Principle: An instructional principle proposing that deeper learning can occur when a lesson is presented in learner-controlled segments rather than continuous units (Mayer, 2005a; Mayer & Moreno, 2003).
Split-Attention Principle: An instructional principle proposing that if instructional material is presented as a figure and text, split-attention can be circumvented by integrating the figure and text (Sweller & Chandler, 1994).
Transfer of Learning: The ability to apply what has been learned to new settings or situations.
Worked (Worked-Out) Examples: A step-by-step example that demonstrates how a task is performed or how to solve a problem (Clark et al., 2006); the worked examples principle proposes that learners learn more deeply when studying worked examples than when studying practice problems (Sweller, 2005a).
Worked Example-Problem Pairs: The strategy of alternating worked examples with similar practice problems (Clark et al., 2006).
Chapter 3
Multimedia Design of Assistive Technology for Those with Learning Disabilities

Boaventura DaCosta, Solers Research Group, USA
Soonhwa Seok, Center for Research on Learning - eLearning Design Lab, University of Kansas, USA
ABSTRACT

This is the final of three chapters serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies and their design for individuals with cognitive disabilities. In this chapter the authors build upon the last two chapters and focus specifically on research investigating the visual and auditory components of working memory. The authors present the cognitive theory of multimedia learning, a learning theory proposing a set of instructional principles grounded in human information processing research that provide best practices in designing efficient multimedia learning environments. Much like the last chapter, the instructional principles presented are grounded in empirically based studies and consolidate nearly twenty years of research to highlight the best ways in which to increase learning. Altogether, the authors stress the common thread found throughout this three-chapter introduction: that technology for learning should be created with an understanding of design principles empirically supported by how the human mind works. They argue that the principles emerging from the cognitive theory of multimedia learning may have potential benefits in the design of assistive technologies for those with learning disabilities.

DOI: 10.4018/978-1-61520-817-3.ch003
Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

Multimedia, Assistive Technology, and Those with Learning Disabilities

Unlike early theories, which viewed short-term memory as a single store capable of performing numerous operations (Sweller, 2005a), working memory is assumed to be composed of multiple stores (Baddeley, 1986, 1998, 2002; Paivio, 1990; Penney, 1989; Sweller, 2005). Baddeley’s model of working memory accounts for numerous operations by handling visual and acoustic information individually with the visuospatial sketchpad and phonological loop subsystems. Making use of partial autonomy for processing visual and auditory information is believed to be a way to address the limitations of working memory. For example, Frick (1984) investigated the idea of separate visual and auditory memory stores, showing how digit-span recall could be increased; Penney (1989), in a review, provided evidence that appropriate use of the visual and auditory stores can maximize working memory capacity. Although researchers seem to disagree on a common nomenclature, using terms such as stores, channels, bisensory, dual-coding, and dual-processing (e.g., Allport, Antonis, & Reynolds, 1972; Baddeley, 1986, 1998; Jones, Macken, & Nicholls, 2004; Mayer & Anderson, 1991; Paivio, 1971; Penney, 1989) to represent the components of working memory, they do seem to agree on the premise that dual-processing is vital to overcoming the limitations of working memory. This dual-processing assertion is best represented in Paivio’s dual-coding theory (Clark & Paivio, 1991; Paivio, 1971, 1990), which proposes that cognition is composed of verbal and non-verbal subsystems. These two subsystems are considered distinct but interrelated. The verbal subsystem favors organized, linguistically based information, stressing verbal associations. Examples include words, sentences, and stories. The
non-verbal subsystem organizes information in nested sets, processed either synchronously or in parallel. Examples include pictures and sounds (Paivio, 1971, 1990; Paivio, Clark, & Lambert, 1988). Multimodal instructional material, which can be coded in both subsystems rather than just one, is more easily recalled. By leveraging both the verbal and non-verbal subsystems, more information can be processed. Studies examining dual-coding have shown that greater performance can be achieved when learners are presented with instructional material that takes advantage of both the verbal and non-verbal subsystems (e.g., Frick, 1984; Gellevij, Van Der Meij, De Jong, & Pieters, 2002; Leahy, Chandler, & Sweller, 2003; Mayer & Moreno, 1998; Moreno & Mayer, 1999). These findings are promising, as they suggest the limited capacity of working memory can be addressed by presenting instruction in a verbal and non-verbal manner (Mayer, 2001, 2005e; Sweller, van Merrienboer, & Paas, 1998). More importantly, the converse has also been shown. The verbal and non-verbal subsystems are believed to draw on the same processing resources. As such, multimodal information that is not interrelated can negatively impact working memory performance (Morey & Cowan, 2004). Thus, the non-verbal presentation of information should be related to the verbal (textual) presentation, as this relationship has a significant impact on working memory and learning. This is the final of three chapters serving as the introduction to this handbook, which addresses the relationship between human cognition and assistive technologies (ATs) and their design for individuals with cognitive disabilities. In this chapter we build upon the last two chapters and focus specifically on research investigating the visual and auditory components of working memory.
We present the cognitive theory of multimedia learning (CTML), a learning theory proposing a set of instructional principles grounded in human information processing research that provide best practices in designing efficient multimedia learning environments. Much like the last chapter, the instructional principles presented are grounded in empirically based studies and consolidate nearly twenty years of research to highlight the best ways in which to increase learning. Altogether, we stress the common thread found throughout this three-chapter introduction: that technology for learning should be created with an understanding of design principles empirically supported by how the human mind works. We argue that the principles emerging from the CTML may have potential benefits in the design of ATs for those with learning disabilities (LDs). Before we delve into the principles composing the CTML, we begin by first defining multimedia learning itself. We then provide a brief explanation of the theory and discuss its theoretical foundation.
BACKGROUND
What is Multimedia Learning?

In the broadest sense, multimedia can be defined as the presentation of both words and pictures to a learner in a variety of ways. Words can be presented in verbal form and can be written or spoken. Either their phonological or semantic aspects can be emphasized. Pictures are presented in pictorial form and can consist of static or dynamic objects, including illustrations, photos, animations, or video. The pairing of presentation mode and sensory modality allows for many conceivable permutations (Mayer, 2005b; Reed, 2006). Meaningful learning involves remembering and understanding instructional material. Whereas remembering is the ability to recognize or reproduce instructional material, understanding is the ability to construct sound mental representations from the material (Mayer, 2005b). Meaningful learning occurs when important aspects of the material are cognitively recognized, when the material is organized into a coherent structure, and then integrated with relevant existing knowledge (Marshall, 1996; Mayer, 2001; Mayer & Moreno, 2003; Wittrock, 1990). Meaningful learning is distinguished by good retention and transfer performance. Retention is reflected in the ability to remember pertinent presented material. Transfer is reflected in the ability to understand what was learned and apply it to new situations (Mayer, 2002, 2005b). Transfer includes being able to solve new problems with knowledge that is not explicitly presented in the material (Mayer, 2005b). Multimedia learning can therefore be described as the building of mental representations from the amalgamation of words and pictures, which promotes meaningful learning (Mayer, 2001, 2005b). As we will see later in this chapter, many CTML studies measure multimedia learning in terms of retention and transfer through post-tests.

WHAT IS THE COGNITIVE THEORY OF MULTIMEDIA LEARNING?
The CTML has shown steady growth since its earliest studies in the 1990s exploring the plausibility of multimedia learning. The premise that learners learn more deeply from words and pictures than from words only was one of the first predictions made by Richard E. Mayer and his colleagues based at the time on his generative theory. This would later become known as the multimedia principle, serving as the founding principle behind the CTML. Mayer and his colleagues continued to explore numerous effects while developing recommendations and guidelines throughout the remainder of the twentieth century. These effects would later be described as principles and encompass over 80 individual experiments (Veronikas & Shaughnessy, 2005). In recent years, research in the CTML has significantly grown. Although a substantial amount of research can be found exploring advanced effects and posturing new principles for how to
design multimedia learning, an emerging trend points to the study of existing principles in various content areas. One such example is the study of multimedia learning in the context of advanced computer-based environments (Mayer, 2005b). For example, Mayer and his colleagues have of late been examining the use of animated pedagogical agents (Moreno, 2005). Mayer hypothesizes that the basic principles can be applied in the use of these agents (Veronikas & Shaughnessy, 2005). Other such examples include the examination of multimedia learning in the context of virtual reality (Cobb & Fraser, 2005) and games, simulations, and microworlds (Rieber, 2005). These contexts have already fueled a number of studies (e.g., Atkinson, Mayer, & Merrill, 2005, Experiments 1 and 2; Dunsworth & Atkinson, 2007; Mayer, Dow, & Mayer, 2003, Experiments 1, 2a and 2b, 3, and 4; Merrill, 2003; Moreno & Flowerday, 2006; Moreno & Mayer, 2004; 2005, Experiments 1, 2, and 3; Moreno, Mayer, Spires, & Lester, 2001, Experiments 1, 2, 3, 4, and 5).
Dual-Channels, Limited Capacity, and Active Processing Assumptions

Three assumptions provide the theoretical underpinnings for the CTML. The first of these assumptions, dual-channels, posits that the human information processing system is composed of separate processing channels for visually and auditorily represented material. Mayer (2001, 2005f) has conceptualized these dual-channels in terms of presentation mode and sensory modality. The presentation mode addresses verbal (e.g., spoken or written words) and pictorial (e.g., illustrations, photos, animations, or video) representations of presented material. This notion best resembles Paivio’s dual-coding theory (Clark & Paivio, 1991; Paivio, 1971, 1990) and borrows from the distinctions between the verbal and non-verbal subsystems (Mayer, 2001, 2005a). Sensory modality, on the other hand, deals with the sense through which the presented material is processed.
For example, learners initially process presented material through their eyes or ears. One channel processes visually represented material, whereas the other channel processes auditorily represented material. This notion is consistent with Baddeley’s (1986, 1998, 2002) model of working memory and borrows from the distinctions between the visuospatial sketchpad and the phonological loop (Baddeley, 2002; Mayer, 2001, 2005a). The second assumption, limited capacity, has already been discussed to some degree in the last two chapters. The assumption posits that working memory is limited in how much information can be processed within each channel. Unprocessed information that cannot be handled immediately decays over time. This notion is most consistent with Baddeley’s model of working memory as well as cognitive load theory (CLT) (Mayer, 2001, 2005a). The last assumption, active processing, posits that humans must actively engage in cognitive processing for learning to occur. Mayer has identified three processes required for this to take place. First, relevant incoming information must be cognitively recognized and selected. In other words, the learner must be actively paying attention for the relevant information to be brought into working memory. Second, the incoming information must be organized into a coherent structure. This involves constructing a logical mental representation (i.e., model) of the elements composing the selected information within working memory. Finally, the organized information must be integrated with relevant existing knowledge found in long-term memory (LTM) (Mayer, 1996, 2001; Mayer & Moreno, 2003; Wittrock, 1990). These three assumptions can be found in Mayer’s cognitive model of multimedia learning (see Mayer, 2001). Mayer (2001) provides a rather straightforward example of his cognitive model. Multimedia (presented in words and pictures) enters sensory memory through the eyes and ears.
This permits the information to be held as visual and auditory
images for a brief period until the relevant incoming information is selected and brought into working memory. Once in working memory, the incoming information is stored as raw material based on the visual and auditory sensory modalities. This information is then organized into coherent mental representations as verbal and pictorial models. Finally, the organized verbal and pictorial representations are integrated with each other and with relevant existing knowledge from LTM. This newly integrated knowledge is persistently stored in LTM, resulting in multimedia learning. As you may have already realized, this process is very similar to that discussed in the first chapter of this introduction, in which we presented the modal model of memory. We turn our attention next to the instructional principles.
BASIC AND ADVANCED PRINCIPLES

Mayer (2005b) has logically divided the effects emerging from the research into two groups of principles, basic and advanced. The basic principles make up the cornerstone of the CTML. In fact, some of the basic principles serve as the theoretical foundation for other principles. For example, the multimedia principle is the basis for all the CTML principles. It is embodied in 11 experiments across six studies (e.g., Mayer, 1989, Experiments 1 and 2; Mayer & Anderson, 1991, Experiments 2a and 2b; 1992, Experiments 1 and 2; Mayer & Gallini, 1990, Experiments 1, 2, and 3; Moreno & Mayer, 1999, Experiment 1; 2002, Experiment 1). It is one of the best-documented principles in the CTML, along with the modality and the contiguity (spatial and temporal) principles. Other basic principles include the coherence principle, pre-training principle, personalization, voice, and image principles, redundancy principle, segmentation principle, and the signaling principle (see Mayer, 2005b; Mayer & Moreno, 2003, for an in-depth review). The modality, pre-training, and segmenting principles can be used effectively to
manage the extraneous processing of multimedia instructional material (we presented these in the last chapter in our discussion of cognitive load theory), whereas the coherence, contiguity, redundancy, and signaling principles can be effectively used in reducing it (Mayer, 2005f). The advanced principles, conversely, mark some of the most current research being conducted in multimedia learning. These principles, as expected, are the weakest in terms of empirically-based research. These principles include the animation and interactivity principles, collaboration principle, guided-discovery principle, navigation principles, prior knowledge principle, self-explanation principle, site map principle, worked-out example principle, and the cognitive aging principle (see Mayer, 2005b, for an in-depth review). In the following sections, we briefly discuss each of the principles. We present the basic principles in terms of managing and reducing extraneous processing of multimedia-based instructional material. We pay particular attention to the coherence, contiguity, and signaling principles, as these have not yet been discussed in this three-part introduction, whereas the modality, pre-training, redundancy, and segmenting principles have. Finally, we end by briefly introducing each of the advanced principles.
Reducing the Extraneous Processing of Multimedia

According to Mayer (2005d), there are five ways in which situations that cause extraneous load on working memory can be handled. First, irrelevant, extraneous instructional materials should be eliminated whenever possible. This can be accomplished by applying the coherence principle. Second, signals and cues can be inserted into the instructional material to emphasize the importance of certain instruction. This can be accomplished by applying the signaling principle. Third, related instructional content can be placed next to one another. In the case of multimedia, text would
be placed next to graphics or as part of animations. This can be accomplished by applying the spatial contiguity principle. Fourth, identical information should not be presented in more than one format; this can be accomplished by applying the redundancy principle. Finally, the temporal contiguity principle, when applied appropriately, can be leveraged to avoid the holding of crucial information in working memory for long periods of time. Overall, these principles point out that in the design of multimedia, less is more (Mayer, 2005d). We describe each of these principles next in greater detail.
Coherence Principle

The coherence principle is a basic instructional principle proposing that learners learn more deeply when extraneous information is excluded (Mayer, 2005d). If used properly, the coherence principle can reduce extraneous cognitive load on working memory. The principle is similar to the redundancy effect, in which learning may be hindered if identical information is presented in more than one format (Mayer, 2005b). In the case of the coherence principle, learning may be hindered if irrelevant information is included in the instructional material. This may include words and pictures, but may also include animation and audio. A common example is the presentation of video prior to instruction. Mayer (2005d) cites the example of showing a video of lightning storms during an instructional animation that depicts the formation of lightning. Such extraneous information can serve as a distraction, hindering learning, and should be removed. There have been a number of CTML studies focused on the coherence principle (e.g., Harp & Mayer, 1997, Experiment 1; 1998, Experiments 1, 2, 3, and 4; Mayer, Bove, Bryman, Mars, & Tapangco, 1996, Experiments 1, 2, and 3; Mayer, Heiser, & Lonn, 2001, Experiment 3; Moreno & Mayer, 2000, Experiments 1 and 2). In the studies conducted by Harp and Mayer (1997, 1998), two groups of students were asked to read a paper-based
lesson on lightning formation. Whereas one lesson was concise, devoid of extraneous information, the other was embellished. Both groups were given a transfer test after reading the lesson. The group given the concise lesson outperformed the group given the embellished one. A similar study was conducted a year earlier by Mayer et al. (1996), resulting in the same finding: the group who read the concise, paper-based lesson outperformed the group who had been given the embellished one. In the study by Moreno and Mayer (2000), two groups were exposed to multimedia presentations depicting lightning formation and a car’s brake system. Each presentation was delivered as animation and narration. One presentation included background music and environmental sounds, whereas the other did not. The group who received the presentation without the extraneous sound outperformed the group whose presentation included it. Finally, a similar study was conducted by Mayer et al. (2001), who used extraneous video clips of lightning formation. The group who was exposed to the extraneous information performed more poorly on a transfer test than the group who had not.
Redundancy Principle

The redundancy principle was discussed in the last chapter, so we do not duplicate the effort here. This instructional principle proposes that learners learn more deeply when identical information is not presented in more than one format (Mayer, 2005b). While this can be a difficult concept to accept, because those who develop instruction often want to include as much information as possible, research on the matter suggests that the use of redundant information in instructional material can interfere with learning (Sweller, 2005b). In a nutshell, redundant information places unnecessary load on working memory and should be eliminated whenever possible.
Signaling Principle

The signaling principle proposes that learners learn more deeply when cues are added to highlight the organization of essential instructional material (Mayer, 2005d). The recommendations behind the signaling principle can help learners focus attention on instruction important in meeting the objectives of the lesson. Examples of signals that can be incorporated into instructional material include highlighted, bolded, underlined, or italicized text; circles or arrows pointing to specific text; and paragraph headings (Clark, Nguyen, & Sweller, 2006). By inserting cues into the instructional material, learner attention can be directed away from content that may be extraneous. There have been a number of CTML studies focused on the signaling principle (e.g., Harp & Mayer, 1998, Experiment 3a; Mautone & Mayer, 2001, Experiments 3a and 3b). In the study by Harp and Mayer (1998), two groups were offered paper-based lessons on the formation of lightning. One of the lessons used an organizational sentence listing the main steps of lightning formation, whereas the other did not. The group offered the paper-based lesson that included the signaling strategy performed better on a transfer test than the group that was not. A similar finding was reported by Mautone and Mayer (2001), who incorporated signaling techniques into multimedia presentations delivered as animation and narration. The signaling group outperformed the non-signaling group.
Spatial Contiguity Principle

The spatial contiguity principle proposes that learners learn more deeply when related words and pictures are presented near one another rather than far apart (Mayer, 2005d). The goal of the spatial contiguity principle is to create instructional material where all pertinent information is integrated.
We learned in the last chapter that a split-attention effect is created when the learner must expend cognitive resources accessing instructional content that is physically placed in different locations. This case is no different. Learners must spend cognitive resources to search for words connected to pictures or vice versa. To avoid extraneous load when presenting words and pictures, ensure they are integrated with one another. As the name implies, the spatial contiguity principle is concerned with space. There have been a number of CTML studies focused on the spatial contiguity principle (e.g., Chandler & Sweller, 1991, Experiment 1; Mayer, 1989, Experiment 2; Mayer, Steinhoff, Bower, & Mars, 1995, Experiments 1, 2, and 3; Moreno & Mayer, 1999, Experiment 1; Sweller, Chandler, Tierney, & Cooper, 1990, Experiment 1; Tindall-Ford, Chandler, & Sweller, 1997, Experiment 1). In the studies conducted by Chandler and Sweller (1991), Mayer (1989), Mayer et al. (1995), Sweller et al. (1990), and Tindall-Ford et al. (1997), students were exposed to paper-based lessons across four different content areas. In the Chandler and Sweller (1991) and Tindall-Ford et al. (1997) studies, students were exposed to a lesson on topics from electrical engineering; in the Mayer (1989) study, students were exposed to a lesson on a car’s brake system; in the Mayer et al. (1995) study, students were exposed to a lesson on the formation of lightning; whereas in the Sweller et al. (1990) study, students were exposed to worked examples showing how to solve geometry problems. Across all five studies, students were divided into two groups: those who received paper-based lessons in which the text was placed next to the corresponding graphic, and those in which text was placed outside of the graphic. For example, in the case of the Chandler and Sweller (1991) study, the text was placed after the geometric diagram.
Across all the experiments, those who received the integrated lessons outperformed those who had received the separated lessons. Similar findings
were also found by Moreno and Mayer (1999), who examined the contiguity principle with a multimedia presentation delivered as animation and on-screen text. The group who received the animation with integrated text performed better on a transfer test than the group who received the animation with separated text.
Temporal Contiguity Principle

The temporal contiguity principle is an instructional principle proposing that learners learn more deeply when related animation and narration are presented concurrently rather than consecutively (Mayer, 2005d). According to the CTML, simultaneous presentation of words and pictures increases the odds that the information will be stored in the visual and auditory components of working memory. This is unlike the successive presentation of information, in which the learner must hold the auditory information in working memory until the animation is presented. Visual and auditory information presented at the same time allows learners to build mental connections between the materials, whereas the same information presented successively makes the formation of mental connections much more difficult. While the spatial contiguity principle is focused on the proximity of words and pictures, the temporal contiguity principle is focused on time. There have been a number of CTML studies focused on the temporal contiguity principle (e.g., Mayer & Anderson, 1991, Experiments 1 and 2; 1992, Experiments 1 and 2; Mayer, Moreno, Boire, & Vagge, 1999, Experiments 1 and 2; Mayer & Sims, 1994). The Mayer and Anderson (1991, 1992), Mayer et al. (1999), and Mayer and Sims (1994) studies all had similar findings: on transfer tests, those exposed to animations with synchronized narration outperformed those who received animations with narration presented in sequence.
Managing the Extraneous Processing of Multimedia

The modality, pre-training, and segmenting principles can be used effectively to manage the extraneous processing of multimedia instructional material (Mayer, 2005f). These principles were presented in detail in the last chapter, in the discussion on the avoidance of extraneous, management of intrinsic, and promotion of germane load, and so we only briefly define them here. Furthermore, to complete our discussion of the basic principles, we also briefly define the multimedia and the personalization, voice, and image principles.
Modality Principle

The modality principle is one of the most important instructional principles to have emerged from the CTML. It proposes that presenting information in dual modalities spreads total induced load across the visual and auditory channels of working memory, thereby reducing cognitive load (Low & Sweller, 2005; Sweller & Chandler, 1994; Sweller, van Merrienboer, & Paas, 1998). For an in-depth discussion of the modality principle, please see Low and Sweller (2005), Clark, Nguyen, and Sweller (2006), and Mayer (2005c).
Pre-Training Principle

The pre-training principle proposes that learners learn more deeply when made aware of names and behaviors of main concepts prior to presenting the main lesson (Mayer, 2005b; Mayer & Moreno, 2003). For an in-depth review of the principle, please see Mayer (2005c).
Segmentation Principle

The segmentation principle proposes that deeper learning occurs when a lesson is presented in
learner-controlled segments rather than continuous units (Mayer, 2005c; Mayer & Moreno, 2003). For an in-depth review of the principle, please see Mayer (2005c).
Multimedia Principle

The multimedia principle is the cornerstone principle on which the CTML is founded. The principle proposes that learners learn more deeply from words and pictures than from words only. For an in-depth review of the principle, please see Fletcher and Tobias (2005).
Personalization, Voice, and Image Principles

The personalization, voice, and image principles provide recommendations based on social cues. According to Mayer (2005e), the personalization principle proposes that learners learn more deeply when words are presented in a conversational style as opposed to a formal one; the voice principle proposes that learners learn more deeply when words are spoken in a human voice free of accent, as opposed to an accented or machine voice; and the image principle proposes that learners learn more deeply when a speaker’s image can be seen by the learner on screen. For an in-depth review of the principles, please see Mayer (2005e).
Advanced Principles

The advanced principles, as we mentioned earlier, mark some of the most current research being conducted in multimedia learning. These principles, as expected, are the weakest in terms of empirically-based research. We briefly define them here.
Animation and Interactivity Principles

The animation and interactivity principles provide guidance on the design of multimedia that incorporates sophisticated animated graphics. The principles focus on the complexities of learner interactivity during learning. For an in-depth discussion of the principles, please see Betrancourt (2005).
Cognitive Aging Principle

The cognitive aging principle is focused on helping older learners by effectively managing working memory resources (Mayer, 2005b). Subscribing to the idea that working memory capability declines with age (Paas, Van Gerven, & Tabbers, 2005; Van Gerven, Paas, Van Merrienboer, & Schmidt, 2006), the principle suggests that, for older learners, some instructional materials presented in multiple modalities may be more efficient than instructional material presented in a single modality. For an in-depth review of the principle, please see Paas et al. (2005) and DaCosta (2009).
Collaboration Principle

In recent years, online collaboration has taken root. The collaboration principle proposes a variety of recommendations that support online multimedia-based collaborative learning environments (Jonassen, Lee, Yang, & Laffey, 2005). For an in-depth review of the principle, please see Jonassen, Lee, Yang, and Laffey (2005).
Guided-Discovery Principle

The guided-discovery principle proposes that learners learn more deeply when using the strategy of directing the learner toward discovery (Jong, 2005). For an in-depth review of the principle, please see Jong (2005).
Navigation Principles
The navigation principles provide recommendations on the use of navigational aids. These aids include a broad category of visual and auditory devices ranging from local cues (e.g., headings and subheadings) to global content (e.g., tables and outlines) (Rouet & Potelle, 2005). For an in-depth review of the principles, please see Rouet and Potelle (2005).
Worked-out Example Principle
We discussed the worked-out example principle in the last chapter. A worked-out example is a step-by-step example that demonstrates how a task is performed or how to solve a problem (Clark et al., 2006). The principle proposes that learners learn more deeply when studying worked examples than when studying traditional practice problems (Sweller, 2005a). For an in-depth review of the principle, please see Renkl (2005) and Clark et al. (2006).
Prior Knowledge Principle
The prior knowledge principle is focused on the effects of learner prior knowledge on the CTML principles (Kalyuga, 2005). The principle has emerged from consistent research findings suggesting that instructional principles may not benefit, or may even adversely impact, learners with high prior knowledge of the content to be learned. For an in-depth review of the principle, please see Kalyuga (2005).
Self-Explanation Principle
The self-explanation principle proposes that learners learn more deeply when engaged in self-explanation, a strategy which aids attention and promotes meaningful learning through knowledge construction and integration activities (Roy & Chi, 2005). For an in-depth review of the principle, please see Roy and Chi (2005).
Site Map Principle
The site map principle proposes that learners learn more deeply when appropriately structured site maps are used. It is suggested that these maps aid learning because they provide learners with an overarching view of the information to be learned (Shapiro, 2005). For an in-depth review of the principle, please see Shapiro (2005).
Challenges in Applying the Principles
Although these principles and recommendations are grounded in experiments spanning many studies over two decades, care should be exercised when applying them. As with all empirically-based research, methodological limitations exist. Although the limitations could be described in the context in which they were studied (e.g., multimedia learning as it applies to reading or mathematics), we instead discuss the most commonly cited limitations, independent of their application. These fall into four major categories: (a) setting and content, (b) sampling, (c) time, and (d) individual differences. The contrast between laboratory and real-world settings has long been a methodological concern. Early experiments were performed in controlled, laboratory-like environments, suggesting that the principles need further examination in real-world settings, such as the classroom. Content has also been an issue, as early treatments typically dealt with cause-and-effect subject matter. This has brought about the need to test the principles in authentic learning environments using real-world content. The need for real-world testing and the exploration of advanced content have been explicitly noted in a number of studies (e.g., Mautone & Mayer, 2001; Mayer & Chandler, 2001; Mayer, Heiser,
& Lonn, 2001; Mayer & Moreno, 1998; Mayer & Sims, 1994; Moreno et al., 2001). Sampling has also been a voiced methodological concern (e.g., Dunsworth & Atkinson, 2007; Mayer et al., 2001; Mayer & Moreno, 1998). Early experiments typically used college students from the psychology subject pool at the University of California, Santa Barbara. Consequently, the principles have been predominantly tested with younger learners, 18 and 19 years of age (e.g., Mautone & Mayer, 2001; Mayer & Chandler, 2001; Mayer, Fennell, Farmer, & Campbell, 2004; Mayer, Hegarty, Mayer, & Campbell, 2005; Mayer & Jackson, 2005; Mayer, Johnson, Shaw, & Sandhu, 2006; Mayer & Massa, 2003; Mayer, Sobko, & Mautone, 2003; Moreno & Mayer, 2004, 2005). Furthermore, other concerns have stemmed from sample size. These limitations have established the need to test the principles with larger samples across different demographics, including age, gender, and language. The implications of time for multimedia learning have also been noted in studies (e.g., Craig, Gholson, & Driscoll, 2002; Mayer & Chandler, 2001; Mayer & Sims, 1994). Early experiments typically administered measures of multimedia learning immediately after exposure to multimedia presentations. In other cases, the presentations themselves were relatively short. As a result, the depth of learning measured in these studies has been a concern, suggesting the need to test the principles with respect to time. For example, would the principles produce the same depth of learning if delayed testing were used or if learners were exposed to multimedia presentations for longer periods? Finally, the matter of individual differences has been commonly identified as a limitation (e.g., Craig et al., 2002; Mayer & Anderson, 1992; Mayer et al., 2001; Mayer & Sims, 1994; Moreno et al., 2001). Many experiments have included procedures to identify and exclude learners who demonstrate a predetermined level of prior knowledge.
This exclusion is based on studies by Mayer and Gallini (1990) and, subsequently, Mayer and Sims (1994), which concluded that learners with low prior knowledge showed greater improvement from multimedia treatments than those with high prior knowledge. This is in line with our discussion in the last chapter. Many researchers, however, maintain that the CTML principles still need to be examined with high prior knowledge learners.
Conclusion
Final Thoughts
In this three-chapter introduction we have presented a number of theories, with each chapter building on the last. In the first chapter, we presented the most widely held views of human information processing, including the modal model of memory. Along the way we learned that working memory is both a blessing and a curse: its limitations make it a bottleneck, yet it is also the means by which we learn. This is a serious problem because the acquisition of new knowledge relies so heavily on the processing and storage capabilities of working memory (Low & Sweller, 2005; Sweller & Chandler, 1994). In the second chapter, we presented cognitive load theory, a learning theory proposing a set of instructional principles, rooted in human information processing research, that can be used to create sound instructional materials that take into consideration the limitations of working memory. We presented a number of recommendations that can help in avoiding extraneous, managing intrinsic, and promoting germane load, the three types of cognitive load that learners must deal with during the learning process. Finally, in this chapter, we presented the CTML, another learning theory grounded in human information processing research, one focused specifically on the design of multimodal instructional materials that take advantage of the visual and auditory components of working memory.
Our goal has been to separate conjecture and speculation from empirically-based study and to consolidate more than twenty-five years of research highlighting the best ways to increase learning. Throughout, we have stressed that technology for learning should be created with an understanding of design principles empirically grounded in how the human mind works. Although considerable research is still needed on the instructional principles emerging from both CLT and the CTML with regard to ATs, we argue that the principles presented in this three-chapter introduction show promise in helping those with LDs because of their focus on how the human mind works, specifically, on cognitive load. We invite instructional designers, educators, practitioners, and others involved in the design of AT to learn more about CLT and the CTML and how the instructional principles they offer can be used as learning strategies for those with learning and, potentially, other cognitive disabilities.
References
Betrancourt, M. (2005). The animation and interactivity principles of multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 287–296). New York: Cambridge University Press.
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. doi:10.1207/s1532690xci0804_2
Clark, R., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco, CA: Pfeiffer.
Cobb, S., & Fraser, D. S. (2005). Multimedia learning in virtual reality. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning. New York: Cambridge University Press.
DaCosta, B. (2009). The effect of cognitive aging on multimedia learning: An investigation of the cognitive aging principle. Germany: VDM Verlag Dr. Muller.
Fletcher, J. D., & Tobias, S. (2005). The multimedia principle. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 117–133). New York: Cambridge University Press.
Frick, R. W. (1984). Using both an auditory and a visual short-term store to increase digit span. Memory & Cognition, 12(5), 507–514.
Harp, S. F., & Mayer, R. E. (1997). The role of interest in learning from scientific text and illustrations: On the distinction between emotional interest and cognitive interest. Journal of Educational Psychology, 89(1), 92–102. doi:10.1037/0022-0663.89.1.92
Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: A theory of cognitive interest in science learning. Journal of Educational Psychology, 90(3), 414–434. doi:10.1037/0022-0663.90.3.414
Jonassen, D. H., Lee, C. B., Yang, C.-C., & Laffey, J. (2005). The collaboration principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 247–270). New York: Cambridge University Press.
Jong, T. d. (2005). The guided discovery principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 215–228). New York: Cambridge University Press.
Kalyuga, S. (2005). Prior knowledge principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 325–338). New York: Cambridge University Press.
Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 147–158). New York: Cambridge University Press.
Mautone, P. D., & Mayer, R. E. (2001). Signaling as a cognitive guide in multimedia learning. Journal of Educational Psychology, 93(2), 377–389. doi:10.1037/0022-0663.93.2.377
Mayer, R. E. (1989). Systematic thinking fostered by illustrations in scientific text. Journal of Educational Psychology, 81(2), 240–246. doi:10.1037/0022-0663.81.2.240
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
Mayer, R. E. (2002). Rote versus meaningful learning. Theory into Practice, 41(4), 226–232. doi:10.1207/s15430421tip4104_4
Mayer, R. E. (2005a). Cognitive theory of multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 31–48). New York: Cambridge University Press.
Mayer, R. E. (2005b). Introduction to multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 1–16). New York: Cambridge University Press.
Mayer, R. E. (2005c). Principles for managing essential processing in multimedia learning: Segmenting, pretraining, and modality principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 169–182). New York: Cambridge University Press.
Mayer, R. E. (2005d). Principles for reducing extraneous processing in multimedia learning: Coherence, signaling, redundancy, spatial contiguity, and temporal contiguity principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 183–200). New York: Cambridge University Press.
Mayer, R. E. (2005e). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 201–212). New York: Cambridge University Press.
Mayer, R. E. (Ed.). (2005f). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.
Mayer, R. E., & Anderson, R. B. (1991). Animations need narrations: An experimental test of a dual-coding hypothesis. Journal of Educational Psychology, 83(4), 484–490. doi:10.1037/0022-0663.83.4.484
Mayer, R. E., & Anderson, R. B. (1992). The instructive animation: Helping students build connections between words and pictures in multimedia learning. Journal of Educational Psychology, 84(4), 444–452. doi:10.1037/0022-0663.84.4.444
Mayer, R. E., Bove, W., Bryman, A., Mars, R., & Tapangco, L. (1996). When less is more: Meaningful learning from visual and verbal summaries of science textbook lessons. Journal of Educational Psychology, 88(1), 64–73. doi:10.1037/0022-0663.88.1.64
Mayer, R. E., & Gallini, J. K. (1990). When is an illustration worth ten thousand words? Journal of Educational Psychology, 82(4), 715–726. doi:10.1037/0022-0663.82.4.715
Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning: When presenting more material results in less understanding. Journal of Educational Psychology, 93(1), 187–198. doi:10.1037/0022-0663.93.1.187
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52. doi:10.1207/S15326985EP3801_6
Mayer, R. E., Moreno, R., Boire, M., & Vagge, S. (1999). Maximizing constructivist learning from multimedia communications by minimizing cognitive load. Journal of Educational Psychology, 91(4), 638–643. doi:10.1037/0022-0663.91.4.638
Mayer, R. E., & Sims, V. K. (1994). For whom is a picture worth a thousand words? Extensions of a dual-coding theory of multimedia learning. Journal of Educational Psychology, 86(3), 389–401. doi:10.1037/0022-0663.86.3.389
Mayer, R. E., Steinhoff, K., Bower, G., & Mars, R. (1995). A generative theory of textbook design: Using annotated illustrations to foster meaningful learning of science text. Educational Technology Research and Development, 43(1), 31–43. doi:10.1007/BF02300480
Moreno, R. (2005). Multimedia learning with animated pedagogical agents. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 507–523). New York: Cambridge University Press.
Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91(2), 358–368. doi:10.1037/0022-0663.91.2.358
Moreno, R., & Mayer, R. E. (2000). A coherence effect in multimedia learning: The case for minimizing irrelevant sounds in the design of multimedia instructional messages. Journal of Educational Psychology, 92(1), 117–125. doi:10.1037/0022-0663.92.1.117
Morey, C. C., & Cowan, N. (2004). When visual and verbal memories compete: Evidence of cross-domain limits in working memory. Psychonomic Bulletin & Review, 11(2), 296–301.
Paas, F., Van Gerven, P. W. M., & Tabbers, H. K. (2005). The cognitive aging principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 339–354). New York: Cambridge University Press.
Penney, C. G. (1989). Modality effects and the structure of short-term verbal memory. Memory & Cognition, 17, 398–422.
Reed, S. K. (2006). Cognitive architectures for multimedia learning. Educational Psychologist, 41(2), 87–98. doi:10.1207/s15326985ep4102_2
Renkl, A. (2005). The worked-out examples principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 229–245). New York: Cambridge University Press.
Rieber, L. P. (2005). Multimedia learning in games, simulations, and microworlds. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 549–567). New York: Cambridge University Press.
Rouet, J.-F., & Potelle, H. (2005). Navigational principles in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 297–312). New York: Cambridge University Press.
Roy, M., & Chi, M. T. H. (2005). The self-explanation principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 271–286). New York: Cambridge University Press.
Shapiro, A. M. (2005). The site map principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 313–324). New York: Cambridge University Press.
Sweller, J. (2005a). Implications of cognitive load theory for multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 19–30). New York: Cambridge University Press.
Sweller, J. (2005b). The redundancy principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 159–167). New York: Cambridge University Press.
Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12(3), 185–233. doi:10.1207/s1532690xci1203_1
Sweller, J., Chandler, P., Tierney, P., & Cooper, M. (1990). Cognitive load as a factor in the structuring of technical material. Journal of Experimental Psychology: General, 119(2), 176–192. doi:10.1037/0096-3445.119.2.176
Tindall-Ford, S., Chandler, P., & Sweller, J. (1997). When two sensory modes are better than one. Journal of Experimental Psychology: Applied, 3(4), 257–287. doi:10.1037/1076-898X.3.4.257
Van Gerven, P. W. M., Paas, F., Van Merrienboer, J. J. G., & Schmidt, H. G. (2006). Modality and variability as factors in training the elderly. Applied Cognitive Psychology, 20, 311–320. doi:10.1002/acp.1247
Veronikas, S., & Shaughnessy, M. F. (2005). An interview with Richard Mayer. Educational Psychology Review, 17(2), 179–189. doi:10.1007/s10648-005-3952-z
Additional Reading
Betrancourt, M. (2005). The animation and interactivity principles of multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 287–296). New York: Cambridge University Press.
Jonassen, D. H., Lee, C. B., Yang, C.-C., & Laffey, J. (2005). The collaboration principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 247–270). New York: Cambridge University Press.
Jong, T. d. (2005). The guided discovery principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 215–228). New York: Cambridge University Press.
Kalyuga, S. (2005). Prior knowledge principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 325–338). New York: Cambridge University Press.
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
Mayer, R. E. (2005a). Cognitive theory of multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 31–48). New York: Cambridge University Press.
Mayer, R. E. (2005b). Introduction to multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 1–16). New York: Cambridge University Press.
Mayer, R. E. (2005c). Principles for reducing extraneous processing in multimedia learning: Coherence, signaling, redundancy, spatial contiguity, and temporal contiguity principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 183–200). New York: Cambridge University Press.
Mayer, R. E. (2005d). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 201–212). New York: Cambridge University Press.
Mayer, R. E. (Ed.). (2005e). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.
Moreno, R. (2005). Multimedia learning with animated pedagogical agents. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 507–523). New York: Cambridge University Press.
Paas, F., Van Gerven, P. W. M., & Tabbers, H. K. (2005). The cognitive aging principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 339–354). New York: Cambridge University Press.
Rieber, L. P. (2005). Multimedia learning in games, simulations, and microworlds. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 549–567). New York: Cambridge University Press.
Rouet, J.-F., & Potelle, H. (2005). Navigational principles in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 297–312). New York: Cambridge University Press.
Roy, M., & Chi, M. T. H. (2005). The self-explanation principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 271–286). New York: Cambridge University Press.
Shapiro, A. M. (2005). The site map principle in multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (pp. 313–324). New York: Cambridge University Press.
Key Terms and Definitions
Active Processing Assumption: One of the three theoretical assumptions underpinning the cognitive theory of multimedia learning; proposes that humans must actively engage in cognitive processing for learning to occur.
Animation and Interactivity Principles: A set of instructional principles providing guidance on the design of multimedia that incorporate sophisticated animated graphics while at the same time taking into account learner interactivity (Betrancourt, 2005).
Cognitive Aging Principle: An instructional principle focused on helping older learners by effectively managing working memory resources (Mayer, 2005b). Subscribing to the idea that working memory capability declines with age (Paas et al., 2005; Van Gerven et al., 2006), the principle suggests that some instructional material presented in multiple modalities may be more efficient than instructional material presented in a single modality.
Cognitive Load Theory: A theory proposed by John Sweller and his colleagues focused on the limitations of working memory during instruction.
Cognitive Theory of Multimedia Learning (CTML): A theory credited to Richard E. Mayer and his colleagues focused on best practices in the use of visual and auditory information in multimedia-based instruction.
Coherence Principle: An instructional principle proposing that learners learn more deeply when extraneous information is excluded (Mayer, 2005d).
Collaboration Principle: An instructional principle proposing a variety of recommendations that support collaborative learning (Jonassen et al., 2005).
Dual-channels Assumption: One of the three theoretical assumptions underpinning the cognitive theory of multimedia learning; proposes that the human information processing system is composed of separate processing channels for visually and auditorily represented material.
Dual-coding Theory: A theory proposed by Allan Paivio, which proposes that cognition is composed of verbal and non-verbal subsystems.
Guided-discovery Principle: An instructional principle proposing that learners learn more deeply when instruction directs the learner toward discovery (Jong, 2005).
Limited Capacity Assumption: One of the three theoretical assumptions underpinning the cognitive theory of multimedia learning; proposes that working memory is limited in how
much information can be processed within each channel.
Meaningful Learning: The remembering and deep understanding of instructional material; occurs when important aspects of the material are cognitively recognized, when the material is organized into a coherent structure, and then integrated with relevant existing knowledge (Marshall, 1996; Mayer, 2001; Mayer & Moreno, 2003; Wittrock, 1990).
Modality Principle: An instructional principle proposing that presenting information in dual modalities spreads the total induced load across the visual and auditory channels of working memory, thereby reducing cognitive load (Low & Sweller, 2005; Sweller & Chandler, 1994; Sweller et al., 1998).
Multimedia: Broadly speaking, the presentation of both words and pictures to a learner in a variety of ways.
Multimedia Learning: The building of mental representations from the amalgamation of words and pictures, which induces the promotion of meaningful learning (Mayer, 2001, 2005b).
Multimedia Principle: An instructional principle proposing that learners learn more deeply from words and pictures than from words only.
Navigation Principles: A variety of instructional principles providing recommendations on the use of navigational aids, which include a broad category of visual and auditory devices ranging from local cues (e.g., headings and subheadings) to global content (e.g., tables and outlines) (Rouet & Potelle, 2005).
Personalization, Voice, and Image Principles: Three instructional principles providing recommendations based on social cues. According to Mayer (2005e), the personalization principle proposes that learners learn more deeply when words are presented in a conversational style as opposed to a formal style; the voice principle proposes that learners learn more deeply when words are spoken in a human voice free of accent, as opposed to an accented voice or a machine voice; and the
image principle proposes that learners learn more deeply when a speaker's image can be seen on screen by the learner.
Pre-Training Principle: An instructional principle proposing that learners learn more deeply when they are made aware of the names and behaviors of main concepts in the lesson before they are presented with the main lesson itself (Mayer, 2005a; Mayer & Moreno, 2003).
Prior Knowledge Principle: An instructional principle focused on the effects of learners' prior knowledge on the cognitive theory of multimedia learning principles (Kalyuga, 2005). The principle stems from consistent research findings suggesting that instructional principles may not benefit, or may even adversely impact, learners with high prior knowledge of the content to be learned.
Redundancy Principle: An instructional principle proposing that learners learn more deeply when identical information is not presented in more than one format (Mayer, 2005a).
Segmentation Principle: An instructional principle proposing that learners learn more deeply when a lesson is presented in learner-controlled segments rather than as a continuous unit (Mayer, 2005a; Mayer & Moreno, 2003).
Self-Explanation Principle: An instructional principle proposing that learners learn more deeply when engaged in self-explanation, a strategy which aids attention and promotes meaningful learning through knowledge construction and integration activities (Roy & Chi, 2005).
Signaling Principle: An instructional principle proposing that learners learn more deeply when cues are added to highlight the organization of the essential material (Mayer, 2005d).
Site Map Principle: An instructional principle proposing that learners learn more deeply when appropriately structured site maps are used because these maps provide learners with an overarching view of the information to be learned (Shapiro, 2005).
Spatial Contiguity Principle: An instructional principle proposing that learners learn more deeply
when related words and pictures are presented near one another rather than far apart (Mayer, 2005d).
Temporal Contiguity Principle: An instructional principle proposing that learners learn more deeply when related animation and narration are presented concurrently rather than consecutively (Mayer, 2005d).
Worked-out Example Principle: An instructional principle proposing that learners learn more deeply when studying worked-out examples, step-by-step examples that demonstrate how a task is performed or how a problem is solved (R. Clark et al., 2006), than when studying practice problems (Sweller, 2005a).
Chapter 4
Investigating Assistive Technologies using Computers to Simulate Basic Curriculum for Individuals with Cognitive Impairments Carolyn Kinsell Solers Research Group, USA
Abstract
Providing assistive technologies to cognitively impaired students, in the form of computer-based simulations, may improve the transfer of learning at a greater rate than other training media. The underlying premise for using computer-based simulations is that the cognitively impaired student is no longer the passive learner normally found in traditional classrooms. Instead, the student becomes an active participant in the simulation and in learning. In addition, this type of assistive technology provides the student with an opportunity for repeated exposure and practice at a pace at which the student feels comfortable. This chapter discusses the benefits of using computer-based simulations, defines the theoretical foundations that support the transfer of learning, and presents the processes that facilitate individual acquisition and refinement of knowledge and skills. It concludes with a review of the cognitive elements in the creation of mental models and schema.
DOI: 10.4018/978-1-61520-817-3.ch004
Introduction
Let me set the stage for this chapter: Thomas is a middle-school student who has been labeled as a slow learner, not only by his teachers, but by his classmates. Thomas is not slow at all of his school subjects, but reading is the hardest for him
to comprehend. He gets picked on in class and is tired of it! Thomas has been heard saying, "I just can't keep up!" Unfortunately, Thomas is not alone. Based upon my readings, many individuals are not aware they have a learning disability and many are never diagnosed. In 1997, the Individuals with Disabilities Education Act (IDEA) helped to broaden the definition of the use of assistive technologies (AT) in the educational system to include special
Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
education “services” (Rapp, 2005). According to Families and Advocates Partner (FAPE, 2001), AT includes “any service that directly assists a child with a disability in the selection, acquisition, or use of an assistive technology device.” This act helped to open many doors for students who are considered cognitively impaired. As noted in most AT literature, AT devices are widely used in the educational system but have mainly been provided for those who are physically challenged, such as computer screen readers for the visually impaired. However, there is an entire student body with learning disabilities at the cognitive level, such as those with attention deficit disorders (Harty, Miller, Newcorn, & Halperin, 2008), that are not being targeted as candidates for AT devices (Bausch & Hasselbring, 2004). Addressing the concern of Thomas for “just not getting it” can be supported by the longitudinal study conducted by Juel (1988) which indicated that over eighty-eight percent of first graders who were noted to be poor readers were still considered poor readers as fourth graders. To further complicate the situation, you have the cycle of a student who is unlikely to fit in with their peers (especially if there are feelings of low self-esteem, like Thomas) as they continue to lag behind. Stanovick (1986) supports Juel’s (1988) findings by calling this continual lag as the “Matthew Effect”—those who excel at reading continue to do so, while those who lag behind continue to do just that. It was noted in a report generated by the National Institute of Child Health (2005) that although assistive devices for those with mental retardation and development disabilities exists, it is not always easy for the Mental Retardation and Development Disabilities (MRDD) individual or for those involved in their health care or as care-givers to gain access or to know these devices exist. 
Hasselbring and Bausch (2005) also indicated that it is not a lack of availability of AT services and devices that has caused this gap, but rather teachers' lack of knowledge about AT
and how and when to implement them. Many teachers, it appears, rely on specialists in the area of AT to implement the program, thus forgoing the immediate connection the teacher has in the classroom for identifying AT services and devices for their students. However, this gap is another topic of discussion. This chapter addresses a narrow class of AT devices, computer-based simulations, that can provide the cognitively impaired a method for learning in their own context: a style of learning that could benefit a student like Thomas. Context here refers to a preferred method of learning in subjects in which the cognitively impaired individual is weak, such as math, science, and reading. These contexts can use a single medium or a hybrid approach, combining animation, graphics, audio, and text, to impart information in a way suited to the person's cognitive impairment. As clarification, and for the purposes of this chapter, computer-based simulations or systems, although there are many kinds, refer only to those running on desktop or personal computers. This topic does not address console systems, large immersive systems, or systems using haptics or head-mounted displays. In addition, the cognitively impaired examples will focus on the task of reading, in which the student should be able to grasp principles and derive meaning from text. This chapter addresses: (a) a brief background on simulation history, its limitations, and benefits; (b) the theoretical framework fundamental to the process of learning; (c) the mechanics of the transfer of learning that promotes knowledge and skill acquisition; and (d) a cognitive perspective.
Background

Why Computer-Based Simulation

Simulation devices provide a means to replicate some form of reality so that an individual, or group of individuals, can increase their ability by applying
accurate actions through repeated exposure in a safe environment. Simulations have been used to facilitate learning as far back as the 17th century, from the sand tables used for “war games” in the 1600s up to the complex hybrid systems of today that integrate live, virtual, and constructive elements (DoD Modeling and Simulation (M&S) Glossary, 1998) into a single exercise. As noted by Ausburn and Ausburn (2004), simulations are often a choice for tasks involving complex equipment or areas that are not easily accessible, or simply too dangerous to practice in real life, such as emergency room procedures or war-tactics practice with improvised explosive devices, to name a few. The problem with simulations that support these types of tasks is that they are usually expensive to build and, as noted by Cloud and Rainer (1998), require built-in dynamic interactions that are limited by the model, behavior, and capabilities of the computer-based system. However, simulations that are effective as training modalities are designed with a specific objective supporting a finite set of conceivable options with a finite set of reactions (Cloud & Rainer, 1998), which is in alignment with educational purposes for those with cognitive impairments. And with the advancement of personal computer processing technology, simulations can now be rendered, displayed, and engaged using desktop computers, which are commonly found in school classrooms. Simulations should be considered only if this type of medium will assist in the transfer of learning at a greater rate than other training media. As noted by Kritzenberger, Winkler, and Herczeg (2002) and Herczeg (2004), if training can involve the “real environment” or a simulation of a real environment, learning expectations are higher.
There are several conditions under which simulations are viable, ranging from training for complex or unexpected events (not in scope for this chapter) to using simulations as a method for practice in a safe environment. For the purposes of AT, simulation should be considered for aiding
those with cognitive impairments who otherwise would not be capable of obtaining the right experience for learning in a traditional classroom environment. Simulations allow a person to practice and improve their knowledge and skills in an environment that is safe (in this case, free of peer pressure) yet duplicates a specific performance context (reading). For this chapter, a simulation is defined as an interactive computer-based product that engages the student in an active, not passive, learning mode, thereby increasing the potential for transferring knowledge and skills to the student beyond that of passive media (such as lecture or book reading). The design of AT tools that target the cognitively impaired needs to consider, for example, reading level as well as reading skill as defined by the International Patient Aid Standards (IPAS) Collaboration. Although IPAS is a regulatory body for medical education, these standards should be implemented in training materials created for the use of AT; in other words, consider the target audience and their specific needs. Herczeg (2004) defines a strategy for the design of interactive, user-friendly computer programs for those who are cognitively challenged or new to computers and technology. The strategy calls for special design in the areas of audio, with voiceover narration matching text and illustrations; limited text, at a level designed for the target audience; text support through graphics and illustrations, including animations, two-dimensional and three-dimensional images, and video or photographic stills; and a graphical user interface that incorporates simple navigation, easy-to-read icons, and an uncluttered layout. The simpler design reduces cognitive load and thus allows the student to process training content (Jibaja-Weiss & Volk, 2007) without regard to computer issues.
Without the target audience being considered in the design parameters, the cognitively impaired student,
such as Thomas, can quickly become lost and unmotivated, thus hindering learning. There is also a learning progression that students advance through to gain knowledge and skills, which is covered under the theoretical foundation in this chapter. This is followed by the transfer-of-learning process, concluding with how information is cognitively linked.
Theoretical Foundation

This section focuses on determining how computer-based simulations can help in the actual transfer of knowledge and skills for those who are cognitively impaired. A review of learning theories turns up several that support the transfer of learning. Before reviewing those theories, we must note that, in regard to issues of transfer of learning, Carraher and Schliemann (2002) indicate that transfer is a theory and cannot provide a solid foundation for explaining how prior knowledge and experience account for learning. As noted, for many years transfer was not validated in research environments (Carraher & Schliemann, 2002). However, Simons (1999) believes that optimized transfer will occur once it is determined how to work through problems that are encountered. With that caveat, we proceed under the premise that the transfer of learning is achieved as one progresses through the stages of the theories listed. One theory, by Rasmussen (1986), holds that learning can be divided into three cognitive categories: knowledge-based, rule-based, and skill-based. Knowledge can be classified into types, yet there is no one defined set of types in use (Jorna, 2001). Examples of varying types of knowledge include, but are not limited to, logical, semantic, systematic, and empirical (Pecorino, 2000); explicit and tacit (Edvinsson & Malone, 1997); theoretical (Jorna, 2001); general to specific (Gagne, 1962); and declarative, procedural, and conditional. These types are in some way connected in the
process in which knowledge is gained. Knowledge is integrated at all three cognitive levels of Rasmussen's (1986) categories as one is exposed to new information. Rule-based learning combines new information based upon responses, such as the feedback or outcomes provided. For rule-based learning to work, there must be a foundation of previously acquired rules that are then built upon, in what may be referred to as logical thinking (Hong, 1998). Skill-based learning looks at the rules and procedures (the specified order for accurate performance). In the case of reading, it would be the student's ability to identify and comprehend a word, pronounce a word, and read fluently. It is these types of phonological processing inefficiencies (as noted by O'Shaughnessy & Swanson, 2007) that contribute to reading disabilities. In support of Rasmussen's (1986) theory, Ackerman (1992) and Anderson (1980) state that individuals record environmental stimuli in order to advance among the three categories proposed by Rasmussen. If knowledge is not gained, then new information cannot be combined with a response, especially a correct response, in order to formulate heuristics. Finally, correct skills cannot be built if rules and procedures cannot be followed and completed. In theory, as one is exposed to more practice, one becomes more proficient at a task, a task that is usually measured by a skill-based behavior. This behavior becomes automated in response to environmental stimuli with which one has now become familiar. Using Thomas as our example, to improve his reading skills, he must first build upon his reading foundation with exposure to, and practice with, reading concepts and rules. To achieve this, if Thomas were provided a computer-based simulation that contained audio and speech recognition capabilities, he could listen to the pronunciation of a word, practice pronouncing the word using speech recognition tools, and receive immediate auditory feedback.
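The practice loop just described can be sketched in outline. This is a hypothetical illustration rather than any particular AT product: a real tool would obtain the student's attempt from a speech-recognition engine, which is stood in for here by a plain transcription string, and similarity is scored with Python's standard `difflib`.

```python
from difflib import SequenceMatcher

def pronunciation_feedback(target_word, transcribed_attempt, threshold=0.8):
    """Compare a transcribed spoken attempt against the target word.

    In a real AT tool the attempt would come from a speech-recognition
    engine; here it is passed in as plain text (an assumption of this
    sketch). Returns a (score, feedback) pair for immediate feedback.
    """
    score = SequenceMatcher(None, target_word.lower(),
                            transcribed_attempt.lower()).ratio()
    if score >= threshold:
        return score, "Well done! Try the next word."
    return score, f"Listen again and repeat: '{target_word}'."

# Each attempt yields immediate feedback, supporting repeated practice.
score, message = pronunciation_feedback("succeed", "suceed")
```

The threshold and feedback messages are illustrative assumptions; the point is only that the comparison and feedback step can be made immediate and automatic.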
One would think that the likelihood of transfer would be greater if exposed to the AT tool than remaining in a traditional classroom.

Figure 1. Theoretical framework and study measures

In theory, Ackerman (1992) states there are three levels to the transfer of learning for skill acquisition: cognitive, associative, and autonomous. As one understands instruction and goals and formalizes strategies (the cognitive phase), one can then move on to actual practice for skill acquisition (the associative phase). As one becomes more proficient at the task (the autonomous phase), it requires very little attention to perform, and is usually measured by a skill-based behavior. This skill-based behavior becomes automated in response to stimuli. But, as Hockey, Healey, Crawshaw, Wastell, and Sauer (2003) indicate, when uncertainty in a situation increases, cognitive demand increases and the individual will fall back on knowledge-based processes (if they exist and are correct) and not
rule-based behavior. Hence, simulation devices used as AT can provide the repeated exposure needed to build skills and move forward in learning. Collectively, Ackerman's (1992) three levels define a cognitive process that distinguishes a novice from an expert. The phases of the process build upon one another to the point that skill-based behavior eventually becomes automated in response to environmental stimuli. Figure 1 provides a graphic representation of this theoretical framework, including the relationships within the three-phase cognitive process, focused specifically on verbal and mechanical cognitive applications. As shown, the initial cognitive phase, typical of novice behavior, is focused on formulating concepts and developing procedural skill, such as attention to semantics for verbal information
related to the text-based description, lecture, or written instruction. During the associative phase, basic skill and knowledge become ingrained. There is less deliberate cognitive focus and more of an emphasis on increasing speed and accuracy through practice or exposure to the learning material. With continued practice, the novice moves toward mastery, or the autonomous phase, exemplified by expert behavior. In this phase actions are automatic and require no attentional effort. Overall, the use of a computer-based simulation would afford Thomas this exposure. Based upon the example shown in Figure 1, advancement through the cognitive phase is depicted in two areas: (a) the verbal context, where information is introduced, in our example, either through lecture or written instruction, and (b) the mechanical context, where a student can interact with a computer-based training program. When a cognitively impaired student experiences anxiety in a classroom, there will be a high cognitive demand once in that setting, especially if information is not being comprehended at the speed at which the rest of the students comprehend it. With practice, the transition to the associative phase of increased knowledge and skill may well begin at the point where the student grasps the concepts and increases performance, which, in the case of content knowledge (comprehension), could be shown through testing. It is anticipated that some students will not transition to the associative phase, or may take longer to do so, due to the limited amount of time in which they are exposed to the material or due to more advanced cognitive challenges. Therefore, if, after testing, there is no improvement in a student's score from the first measure to the second measure, then the student is considered to still be in the cognitive phase.
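The testing rule above amounts to a simple classification, which can be sketched as follows. The function name and score scale are illustrative assumptions, not part of the chapter's method:

```python
def classify_phase(first_measure, second_measure):
    """Classify a student's phase from two successive test measures.

    Per the rule described in the text: no improvement from the first
    measure to the second means the student is considered to still be
    in the cognitive phase; an improvement suggests the transition to
    the associative phase has begun.
    """
    if second_measure <= first_measure:
        return "cognitive"
    return "associative"

# A comprehension score that rises from 60 to 75 suggests the student
# has begun the associative phase; a flat or falling score does not.
phase = classify_phase(60, 75)
```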
To aid a person's ability to gain proficiency with a particular curriculum component (such as reading), computer-based simulations are being used to increase an individual's knowledge and skill through repeated exposure and practice under a set of conditions for learning, such as using
specific content for a particular reading ability. One key to successful transfer, as defined in the study conducted by Sumrall and Curry (2006), is that transfer should be defined by how the knowledge and skills gained through classroom training can be synthesized and transferred into the real world. For instance, a student who is using AT to gain the knowledge and skills required of a particular subject should eventually be able to blend in with the regular classroom and be a part of the teacher-student classroom learning process. As early as the 1900s, Thorndike studied similarities between facts and skills for transfer attainment and also researched the theory of between-subjects variability, measuring whether subjects converge or diverge in performance over time with training. Although there were no conclusive findings from Thorndike's research, Ackerman (1986, 1987, 1988) found that interindividual variability of performance did decrease with practice if the task was within the abilities of the individual. Additionally, novel tasks, combined with complex tasks, required greater attention, which led to an increase in errors and a decrease in the speed with which the task was accomplished. It should also be considered that, when implementing a computer-based simulation as an AT training device, cognitive demand will increase if the student is not familiar with using computers. This, too, will contribute to the student's slower performance. However, as the student practices, their abilities should improve with exposure not only to the content but to the technology. Ability determinants of performance, also known as Simplex theory, were further studied by Humphreys (as cited in Ackerman, 1988). Simplex theory suggests that as one gains practice, the ability determinants of performance change, but not in a linear fashion.
Another theory, ability-performance correlations (Fleishman, 1972; Fleishman & Quaintance, 1984), ties in a cognitive assessment, such as identifying broad intellectual abilities during initial learning of a
simple, consistent task. Ackerman (1986) determined that there is an alignment between ability, performance, and information processing, especially for those tasks that are inconsistent (not rote processes). A final theory that could be applied to research on computer-based simulated training and transfer to real situations is expectancy-value theory, as first coined by Fishbein (1967). As individuals continue to learn, they also acquire and build upon expectations resulting from actions and the consequences of those actions, which become the foundation for future behavioral choices. As for Thomas, without being identified as a student with a cognitive impairment that requires alternative learning methods, he not only remains behind in reading but becomes slower over time than those considered typical readers.
Transfer of Learning

Transfer of learning is the process of applying what has been learned (carried over) to a new or similar situation, problem, or setting. It is this transfer, or carry-over, from an instructional situation to the real-world setting that is the goal of training. In essence, the transfer process occurs when an individual builds requisite associations, or mental schemas, that enhance storage and retrieval from memory. In effect, this mental framework helps individuals learn related subject matter more rapidly (Bransford, Brown, & Cocking, 2000; Hume & Shepard, 2001; Leberman, McDonald, & Doyle, 2006; McKeachie, 2001). Transfer of learning is a key ingredient in a training environment intended to facilitate individual acquisition and refinement of knowledge and skills. As noted by Leberman, McDonald, and Doyle (2006), “transfer is the link between learning and the performance ...” (p. 31). Although transfer has been studied for decades, it is still a process that is not completely understood (McKeachie, 2001; Salomon & Perkins, 1989).
Several key elements of transfer are highlighted here that may help to explain why transfer does or does not take place. This discussion begins with an exploration of the three dimensions of transfer: (a) positive and negative transfer, (b) simple to complex transfer, and (c) near and far transfer.
Positive and Negative Transfer

Positive transfer occurs when stimuli and responses are similar (Leberman et al., 2006; McKeachie, 2001; Royer, 1986). Ansburg and Shields (2003) examined the transfer of principles between different reasoning tasks. In their experiment they studied the transfer abilities of 84 subjects (students in an introductory psychology course) trying to solve six permission problems under four training conditions (combinations of problem comparison with and without feedback). Those who received training on problem comparison solved 15% more of the target problems (solutions) than those who did not receive the training, indicating positive transfer. Reinforced skills can produce a measure of success in the transference between learning and performance. A cognitively impaired student with a reading disability, such as dyslexia or a short-term memory problem, may require the use of an AT tool to aid the student in learning in a way other than a traditional classroom setting. Assistive technologies may offer a method by which the word on the screen is highlighted and, through audio, is heard. If this type of interactive technology is supported, the student can speak the word aloud into a microphone for capture and computer analysis for immediate feedback. When using a computer-based simulation as an AT tool, the cognitively impaired student can practice over time to become positively qualified (tested) for their grade level, hence allowing Thomas to fit into the classroom with his classmates. When these reinforced reading skills, gained in
the simulation, are applied to the real world, positive transfer is then fully realized. While positive transfer facilitates learning or performance in another situation, negative transfer means that a learned response actually hinders appropriate performance. For example, people who learn a second language typically apply patterns of speech production characteristic of their native tongue, thus giving them a foreign accent (Ormrod, as cited in Schmidt, Young, Cormier, & Hagman, 1987). The cognitively impaired student, for example, who can read but not comprehend spatial concepts, may have a difficult time with the statement, “She knew that she had to succeed at this task!” How does the cognitively impaired student comprehend “succeed”? Finally, if stimuli and responses are significantly different, neither positive nor negative transfer occurs, causing a transfer gap.
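The highlight-and-hear interaction described above can be sketched as follows. This is a hypothetical rendering only: bracketing stands in for on-screen highlighting, and the audio playback and microphone capture a real AT tool would perform are omitted.

```python
def highlight_steps(sentence):
    """Render a sentence once per word, with the current word marked.

    Stands in for an AT display that highlights each word on screen as
    its audio is played; '[...]' marks the highlighted word (a rendering
    assumption of this sketch, not a specific product's behavior).
    """
    words = sentence.split()
    frames = []
    for i in range(len(words)):
        frames.append(" ".join(
            f"[{w}]" if j == i else w for j, w in enumerate(words)))
    return frames

# Stepping through the example sentence one highlighted word at a time:
for frame in highlight_steps("She knew that she had to succeed"):
    print(frame)
```

Each printed frame corresponds to one highlight-and-hear step; in a real tool the student would speak the bracketed word back for analysis.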
Simple to Complex Transfer

Leberman et al. (2006) define simple transfer as occurring when previous knowledge can be used in a new situation with little to no effort. This is in alignment with Salomon and Perkins's (1989) “low road transfer” concept, in which tasks are performed effortlessly. The effortless transfer to related situations may be termed automatization, noted by Salomon and Perkins (1989) as the “automatic triggering of well learned behavior in a new context” (p. 113). This is similar to the definition of expert behavior noted by Ackerman (1988). Leberman et al. (2006) define complex transfer as using previously acquired knowledge in a new situation while seeking extended applications in which that knowledge can be used. This process of complex transfer is defined by Salomon and Perkins (1989) as “high road transfer,” which requires greater cognitive processing and may be detected in situations in which individuals are learning rules and principles.
Simple transfer, for the purpose of this chapter, is illustrated when a student's fundamental knowledge of reading comprehension in the simulated environment is easily duplicated in different real-world environments, such as being able to participate in reading in a classroom setting and then transfer that reading to the real world, such as a grocery store. This would include Thomas as he continued to improve upon his reading knowledge and skills. Conversely, complex transfer may be illustrated when students who can read, comprehend, and test positively on a computer-based program transfer their acquired knowledge to the real world without transition to the regular classroom. Further cognitive extension would include the student's ability to comprehend (construct meaning) and decode (recognize) more difficult words, such as “succeed,” described earlier. As students seek extended applications of their reading ability to the real world, a “complex” integration of knowledge is formed.
Near and Far Transfer

Near transfer is posited to take place when previous knowledge is applied to situations that are similar to what is being newly experienced, and takes minimal cognitive effort (Leberman et al., 2006; McKeachie, 2001; Royer, 1986). For example, near procedural transfer is indicated by the student who is already proficient in reading at a particular grade level (previous knowledge and skills) and is then required to read similar material at a higher grade level. Far transfer is essentially the process of applying existing knowledge to a novel learning situation, which takes high cognitive effort (Leberman et al., 2006; McKeachie, 2001). This concept is suggested to occur when knowledge gained from previous experiences is put into a dissimilar situation, and the individual is expected to successfully apply this acquired knowledge.
Far transfer, which requires high cognitive effort, is posited to occur if a cognitively impaired student, who is not given the simulated reading assistance but remains in the regular classroom against difficult odds, is able to acquire classroom knowledge and transfer that information to the real world. Now that the three dimensions of transfer have been explored, the cognitive elements that aid in transfer will be examined: cognition, situativity, and automaticity. As noted in Figure 1 for Ackerman's (1992) theory, the transfer of learning and the cognitive elements that aid in that transfer are required to become an expert.
Experts vs. Novices

Bransford et al. (2000) report in great detail the characteristics that distinguish experts from novices. There is strong evidence to suggest that experts interpret information differently, as well as organize, represent, and create mental models of a situation differently, than novices do (Hinds, Patterson, & Pfeffer, 2001; Novick, 1988; Schoenfeld, 1987). Experts tend to create schemas from perceived similarities, whereas novices are too concerned with seeing the smaller pieces, such as facts (Schoenfeld, 1987). However, as noted by Bransford et al., experts become expert through the use of cognitive thinking, starting with basic learning, moving on to the association of stimuli with responses, and finally practicing to the point that performing a task becomes automated. Experts generally demonstrate reduced stimulus interference and reduced errors (Correll et al., 2007); like experts, novices can become expert through the same process. But we cannot forget that underlying this process is the science of transfer. If transfer is not taking place, one cannot move from one cognitive element to the next, which is also supported in Ackerman's (1988) theory and in Ackerman's (1992) description of cognitive phases. According to Ackerman, transfer in skill acquisition occurs in three phases, from (a) cognitive, to (b) associative, and finally
to (c) autonomous. These phases parallel the elements described below: (a) cognition, (b) situativity (also considered the associative phase), and (c) automaticity. The following paragraphs describe the mental processes by which these cognitive elements are linked with the transfer of learning.
Cognitive Elements

From a cognitive perspective, and related to Ackerman's (1992) definition of the cognitive phase, as individuals learn, they create mental models and structures (schemas) to make connections among various pieces of information. Schemas originate from elements of semantic memory, which contains the “knowledge of concepts, rules, principles, generalizations, skills, and metacognitive skills” (p. 7) that are based on the extraction of experience (Andre & Phye, 1986). A schema is often triggered by stimulation in our environment, which, when drawn upon, can result in three types of cognitive mechanics: assimilation, accommodation, or equilibration. Lunzer (1986) explained the mechanics in the following manner: (a) assimilation takes existing schema and creates new schema extended to the existing situation; (b) accommodation adapts existing schema to fit a novel situation, through trial-and-error, systematic inquiry, or logical inference, and creates a new schema; (c) equilibration is the balancing act of separating two conflicting schemas (a state known as cognitive dissonance) that have been triggered by the same stimulation, creating yet another schema. Exposure to stimulation, both new and existing, evokes these cognitive mechanics, which lead to higher order thinking. In a situated learning condition, the focus is then on the development of higher order thinking (Leberman et al., 2006), in which real-world conditions are presented and aligned with existing prior knowledge. Under this type of optimized learning environment, schema building, as noted
by Clark (2003), allows one to interpret the environment and make sense of what is being experienced, based upon prior knowledge. Eventually, schemas or sequences are stored in long-term memory and, through practice, become over-learned and turn into automated processes (Phye, 1986). For adult learners, Clark (2003) and Huitt (2003) indicate, there are three primary stages of information processing: encoding, storage, and retrieval. As a learner receives new information, it is the integration with prior knowledge that results in encoding, and the creation of a new schema in long-term memory. When information is needed, it is retrieved from long-term memory and aids in higher order thinking. Engaging in higher order thinking forms connections between an environment and experience, and is known as critical thinking (Desse, 2001), problem solving (Price & Driscoll, 1997), and reasoning (McKeachie, 2001). It has also been noted that higher levels of cognitive processes place higher demands on cognitive skills, and therefore a novice may be ill-equipped, lacking these skills (Kuhn, Black, Keselman, & Kaplan, 2000). Cognitively impaired students are often not afforded the opportunity to grasp concepts or knowledge within a regular classroom. In these circumstances, simulations can allow practice, and at length, thereby helping to form connections regarding the concept at hand. However, at the onset, when a cognitively impaired student begins using AT (such as simulations), they may experience a higher demand on their cognitive resources by the mere fact of being exposed to the AT tool itself. This in turn could produce higher adrenaline that may interfere with their initial learning. These students may require time to become familiar with the tool, the computer, and the navigation within the simulation. Once these factors are overcome, progression through the learning of the content can begin.
Situativity, which is related to Ackerman’s (1992) definition of the associative phase, is part
of the higher-level cognitive perspective in which one participates in regular patterns of activities, characterized as communities of practice (Greeno, 1998). In addition, with cognitively impaired students, there may be little to no existing schema to draw from to aid in a successful outcome. However, over time, if enough practice could be afforded, some form of improvement would be expected. Finally, the higher order thinking involved with situativity eventually encompasses automaticity, a characteristic of an expert (Leberman et al., 2006). Automaticity, which is related to Ackerman's (1992) definition of the autonomous phase, is an unconscious process that experts tend to use, based on a highly organized structure of chunked information, stored as schemas, that was developed over years of experience (Bransford et al., 2000; Salomon & Perkins, 1989). Automaticity involves less routine cognitive processing (Ferguson, 2000) and is individual to each person. Automaticity is created either through (a) intentional goal-directed processes that require an act, or (b) preconscious processing that only requires the environment as a trigger (Bargh & Chartrand, 1999). However, tasks that are not consistent in nature and that have many possibilities with various responses are not as easy to learn (Halff, Hollan, & Hutchins, 1986; Tubau, Hommel, & López-Moliner, 2007). To increase the likelihood of automaticity, repeatable actions and higher-order thinking need to be infused into the learning situation. The more we learn about AT, transfer of learning, simulated environments, and real-world problems and outcomes, the more adept the training industry will become at designing training systems that get to the heart of what is now missing: a learning continuum for students of all learning capabilities.
As for our hypothetical example, Thomas's advancement in reading could be dependent upon two things: (a) the identification of his cognitive condition, and (b) the implementation of AT services and devices.
Investigating Assistive Technologies using Computers to Simulate Basic Curriculum
CONCLUSION

The AT services and devices program has been proven to benefit the physically challenged student. However, there is still work to be done in the school systems in identifying students who have a cognitive impairment that inhibits learning. Based upon the theories, the transfer of learning concepts, and the cognitive elements presented in this chapter, there is still potential for learning improvement for those cognitively impaired students who receive an AT device designed for their level of learning. Additional studies should be conducted to determine whether a computer-based simulation designed as an AT device for the cognitively impaired student results in improved transfer of learning, not only to the classroom but to the real world.
REFERENCES

Ackerman, P. L. (1986). Individual differences in information processing: An investigation of intellectual abilities and task performance during practice. Intelligence, 10(2), 101–139. doi:10.1016/0160-2896(86)90010-3

Ackerman, P. L. (1987). Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin, 102(1), 3–27. doi:10.1037/0033-2909.102.1.3

Ackerman, P. L. (1988). Determinants of individual differences during skill acquisition: Cognitive abilities and information processing. Journal of Experimental Psychology: General, 117(3), 288–318. doi:10.1037/0096-3445.117.3.288

Ackerman, P. L. (1992). Predicting individual differences in complex skill acquisition: Dynamics of ability determinants. The Journal of Applied Psychology, 77(5), 598–614. doi:10.1037/0021-9010.77.5.598
Anderson, J. R. (1980). Cognitive psychology and its implications. San Francisco: Freeman.

Andre, T., & Phye, G. D. (1986). Cognition, learning, and education. In Phye, G. D., & Andre, T. (Eds.), Cognitive classroom learning: Understanding, thinking, and problem solving. Orlando, FL: Academic Press.

Ansburg, P. I., & Shields, L. (2003). Training overcomes reasoning schema effects and promotes transfer. The Psychological Record, 53(2), 231–242.

Ausburn, L. J., & Ausburn, F. B. (2004). Desktop virtual reality: A powerful new technology for teaching and research in industrial teacher education. Journal of Industrial Teacher Education, 41(4), 33–58.

Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. The American Psychologist, 54(7), 462–479. doi:10.1037/0003-066X.54.7.462

Bausch, M. E., & Hasselbring, T. S. (2004). Assistive technology: Are the necessary skills and knowledge being developed at the preservice and inservice levels? Teacher Education and Special Education: The Journal of the Teacher Education Division of the Council for Exceptional Children, 27(2), 97–104. doi:10.1177/088840640402700202

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school (Expanded ed.). Washington, DC: National Academy Press.

Carraher, D., & Schliemann, A. (2002). The transfer dilemma. Journal of the Learning Sciences, 11, 1–24. doi:10.1207/S15327809JLS1101_1

Clark, R. (2003). Building expertise: Cognitive methods for training and performance improvement. Silver Spring, MD: International Society for Performance Improvement.
71
Cloud, D. J., & Rainer, L. B. (1998). Applied modeling and simulation: An integrated approach to development and operation. New York: McGraw Hill.

Correll, J., Park, B., Judd, C. M., Wittenbrink, B., Sadler, M. S., & Keesee, T. (2007). Across the thin blue line: Police officers and racial bias in the decision to shoot. Journal of Personality and Social Psychology, 92(6), 1006–1023. doi:10.1037/0022-3514.92.6.1006

Desse, J. (2001). The state of education and the double transfer of learning paradox. In Haskell, R. E. (Ed.), Transfer of learning: Cognition, instruction, and reasoning (pp. 3–21). San Diego: Academic Press.

DoD Modeling and Simulation (M&S) Glossary. (1998). Under Secretary of Defense for Acquisition Technology.

Edvinsson, L., & Malone, M. S. (1997). Intellectual capital: Realizing your company's true value by finding its hidden brainpower. New York: Harper Business.

FAPE. (2001). 1997 Individuals with Disabilities Education Act amendments increase access to technology for students. Families and Advocates Partnership for Education (FAPE). Retrieved August 1, 2009, from http://www.fape.org/pubs/FAPE-13.pdf

Ferguson, C. J. (2000). Free will: An automatic response. The American Psychologist, 55(7), 762–763. doi:10.1037/0003-066X.55.7.762

Fishbein, M. (Ed.). (1967). Attitude and the prediction of behaviour. New York: Wiley.

Fleishman, E. A. (1972). On the relation between abilities, learning and human performance. The American Psychologist, 27(11), 1017–1032. doi:10.1037/h0033881

Fleishman, E. A., & Quaintance, M. K. (1984). Taxonomies of human performance. Orlando, FL: Academic Press.
Gagne, R. (1962). The acquisition of knowledge. Psychological Review, 69(4), 355–365. doi:10.1037/h0042650

Greeno, J. G. (1998). The situativity of knowing, learning, and research. The American Psychologist, 53(1), 5–26. doi:10.1037/0003-066X.53.1.5

Halff, H. M., Hollan, J. D., & Hutchins, E. L. (1986). Cognitive science and military training. The American Psychologist, 41(10), 1131–1139. doi:10.1037/0003-066X.41.10.1131

Harty, S. C., Miller, C. J., Newcorn, J. H., & Halperin, J. M. (2008). Adolescents with childhood ADHD and disruptive behavior disorders: Aggression, anger, and hostility. Child Psychiatry and Human Development, 40, 85–97.

Hasselbring, T. S., & Bausch, M. E. (2005). Assistive technologies for reading. Educational Leadership, 63(4), 72–75.

Herczeg, M. (2004). Experience design for computer-based learning systems: Learning with engagement and emotions. Paper presented at the ED-MEDIA 2004 World Conference on Educational Multimedia, Hypermedia and Telecommunications.

Hinds, P. J., Patterson, M., & Pfeffer, J. (2001). Bothered by abstraction: The effect of expertise on knowledge transfer and subsequent novice performance. The Journal of Applied Psychology, 86(6), 1232–1243. doi:10.1037/0021-9010.86.6.1232

Hockey, G. R., Healey, A., Crawshaw, M., Wastell, D. G., & Sauer, J. (2003). Cognitive demands of collision avoidance in simulated ship control. Human Factors, 45(2), 252–265. doi:10.1518/hfes.45.2.252.27240

Hong, F. T. (1998). Picture-based vs. rule-based learning. Department of Physiology, Wayne State University.

Huitt, W. (2003). The information processing approach to cognition. Valdosta State University. Retrieved July 14, 2007, from http://chiron.valdosta.edu/whuitt/col/cogsys/infoproc.html
Hume, D., & Shepard, R. N. (2001). Introduction. In Haskell, R. E. (Ed.), Transfer of learning: Cognition, instruction, and reasoning (pp. xiii–xx). San Diego: Academic Press.

Jibaja-Weiss, M. L., & Volk, R. J. (2007). Utilizing computerized entertainment education in the development of decision aids for lower literate and naïve computer users. Journal of Health Communication, 12(7), 681–697. doi:10.1080/10810730701624356

Jorna, R. (2001). Knowledge types and organizational forms in knowledge management. ISMICK.

Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 80(4), 437–447. doi:10.1037/0022-0663.80.4.437

Kritzenberger, H., Winkler, T., & Herczeg, M. (2002). Mixed reality environments as collaborative and constructive learning spaces for elementary school children. Paper presented at the ED-MEDIA 2002 World Conference on Educational Multimedia, Hypermedia and Telecommunications, Denver, Colorado.

Kuhn, D., Black, J., Keselman, A., & Kaplan, D. (2000). The development of cognitive skills to support inquiry learning. Cognition and Instruction, 18(4), 495–523. doi:10.1207/S1532690XCI1804_3

Leberman, S., McDonald, L., & Doyle, S. (2006). The transfer of learning: Participants' perspectives of adult education and training. Burlington, VT: Gower.

Lunzer, E. (1986). Cognitive development: Learning and the mechanisms of change. In Phye, G. D., & Andre, T. (Eds.), Cognitive classroom learning: Understanding, thinking, and problem solving. Orlando, FL: Academic Press.
McKeachie, W. (2001). Transfer of learning: What it is and why it's important. In Haskell, R. E. (Ed.), Transfer of learning: Cognition, instruction, and reasoning (pp. 23–39). San Diego: Academic Press.

National Institute of Child Development. (2005). Mental retardation and developmental disabilities (MRDD) branch. NICHD Report to the NACHHD Council. National Institute of Child Health and Human Development.

Novick, L. R. (1988). Analogical transfer, problem similarity, and expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(3), 510–520. doi:10.1037/0278-7393.14.3.510

O'Shaughnessy, T. E., & Swanson, H. L. (2000). A comparison of two reading interventions for children with reading disabilities. Journal of Learning Disabilities, 33(3), 257–277. doi:10.1177/002221940003300304

Pecorino, P. A. (2000). Chapter 5: Epistemology. Types of knowledge.

Phye, G. D. (1986). Practice and skilled classroom performance. In Phye, G. D., & Andre, T. (Eds.), Cognitive classroom learning: Understanding, thinking, and problem solving (pp. 141–168). Orlando, FL: Academic Press.

Price, E. A., & Driscoll, M. P. (1997). An inquiry into the spontaneous transfer of problem-solving skill. Contemporary Educational Psychology, 22(4), 472–494. doi:10.1006/ceps.1997.0948

Rapp, W. H. (2005). Using assistive technology with students with exceptional learning needs: When does an aid become a crutch? Reading & Writing Quarterly, 21(2), 193–196. doi:10.1080/10573560590915996

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: Elsevier.
Royer, J. M. (1986). Designing instruction to produce understanding: An approach based on cognitive theory. In Phye, G. D., & Andre, T. (Eds.), Cognitive classroom learning: Understanding, thinking, and problem solving. Orlando, FL: Academic Press.

Salomon, G., & Perkins, D. N. (1989). Rocky roads to transfer: Rethinking mechanisms of a neglected phenomenon. Educational Psychologist, 24(2), 113–142. doi:10.1207/s15326985ep2402_1

Schmidt, R. A., & Young, D. E. (1987). Transfer of movement control in motor skill learning. In Cormier, S. M., & Hagman, J. D. (Eds.), Transfer of learning: Contemporary research and applications (pp. 47–79). San Diego, CA: Academic Press.

Schoenfeld, A. H. (1987). Confessions of an accidental theorist. For the Learning of Mathematics--An International Journal of Mathematics Education, 7(1), 30.

Simons, P. R. J. (1999). Transfer of learning: Paradoxes for learners. International Journal of Educational Research, 31, 577–589. doi:10.1016/S0883-0355(99)00025-7

Smurall, W. J., & Curry, K. (2006). Teaching for transferal. Science Scope, 14(17).

Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360–407. doi:10.1598/RRQ.21.4.1
Tubau, E., Hommel, B., & López-Moliner, J. (2007). Modes of executive control in sequence learning: From stimulus-based to plan-based control. Journal of Experimental Psychology: General, 136(1), 43–63. doi:10.1037/0096-3445.136.1.43
KEY TERMS AND DEFINITIONS

Cognitive Impairment: A condition in which the processing and generation of new information is hindered, but not at obvious or observable levels.

Computer-Based Simulations: A multimedia, interactive method for learning using a desktop or personal computer that combines two-dimensional or three-dimensional images with animation, audio, voice recognition tools, and/or video.

Theoretical Framework: A progression of learning from a cognitive, to an associative, to an autonomous phase using knowledge and skills based on exposure and practice in learning situations over time.

Transfer of Learning: The ability to relate prior schema learned in the classroom to new situations outside the learning environment.

Schema: Mental models, developed over time through exposure to various learning situations, that help to make connections between new knowledge and existing prior knowledge stored in long-term memory.
Section 2
The Internet, Media, and Cognitive Loads

In the first part of this handbook, we emphasized a common quandary found in education—the technology-centered approach typically trumps the learner-centered one—which has left the latter half of the 20th century littered with examples of why this does not work. Implementing the latest advancements in cutting-edge technology is not enough; such knowledge must be coupled with an understanding of the human information processing system. In the first part of this handbook, we focused on theoretical scaffolding, presenting the importance of managing cognitive load and best practices in the design of multimedia-based instruction and its applicability in assisting those with learning disabilities. We also introduced the use of simulation-based instruction and the significant contributions it can make toward assisting those with learning disabilities. In the second part of this handbook, we continue this line of thinking and elaborate further on the use of 3D virtual environments. These environments have been made popular in recent years by advancements in software and computing power, and while typically seen by many as a form of entertainment and a means for social networking, such environments, including SecondLife, have also lent themselves to businesses as well as educational institutions because of the potentially limitless possibilities they may bring. These environments may very well hold significant opportunities to assist those who are challenged by traditional classroom instruction and interaction. Overall, we continue to merge cognitive architecture with assistive technologies and describe how this marriage can aid those with special needs. We accomplish this through the presentation of three chapters.
In the fifth chapter, we present findings from the development of a 3D virtual learning environment, the iSocial project, to help individuals with Autism Spectrum Disorders develop and practice social competence (i.e., social interaction and social learning). The goal of the project is for youth to transfer lessons learned in a virtual environment to the real world. In the sixth chapter, we present the treatment of disorders through the aid of virtual reality with a rehabilitative focus. Highlighted are the clinical, social, and technological issues, in the hope of gaining a better understanding of the coupling between cognitive architectures and rehabilitation. Finally, in the seventh chapter, we present advancements in information and communication technologies, their impact on inclusive education, and how such technologies can assist those with special needs. Specifically, hypermedia learning environments as an assistive technology are discussed, along with the disorientation and cognitive load problems faced by learners in navigating such environments.
Chapter 5
Social Orthotics for Youth with ASD to Learn in a Collaborative 3D VLE James Laffey University of Missouri, USA Janine Stichter University of Missouri, USA Matthew Schmidt University of Missouri, USA
ABSTRACT

Online systems, especially 3D virtual environments, hold great potential to enrich and expand learning opportunities for those who are challenged by traditional modes of instruction and interaction. In the process of developing a 3D Virtual Learning Environment to support the development and practice of social competence for individuals with Autism Spectrum Disorders, the iSocial project explored and advanced ideas for social orthotics in virtual environments. By social orthotics the authors mean structures in the environment that overcome barriers to facilitate social interaction and social learning. The vision of social orthotics in a 3D world is to be both assistive and adaptive for appropriate social behavior when the student, peers and guide are represented by avatars in a 3D virtual world designed to support learning and development. This chapter describes the formulation of social orthotics for avatar orientation and conversational turn-taking, and shares experiences and lessons from early tests of prototype orthotics.
DOI: 10.4018/978-1-61520-817-3.ch005

INTRODUCTION

A multi-disciplinary team of special educators and learning technologists at the University of Missouri is developing a 3-Dimensional Virtual
Learning Environment (3D-VLE) to assist youth with autism spectrum disorders (ASD) in their development of targeted social competencies. The project, iSocial (http://isocial.rnet.missouri.edu/), seeks to take a successful face-to-face program delivered over a 10-week period by a trained guide to groups of 4 to 5 youth and deliver the program online via a
3D Internet-based virtual world (Laffey, Schmidt, Stichter, Schmidt, Oprean, Herzog, & Babiuch, in press; Laffey, Schmidt, Stichter, Schmidt, & Goggins, 2009; Schmidt, Laffey, Stichter, Goggins, & Schmidt, 2008). A key goal of building an online program is to increase access to the program. To engage in iSocial, the youth must work cooperatively in the online environment, including following directions from an online guide and collaborating on many online learning activities with other youth with ASD. A further key goal of iSocial is for the youth to transfer lessons and competencies learned in the online environment to their traditional face-to-face settings with parents, teachers, friends and classmates; yet in planning for iSocial, the developers recognized a need for design features to help the youth interact and be social during the online learning process itself. Youth who do not readily take turns, attend to social cues and expectations, or cooperate effectively in face-to-face settings are also likely to struggle with social practices in the online setting. The challenge, of course, is to assist youth with ASD, who have traditional social performance deficiencies, to be social while learning social performance competencies. This is a key feature of the face-to-face curriculum and an essential requirement in the translation to the online environment. We articulated a concept of social orthotics to represent the types of structures that might be needed to facilitate social interaction and social learning in iSocial. The vision of social orthotics in a 3D VLE is to be both assistive and adaptive for appropriate social behavior when the student, peers and guide are represented by avatars in a 3D virtual world designed to support learning and development. This chapter describes how we are thinking about and developing early implementations of social orthotics. The chapter also shares what we are learning about these ideas and their potential to support appropriate online behavior.
Additionally, we discuss some key challenges for the design and development of social orthotics.
BACKGROUND LITERATURE

As a collaboration between researchers in the field of Special Education and researchers in the field of Learning Technologies, we consider the role of technology in assisting social performance as an integration of both traditions. In special education, assistive technology refers to devices that increase, maintain, or improve the capabilities of individuals with disabilities for those performances. Learning technologies are generally seen as a means for augmenting human capabilities. Donald Norman, a noted human interface guru, wrote a book, Things That Make Us Smart (Norman, 1994), articulating the view that the design quality of devices impacts human capability for better and for worse. These two world views, technology assisting individuals to overcome disabilities and technology augmenting individuals to enhance their abilities, combine to sensitize the design of iSocial to the general impact of all design decisions on human capability and to the specific potential of a class of devices that may shape targeted social behavior. Researchers in the field of assistive technology for individuals with ASD pay particular attention to communication functions and have asserted the value of augmenting language input through visual devices (Hodgdon, 1995; Quill, 1997; Mirenda, 2001). Mirenda's (2001) review of literature prior to 1999 showed the potential of visual cues to support comprehension of speech, managing activity, and choice making. Methods to stimulate language production with symbols and to augment language by using voice generation devices also showed some evidence of support for communication. Two conclusions seem apparent from the review: (a) communication-related behaviors can be augmented, and visual cues seem especially promising for individuals with ASD, and (b) the benefits of any assistive technology are highly dependent on the fit between the form of the technology intervention and the individual's needs
and capabilities. The importance of the fit between technology and individual needs has been further supported by research in the promising domain of using robots to foster communication practice for youth with ASD. Examinations of robots as assistive technology (Robins, Dautenhahn, te Boekhorst & Billard, 2004) confirm the need to fit the technology to the individual characteristics of the child. More recently (Mirenda, 2009), the evidence for assistive technology for communication and social skills has increased, and the forms of devices have become more sophisticated and integrated. Another review and effort to guide the application of assistive technology (Pierangelo & Giuliani, 2008) emphasizes matching technologies (both low and high technology) with the needs of the child and attending to developmental progression in the use of forms of the technology. In addressing the use of assistive technologies for the development of social skills, Pierangelo and Giuliani (2008) recommend low-tech strategies such as reading social stories, using comic strip conversations, and having social scripts. Numerous software systems have been developed as high-tech ways to enhance these low-tech strategies. Some researchers in the field of assistive technology for youth with ASD have also examined the capability of youth with ASD to work and learn in a 3D VLE as a means for developing social skills and competencies. These studies have demonstrated that participants with ASD can use and interpret VLEs successfully and can use VLEs to learn simple social skills (Cobb, S., Beardon, L., Eastgate, R., Glover, T., Kerr, S., Neale, H., Parsons, S., Benford, S., Hopkins, E., Mitchell, P., Reynard, G., & Wilson, J., 2002; Mitchell, Parsons, & Leonard, 2007). However, this prior work has addressed the teaching of skills but not structures and mechanisms (orthotics) for actually being social in a 3D environment.
For example, Parsons, Leonard, and Mitchell (2006) use a café scene to teach the skill of finding an appropriate seat, but the scene is a single-user context and
only implements a set of rules for finding a seat, rather than providing opportunities for greeting others, leaving others, or practicing how to act in a café with peers taking on other roles in the scene. The majority of prior 3D VLE work has viewed the VLE as an experience of a single user sitting at a computer to take on a specific task with a physically present adult assistant. iSocial, however, seeks to immerse the youth in a VLE for multiple and integrated experiences, as well as to support these youth as they learn collaboratively with and from other members within the VLE. Since Douglas Engelbart wrote his seminal work on augmenting human intellect with technology (Engelbart, 1962), the idea of technology assisting or augmenting human capabilities has been a core principle in the field of designing computer systems for learning and performance. In this sense, the notion of assistive technology is much broader and more general than in the field of special education and is viewed as amplifying human capacity rather than as compensating for disabilities. However, in the practice of design, the blending of affordances and constraints to customize support for unique forms of human capability is common to both special education and more general design work. Two tracks of work in computer systems design for learning and performance seem appropriate to mention as foundations for our conceptualization of social orthotics: performance support and scaffolding. Performance support has been a design approach since the late 1980s and early 1990s, developed in response to the growing presence of computers in the workplace and the need to improve productivity. We do not often speak of this approach now as a separate form of design because it has generally been incorporated into most approaches to the design of modern computer systems.
Tax preparation software, such as TurboTax by Intuit, represents a canonical example of performance support in that it acts as a butler, assisting with tasks that the user knows how to perform, and acts as a coach for tasks unfamiliar or challenging to the user (Laffey, 1995). Scaffolding is the other construct from learning technology that shapes our thinking about social orthotics. Collins, Brown, and Newman (1989) characterized instructional scaffolding as a process in which an expert performs part of a complex task for which a learner is unprepared, thereby allowing the learner to engage in work that would normally be outside his or her grasp. Scaffolding can take the form of a suggestion or other discourse-based assistance, or of specialized devices such as the short skis used in teaching downhill skiing (Burton, Brown, & Fischer, 1984). Explicit forms of instructional scaffolding—those delivered primarily through interaction with an advisor or expert—represent only one kind of scaffolding. Procedure and task facilitation, realized through physical and structural supports that are implicit to the design of an interface, are also forms of scaffolding. This extended notion of scaffolding (Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E., Duncan, R. G., Kyza, E., Edelson, D., & Soloway, E., 2004; Hmelo-Silver, 2006; Lin, 2004), which includes both advisor-like expertise delivered via agents in the 3D VLE and structures designed to constrain and invite appropriate behavior, is a basis for conceptualizing and designing social orthotics.
Early Field Experience for iSocial

A unit on conversational turn-taking, from the five-unit SCI-CBI curriculum (Stichter, Herzog, Visovsky, Schmidt, Randolph, Schultz, & Gage, in review; Stichter, Randolph, Gage & Schmidt, 2007), was developed for delivery in the iSocial VLE prior to our implementation of explicit devices for social orthotics. Four youth (boys on the autism spectrum, ages 11–14), working in pairs, undertook the lessons facilitated by an online guide. For each pair, the unit consisted of two training sessions of one hour each and then four one-hour lessons delivered over a two-week period. Our findings for system usage show iSocial to be easy to use and enjoyable. However, we also found many challenges for social interaction, specifically in executing appropriate turn-taking behavior and coordinating activity. During the lessons there were numerous instances when youth would interrupt each other, fail to initiate conversation when needed, and fail to respond appropriately. The online guide had difficulty facilitating these exchanges, as she could only see avatar behavior, and it took time to determine whether the youth were participating appropriately, participating inappropriately, or simply not attending. The online guide also had trouble coordinating activity in the VLE due to a lack of traditional control mechanisms, such as nonverbal cues. For example, in the classroom, the guide notices subtle cues from students as they start to drift from instruction, and she can use those cues to bring the students back to attention. However, when learners engaged in undesirable behavior in their physical environment, such as gazing out the window or excessively clicking mouse buttons or keyboard keys, the online guide often did not know these behaviors were occurring and could only try verbal prompts to keep the youth on track. In addition, the youth were both curious about the environment and uncertain about how to move effectively. As a result, learners often went missing in action, sometimes out exploring and sometimes trapped in walls or other dead ends of the iSocial environment. Such issues of navigation and inappropriate behavior were distracting, which typically slowed the rate of instruction and impeded the flow of the lessons. Consequently, the online guide was unable to cover the same amount of instruction in one hour in the VLE as is typical in a face-to-face class, causing instruction to sometimes feel rushed.
SOCIAL ORTHOTICS

In our early conceptualization of iSocial, we envisioned devices for mediating the learning activities in ways that provided scaffolding for the youth in the learning process.

Figure 1. A conversation console as an early prototype of social orthotics

For example, Figure 1 shows an early prototype of how a conversation console could be used to both constrain and support turn-taking and to facilitate empathy during the various interactive exercises that made up the curriculum. This form of scaffolding was directly linked to the instructional objectives of the curriculum, such as supporting appropriate turn-taking and trying to understand what others might be thinking or feeling. One might imagine the conversation console operating like an expert coach or advisor, helping the youth make sense of the situation and suggesting attention to certain of its aspects. Following from our review of the literature, which showed the potential of visual representations and the need for tailored assistance, we envisioned varying the implementation and intensity of the visual representation so as to customize the mediation to the individual youth's needs.
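The turn-taking support such a console suggests can be reduced to a small piece of software logic. The sketch below is purely illustrative and is not the iSocial implementation; the names (`TurnTaker`, `Intensity`) and the two intensity levels are a hypothetical rendering of how the implementation and intensity of such an orthotic might be varied per youth, from a hard block on out-of-turn talk to a gentle prompt.

```python
# Illustrative sketch of a turn-taking orthotic; names are hypothetical,
# not part of the iSocial system.
from dataclasses import dataclass, field
from enum import Enum


class Intensity(Enum):
    PROMPT = "prompt"  # deliver the message, but cue the youth to wait
    BLOCK = "block"    # hold out-of-turn messages until the floor is free


@dataclass
class TurnTaker:
    """Grants the conversational floor to one avatar at a time."""
    participants: list
    intensity: Intensity = Intensity.BLOCK
    current: int = 0
    log: list = field(default_factory=list)

    def holder(self):
        # The participant who currently holds the floor.
        return self.participants[self.current]

    def say(self, speaker, message):
        if speaker == self.holder():
            self.log.append((speaker, message))
            return "delivered"
        if self.intensity is Intensity.BLOCK:
            # Hard constraint: the interruption never reaches the group.
            return f"held: it is {self.holder()}'s turn"
        # Soft constraint: the message goes through with a visual prompt.
        self.log.append((speaker, message))
        return f"delivered with prompt: wait for {self.holder()}"

    def pass_turn(self):
        # The guide (or the system) advances the floor to the next avatar.
        self.current = (self.current + 1) % len(self.participants)


console = TurnTaker(["guide", "youth_a", "youth_b"])
print(console.say("youth_a", "Can I go first?"))  # out of turn, so held
console.pass_turn()
print(console.say("youth_a", "Hello!"))           # now delivered
```

Varying the `intensity` setting per participant is one way the same orthotic could be faded over time as a youth's turn-taking improves.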
Based on our early field tests of the turn-taking unit, the need for support for core aspects of social engagement and interaction became apparent. We still envision scaffolding for learning such as the conversation console, but we turned our immediate attention to devices that might help keep students together, focused, and provide errorless learning (something not available in natural contexts) to better scaffold instruction and hence avoid initial excessive and distracting behavioral errors such as interruptions. Our primary focus in developing social orthotics was to assist the youth in being social and to support the online guide, whose role it was to manage youth behavior and facilitate learning in the 3D VLE. Since the nature of a computing environment affords the potential to vary the implementation and intensity of the implementation, our view was to customize orthotics to the individual youth’s needs, present and future, and to provide the orthotic in the most
Social Orthotics for Youth with ASD to Learn in a Collaborative 3D VLE
Figure 2. Conceptual diagram of social orthotics for iSocial: Round 2
appropriate way for the youth, his or her social competency development level, and the activity.
Conceptual Framework for Social Orthotics in iSocial

Social orthotics are software tools and customizations to the virtual environment, integrated into the interface and virtual world in such a way as to support social interaction and mediate the acquisition of social competency through coaching, on-demand assistance and just-in-time feedback. A goal of these orthotics is to enable learners to engage in effective social practice for which they do not have full competence. Figure 2 provides a schema for how these tools pair pedagogical strategies for teaching social
competency with software mechanisms geared towards facilitating pro-social behavior. For our second round of prototyping and field testing to be undertaken in 2009 and early 2010, we are focused on two essential skills for basic social practice: (a) avoiding interruptions (iTalk) and (b) exhibiting proper adjacency, distance and orientation behavior (iGroup). All activity in a 3D VLE is mediated by designed spaces and devices. Since it is the intention of the design work to make all elements work toward the desired ends of competent social practice within the system and practice for social competency beyond the system, it is important to distinguish between three major design aspects related to assisting social practice. The three design elements related to the role of social orthotics are:
Figure 3. An example of a “virtual” physical social orthotic
general environment, physical devices for targeted behavior and dynamic agents. We will illustrate these three design approaches by describing work completed from round one to round two of the design in addressing the problem of youth getting lost or wandering in the space, thus delaying the progress of lessons. As an approach to general environment design we went from an environment that had numerous rooms related to specific elements of the curriculum to a more open layout. In this new environment it was easier for the online guide to see where the students were, and it was less likely that students would get lost in rooms or stuck in walls. An example of physical devices for targeted behavior can be seen in Figure 3. Here the circle indicates a space for the youth to enter which in turn changes their perspective from a third-person view of themselves and others in the scene to a point of view perspective of the materials of the lesson. Entering the circle can also have other properties such as not allowing the user to leave until the guide permits it as well as managing
orientation to other members in the circle and focus on aspects in the user view. An example of an agent, for being in the appropriate place and keeping an appropriate focus, will be described in the next sections, but it includes monitoring user behavior and providing feedback and guidance. While all three of these approaches are meant to be “assistive” for social practice, we consider the latter two to be social orthotics and for the purposes of this chapter, we will focus solely on agent-based forms of orthotics.
iGroup: Avatar Orientation, Adjacency and Distance

A key problem observed during the field test was the difficulty of having the target youth learn in a group when he or she struggled with rudimentary behaviors and orientations necessary for group activity, such as facing another youth and not invading another's space. In the case of our
field test the group was limited to two youth and an online guide. We anticipate groups of 5 to 6 youth with a guide, so mechanisms are needed for helping users manage the non-verbal aspects of group interaction. iGroup is a software-based means to reinforce desired adjacency, distance and orientation behavior and constrain undesirable behavior. Orientation refers to the directionality of the user's avatar towards a speaker. For instance, a user having his or her avatar's back turned to a speaker is considered undesirable behavior, as opposed to the desirable behavior of looking at the speaker. Adjacency refers to how close users' avatars are to one another. For example, a user having his or her avatar directly in front of the speaker's avatar or touching the speaker's avatar is considered undesirable behavior, as opposed to having the avatar approximately within one virtual meter of the speaker (desirable behavior). Distance refers to the area between users' avatars. For example, a user having his or her avatar across the room from the speaker's avatar is considered undesirable behavior, as opposed to having the avatar within three or four virtual meters of the speaker (desirable behavior). The iGroup tool provides users with mechanisms that constrain inappropriate adjacency, distance and orientation behavior and encourage users to follow the rules for appropriate adjacency, distance and orientation when holding a conversation. iGroup monitors users' avatar adjacency, distance and orientation with respect to other users, notifies users when they are displaying inappropriate adjacency, distance and orientation behaviors and constrains their ability to continue these behaviors. In addition, iGroup provides coaching and assistance by sending notifications to users such as, "Someone is speaking, but my back is turned. 
I should turn around and face the speaker or else they may think that I am not interested or I am being rude.” Finally, iGroup can be fit to users’ differing abilities for managing their avatars’ orientation, adjacency and distance. As an example of fitting the functionality to the
individual needs of the youth, one child might be provided text messages reminding him of more appropriate behavior while another child with a record of inappropriate behavior might be “virtually” physically restrained from moving outside the circle or have a specific orientation imposed on his avatar in response to a series of undesirable behaviors. Given a conversation between users with the iGroup tool enabled, inappropriate adjacency, distance or orientation behaviors during a conversation will be identified and the user exhibiting these behaviors will be provided with a notification. From the user’s perspective, iGroup sends notifications to the user’s screen when undesirable adjacency, distance, or orientation behaviors are detected. From the guide’s or administrator’s perspective, iGroup is configured using a settings panel which can be selected from the iSocial client window’s menu.
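The orientation monitoring just described can be sketched in a few lines of Python. The function name, the 2D coordinate convention and the 45-degree tolerance below are our own illustrative assumptions, not the actual iSocial implementation.

```python
import math

def is_facing(listener_pos, listener_heading_deg, speaker_pos, tolerance_deg=45.0):
    """Return True if the listener's avatar is oriented toward the speaker.

    listener_heading_deg is the direction the avatar faces, in degrees
    (0 = along the +x axis, increasing counter-clockwise).
    """
    dx = speaker_pos[0] - listener_pos[0]
    dy = speaker_pos[1] - listener_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between the avatar's heading and the bearing
    # to the speaker, normalized into [-180, 180).
    diff = (bearing - listener_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance_deg

# A listener at the origin facing +x sees a speaker directly ahead...
print(is_facing((0, 0), 0.0, (3, 0)))   # True
# ...but not one directly behind (back turned, the undesirable behavior).
print(is_facing((0, 0), 0.0, (-3, 0)))  # False
```

In iGroup's terms, a notification would be sent only after such a check remained False beyond the configured time threshold.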
Use Case

In the case described here, the guide set the time before notification for orientation, adjacency, and distance to three seconds. If one user remained too close to another user for three seconds, that user received a notification. If a user began speaking and another user was far away and did not move to within an appropriate distance of the speaker within three seconds, that user received a notification. If a user began speaking and another user's avatar was not oriented towards the speaker and/or did not turn his or her avatar to face the speaker within three seconds, that user received a notification. In our example, Joe and Ryan were present in a virtual space, were approximately eight virtual meters apart and were facing away from one another. Joe began speaking to Ryan. Ryan listened to Joe, but did not turn to face him or move any closer to him. Joe continued speaking for more than three seconds. Ryan then received two notifications: one prompting him to orient
Figure 4. Illustration of an iGroup notification prompting re-orientation
his avatar towards Joe (see Figure 4) and the other prompting him to move closer to Joe. After the notification Ryan then moved very close to Joe and properly oriented his avatar. Because Ryan was too close to Joe, he received a notification after three seconds elapsed. Because his avatar was correctly oriented to Joe, Ryan did not receive a second notification regarding orientation. Over time Ryan improved his orientation and adjacency behavior and received fewer and fewer notifications related to these behaviors. The iGroup software detected this change in behavior and decreased the frequency of notifications that Ryan received for these behaviors. However, Ryan continued to move away from the speaker and received notifications related to distance. The iGroup software detected this and increased the frequency of notifications that Ryan received regarding his distance behavior.
Settings Panel

The guide or administrator is able to configure iGroup using a settings panel. This settings panel
is used to set the orientation settings, adjacency settings and distance settings, as well as to set notification messages customized to the pedagogical levels of learning for the youth. Figure 5 shows the options for setting orientation controls. The orientation settings make it possible to set the amount of time that can elapse when a user exhibits undesirable orientation behavior before a notification is sent. Acquisition, maintenance and fluency are pedagogical levels which will be discussed in the section on pedagogical strategies, but the mock-up in Figure 5 shows that they have default duration settings which can be overridden manually. The notifications area toggles notifications on/off, sets the duration that the notification is displayed on the client's screen and sets custom notification messages. In practice, the iGroup tool determines if others' avatars are appropriately or inappropriately oriented to the speaker (see Figure 6). The software allows the pre-determined "Notification Duration" setting time to elapse prior to sending a notification to any users exhibiting inappropriate orientation behavior. This delay provides users the chance to appropriately orient
Figure 5. Mock-up of the iGroup settings panel
Figure 6. Top-down view of avatars exhibiting inappropriate orientation behavior (left) and appropriate orientation behavior (right)
their avatar without receiving a reminder notification from the system. For example, if a user hears someone speaking and turns to face the speaker within the given time limit, that user would not receive a notification. However, if the user does not turn his or her avatar to the speaker within the given time allotted, that user would receive a notification. The delay also constrains the system
from sending a notification if, for example, the speaker is only making a brief statement and not beginning a continued discourse. The settings under the adjacency tab define a personal space for the avatars, such as a diameter of one virtual meter from the center of the avatar. A proximity trigger is activated in the iGroup tool if another avatar enters and stays in the space
beyond the threshold time. The settings under the distance tab control the behavior of pop-up notifications related to users' avatar distance from one another. The distance settings make it possible for an administrator or instructor to set the distance diameter and the amount of time that can elapse when a user exhibits undesirable distance behavior before a notification is sent. The distance diameter is defined as a space around an avatar that is speaking. When one user begins speaking, the iGroup tool determines if others' avatars are appropriately or inappropriately distanced from the speaker based on the value provided in the settings panel for "Distance Diameter."
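Combining the adjacency and distance rules, a minimal classification check might look like the following sketch. The default diameters follow the examples in the text (a one-virtual-meter personal space, a three-to-four-meter conversational range); all names are illustrative assumptions.

```python
import math

PERSONAL_SPACE_DIAMETER = 1.0  # adjacency: too close inside this (virtual meters)
DISTANCE_DIAMETER = 4.0        # distance: too far outside this (virtual meters)

def classify_spacing(listener_pos, speaker_pos,
                     personal=PERSONAL_SPACE_DIAMETER,
                     conversational=DISTANCE_DIAMETER):
    """Classify a listener's spacing relative to the current speaker."""
    d = math.dist(listener_pos, speaker_pos)
    if d < personal / 2:     # inside the speaker's personal space
        return "too_close"
    if d > conversational:   # across the room from the speaker
        return "too_far"
    return "ok"

print(classify_spacing((0, 0), (0.2, 0)))  # too_close
print(classify_spacing((0, 0), (8, 0)))    # too_far
print(classify_spacing((0, 0), (2, 0)))    # ok
```

As with orientation, a proximity trigger would fire a notification only if "too_close" or "too_far" persisted beyond the threshold time.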
iTalk: Speaking/Listening Tool

iTalk is a software-based means to reinforce desired speaking and listening behavior and constrain undesirable behavior. The first iteration of iTalk focuses specifically on eliminating audio interruptions. This tool will monitor conversation, will inform users when they are interrupting and, if needed, will constrain their ability to continue speaking out of turn. Moreover, iTalk will provide coaching and assistance by sending notifications to users such as, "I just interrupted my partner. Maybe I should wait for a pause in conversation before I speak." In addition, iTalk will be able to dynamically adjust its settings to fit users' differing conversational abilities. From the user's perspective, iTalk displays the frequency of conversational interruptions on the screen and presents the user with a notification when a specified threshold of interruptions is met. From the instructor's or administrator's perspective, iTalk is configured using a settings panel which can be selected from the iSocial client window's menu. The iTalk tool monitors audio by hooking into the microphone channel on users' clients. Assuming silence, when one user begins speaking, that user is assigned the speaking floor. If another user begins speaking but
does not have the speaking floor, the utterance is detected on that user's microphone channel and is counted as an interruption. Obviously, this is a gross oversimplification of conversation dynamics and turn-taking behavior and has the potential for falsely identifying interruptions if, for example, a user accidentally brushes his or her microphone, there is a loud noise in the background, or the user makes a common interjection such as "uh huh" or "yeah." To control for this, the sensitivity can be adjusted within the tool. The tool can be configured to allow for a certain degree of conversational overlap. For instance, the tool can be configured to allow one user to interject during a conversation for less than one second. In addition, frequency thresholds, which allow the user to make a few interruptions before the system sends a notification, help to control for falsely identified interruptions.
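The floor-holding and overlap-allowance logic just described might be sketched as follows; the class and method names are our own invention, not part of iTalk.

```python
class InterruptionDetector:
    """Minimal sketch of speaking-floor tracking with an overlap allowance."""

    def __init__(self, overlap_allowance=1.0):
        self.floor_holder = None    # user currently holding the speaking floor
        self.overlap_allowance = overlap_allowance  # tolerated overlap, seconds
        self.overlap_start = {}     # user -> time their overlapping speech began

    def on_audio(self, user, timestamp):
        """Report audio on `user`'s microphone channel at `timestamp` (seconds).

        Returns True if this utterance counts as an interruption.
        """
        if self.floor_holder is None or self.floor_holder == user:
            self.floor_holder = user  # silence: the first speaker takes the floor
            self.overlap_start.pop(user, None)
            return False
        # Someone else holds the floor: tolerate brief interjections
        # ("uh huh", a brushed microphone) up to the overlap allowance.
        start = self.overlap_start.setdefault(user, timestamp)
        return (timestamp - start) >= self.overlap_allowance

    def on_silence(self, user):
        """Report that `user` stopped speaking."""
        self.overlap_start.pop(user, None)
        if self.floor_holder == user:
            self.floor_holder = None

det = InterruptionDetector()
det.on_audio("joe", 0.0)          # Joe takes the floor
print(det.on_audio("ryan", 0.5))  # brief overlap -> False
print(det.on_audio("ryan", 2.0))  # sustained overlap -> True
```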
Use Case

The instructor set an interruption threshold of five interruptions in 30 seconds in the iTalk settings panel. If a user interrupted five times in 30 seconds, a notification displayed on his or her screen informing him or her that the interruption threshold was met and providing coaching hints and tips for avoiding future interruptions. Joe and Ryan began speaking, and a progress meter showing the amount of time left until the interruption threshold reset began to count down. Joe interrupted frequently during the conversation. Each interruption caused a separate progress meter showing the number of interruptions to increment by one. When Joe made five interruptions within 30 seconds, a notification pop-up displayed on his screen stating that he had interrupted too frequently and providing tips on avoiding interrupting. If Joe continued to receive notification pop-ups for three consecutive 30-second intervals, iTalk dynamically adjusted the interruption threshold to meet Joe's level of ability. Ryan did not interrupt frequently. In this case, iTalk hypothesized that Ryan's threshold was
too easy for his level of ability. The exact way that iTalk will work is not completely specified, but in this case Ryan might have received a token as a reward for his performance and been dynamically moved to a more challenging threshold.
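The five-interruptions-in-30-seconds rule from this use case amounts to a sliding-window counter, which might be sketched as follows (names are illustrative):

```python
from collections import deque

class InterruptionThreshold:
    """Sliding-window interruption counter, e.g. five interruptions in 30 s."""

    def __init__(self, max_interruptions=5, window_seconds=30.0):
        self.max = max_interruptions
        self.window = window_seconds
        self.times = deque()  # timestamps of recent interruptions

    def record(self, timestamp):
        """Record one interruption; return True if the threshold is met."""
        self.times.append(timestamp)
        # Drop interruptions that have aged out of the window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.max

thresh = InterruptionThreshold()
hits = [thresh.record(t) for t in (1, 4, 9, 15, 22)]
print(hits[-1])           # True: fifth interruption within 30 seconds
print(thresh.record(60))  # False: the earlier interruptions have aged out
```

The dynamic adjustment described above would then raise or lower `max_interruptions` and `window_seconds` based on how often the threshold is met.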
Settings Panel

The iTalk settings panel shown in Figure 7 is used to set the interruption threshold, enable/disable progress meters, set a custom notification message and enable/disable user muting. The interruption threshold makes it possible for the instructor or administrator to set the number of interruptions that are allowed within a given time period before a notification is sent. The three pedagogical levels have default settings which can be overridden manually. The progress meters check box toggles the visibility on the
client’s display of interruption progress meters. Notifications can be toggled on/off, can be set for a display duration on the client’s screen and have custom notification messages. In addition to the interruption threshold, progress meters and notifications, the settings panel allows the instructor or administrator to mute a user for a given time duration when an interruption threshold is met. Figure 8 shows how the progress indicators and pop-up notifications are displayed on the user’s screen. When iTalk is enabled, the user sees two progress meters on the bottom-right portion of the iSocial client window. The meter on the right is a timer and represents the time interval set by the administrator or the instructor in the settings panel. The meter on the left indicates the number of times a user has interrupted in a given time interval. When the time interval reaches zero, both meters reset.
Figure 7. Mock-up of iTalk settings panel
The meter indicating the interruption threshold is color coded (green, yellow and red) in order to convey how close a user is to receiving an interruption notification. The interruption indicators use an incremental model; that is, given an interruption threshold of five interruptions in 30 seconds, for the first interruption the progress meter displays green, for the next two interruptions it displays yellow, and for the fourth and fifth interruptions it displays red. Green indicates a lower interruption frequency, yellow a moderate interruption frequency and red a severe interruption frequency.
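The incremental color model can be expressed as a simple mapping; the boundaries follow the description above (first interruption green, next two yellow, fourth and fifth red), with names of our own choosing.

```python
def meter_color(interruptions):
    """Map an interruption count within the window to a meter color."""
    if interruptions <= 1:
        return "green"   # lower interruption frequency
    if interruptions <= 3:
        return "yellow"  # moderate interruption frequency
    return "red"         # severe interruption frequency

print([meter_color(n) for n in range(1, 6)])
# ['green', 'yellow', 'yellow', 'red', 'red']
```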
Pedagogical Strategy

The social orthotic tools are designed with a three-phase model of capability. The phases are: (a) acquisition, (b) maintenance and (c) fluency.
The acquisition phase is for users who have not yet acquired the ability; hence, the times that elapse before a notification is sent are short and the goals for appropriate behavior may be lower or less refined. The maintenance phase is for users who have acquired rudimentary ability, so the times that elapse before a notification is sent are moderate. The fluency phase is for users who have become adept at the competency and long times can elapse before a notification is sent. By the fluency phase goals for appropriate behavior are quite refined and expectations are as close to those in typical environments as possible. The support, prompts and scaffolding provided by the orthotics fade across the phases of acquisition, being heavy yet tolerant during the acquisition phase, moderate during the maintenance phase and light during the fluency phase. An overview of how fading works across phases of acquisition is provided below:
Figure 8. Mock-up of iTalk progress meters and pop-up notifications as seen by the user
• Acquisition
◦ Shorter times before notifications are sent.
◦ More specifically and clearly worded notifications (e.g., "You are too close to the speaker." and "You have interrupted.")
◦ Additional hints and strategies for avoiding inappropriate behavior.
◦ Specific hints and strategies provided to avoid inappropriate behavior ("pre-corrects").
◦ More tolerant expectations.
• Maintenance
◦ Moderate times before notifications are sent.
◦ Less specifically worded notifications (e.g., "If you stand so close to someone, you might make them uncomfortable or they might think you are being rude.")
◦ Some hints and strategies ("pre-corrects") for avoiding inappropriate adjacency, distance, orientation and interrupting behavior.
• Fluency
◦ Longer times before notifications are sent.
◦ Few notifications.
◦ Occasional and generalized "pre-corrects."
◦ Expectations most resemble those of typical environments.

Unless there is some basis for choosing a different phase, at the beginning of the curriculum orthotics are set to the acquisition phase. Thereafter, the behavior of the orthotic is dynamically adjusted within a phase, and when moving to another phase, based on the youth's performance. The orthotic tool is able to determine a user's ability by the number of times a user receives a notification of inappropriate behavior. For instance, if a user is in the acquisition phase and receives five notifications of inappropriate adjacency
behavior, iGroup will adjust in order to increase the frequency of notifications that user receives. If a user is in the acquisition phase and receives very few or no notifications, iGroup will adjust in order to decrease the frequency of notifications that user receives. The social orthotic tool also maintains a log of each user's behavior related to that orthotic and is able to create a report for the guide at the end of a lesson or for review before the next session. The online guide can use this report to determine changes over time in a given user's social behavior. For instance, if a user is not making progress and exhibits little or no change in behavior over time, the guide can be made aware of this through the reporting functionality. The guide and researchers can also use the social orthotic reports to determine specific times or parts of lessons that cause difficulty for users and use this information to focus specifically on these issues.
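The phase defaults and the dynamic adjustment rule might be sketched as follows; the specific durations and counts here are our assumptions, since the text specifies only that support is heavy during acquisition, moderate during maintenance and light during fluency.

```python
# Illustrative per-phase defaults; actual values would be set in the
# settings panels described above.
PHASE_DEFAULTS = {
    "acquisition": {"notify_after_s": 3.0, "max_interruptions": 5},
    "maintenance": {"notify_after_s": 6.0, "max_interruptions": 3},
    "fluency":     {"notify_after_s": 10.0, "max_interruptions": 2},
}

ORDER = ["acquisition", "maintenance", "fluency"]

def adjust_phase(phase, notifications_this_lesson):
    """Move a user between phases based on how many notifications were sent.

    Many notifications mean support should intensify (step back a phase);
    none mean support can fade (step forward a phase).
    """
    i = ORDER.index(phase)
    if notifications_this_lesson >= 5 and i > 0:
        return ORDER[i - 1]
    if notifications_this_lesson == 0 and i < len(ORDER) - 1:
        return ORDER[i + 1]
    return phase

print(adjust_phase("maintenance", 7))  # acquisition
print(adjust_phase("maintenance", 0))  # fluency
```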
USABILITY TESTING

In the spring of 2009 a usability test was undertaken for the iTalk social orthotic. Two youths from the previous field test were invited to participate. The study included an online guide, and both participants simultaneously and collaboratively worked through two usability protocols. The screens of the participants were recorded using ScreenFlow screen-recording software that allowed for keyboard and mouse tracking. ScreenFlow also enabled the computer's web camera to record the physical behavior of the participant working at his computer. Each protocol lasted approximately one hour. During each usability test, the iTalk software tool was enabled for the full duration of the test. A default setting for receiving pop-up notifications from iTalk was used for the entirety of protocol one. Three different notification settings were used for protocol two: high notification frequency, medium notification frequency and low notification frequency. Participants received no training
on iTalk for the first protocol, but did receive training for the second protocol. In the second usability protocol, participants first reviewed a short video of their experience in the first usability protocol and then received training on using iTalk. Following the training, participants engaged in a conversation-intense, game-like activity using the iTalk software set at a high notification frequency. The notification frequency was set to medium on the second iteration as the participants worked through the activity a second time. Participants were able to complete all of the tasks from both protocols in the iSocial environment, although not without help. Participant one needed more help than participant two. Both participants characterized their experience as easy and enjoyable, and both said they would like to return to continue using iSocial. During protocol one, both subjects noticed the pop-up text notifications and the meters. They understood the text messages, but expressed confusion about the meaning and purpose of the meters. They both saw changes in the meters but did not readily understand how changes in the meter representation related to their own behavior. When asked their opinions about the meters, participant two said that "they're distracting, and they're bright. I hate bright." Participant one agreed with the negative sentiment, saying "They get annoying too." Participant two thought the social orthotic was too sensitive and gave too many notifications. He stated that the pop-up message appeared when he "didn't mean to interrupt," explaining that he was just "moving the microphone." Indeed, participant two touched his microphone to get the pop-up message deliberately several times. Participant one seemed to take the orthotic more seriously. At one point, he tried to say something while the online guide was talking, but when he noticed a change in his meters, he gave up the attempt and kept quiet. 
Participant two, on the other hand, appeared to be enjoying getting the pop-up window by moving or touching his microphone. When asked about whether they tried to interrupt less,
participant one first claimed that he did not try, but participant two claimed, "I tried. Didn't work." Participant one corrected himself by saying "I tried too. But it didn't work." Prior to protocol two, the two participants watched a video of some of their activity in protocol one, with the guide using the video to show how iTalk worked. After the guide illustrated the functionality of the two meters, participant two acknowledged, "it makes sense," and participant one was able to restate the functionality of the two meters correctly. He explained that "when you speak when other person spoken, then this timer [the yellow bar] goes down. The green one goes up." Upon prompting from the physical facilitator, both participants understood that they were to try not to interrupt during the session. After the first activity in protocol two, neither participant had received any pop-up text notifications for verbal interruptions. They reported that they attended to changes in the meters and tried not to interrupt. Participant two said, "That's [the change of the meters] why I was silent for a few times." Participant one also reported, "when I noticed the yellow one went down, that means I was interrupting. So I shut up my mouth and just pay attention." After the second activity of the protocol, both participants reported that the orthotics were less sensitive than before. Participant two described it as "the thing didn't pop-up, but it still says that I'm talking," and also as "looks like if I did it multiple times, it just says 'you have interrupted' once," which indicated that he understood the functionality of the meters. However, participant one thought the orthotics had been shut down.
Key Lessons for Social Orthotics

The purpose of the usability test was to develop insights for improving the human-computer interaction of a system, not to draw conclusions about the value of the concepts and principles in play. Keeping this purpose in mind, the findings from the
usability test suggest several insights about the use of iTalk. In protocol one, although the participants did not fully understand the mechanisms, they did attend to them. However, there did not seem to be any substantial regulation of interruption behavior, even from text that specifically told the participants they were interrupting. In protocol two the participants better understood the mechanisms and seemed to self-regulate their interruption behavior by attending to the visual cues from the meters. It is hard to tell if there was any impact from the text messages, but the meters seemed to establish a feedback loop that was attended to and used in regulating verbal behavior. Additionally, in protocol one the participants complained that iTalk was annoying and too sensitive. However, in protocol two they no longer complained about iTalk being annoying and saw it as less sensitive or even turned off (although it was not). These observations suggest that as the youth were able to understand and thus use the visual cues from the meters, iTalk started to become effective and accepted by the participants. Taken together, our lessons from the reviews of literature and from the usability results suggest several assertions about the design and development of social orthotics for youth with ASD in a virtual environment for learning social competence. First, the visual nature of the representation seems to have some impact. This assertion is strongly suggested in the literature and seems to be borne out by the role of the meters in iTalk. The text messages from the pop-up notifications provided information to the participants and may have exerted some regulatory influence on their behavior, but the regulatory influence of the meters in protocol two seemed much more profound. 
A second assertion is that when the participants understood the relationship between the visual meters and their behavior, they created a feedback loop that was a dynamic mediator of their own behavior. In this sense they seemed to take ownership of the meters as their own tools. In Mind as Action, James Wertsch (1998) characterizes "ownership" or "appropriation" as one of the most profound relationships that users can have with the tools they use to interact in their socio-cultural milieu. Having ownership of the tools gives the user a sense of power and authority to act. While we may not want to make too much of the small set of data we have collected in the usability test, it makes sense to use a "sense of ownership" as an attribute to be examined and striven for in the design, development and implementation of social orthotics. Is the orthotic appropriated as empowering by the user, or seen as a constraining annoyance in the service of others? A third lesson suggests the relevance of customization and adaptability in orthotics. We see this lesson in three forms, the first being that the youths have different capabilities relevant to the social practices and experience the VLE in different ways; thus the participants need orthotics that fit their individual profiles. The computer environment affords the potential to match orthotics to profiles, but we still have much to learn about just what is relevant in the student profile of experience and capability and how best to match characteristics of the orthotic, such as duration and form of feedback, to meet individual needs. A second form of lesson three is that the orthotics should also match the task and environment. For example, orthotics for not interrupting during turn-taking in game playing may require different features than for not interrupting when the youth is talking with a teacher or counselor. A third form of this lesson is that in the iSocial context some of the capabilities that the orthotics are supporting are also the target of the curriculum. Thus one might expect an upward trajectory for these capabilities as the youth progress through the curriculum. What is the relationship between the curriculum and the orthotics? 
For example, if the youth gets to a later unit in the curriculum, but the orthotic still needs to apply methods from the “acquisition” phase, are new approaches needed from the curriculum, orthotics or both?
FUTURE RESEARCH AND DESIGN

It is quite obvious that there is much more "future" than "past" in research and design for social orthotics in support of social practice and learning in a 3D VLE by youth with ASD. Our designs for iTalk and iGroup, while quite exciting to us, are still fairly rudimentary. We will continue a process of research and design iteration as we seek to articulate our vision into software tools. A first step is to take the lessons learned from the usability test and re-implement iTalk and implement iGroup for a next field test. Fortunately, with support from Autism Speaks and the Institute of Education Sciences of the U.S. Department of Education, we have resources both to investigate best approaches to social orthotics and to develop a full implementation of iSocial. The social orthotics we have described and specified need to be fully and well implemented, but we also need to think beyond the current aspects to see if there are other important features to grouping beyond adjacency, distance or orientation, and to talking beyond interruptions. Obviously there are, but can we find effective ways to monitor and provide feedback for them? Beyond extending the capabilities that orthotics can help regulate, we also need research on how best to implement the orthotics. For example, under our lessons learned we speculate that the meters had a special prominence in regulating interruptions because of their visual cues and the match that visual information has with the ways individuals with ASD process information. However, the influence on interruptions may also have come because the meters represented a scoring-like function that made the activity game-like. In our results both mechanisms may have been at work. Can we isolate the impact of visual representation from game-like challenge? Can we find the best ways to harness both mechanisms for the power of orthotics? Is there something else going on that we have not considered? 
These questions are quite exciting, and iSocial is a good laboratory for exploring these and other design principles. A final area for continued research and development stems from the lesson described above related to customization and adaptability. These concepts seemingly hold great promise, yet we are just at the beginning of imagining how best to support individual differences, contextual relevance and trajectories of development.
CONCLUSION

The many special education researchers who have contributed to advances in assistive technology do so because they see the potential of design and engineering to overcome disabilities and provide more normal functioning to those otherwise limited or deprived. For individuals with ASD these design and engineering efforts primarily attend to mechanisms for communication and social interaction. As computers have moved from devices that simply calculate and word process to environments that support communication and being social, attention to how software design best supports social behavior is warranted, and it is especially important for individuals who are non-typical in the way they interact and process information for social interchange. These new computer environments will increasingly be called upon as supplements to traditional forms of work and learning, or in some cases will entirely replace them. For example, K-12 education is increasingly being delivered online and outside of traditional schools. The Sloan Consortium estimates that over one million K-12 students were engaged in online learning in the 2007-2008 school year (Picciano & Seaman, 2008). Further, Christensen, Horn, and Johnson (2008) predict that by 2013 10% of all K-12 school enrollments will be online, and that by 2018 the number will be 50% of all enrollments. Our particular interest in social orthotics is to build a custom 3D VLE for youth with ASD to develop social competence in a way that overcomes limited access to these forms of educational support. However, as suggested by the statistics on the growing use of online education in K-12, social orthotics offer great potential to assist students with special needs to participate in new and more effective ways with others in many forms of online education. For example, can social orthotics help a student and his mathematics teacher achieve better teaching and learning outcomes by using online aids for lessons? While we are excited about what we are learning about how to implement social orthotics in a 3D environment for youth with ASD, speculation must be tempered by how much we still need to learn about how youth will use these tools, what impact they may have on social interaction and learning, and the potential for unintended consequences. Clearly, though, social orthotics in 3D VLEs is an area for further research and development. Furthermore, our abilities to use visual cues appropriately, to customize and fit the orthotics to the individual, the task and the environment, to provide orthotics in a way that gives ownership to the youth, and to see the use of orthotics in a virtual world as part of a developmental trajectory will be key to innovation and achievement.
ACKNOWLEDGMENT

The authors wish to acknowledge the University of Missouri Research Board, the Thompson Center for Autism and Neurodevelopmental Disorders, and grant #2915 (principal investigator, James Laffey) from Autism Speaks for support for the work described in this chapter.

REFERENCES

Burton, R., Brown, J. S., & Fischer, G. (1984). Skiing as a model of instruction. In Rogoff, B., & Lave, J. (Eds.), Everyday cognition: Its development in social context (pp. 139–150). Cambridge, MA: Harvard University Press.

Christensen, C. M., Horn, M. B., & Johnson, C. W. (2008). Disrupting class: How disruptive innovation will change the way the world learns. New York: McGraw-Hill.

Cobb, S., Beardon, L., Eastgate, R., Glover, T., Kerr, S., & Neale, H. (2002). Applied virtual environments to support learning of social interaction skills in users with Asperger's Syndrome. Digital Creativity, 13(1), 11–22. doi:10.1076/digc.13.1.11.3208

Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In Resnick, L. B. (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Hillsdale, NJ: Lawrence Erlbaum Associates.

Engelbart, D. (1962). Augmenting human intellect: A conceptual framework (Summary report, Contract AF 49(638)-1024). SRI International.

Hmelo-Silver, C. E. (2006). Design principles for scaffolding technology-based inquiry. In O'Donnell, A. M., Hmelo-Silver, C. E., & Erkens, G. (Eds.), Collaborative reasoning, learning and technology (pp. 147–170). Mahwah, NJ: Erlbaum.

Hodgdon, L. Q. (1995). Solving social-behavioral problems through the use of visually supported communication. In Quill, K. A. (Ed.), Teaching children with autism: Strategies to enhance communication and socialization (pp. 265–286). New York: Delmar.

Laffey, J. (1995). Dynamism in performance support systems. Performance Improvement Quarterly, 8(1), 31–46.

Laffey, J., Schmidt, M., Stichter, J., Schmidt, C., & Goggins, S. (2009). iSocial: A 3D VLE for youth with autism. Proceedings of CSCL 2009, Rhodes, Greece.
Laffey, J., Schmidt, M., Stichter, J., Schmidt, C., Oprean, D., Herzog, M., & Babiuch, R. (in press). Designing for social interaction and social competence in a 3D-VLE. In Russell, D. (Ed.), Cases on collaboration in virtual learning environments: Processes and interactions. Hershey, PA: Information Science Reference.

Lin, F. (Ed.). (2004). Designing distributed learning environments with intelligent software agents. Hershey, PA: Information Science Publishing.

Mirenda, P. (2001). Autism, augmentative communication, and assistive technology: What do we really know? Focus on Autism and Other Developmental Disabilities, 16(3), 141–151. doi:10.1177/108835760101600302

Mirenda, P. (2009). Introduction to AAC for individuals with Autism Spectrum Disorders. In Mirenda, P., & Iacono, T. (Eds.), AAC for individuals with Autism Spectrum Disorders (pp. 247–278). Baltimore, MD: Paul H. Brookes.

Mitchell, P., Parsons, S., & Leonard, A. (2007). Using virtual environments for teaching social understanding to 6 adolescents with autistic spectrum disorders. Journal of Autism and Developmental Disorders, 37(3), 589–600. doi:10.1007/s10803-006-0189-8

Norman, D. (1994). Things that make us smart. Reading, MA: Addison-Wesley.

Parsons, S., Leonard, A., & Mitchell, P. (2006). Virtual environments for social skills training: Comments from two adolescents with autistic spectrum disorder. Computers & Education, 47, 186–206. doi:10.1016/j.compedu.2004.10.003

Picciano, A. G., & Seaman, J. (2008). K-12 online learning: A 2008 follow-up of the survey of U.S. school district administrators. The Sloan Consortium.
Pierangelo, R., & Giuliani, G. (2008). The educator's step-by-step guide to classroom management techniques for students with autism. Thousand Oaks, CA: Corwin Press.

Quill, K. (1997). Instructional considerations for young children with autism: The rationale for visually cued instructions. Journal of Autism and Developmental Disorders, 27, 697–714. doi:10.1023/A:1025806900162

Quintana, C., Reiser, B. J., Davis, E. A., Krajcik, J., Fretz, E., & Duncan, R. G. (2004). A scaffolding design framework for software to support science inquiry. Journal of the Learning Sciences, 13, 337–386. doi:10.1207/s15327809jls1303_4

Robins, B., Dautenhahn, K., te Boekhorst, R., & Billard, A. (2004). Robots as assistive technology: Does appearance matter? Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication, Kurashiki, Okayama, Japan.

Schmidt, M., Laffey, J., Stichter, J., Goggins, S., & Schmidt, C. (2008). The design of iSocial: A three-dimensional, multiuser, virtual learning environment for individuals with autism spectrum disorder to learn social skills. The International Journal of Technology, Knowledge and Society, 2(4), 29–38.

Stichter, J. P., Herzog, M. J., Visovsky, K., Schmidt, C., Randolph, J., Schultz, T., & Gage, N. (in review). Social competence intervention for youth with Asperger Syndrome and high-functioning autism: An initial investigation. Manuscript submitted to the Journal of Autism and Developmental Disorders.

Stichter, J. P., Randolph, J., Gage, N., & Schmidt, C. (2007). A review of recommended practices in effective social competency programs for students with ASD. Exceptionality, 15, 219–232.

Wertsch, J. (1998). Mind as action. New York: Oxford University Press.
KEY TERMS AND DEFINITIONS

3-Dimensional Virtual Learning Environment (3D-VLE): A software system representing three-dimensional space to simulate physical movement and interaction with objects and other members, designed to support teaching and learning activity.

Avatar: A user's representation on a computer. In a 3D VLE avatars are usually virtual representations of humans that can move throughout a virtual space.

Dynamic Agents: Used here to represent social orthotics that monitor user behavior and intervene based on a set of variables that may change through the interaction and over time.

iGroup: A form of software-based social orthotic to reinforce adjacency, distance and orientation behavior and constrain undesirable behavior such as looking away from the speaker.

iTalk: A form of software-based social orthotic to reinforce desired speaking and listening behavior and constrain undesirable behavior such as interrupting others.

Pedagogical Strategy: A method for supporting learning outcomes. In iSocial we implement a method for differentially constraining and providing feedback for behavior based on a user's learning phase: (a) acquisition, (b) maintenance or (c) fluency.

Scaffolding: Types of structures that support advanced performance when users may be novices or in a learning process.

Social Orthotics: Types of structures that facilitate social interaction and social learning when there is an expectation that a natural and effective process is unlikely; used here to represent unique computational functionality to support talking and orientation to others in a 3D VLE.
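As an illustration of how a dynamic agent such as iTalk might be structured, the sketch below monitors a stream of talk events and decides when to intervene based on the user's learning phase. It is purely illustrative: the event model, class names and tolerance thresholds are hypothetical assumptions, not details of the actual iSocial implementation.

```python
# Hypothetical sketch of an iTalk-style dynamic agent. All names and
# thresholds are invented for illustration; the iSocial software is not
# specified at this level of detail in the chapter.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TalkEvent:
    speaker: str                   # avatar that begins producing audio
    started_while: Optional[str]   # who was already speaking, or None


class InterruptionMonitor:
    """Counts interruptions and decides when to intervene.

    In the acquisition phase, feedback is given on every interruption;
    in later phases the agent tolerates more before intervening.
    """

    TOLERANCE = {"acquisition": 0, "maintenance": 2, "fluency": 4}

    def __init__(self, phase: str = "acquisition"):
        assert phase in self.TOLERANCE
        self.phase = phase
        self.interruptions = 0

    def observe(self, event: TalkEvent) -> bool:
        """Return True if the agent should intervene (e.g., update a meter)."""
        if event.started_while is not None and event.started_while != event.speaker:
            self.interruptions += 1
            return self.interruptions > self.TOLERANCE[self.phase]
        return False


monitor = InterruptionMonitor(phase="maintenance")
events = [
    TalkEvent("A", None),   # A starts in silence: fine
    TalkEvent("B", "A"),    # B interrupts A: 1st interruption, tolerated
    TalkEvent("C", "B"),    # 2nd interruption, tolerated
    TalkEvent("A", "C"),    # 3rd interruption: intervene
]
flags: List[bool] = [monitor.observe(e) for e in events]
print(flags)  # [False, False, False, True]
```

The design point the sketch tries to capture is the pedagogical strategy above: the same monitoring logic yields different amounts of constraint and feedback depending on the learner's phase.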
Chapter 6
Cognition Meets Assistive Technology: Insights from Load Theory of Selective Attention Neha Khetrapal University of Bielefeld, Germany
ABSTRACT

This chapter deals with the issue of treating disorders with the help of virtual reality (VR) technology. To this end, it highlights the concept of transdiagnostic processes (like cognitive biases and perceptual processes) that need to be targeted for intervention and are at risk of becoming atypical across disorders. There have been previous theoretical attempts to explain such common processes, but these exercises have not been conducted with a rehabilitative focus. Therefore, this chapter urges greater cooperation between researchers and therapists and stresses the intimate links between cognitive and emotional functioning that should be targeted for intervention. The chapter concludes by providing future directions for helping VR to become a popular tool and highlights issues in three different areas: (a) clinical, (b) social and (c) technological. Coordinated research efforts in these directions will benefit the understanding of cognitive architecture and rehabilitation alike.
DOI: 10.4018/978-1-61520-817-3.ch006

Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

This chapter will begin with background on the concept of cognitive rehabilitation and will illustrate a recently successful and popular example of it. It will then describe cognitive models that explain cognitive and emotional functioning and how these could give rise to disorders. A major focus of the chapter is to highlight how various disorders could be treated in a similar manner and how technology could aid this process. This brings in the concept of the transdiagnostic approach, the basic tenet of which is that the processes that serve to maintain disorders cut across these disorders and hence could be addressed with a single, appropriately built technology. Though there has not been much research in this direction, because therapists prefer to specialize in particular treatment approaches and disorders, this kind of work has picked up momentum due to the recent scientific focus on interdisciplinary frameworks. This chapter will make an initial attempt in this direction by describing how cognitive theories could be applied to understanding transdiagnostic processes like attentional biases and perceptual processing. It will also attempt to describe the merger of cognitive architecture, especially the transdiagnostic processes, with recent rehabilitative tools. Since much work remains to be done in this direction, the chapter will highlight the areas that require research attention and, at the same time, will provide future directions for embarking upon this process. The chapter will provide an important resource for understanding transdiagnostic processes in terms of assistive technology to psychologists, cognitive scientists, teachers, parents, students of psychology, neuroscientists and rehabilitation professionals.
BACKGROUND

Cognitive Rehabilitation

The aim of rehabilitation is to maintain an optimal level of functioning in physical, social and psychological domains (McLellan, 1991). Therefore, a rehabilitation program is designed for a particular individual and is conducted over a period of time based on the nature of the impairment. The basic goal is not to enhance performance on a set of standardized cognitive tasks, but instead to improve functioning in the day-to-day context (Wilson, 1997). Models of cognitive rehabilitation stress the need to address cognitive and emotional difficulties in an integrated manner and not as isolated domains (Prigatano, 1999). Cognitive training could therefore be of immense help in this endeavor. Cognitive tasks could be designed to deal with cognitive functions like memory, attention, language, and so on, and the level of difficulty could be varied to suit the individual (Clare & Woods, 2004).
Techniques for Cognitive Rehabilitation

An exciting development in the field of cognitive rehabilitation is the use of virtual reality (VR). Virtual environments (VEs) can be built with the needs of the individual in mind. Examples include presenting a specific number of stimuli to an autistic child that can be gradually increased as treatment progresses (Max & Burke, 1997), or virtual wheelchair training for people with physical disabilities (Stephenson, 1995). Schultheis and Rizzo (2001) define VR for the behavioral sciences as "an advanced form of human-computer interface that allows the user to interact with and become immersed in a computer-generated environment in a naturalistic fashion." Virtual reality can also be viewed as an excellent example of assistive technology (AT) because it can be used to build upon the existing strengths of an individual, which in turn helps offset the disability or, in other words, provides an alternate way of completing a task that compensates for the disability (Lewis, 1998). Virtual reality technology has yielded promising results in terms of cognitive functioning (Rose, Attree, & Johnson, 1996) and social benefits (Hirose, Taniguchi, Nakagaki, & Nihei, 1994), and has proved to be less expensive than real-world simulators.

The previous discussion shows that VR as AT could be fruitfully employed to treat disabilities. But it is also important to take into account the functioning of human cognitive systems while designing the VR/VE, or any other AT and rehabilitation program. So far in the scientific literature there have been discussions of both cognitive psychology and AT, but each has remained isolated from the other. Given the significant contributions from both fields, it becomes essential that they be discussed in relation to each other so that each can be used to maximize the benefits the other can confer. To begin with, one needs to adopt a good working definition of the deficient cognitive components that require rehabilitative attention and that also cut across various disabilities. An important concept in this regard is the "transdiagnostic approach." According to this approach, the behavioral and cognitive processes that serve to maintain disorders are transdiagnostic in nature (Mansell, Harvey, Watkins, & Shafran, 2008). The transdiagnostic approach has several advantages. One is a better understanding of comorbidity through the generalization of knowledge derived from a cognitive model of a particular disorder: when processes are seen as cutting across disorders, it becomes easier to generalize one explanation to similar processes in other disorders. The next advantage is in the development of treatment approaches: if the processes are assumed to be common, then it becomes easier, and even cost-effective, to treat various disorders. Studies in cognitive psychology indeed support the transdiagnostic approach. For instance, attention to external or internal concern-related stimuli has been found to be common across psychological disorders like social phobia, panic disorder, depression, eating disorders, psychotic disorders, posttraumatic stress disorder and so on. Other transdiagnostic processes include memory, thought, reasoning and the like (Mansell et al., 2008). But how exactly are transdiagnostic processes implicated in disorders? How could such processes serve as targets for rehabilitation? The following discussion of cognitive models will make this clearer.
COGNITIVE MODELS FOR EMOTIONAL PROCESSING

Any program of cognitive rehabilitation is built upon a comprehensive understanding of cognitive and behavioral processes and architecture. Cognitive models that explain cognitive functioning and behavior are good candidates on which rehabilitation endeavors could be built. Current scientific theorizing proposes a more intimate link between cognition and emotion than has been proposed before. Therefore, it may be useful to keep both cognitive and emotional processing in mind as rehabilitation programs are planned. Describing all such theories is outside the scope of this chapter; however, the following discussion will touch upon some of them.
A Cognitive Model for Selective Processing in Anxiety

Mathews and Mackintosh (1998) proposed a cognitive model to explain selective processing in anxiety. Anxiety is usually the experience of unpleasant feelings of tension and worry in reaction to unacceptable wishes or impulses. A popular finding in anxiety research is an attentional bias toward anxiety-relevant concerns only under competitive conditions (where the competition is between neutral and anxiety-relevant stimuli). The Mathews and Mackintosh (1998) model provides a parsimonious explanation for this finding. The model proposes that stimulus attributes are processed in parallel and compete for attention through two different routes. A threat evaluation system (TES) helps resolve the competition between the two routes by interacting with the level of anxiety, and consequently strengthens the activations of anxiety-relevant attributes. Within their model, a slower route is used when the threat value is appraised by the consciously controlled higher level, such as in a novel anxious situation. Repeated encounters with similar situations store the relevant cues in the TES, so a later encounter will produce anxiety through the shorter route, bypassing the slower one. As a result, attention will be captured automatically in a competing situation (where the competition is between neutral and anxiety-relevant stimuli). Cues encountered in a novel situation that resemble attributes already stored in the TES will also tend to elicit anxiety automatically and receive attentional priority. In the model, some danger-related attributes are innate while others are learned. Consistent with this, neurobiological evidence suggests that the threat value of a stimulus is evaluated in two distinct ways (LeDoux, 1995): one is a shorter, quicker route that runs directly from the thalamus to the amygdala, and the other is a slower route mediated by higher cortical resources. The two routes of the Mathews and Mackintosh (1998) model imply that a threatening cue matching the current concerns of people with anxiety disorders will attract attention in competing situations due to the involvement of the shorter route and will counteract the functioning of the higher level. Following treatment, however, these patients would no longer show the attentional effects mediated by the shorter route, because the higher level then counteracts the processing of the faster route. Their model also implies that when people encounter only threatening cues (no neutral cue), the most threatening one will capture attention and the least threatening one will be inhibited. The model also fits an evolutionary framework, because it is certainly more adaptive to process the most potent source of danger and its consequences, which results from mutual inhibition within the TES.
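The competition mechanism can be caricatured numerically. In this toy sketch (all values and names are invented for illustration, not drawn from Mathews and Mackintosh's work), stimulus attributes carry baseline activations, the TES boosts threat-relevant activations in proportion to the current anxiety level, and the most activated stimulus captures attention:

```python
# Toy caricature of competition for attention with a TES-style threat boost.
# Baseline activations, threat values and the anxiety scale are all
# hypothetical numbers chosen only to show the qualitative behavior.

def attended_stimulus(stimuli, anxiety):
    """stimuli: dict mapping name -> (baseline_activation, threat_value).

    The TES adds anxiety * threat_value to each activation; the stimulus
    with the highest resulting activation wins the competition.
    """
    scores = {name: base + anxiety * threat
              for name, (base, threat) in stimuli.items()}
    return max(scores, key=scores.get)


stimuli = {"neutral_word": (1.0, 0.0), "threat_word": (0.8, 0.5)}

# Low anxiety: the neutral stimulus wins (1.0 vs 0.9).
print(attended_stimulus(stimuli, anxiety=0.2))  # neutral_word

# High anxiety: the threat boost tips the competition (1.0 vs 1.2).
print(attended_stimulus(stimuli, anxiety=0.8))  # threat_word
```

The sketch captures only the qualitative prediction: the same threat cue loses the competition at low anxiety and wins it at high anxiety, mirroring the attentional bias observed under competitive conditions.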
The model proposed by Mathews and Mackintosh (1998) deals exclusively with selective processing in anxiety, and even though it has implications for treatment, it does not give a detailed account of how treatment will counteract the effects of anxiety. Their model entails a link between cognitive and emotional processing, but a different model proposed by Power and Dalgleish (1997) documents an even closer relationship between the two. The Power and Dalgleish (1997) model explains the processing of five basic emotions (sadness, happiness, anger, fear and disgust) as well as complex emotions. Since this model is more comprehensive in nature, it will be detailed here rather than other models designed to explain processing in specific disorders.
The SPAARS Approach

SPAARS (Schematic, Propositional, Analogical and Associative Representational Systems) is the integrated cognitive model of emotion proposed by Power and Dalgleish (1997). It is a multilevel model. The initial processing of stimuli occurs through specific sensory systems that are collectively termed the analogical processing system. This system can play a crucial role in emotional disorders, for instance, where certain sights, smells, noises, etc. become inherent parts of a traumatic event. The output from this system then feeds into three semantic representation systems, which operate in parallel. At the lowest level is the associative system, which takes the form of a number of modularized connectionist networks. The intermediate level holds the propositional system, which has language-like representations though it is not language specific. There is no direct route from the intermediate level to emotions; propositions feed either through appraisals at the schematic level or directly through the associative system. The highest level is called the Schematic Model level. It has the merit of storing information in a flexible manner while retaining the traditional schema approach. At this level, the generation of emotion occurs through the appraisal process. Appraisal refers to the evaluation of the meaning of affective stimuli and is considered causal in the generation of an emotional response. Different types of appraisals elicit the five basic emotions of sadness, happiness, anger, fear and disgust. An appraisal for sadness focuses on the loss (actual or possible) of some valued goal; in pathological instances, the sadness appraisal could be termed depression. An individual will feel happy when he or she successfully moves towards the completion of some valued goal. When a person appraises a physical or social threat to the self or to some valued goal, he or she will experience fear; when such an appraisal is made of a harmless object, it could result in instances of phobia or anxiety. The appraisal of the blocking or frustration of a role or goal by an agent leads to feelings of anger. A person will feel disgust when he or she appraises a person, object or idea as repulsive to the self or to some valued goal (Power & Dalgleish, 1997). These appraisals provide the starting point for complex emotions or sequences of emotions. In this scheme, complex emotions as well as the disorders of emotion are derived from the basic emotions. A second important feature of emotional disorders is that they may be derived from a coupling of two or more basic emotions, from appraisal cycles that further embroider on the existing appraisals through which basic emotions are generated, or through the integration of appraisals that include the goals of others. Examples include the coupling of happiness and sadness, which can generate nostalgia. Indignation can result from the appraisal of anger combined with the further appraisal that the object of anger is an individual who is inferior in the social hierarchy. Empathy results from sadness combined with the loss of another person's goal.
The model acknowledges the need for two routes for the generation of emotions, and this need is based in part on the fact that basic emotions have an innate, pre-wired component; additionally, certain emotions may come to be elicited automatically. These two routes are not completely separable. Thus genetics provides a starting psychological point, though the subsequent developmental pathways may differ for each individual. An additional way in which emotions might come to be generated through the direct route is from repeated pairings of certain event-emotion sequences that eventually lead to the automatization of the whole sequence. This repetition bypasses the need for any appraisal. An example of the direct route's involvement in an emotional disorder is a phobia or anxiety in which a non-threatening object automatically provokes anxiety because, in the individual's past, the object was always encountered in anxiety-provoking situations and so came to be associated with anxiety. The two routes can also sometimes generate conflicting emotions, as when the individual appraises a situation in a happy way while the direct route generates a different emotion. The therapeutic technique for working with an emotional disorder varies depending on which route generates the emotion involved. For instance, the person can be provided with a new schematic model for the appraisal of events; once this model has been accepted, recovery is faster. This type of therapy will work in situations where the schematic level is involved in the disorder, and it is an example of fast change processes occurring in therapy. But the patient may continue to experience maladaptive emotions through the activation of the direct route, which is slow to change and is an example of slow processes in recovery. In such cases, exposure-based techniques (as used in the case of phobias) can be helpful. There may also be cases in which a combination of the two techniques will be most effective.
Therapies that try to focus on the propositional level of representation only may not be successful if the higher schematic models are flawed.
The description of the SPAARS approach shows various similarities with the model proposed by Mathews and Mackintosh (1998). Both models posit two different routes for emotion generation. Both are parsimonious, since they advance the same explanation for normal and disordered cognition, though the SPAARS approach has a broader scope and gives a more detailed specification of treatment choices.
ASSISTIVE TECHNOLOGY AND HUMAN COGNITION

Current rehabilitative tools and assistive technologies could be significantly improved by considering the architecture of human cognition during their design. The principles derived from the cognitive models described, if incorporated into AT tools, will help those tools serve the target population effectively. Before embarking on the process of merging the nature of human cognition with assistive tools, I will discuss another popular theory of how selective attention operates: the load theory of selective attention, which can be used to explain the nature of emotional and cognitive functioning in both order and disorder.
Load Theory of Selective Attention

Goal-directed behavior requires focusing attention on goal-relevant stimuli. The load theory of selective attention proposes two mechanisms for the selective control of attention (Lavie, Hirst, de Fockert, & Viding, 2004). The first is a perceptual selection mechanism, which is passive in nature and ensures the exclusion of distractors from perception under high perceptual load (Lavie, 1995). Distractors are not perceived under high perceptual load because the target absorbs all the available processing capacity. But under conditions of low perceptual load, spare capacity left over from processing task-relevant stimuli "spills over" to irrelevant stimuli, which are processed accordingly (Lavie & Tsal, 1994). Loading perception requires either adding more items to the task at hand or making the task more perceptually demanding on the same number of items. The second mechanism of attentional control is more active in nature and is evoked for the purpose of rejecting distractors that have been perceived under low perceptual load. This type of control depends on higher cognitive functions, like working memory. Therefore, loading the higher cognitive functions that maintain processing priorities results in increased distractor processing. The effect occurs because the reduced availability of control mechanisms in turn reduces the ability to control attention according to the processing priorities. Supporting the theory, Lavie and Cox (1997) showed that an irrelevant distractor failed to capture attention under high perceptual load conditions as compared to low perceptual load. The load was manipulated either by increasing the number of stimuli among which the target had to be detected or by increasing the perceptual similarity between the target and distractors, making the task more perceptually demanding. This result was cited as support for passive distractor rejection, in contrast to the active inhibitory mechanisms that are employed to reject distractors under low perceptual load conditions.
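The "spill-over" logic can be sketched as a toy capacity model. The capacity value and per-item demand below are invented for illustration and are not parameters from Lavie's work; the sketch only shows the qualitative prediction that spare capacity, and hence distractor processing, vanishes once the task consumes the full perceptual capacity.

```python
# Toy model of perceptual-load "spill-over" (all numbers hypothetical).
# Perception has a fixed capacity; the target task consumes capacity in
# proportion to perceptual load, and whatever is left over is
# involuntarily allocated to the irrelevant distractor.

CAPACITY = 1.0


def distractor_processing(n_items, demand_per_item=0.2):
    """Return the spare capacity that spills over to a distractor."""
    used = min(CAPACITY, n_items * demand_per_item)
    return CAPACITY - used


# Low perceptual load (2 items): spare capacity remains, so the
# distractor is processed and can interfere with the task.
print(distractor_processing(2))

# High perceptual load (6 items): the task absorbs all capacity and the
# distractor is excluded from perception.
print(distractor_processing(6))
```

Under these assumptions, two items leave 0.6 units of spare capacity for the distractor, while six items leave none, matching the pattern Lavie and Cox (1997) report: distractor interference at low load, none at high load.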
Load Theory and Disorders

The two mechanisms of selective attention could be disrupted in disorders, and hence the load theory of selective attention can aid in describing the attentional deficits encountered in both cognitive and emotional disorders. A complete description of all the disorders and the attentional deficits present in each is beyond the scope of this chapter, but a few disorders will be chosen here to further illustrate the transdiagnostic processes.
Cognition Meets Assistive Technology
Consistent with the theories previously introduced, Bishop, Jenkins, and Lawrence (2007) showed that anxiety modulated the amygdalar response to fearful distractors, which interfered with task performance only under low perceptual load. However, this effect was observed for state anxiety (the current anxious state) rather than trait anxiety (a stable personality characteristic). Trait anxiety, on the other hand, correlated with reduced activity in brain regions responsible for controlled processing under low perceptual load, implying that trait anxiety is associated with poor attentional control. State and trait anxiety may therefore produce interactive effects that disturb task performance: the passive exclusion mechanism is disturbed, and faulty attentional control fails to prevent irrelevant emotional distractors from capturing attention under conditions of load. Deficient attentional control has also been observed in older participants (Lavie, 2000, 2001). Maylor and Lavie (1998) investigated the role of perceptual load in aging and showed that distractor processing decreased for older participants at lower perceptual loads than for younger ones. Similarly, high-level affective evaluation (the appraisals necessary for emotion generation, as described in the SPAARS approach) requires attention and working memory and, as a result, is disrupted under high cognitive load. Kalisch, Wiech, Critchley, and Dolan (2006) varied cognitive load while anxiety was induced through the anticipation of impending pain. They observed no change in subjective and physiological indices of anxiety under conditions of load, but they did obtain reductions in the activity of brain areas responsible for controlled processing under high load, indicating that high-level appraisal was suppressed. Their results not only showed a dissociation between brain areas responsible for higher- and lower-level appraisals, but also showed how these areas interact with manipulations of load.
Merging Technology and Cognition

Having described the intimate relationship between cognitive and emotional processing in both order and disorder, and their interaction with perceptual and cognitive load, what should the next step be in planning a rehabilitative program around these principles of human cognition? Therapy with VR, as previously described, has shown promising results. For instance, VR has been employed effectively in the treatment of phobias, usually described as intense and irrational fears of objects or events, such as acrophobia (Emmelkamp, Krijn, Hulsbosch, de Vries, Schuemie, & van der Mast, 2002), fear of flying (Rothbaum, Hodges, Smith, Lee, & Price, 2000), spider phobia (Garcia-Palacios et al., 2002), and social phobia (Roy, Légeron, Klinger, Chemin, Lauer, & Nugues, 2003). Clinicians classify phobias as anxiety disorders. VR has several advantages as a tool for dealing with phobias. Because the essential component in the treatment of phobias is exposure to the threat-related object (such as spiders in the case of spider phobia), either in imagery or in vivo (the latter involving graded exposure), VR can be employed effectively as a treatment device. When working with VR/VE, the therapist can control the feared situation and grade the exposure with a significant degree of safety. VR thus turns out to be more effective than imagination techniques, in which patients are simply left to themselves to imagine the feared object. Under the imagination procedure, the therapist not only lacks control over the patient's imagery but also finds it hard to determine whether the patient is actually following the procedure, leading to poor treatment generalization outside the clinic. On the other hand, real exposure to the feared object can traumatize the patient, making him or her more fearful of it. Consequently, VR can be employed fruitfully to overcome the difficulties of both the imagination techniques and real exposure. Another important advantage that VR confers on the treatment process is the opportunity for interoceptive exposure (Vincelli, Choi, Molinari, Wiederhold, & Rive, 2000). This is important because bodily sensations are interpreted as signs of fear in the presence of the feared object. Virtual reality also proves effective when higher-level distorted cognitions need to be challenged (Riva, Bacchetta, Baru, Rinaldi, & Molinari, 1999).
Future Research Directions

Future research on VR as an application for rehabilitation should concentrate on three major issues and their associated problems: (a) clinical, (b) social, and (c) technological.
Clinical Issues

The previous discussion shows that VR has yielded promising results as a rehabilitative tool and therefore offers room for further improvement. Virtual reality could be better suited to rehabilitating a range of disorders if its design were meshed with the functioning of human cognition. Much remains to be done to pinpoint the specific transdiagnostic processes that cut across disorders and are found to be deficient. A promising direction in this regard is the application of the load theory of selective attention. Though the studies by Bishop et al. (2007) and Kalisch et al. (2006) show that atypical cognitive biases interact with behavioral and neural responses under differing conditions of load, such results have yet to be incorporated into a rehabilitative VR endeavor. As previously stated, VR has been used successfully to treat various phobias, including acrophobia (Emmelkamp et al., 2002), fear of flying (Rothbaum et al., 2000), spider phobia (Garcia-Palacios et al., 2002), and social phobia (Roy et al., 2003). What the literature currently lacks is an intimate link between cognitive architecture and the basis for VR's successes. Cognitive psychologists, rehabilitative therapists, and VR professionals stand to gain much if more studies are planned in this direction. For instance, VR is a good choice for exposure techniques in phobias; but since the SPAARS framework and the model proposed by Mathews and Mackintosh (1998) show that there can be two routes to emotion, and exposure is useful when the faster route from the thalamus to the amygdala is involved, it would be fruitful to plan future VR studies along the lines of Bishop et al. (2007). If such studies show that VR treatment improves attentional control under different conditions of load and prevents anxiety from modulating the amygdalar response to anxiety-relevant distractors (which disrupt task performance under low perceptual load), this will strengthen the link between cognitive models and rehabilitation. The prior theorizing also shows that, for successful treatment, practitioners need to provide the patient with a new schematic model for the appraisal of events, in addition to exposure techniques. Once this model has been accepted, recovery is faster. This type of therapy will work in situations where the schematic level is involved in the disorder; it is an example of the fast change processes occurring in therapy. In the future, VR could be used in conjunction with brain imaging techniques to study brain responses along with behavioral responses before and after treatment. Researchers need to plan such studies meticulously, manipulating cognitive load to study the effect of treatment on cognitive appraisals, as was done by Kalisch et al. (2006). Once such endeavors show successful results for anxiety treatment, practitioners will be more confident about the transdiagnostic processes that become atypical and give rise to cognitive biases.
How do researchers and practitioners know which route to emotion (the faster or the slower) is involved in atypical functioning before embarking on such endeavors? This again calls for stronger links with neuropsychology and for thorough assessment before drawing up a treatment plan. Finally, if both routes are involved, then a mixture of techniques can be used. If the decision is to concentrate on both routes, it is essential to increase the load on perceptual and cognitive processes parametrically and orthogonally. This matters because, if both kinds of load were increased simultaneously, it would be difficult to discern the effect of each individually. Moreover, since VR allows interoceptive exposure, which is important because bodily sensations are interpreted as signs of fear in the presence of the feared object, it would make sense to study the effect of treatment at the schematic level while bodily responses are also monitored. If the treatment also improves bodily responses, then one can be even more confident of the VR intervention.
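The orthogonal load manipulation recommended above amounts to a fully crossed factorial design. The sketch below is illustrative only (the condition labels are hypothetical, not drawn from any cited study): crossing the two factors yields four cells, so the effect of each kind of load can be estimated separately, whereas raising both loads together yields only two confounded cells.

```python
# Sketch of an orthogonal (fully crossed) 2x2 load manipulation:
# perceptual and cognitive load vary independently of one another.
from itertools import product

perceptual_load_levels = ["low", "high"]
cognitive_load_levels = ["low", "high"]

# Every level of one factor appears with every level of the other,
# so each kind of load contributes its own estimable effect.
conditions = list(product(perceptual_load_levels, cognitive_load_levels))
print(conditions)
# [('low', 'low'), ('low', 'high'), ('high', 'low'), ('high', 'high')]

# By contrast, a confounded design that raises both loads together has
# only two cells, so the individual effects cannot be separated:
confounded = [("low", "low"), ("high", "high")]
```

The four-cell design is what allows a study like Kalisch et al. (2006) to attribute an observed change to one kind of load rather than the other.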
Social Issues

Before VR can become part of mainstream practice, researchers and practitioners need to overcome several social obstacles. Many traditional schools of therapy place great importance on the personal relationship between therapist and client, and some may view VR as disruptive to that relationship. This issue is even more salient in cultures that do not emphasize individualism, for instance some Eastern societies; technologically less developed societies must also be considered. Beyond this hindrance, any new therapy initially faces resistance from the broader clinical community; this was true even of behavioral therapy when it was introduced. In the field of mental health, then, factors other than documented efficacy determine the acceptance of a new rehabilitative method. Until the relevant social problems connected to VR are solved, the best course of action might be to adopt VR in conjunction with other, more traditional modes of rehabilitation.
Technological Issues

Research on social and clinical issues is not enough to promote VR; it is also essential to address its technological aspects. Currently, VR devices and protocols lack standardization, and many systems are developed for a specific context, making generalization poor (Riva, 2005). Though VR systems cost less than real-world simulators, VR remains expensive, in part because many systems are built for specific purposes only. In addition, VR environments are very time-consuming to build.
Conclusion

Provided that VR research proceeds along these three directions, the future of VR as a rehabilitative tool is promising. Current research efforts and scientific discussions do focus on VR and on human cognition, but so far the two have remained isolated from each other. The advent of cognitive science and multidisciplinary frameworks calls for closer cooperation between them. A first fruitful research direction will be to focus on the transdiagnostic processes that cut across various disorders and need to be targeted by rehabilitative efforts. This will bring down the cost of building rehabilitative tools for specific contexts and will also save precious time. Next, once the transdiagnostic processes have been examined, practitioners can apply them through models of human cognition that explain typical and atypical functioning. A few such theories exist, and some popular ones have been described here, but much work remains to develop them further and to make them the basis of cognitive rehabilitation with VR. Once such efforts are in place, we will truly be able to understand comorbidity, generalize knowledge, and bring down the cost of treating various disorders. The day is not far off when mass rehabilitation over the Internet will be possible with such tools!
References

Bishop, S. J., Jenkins, R., & Lawrence, A. D. (2007). Neural processing of fearful faces: Effects of anxiety are gated by perceptual capacity limitations. Cerebral Cortex, 17(7), 1595–1603. doi:10.1093/cercor/bhl070

Clare, L., & Woods, R. T. (2004). Cognitive training and cognitive rehabilitation for people with early-stage Alzheimer's disease: A review. Neuropsychological Rehabilitation, 14(4), 385–401. doi:10.1080/09602010443000074

Emmelkamp, P. M., Krijn, M., Hulsbosch, A. M., de Vries, S., Schuemie, M. J., & van der Mast, C. A. (2002). Virtual reality treatment versus exposure in vivo: A comparative evaluation in acrophobia. Behaviour Research and Therapy, 40(5), 509–516. doi:10.1016/S0005-7967(01)00023-7

Garcia-Palacios, A., Hoffman, H., Carlin, A., Furness, T. A., & Botella, C. (2002). Virtual reality in the treatment of spider phobia: A controlled study. Behaviour Research and Therapy, 40(9), 983–993. doi:10.1016/S0005-7967(01)00068-7

Hirose, M., Taniguchi, M., Nakagaki, Y., & Nihei, K. (1994). Virtual playground and communication environments for children. IEICE Transactions on Information & Systems, E77-D(12), 1330–1334.

Kalisch, R., Wiech, K., Critchley, H. D., & Dolan, R. J. (2006). Levels of appraisal: A medial prefrontal role in high-level appraisal of emotional material. NeuroImage, 30(4), 1458–1466. doi:10.1016/j.neuroimage.2005.11.011

Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Journal of Experimental Psychology: Human Perception and Performance, 21(3), 451–468. doi:10.1037/0096-1523.21.3.451

Lavie, N. (2000). Selective attention and cognitive control: Dissociating attentional functions through different types of load. In Monsell, S., & Driver, J. (Eds.), Control of cognitive processes: Attention & performance XVIII (pp. 175–194). Cambridge, MA: MIT Press.

Lavie, N. (2001). The role of capacity limits in selective attention: Behavioural evidence and implications for neural activity. In Braun, J., & Koch, C. (Eds.), Visual attention and cortical circuits (pp. 49–68). Cambridge, MA: MIT Press.

Lavie, N., & Cox, S. (1997). On the efficiency of visual selective attention: Efficient visual search leads to inefficient distractor rejection. Psychological Science, 8(5), 395–398. doi:10.1111/j.1467-9280.1997.tb00432.x

Lavie, N., Hirst, A., Fockert, J. W. D., & Viding, E. (2004). Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General, 133(3), 339–354. doi:10.1037/0096-3445.133.3.339

Lavie, N., & Tsal, Y. (1994). Perceptual load as a major determinant of the locus of selection in visual attention. Perception & Psychophysics, 56(2), 183–197.

LeDoux, J. E. (1995). Emotion: Clues from the brain. Annual Review of Psychology, 46, 209–235. doi:10.1146/annurev.ps.46.020195.001233

Lewis, R. B. (1998). Assistive technology and learning disabilities: Today's realities and tomorrow's promises. Journal of Learning Disabilities, 31(1), 16–26, 54. doi:10.1177/002221949803100103

Mansell, W., Harvey, A., Watkins, E. R., & Shafran, R. (2008). Cognitive behavioral processes across psychological disorders: A review of the utility and validity of the transdiagnostic approach. International Journal of Cognitive Therapy, 1(3), 181–191. doi:10.1521/ijct.2008.1.3.181
Mathews, A., & Mackintosh, B. (1998). A cognitive model of selective processing in anxiety. Cognitive Therapy and Research, 22(6), 539–560. doi:10.1023/A:1018738019346

Max, M. L., & Burke, J. C. (1997). Virtual reality for autism communication and education, with lessons for medical training simulators. In Morgan, K. S., Hoffman, H. M., Stredney, D., & Weghorst, S. J. (Eds.), Studies in health technology and informatics, 39. Burke, VA: IOS Press.

Maylor, E. A., & Lavie, N. (1998). The influence of perceptual load on age differences in selective attention. Psychology and Aging, 13(4), 563–573. doi:10.1037/0882-7974.13.4.563

McLellan, D. L. (1991). Functional recovery and the principles of disability medicine. In Swash, M., & Oxbury, J. (Eds.), Clinical neurology (Vol. 1, pp. 768–790). London: Churchill Livingstone.

Power, M., & Dalgleish, T. (1997). Cognition and emotion: From order to disorder. London: The Psychology Press.

Prigatano, G. P. (1999). Principles of neuropsychological rehabilitation. New York: Oxford University Press.

Riva, G. (2005). Virtual reality in psychotherapy [Review]. Cyberpsychology & Behavior, 8(3), 220–230. doi:10.1089/cpb.2005.8.220

Riva, G., Bacchetta, M., Baru, M., Rinaldi, S., & Molinari, E. (1999). Virtual reality based experiential cognitive treatment of anorexia nervosa. Journal of Behavior Therapy and Experimental Psychiatry, 30(3), 221–230. doi:10.1016/S0005-7916(99)00018-X

Rose, F. D., Attree, E. A., & Johnson, D. A. (1996). Virtual reality: An assistive technology in neurological rehabilitation. Current Opinion in Neurology, 9(6), 461–467.

Rothbaum, B. O., Hodges, L., Smith, S., Lee, J. H., & Price, L. (2000). A controlled study of virtual reality exposure therapy for the fear of flying. Journal of Consulting and Clinical Psychology, 68(6), 1020–1026. doi:10.1037/0022-006X.68.6.1020

Roy, S., Légeron, P., Klinger, E., Chemin, I., Lauer, F., & Nugues, P. (2003). Definition of a VR-based protocol for the treatment of social phobia. Cyberpsychology & Behavior, 6(4), 411–420. doi:10.1089/109493103322278808

Schultheis, M. T., & Rizzo, A. A. (2001). The application of virtual reality technology in rehabilitation. Rehabilitation Psychology, 46(3), 296–311. doi:10.1037/0090-5550.46.3.296

Stephenson, J. (1995). Sick kids find help in a cyberspace world. Journal of the American Medical Association, 274(24), 1899–1901. doi:10.1001/jama.274.24.1899

Vincelli, F., Choi, Y. H., Molinari, E., Wiederhold, B. K., & Rive, G. (2000). Experiential cognitive therapy for the treatment of panic disorder with agoraphobia: Definition of a clinical protocol. Cyberpsychology & Behavior, 3(3), 375–385. doi:10.1089/10949310050078823

Wilson, B. A. (1997). Cognitive rehabilitation: How it is and how it might be. Journal of the International Neuropsychological Society, 3(5), 487–496.
Additional Reading

Baumgartner, T., Speck, D., Wettstein, D., Masnari, O., Beeli, G., & Jäncke, L. (2008). Feeling present in arousing virtual reality worlds: Prefrontal brain regions differentially orchestrate presence experience in adults and children. Frontiers in Human Neuroscience, 2(8). doi:10.3389/neuro.09.008.2008
Buxbaum, L. J., Palermo, M. A., Mastrogiovanni, D., Read, M. S., Rosenberg-Pitonyak, E., Rizzo, A. A., & Coslett, H. B. (2008). Assessment of spatial attention and neglect with a virtual wheelchair navigation task. Journal of Clinical and Experimental Neuropsychology, 30(6), 650–660. doi:10.1080/13803390701625821

Capodieci, S., Pinelli, P., Zara, D., Gamberini, L., & Riva, G. (2001). Music-enhanced immersive virtual reality in the rehabilitation of memory-related cognitive processes and functional abilities: A case report. Presence (Cambridge, Mass.), 10(4), 450–462. doi:10.1162/1054746011470217

Glantz, K., Durlach, N. I., Barnett, R. C., & Aviles, W. A. (1996). Virtual reality (VR) for psychotherapy: From the physical to the social environment. Psychotherapy (Chicago, Ill.), 33(3), 464–473. doi:10.1037/0033-3204.33.3.464

Harvey, A. G., Watkins, E. R., Mansell, W., & Shafran, R. (2004). Cognitive behavioral processes across psychological disorders: A transdiagnostic approach to research and treatment. Oxford, UK: Oxford University Press.

Khetrapal, N. (2007a). Antisocial behavior: Potential treatment with biofeedback. Journal of Cognitive Rehabilitation, 25(1), 4–9.

Khetrapal, N. (2007b). SPAARS approach: Integrated cognitive model of emotion of Attention Deficit/Hyperactivity Disorder. Europe's Journal of Psychology.

Khetrapal, N. (in press). SPAARS approach: Implications for psychopathy. Poiesis & Praxis: International Journal of Technology Assessment and Ethics of Science.

Lavie, N., & Fockert, J. W. D. (2005). The role of working memory in attentional capture. Psychonomic Bulletin & Review, 12(4), 669–674.

LeDoux, J. E. (1996). The emotional brain. New York: Simon & Schuster.

McGee, J. S., van der Zaag, C., Buckwalter, J. G., Thiebaux, M., Van Rooyen, A., & Neumann, U. (2000). Issues for the assessment of visuospatial skills in older adults using virtual environment technology. Cyberpsychology & Behavior, 3(3), 469–482. doi:10.1089/10949310050078931

Parsons, T. D., & Rizzo, A. A. (2008). Initial validation of a virtual environment for assessment of memory functioning: Virtual reality cognitive performance assessment test. Cyberpsychology & Behavior, 11(1), 17–25. doi:10.1089/cpb.2007.9934

Renaud, P., Bouchard, S., & Proulx, R. (2002). Behavioral avoidance dynamics in the presence of a virtual spider. IEEE Transactions on Information Technology in Biomedicine, 6(3), 235–243.

Riva, G. (1998). From toys to brain: Virtual reality applications in neuroscience. Virtual Reality (Waltham Cross), 3(4), 259–266. doi:10.1007/BF01408706

Riva, G., Botella, C., Légeron, P., & Optale, G. (Eds.). (2004). Cybertherapy: Internet and virtual reality as assessment and rehabilitation tools for clinical psychology and neuroscience. Amsterdam: IOS Press.

Riva, G., Molinari, E., & Vincelli, F. (2002). Interaction and presence in the clinical relationship: Virtual reality (VR) as communicative medium between patient and therapist. IEEE Transactions on Information Technology in Biomedicine, 6(3), 1–8. doi:10.1109/TITB.2002.802370

Riva, G., Wiederhold, B. K., & Molinari, E. (Eds.). (1998). Virtual environments in clinical psychology and neuroscience. Amsterdam: IOS Press.

Srinivasan, N., Baijal, S., & Khetrapal, N. (in press). Effects of emotions on selective attention and control. In Srinivasan, N., Kar, B. R., & Pandey, J. (Eds.), Advances in cognitive science (Vol. 2). New Delhi: SAGE.
Strickland, D., Marcus, L., Mesibov, G. B., & Hogan, K. (1996). Brief report: Two case studies using virtual reality as a learning tool for autistic children. Journal of Autism and Developmental Disorders, 26(6), 651–660. doi:10.1007/BF02172354
Williams, J. M., Watts, F. N., MacLeod, C., & Mathews, A. (1997). Cognitive psychology and emotional disorders (2nd ed.). Chichester, UK: John Wiley & Sons.
Chapter 7
Cognitive Load and Disorientation Issues in Hypermedia as Assistive Technology Muhammet Demirbilek Suleyman Demirel University, Turkey
Abstract

Advances in information and communication technologies have raised the quality of inclusive education programs. Inclusive education, supported by recent advances in educational technology, has served to increase the capabilities of students with special needs. Hypermedia as an assistive technology has the potential to teach and train individuals with disabilities. However, like every technology, hypermedia itself is not problem-free. Disorientation and cognitive load are two of the most challenging problems in hypermedia learning environments. The purpose of this chapter is to highlight the disorientation and cognitive load problems of hypermedia learning environments, which learners often find difficult to navigate.
Introduction

Information and communication technologies, and in particular computers, have an undeniable impact on integrating learners who face barriers into the mainstream education system. One of the most useful information and communication technologies for teachers of individuals with disabilities is the hypermedia learning environment (HLE). Hypermedia learning environments can serve as a vehicle for inclusive education as an assistive technology (AT).
DOI: 10.4018/978-1-61520-817-3.ch007
Hypermedia presents information in an interactive way, and it is accessible to all types of learners. It provides a combination of text, sound, graphics, and motion video that can be controlled by the user. With a minimum of training, hypermedia can be used to create very individualized learning environments and tools. This gives teachers the capability to create computer programs to teach the specific objectives that are needed to advance their curricula and individualized learning plans. Hypermedia learning environments can also be used to compensate for some disabilities (Perkins, 1995).
Assistive Technologies

In today's information age, AT is not a luxury for students with disabilities; it is a necessity for their growth and development. The use of AT enables these students to participate in activities typical of their age group and provides a way for them to succeed academically as well as socially. In short, AT enables these students to do things and experience successes they would otherwise have been unable to achieve (Kelker, 1997). Assistive technology provides creative solutions that enable individuals with disabilities to be more independent, productive, and integrated into the mainstream of society and community life, and its benefits have been recognized as a vital part of special education. Assistive technologies include devices used by children and adults with disabilities; these devices are designed to compensate for functional limitations and to enhance and increase learning, independence, mobility, communication, and environmental control and choice. Weikle and Hadadian (2003) reported valuable evidence supporting the use of AT devices for communication, as functional tools, to promote social outcomes, and as retention aids in learning activities for young children with disabilities. The Technology-Related Assistance for Individuals with Disabilities Act of 1988 (Public Law 100-407, 1988) describes an AT device as "any item, piece of equipment, or product system whether acquired off the shelf, modified, or customized that is used to increase, maintain or improve functional capabilities of individuals with disabilities." In very basic terms, AT can be thought of as products that help eliminate the effects of a disability or, most simply, as products that make life easier for persons with disabilities. This broad definition comprises thousands of devices, both high- and low-tech, that can be classified into categories such as writing, computer access, reading, communication, electronic aids for daily living, mobility, and leisure. Given this vast range of options, adequate knowledge of AT is needed to best determine the AT needs of students with disabilities. Within the category of assistive technology aids and devices, educational and vocational aids include computers, adaptive software, and job modifications. If used appropriately, AT can facilitate a child's development by providing access to developmentally appropriate activities (Simms, 2003). Behrmann (1998) emphasizes the importance of AT as a means of inclusion into age-appropriate classrooms as well, noting that assistive technologies can provide the tools to bring more young children with disabilities into the general educational setting. The benefits of AT for students are cognitive as well as social and emotional. Hetzroni and Schrieber (2004) state that, with the use of a word processor, students were able to produce material that was more acceptable and coherent in comparison to prior work samples.
Hypermedia Learning Environments

What is hypermedia? The history of hypermedia has roots that trace back to 1945, when Vannevar Bush proposed a machine called the "Memex" in his Atlantic Monthly article "As We May Think" (Dix, Finlay, Abowd & Beale, 1998). According to Bush, "a memex is a device in which an individual stores all his books, records and communications and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory" (Bush, 1945). Fifty years on, Bush's vision has been realized in working systems: today's technology allows reading, browsing, and linking in a non-linear electronic environment.
Hypermedia is a network of nodes containing information (e.g., text, graphics, video, sound), linked so as to facilitate access to, and manipulation of, the information they encapsulate. While hypertext is a text-only electronic environment, hypermedia encompasses other media such as graphics, video, and sound. The World Wide Web is hypermedia extended to a huge network of computers connecting millions of users around the world. Hypermedia differs from traditional paper: while information on paper is limited to a linear, sequential format, hypermedia is free from such linear restrictions (Eveland & Dunwoody, 2001). Hypermedia is a more general concept than hypertext; it combines multimedia components with the computer's interactive characteristics, dynamic display, and user interface, extending hypertext to include visual information, sound, video, animation, and other forms of data. Basically, hypermedia refers to the non-linear format of all forms of electronic media and text (Dix et al., 1998), making the World Wide Web a multimedia-based (non-linear) HLE. Web browsers are the tools used to access information across the Web. A node is an individual unit of text within a hypertext document; nodes contain textual information, sometimes enriched with media. Links are electronic connections among nodes.
Hypermedia Features

The features of HLEs include non-linearity (Foltz, 1996), flexibility (Conklin, 1987; Kim, 2000), learner control (Marchionini, 1988), variety of media (Marchionini, 1988), and navigation, backtracking, annotation, and structure (Bieber, 2000), to name a few. Non-linearity allows the user to control what path to take when searching for information (Foltz, 1996). Hypermedia is flexible in terms of representing information, navigating the structure, and storing data; media can be represented in a variety of forms. Hypermedia gives the user flexibility and freedom for learning and information retrieval (Conklin, 1987), including flexibility in choosing the sequence in which to access information (Kim, 2000). Hypermedia learning environments let the learner control navigation, media, and content selection; indeed, hypermedia offers such a high level of learner control that users are required to apply higher-order thinking (Marchionini, 1988). Hypermedia can associate links with other links, graphics, and audio and video files, and may contain a variety of media, such as graphics, pictures, video, and audio. Marchionini (1988) suggests that "hypermedia systems allow huge collections of information in a variety of media to be stored in an extremely compact form that can be accessed easily and rapidly" (p. 9). Navigation allows users to explore links, backtracking allows users to return to previously visited nodes, annotation allows users to bookmark and comment, and structural features enable users to navigate through local and global paths (Bieber, 2000). Hypermedia learning environments contain nodes of information connected by links. A node may be text, a graphic, an audio clip, a video clip, a photo, or a combination of these components; a link is an electronic connection between two nodes. Hypermedia learning environments offer great potential for individualized learning. The adaptive characteristics of hypermedia allow instructors to adapt course presentation, navigation, and content to suit individual students' needs and preferences. With these adaptive features, hypermedia can accommodate students' individual learning differences and help students with disabilities develop the complex learning skills needed to acquire complex knowledge.
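The node-and-link structure described above can be sketched as a simple directed graph. The class and field names below are hypothetical (no cited system is being implemented): a node holds content in one or more media, and links are directed connections between nodes, so more than one path can leave a node and the learner, not the author, controls the sequence of access.

```python
# Minimal sketch of a hypermedia node-link structure: nodes hold
# content in one or more media; links are directed connections.
from dataclasses import dataclass, field


@dataclass
class Node:
    title: str
    media: dict = field(default_factory=dict)   # e.g. {"text": ..., "video": ...}
    links: list = field(default_factory=list)   # outgoing links to other nodes

    def link_to(self, other: "Node") -> None:
        self.links.append(other)


# A tiny non-linear structure: two alternative paths leave the home node.
home = Node("Home", {"text": "Welcome"})
lesson = Node("Lesson", {"text": "Content", "video": "lesson.mp4"})
quiz = Node("Quiz", {"text": "Question 1"})
home.link_to(lesson)
home.link_to(quiz)
lesson.link_to(quiz)

print([n.title for n in home.links])  # ['Lesson', 'Quiz']
```

Backtracking and annotation, in this picture, are simply operations over the learner's traversal history rather than properties of the graph itself.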
Cognitive Load and Disorientation Issues in Hypermedia as Assistive Technology
Human Memory and Hypermedia

Human memory is associative. It works by associating pieces of information with other information and creating complex knowledge structures in memory (Lowe & Hall, 1999). Like human memory, hypermedia interconnects nodes using computer-supported links and allows people to partially mimic the writing and reading processes as they take place during cognition (Lowe & Hall, 1999). By creating non-linear information structures—associating chunks of information in different ways using links in combination with media consisting of text, images, video, sound, and animation—a person can enrich the representation of information. Therefore, structured HLEs may help learners to create their own representation of knowledge and to integrate it into existing knowledge structures. The proposed memory models for humans are generally based on information-processing theory. Jonassen (1989) stated that learning occurs when new information is linked to existing knowledge, structured by associative networks. Because the semantic network structure and non-linearity of HLEs resemble theories of memory and cognition, HLEs may be a fruitful educational tool. It has been claimed that the idea of the structure of human memory and the process of learning is consistent with the process of using HLEs (Jonassen, 1989; Marchionini, 1988). Both hypermedia and human memory are composed of nodes of information connected by links (Eveland & Dunwoody, 2001). The similarity between memory and hypermedia may allow the designer and the learner to establish essential relationships between the two. In contrast, physical textbooks and media allow the learner to represent information only in a linear way. The principle of the semantic-network model suggests that a key to learning new information is associating it with existing knowledge through semantically related links (Daniels, 1996). Norman
(1983) stated that the more complex the connections stored in memory between existing knowledge and new information, the more learners will retain. Research on learning shows that meaningful learning is accomplished when new information is associated with existing knowledge or node structures (Caudhill & Butler, 1990; Jonassen, 1989). Hypermedia also has the ability to incorporate various media, interactivity, vast data sources, distributed data sources, and powerful search engines. These capabilities make hypermedia a very powerful tool for creating, storing, accessing, and manipulating information.
Usability Issues with Hypermedia

Usability in hypermedia refers to developing easy, efficient, memorable, error-free, and pleasant user experiences (Nielsen, 1995). An HLE has an interface element with which the user interacts. Windows (i.e., computer dialog boxes) are used extensively in HLEs as part of the user interface to present graphics, images, text, audio, and video. Windows include nodes and the links between them. The user interface is a key factor in HLEs in terms of usability, efficiency, user comfort, and orientation. Navigation disorientation and cognitive overload are major problems that limit the usefulness of hypermedia (Conklin, 1987; Nielsen, 1995; Dix et al., 1998; McDonald & Stevenson, 1996; McDonald & Stevenson, 1998). Researchers have looked for solutions to these problems in HLEs. Developing well-structured, well-designed, effective HLEs is not an easy process because of the number of associative links that exist among nodes, the non-linearity, and the number of design possibilities. Providing screen displays that construct an operating environment for the user, configuring a clear visual image, and creating a working context for the user’s actions are the goals of graphical user interface design (Lynch, 1994).
Disorientation in Hypermedia Learning Environments

Disorientation refers to the experience of users not knowing where they are within hypermedia and not knowing how to move to a desired location (Theng, Jones, & Thimbleby, 1995a). As expected, the experience of being lost in hypermedia may lead users to feel that they are wasting time and overlooking important information, and may influence the way they interact with hypermedia (Theng, 1997). Consequently, the disorientation problem has the potential to interrupt navigation and browsing in HLEs (McDonald & Stevenson, 1998). Elm and Woods (1985) outlined three different forms of disorientation in HLEs: (a) not knowing where to go next; (b) knowing where to go, but not knowing how to get there; and (c) not knowing where one is in the overall structure of the document. Disorientation is one of the most cited problems with HLEs. In fact, learners’ disorientation is a commonly reported problem in hypermedia research (Daniels & Moore, 2000; Nielsen, 1995). For example, studies by Nielsen (1990) showed that 56 percent of readers of HLEs agreed that they were often confused about where they were. Many researchers have also observed that users may become confused, lost, or disoriented in hypermedia systems (Elm & Woods, 1985; Conklin, 1987; Nielsen, 1990; Gupta & Gramopadhye, 1995; McDonald & Stevenson, 1996; McDonald & Stevenson, 1998; Theng, 1997; Dias & Sousa, 1997; Ahuja & Webster, 2001; Baylor, 2001; Chen, 2002). Gupta and Gramopadhye (1995) described two categories of hypermedia-related problems: (a) implementation-dependent and (b) endemic. The implementation-dependent category contains display restrictions and browser limitations. The endemic category includes the disorientation and cognitive overload that impact the usability of HLEs. The authors stated that the endemic problems are more challenging than the implementation-
dependent problems in terms of limiting the usefulness of HLEs. Furthermore, Foss (1989) classified the disorientation problem and the undesirable navigation behaviors observed in users of hypermedia systems into two categories: (a) the embedded digression/choice multiplicity problem and (b) the art museum phenomenon. The embedded digression/choice multiplicity problem refers to the user feeling distracted, lost, and forgetful of his paths and goals when he pursues multiple routes and movements that take him away from the main topic. The embedded digression problem is associated with difficulties arising from the abundance of path choices that hypermedia presents. The art museum phenomenon, on the other hand, is a metaphor for what happens after spending a whole day visiting an art museum without giving special attention to any particular work: the next day the visitor probably would not be able to describe any painting that he saw in the museum (Dias & Sousa, 1997). In the context of HLEs, the art museum phenomenon refers to problems related to the act of browsing and seeking information and involves the user’s ability to recognize which nodes have been visited and which parts remain to be visited. Foss (1989) also identified the problems that disoriented users may suffer in HLEs. Limitations on human short-term memory may lead to the following problems:

• Arriving at a specific point in a hypertext document, then forgetting what was done there.
• Forgetting to return to a departure point.
• Forgetting to pursue departures that were planned earlier.
• Not knowing whether there are any other relevant frames in the document.
• Not remembering which sections have been visited or altered.
Tripp and Roby (1990) also point out that disorientation causes amplified cognitive load, which may reduce the mental resources available to process information.
Cognitive Load in Hypermedia Learning Environments

Cognitive load is a term that refers to the load on working memory during instruction (Sweller, 1988). In a hypermedia learning environment, cognitive overload can be defined as being confused or overwhelmed by the options available in multi-path, multi-tool environments such as hypermedia (Murray, 2001). Cognitive overload can affect the orientation of users within HLEs. Thuring, Hannemann, and Haake (1995) stated that increased cognitive load results in an inability to orient within or navigate through hypermedia. Cognitive overload is one of the main obstacles to learning (Clark, 2003); it happens when the learner is “bombarded with too much information at once” (Clark, 2003, p. 3). Additionally, Daniels and Moore (2000) stated that cognitive overload is one of the main barriers for hypermedia users. Furthermore, researchers have noted that the non-linearity of hypermedia systems often results in learner disorientation and cognitive overload (Beasley & Waugh, 1995; Conklin, 1987; Tripp & Roby, 1990; Zhu, 1999). Cognitive load theory was broadly elaborated by John Sweller (1988). The theory sheds light on the limitations of working memory capacity (Sweller, 1988; Sweller, 1994; Sweller & Chandler, 1994). There are three types of cognitive load: (a) intrinsic, (b) extraneous, and (c) germane (Sweller, Van Merriënboer, & Paas, 1998). “Intrinsic load refers to the complexity of learning material” (Renkl & Atkinson, 2003, p. 17). Extraneous load refers to load imposed by the design and presentation of the learning material rather than by the material itself, and germane load refers to the working memory resources devoted to schema construction during learning (Renkl & Atkinson, 2003). Brunken, Plass, and Leutner (2003) stated that learner experience, prior domain-specific knowledge, and individual differences influence cognitive load, which can result in more effort, more errors, and less knowledge acquisition. Mayer and Moreno (1998) and Mousavi, Low, and Sweller (1995) investigated ways to reduce cognitive overload and found that physically integrating visual and verbal information (i.e., the split-attention effect), presenting information both visually and aurally (i.e., the modality effect), and eliminating redundant verbal information (i.e., the redundancy effect) decrease cognitive overload. Cognitive load theory highlights several practices that can be applied to inclusive education and to using hypermedia as an AT to train and improve the performance of students with disabilities. There are methodologies for reducing the effects of extraneous cognitive load in instructional materials to ensure optimal learning. These effects include split attention, redundancy, and modality.
The Split-Attention Effect

The split-attention effect occurs when instructional materials require learners to split their attention between two sources of information, imposing a higher cognitive load on working memory and impeding the learning process (Chandler & Sweller, 1992; Mayer & Moreno, 2003).
The Redundancy Effect

The redundancy effect arises when a single source of instruction, whether textual or graphic, is fully intelligible on its own, suggesting that only that one source should be used (Chandler & Sweller, 1991). Redundant sources should be removed from the instructional materials
in order to reduce cognitive load (Mayer, Heiser, & Lonn, 2001).
The Modality Effect

According to the modality principle, learning is more efficient when multiple sensory pathways are used to present information (Moreno & Mayer, 1999; Mousavi, Low, & Sweller, 1995).
Hypermedia in Inclusive Education

Hypermedia learning environments have gained popularity since the Internet was introduced to schools. They have been used as tutorials to create interactive and individualized lessons. Hypermedia can be used to communicate and instruct, as well as to improve access or productivity. Hypermedia learning environments can be used in different ways to train or teach individuals who have special education needs. According to Perkins (1991, 1993), these ways include the creation of computer-aided instruction, use as a communication device, and use as a menu to launch other applications. They also include stacks that can be operated by students with cognitive disabilities, communication disorders, physical disabilities, and those students who are unable to read. Hypermedia learning environments can give educators the ability to author their own tutorials and training to teach specific objectives in their classrooms (Perkins, 1991, 1993). Due to its flexibility, hypermedia can be one of the best tools for teachers and parents to use in aiding individuals with disabilities. Studies show that the introduction of technology, such as hypermedia-based interactive learning environments, in at-risk settings enhances both self-image and locus of control among pupils engaged in computer-applied instructional activities (Furst, 1993; Klein, 1992). Hypermedia can be used to teach and train individuals with disabilities such as those unable to
read, those with communication disorders, and those with cognitive or physical disabilities. Through the non-linearity of hypermedia, learners who have certain disabilities can choose different ways to pursue the subject matter based upon their own interests and objectives. Thus, a hypermedia environment can better accommodate individuals’ different needs and learning styles and is more suitable for discovery learning (Liu, 1994). The associativity of hypermedia is similar to the functioning of human memory. With this feature, related information can be linked together to form a network, enabling learners to construct their own knowledge base by making meaningful connections among ideas as they see fit. Students can navigate from one node to another without limitation, which allows a teacher to present the to-be-learned subject in different ways. The efficiency of hypermedia allows teachers to present information in different forms, such as text, graphics, video, sound, and animation, on a single page. These features make hypermedia a powerful tool in inclusive education. Along with these advantages, however, hypermedia is not free from disadvantages. Disorientation and cognitive load are the main disadvantages of HLEs. Learners with disabilities can easily become lost (disoriented) in the hypermedia learning context. The complex and non-linear structure of HLEs may also lead inclusive education students to cognitive overload.
Conclusion

When HLEs are not well structured for usability through adherence to instructional design principles, the probability of learner disorientation and cognitive overload during navigation is very high. Nunes and Fowell (1996) note that these problems may result in the learning process being interrupted. Learners may end up studying less
meaningful topics and omit crucial ones (Nunes & Fowell, 1996). These are the consequences of being disoriented and experiencing cognitive overload in HLEs. Nunes and Fowell (1996) also indicate that disoriented learners have trouble finding specific information, even when they know that it is present. Consequently, learners may fail to see how parts of the knowledge base are related and may even omit large, relevant sections of information (Hammond, 1993). As a result of cognitive overload, learners may become unclear about their learning objectives or how to accomplish them, and thus may fail to become involved in the learning process (Nunes & Fowell, 1996). Interface design and usability are important components of hypermedia design. Appropriate and structured interface design helps to minimize problems in HLEs. Furthermore, with its multimedia capabilities, high level of interactivity, and power of association, hypermedia can empower individuals with disabilities by providing flexible and interactive learning environments. The similarities of hypermedia to the human memory system seem particularly appropriate for the learning of individuals with disabilities, as hypermedia provides not only a vivid and natural environment for the accumulation of facts, but also tools to synthesize and integrate new knowledge and to reconstruct existing knowledge. Finally, hypermedia can create an educational environment that meets the needs of students with special needs. In conclusion, hypermedia may have many benefits when properly employed in educating individuals with special needs by providing a secure learning environment.
Hypermedia may act as an intermediary in communicating with others and may help develop social skills and facilitate group work; hypermedia may support basic literacy and numeracy; hypermedia may assist students with special needs in organizing their thoughts and time more effectively; and hypermedia may help students develop fine motor skills through graphical navigation, such as using a mouse to drag and drop objects on a screen, move a cursor, and so on.
References

Ahuja, J. S., & Webster, J. (2001). Perceived disorientation: An examination of a new measure to assess web design effectiveness. Interacting with Computers, 14(1), 15–29. doi:10.1016/S0953-5438(01)00048-0 Baylor, A. L. (2001). Incidental learning and perceived disorientation in a web-based environment: Internal and external factors. Journal of Educational Multimedia and Hypermedia, 10(3), 227–251. Beasley, R., & Waugh, M. (1995). Cognitive mapping architectures and hypermedia disorientation: An empirical study. Journal of Educational Multimedia and Hypermedia, 4(2/3), 239–255. Behrmann, M. (1998). Assistive technology for young children in special education. Yearbook (Association for Supervision and Curriculum Development), 73–93. Wilson Web Database. Brunken, R., Plass, J. L., & Leutner, D. (2003). Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38(1), 53–61. doi:10.1207/S15326985EP3801_7 Bush, V. (1945). As we may think. Atlantic Monthly. Retrieved September 9, 2008, from http://www.theatlantic.com/doc/194507/bush Caudhill, M., & Butler, C. (1990). Naturally intelligent systems. Cambridge, MA: MIT Press. Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. doi:10.1207/s1532690xci0804_2 Chandler, P., & Sweller, J. (1992). The split-attention effect as a factor in the design of instruction. The British Journal of Educational Psychology, 62, 233–246.
Cognitive Load and Disorientation Issues in Hypermedia as Assistive Technology
Chen, S. Y. (2002). A cognitive model for non-linear learning in hypermedia programs. British Journal of Educational Technology, 33(4), 449–460. doi:10.1111/1467-8535.00281 Clark, R. C. (2003). Authorware, multimedia, and instructional methods. Retrieved December 3, 2008, from http://www.macromedia.com/support/ authorware/basics/instruct/index.html Conklin, J. (1987, September 20). Hypertext-an introduction and survey. IEEE Computer, 17-41. Daniels, H. L. (1996). Interaction of cognitive style and learner control of presentation mode in a hypermedia environment. Unpublished Doctoral dissertation, Virginia Polytechnic Institute and State University, Blacksburg, VA. Daniels, H. L., & Moore, D. M. (2000). Interaction of cognitive style and learner control in a hypermedia environment. International Journal of Instructional Media, 27(4), 369–383. Dias, P., & Sousa, A. P. (1997). Understanding navigation and disorientation in hypermedia learning environments. Journal of Educational Multimedia and Hypermedia, 6(2), 173–185. Dix, A. D., Finlay, E. J., Abowd, D. G., & Beale, R. (1998). Human-computer interaction. London: Prentice Hall Europe. Elm, W., & Woods, D. (1985). Getting lost: A case study in interface design. In Proceedings of the human factors society 29th Annual Meeting (pp. 927-931). Eveland, W. P. Jr, & Dunwoody, S. (2001). User control and structural isomorphism or disorientation and cognitive load? Learning from the web versus print. Communication Research, 28(1), 48–78. doi:10.1177/009365001028001002 Foltz, P. W. (1996). Comprehension, coherence and strategies in hypertext and linear text. In Rouet, J. F., Levonen, J. J., Dillon, A. P., & Spiro, R. J. (Eds.), Hypertext and cognition. Hillsdale, NJ: Lawrence Erlbaum Associates.
Foss, C. (1989). Tools for reading and browsing hypertext. Information Processing & Management. Furst, M. (1993). Building self-esteem. Academic Therapy, 19(1), 11–15. Gupta, M., & Gramopadhye, A. K. (1995). An evaluation of different navigational tools in using hypertext. Computers & Industrial Engineering, 29(1-4), 437–441. doi:10.1016/0360-8352(95)00113-F Hammond, N. (1993). Learning with hypertext: Problems, principles and prospects. In McKnight, C., Dillon, A., & Richardson, J. (Eds.), Hypertext: A psychological perspective (pp. 51–69). London: Ellis Horwood. Hetzroni, O., & Schrieber, B. (2004). Word processing as an assistive technology tool for enhancing academic outcomes of students with writing disabilities in the general classroom. Journal of Learning Disabilities, 37(2), 143–154. doi:10.1177/00222194040370020501 Jonassen, D. (1989). Hypertext/Hypermedia. Englewood Cliffs, NJ: Educational Technology Publications. Kelker, K. A. (1997). Family guide to assistive technology. Parents, Let’s Unite for Kids (PLUK). Retrieved December 1, 2008, from http://www.pluk.org/AT1.html Kim, K. (2000). Effects of cognitive style on web search and navigation. World Conference on Educational Multimedia, Hypermedia and Telecommunications (EMEDIA), 2000(1), 531–536. Klein, L. R. (1992). Self-concept enhancement, computer education, and remediation: A study of the relationship between a multifaceted intervention program and academic achievement. Unpublished doctoral dissertation, University of Pennsylvania, Philadelphia, PA.
117
Cognitive Load and Disorientation Issues in Hypermedia as Assistive Technology
Liu, M. (1994). Hypermedia-assisted-instruction and second language learning: A semantic-network-based approach. Computers in the Schools, 10(3/4), 293–312. Lowe, D., & Hall, W. (1999). Hypermedia and the Web: An engineering approach. London: Wiley. Lynch, P. J. (1994). Visual design for the user interface: Design fundamentals. The Journal of Biocommunication, 21(1), 22–30. Marchionini, G. (1988). Hypermedia and learning: Freedom and chaos. Educational Technology, 28(11), 8–12. Mayer, R., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia learning: When presenting more material results in less understanding. Journal of Educational Psychology, 93, 187–198. doi:10.1037/0022-0663.93.1.187 Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. Journal of Educational Psychology, 90(2), 312–320. doi:10.1037/0022-0663.90.2.312 Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38, 43–52. doi:10.1207/S15326985EP3801_6 McDonald, S., & Stevenson, R. J. (1996). Disorientation in hypertext: The effects of three text structures on navigation performance. Applied Ergonomics, 27(1), 61–68. doi:10.1016/0003-6870(95)00073-9 McDonald, S., & Stevenson, R. J. (1998). The effects of text structure and prior knowledge on navigation in hypertext. Human Factors, 40(1), 18–27. doi:10.1518/001872098779480541 Moreno, R., & Mayer, R. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91, 358–368. doi:10.1037/0022-0663.91.2.358
118
Mousavi, S., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology, 87(2), 319–334. doi:10.1037/0022-0663.87.2.319 Murray, T. (2001). Characteristics and affordances of adaptive hyperbooks. Proceedings of WebNet 2001, Orlando, FL. Nielsen, J. (1990). The art of navigating through hypertext. Communications of the ACM, 33(3), 298–310. Nielsen, J. (1995). Multimedia and hypertext: The Internet and beyond. Cambridge, MA: AP Professional. Norman, D. A. (1983). Some observations on mental models. In Gentner, D., & Stevens, A. L. (Eds.), Mental models (pp. 7–14). Mahwah, NJ: Lawrence Erlbaum Associates. Nunes, J. M., & Fowell, S. P. (1996). Hypermedia as an experiential learning tool: A theoretical model. Information Research, 2(1). Perkins, B. (1995). Integrating hypermedia and assistive technology: An overview of possibilities. Information Technology and Disabilities, 2(2). Retrieved December 20, 2008, from http://www.isc.rit.edu/~easi/itd/itdv02n2/perkins.html Perkins, R. (1991). Using HyperStudio to create lessons that use alternative input devices. In D. Carey, R. Carey, D. A. Willis, & J. Willis (Eds.), Technology and teacher education. Annual 1991: Proceedings of the Annual Conference of the Society for Teacher Education (pp. 80-83). ERIC Document Reproduction Service No. ED 343 562. Perkins, R. (1993). Integrating alternative input devices and hypermedia for use by exceptional individuals. Computers in the Schools, 10(1-4).
Cognitive Load and Disorientation Issues in Hypermedia as Assistive Technology
Public Law 100-407 (1988). Technology-Related Assistance for Individuals with Disabilities Act of 1988. Retrieved October 12, 2009, from http://www.ok.gov/abletech/documents/Tech%20ActIndividuals%20with%20Disabilities.pdf Renkl, A., & Atkinson, R. K. (2003). Structuring the transition from example study to problem solving in cognitive skill acquisition: A cognitive load perspective. Educational Psychologist, 38(1), 15–22. doi:10.1207/S15326985EP3801_3 Simms, B. (2003). Assistive technology for early childhood. Exceptional Parent, 33(8), 72–73. Retrieved July 14, 2004, from Wilson Web Database. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257–285. Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4, 295–312. doi:10.1016/0959-4752(94)90003-5 Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12(3), 185–233. doi:10.1207/s1532690xci1203_1 Sweller, J., Van Merriënboer, J., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, 251–296. doi:10.1023/A:1022193728205 Theng, Y. L. (1997). Addressing the ‘lost in hyperspace’ problem in hypertext. PhD thesis, Middlesex University, London. Theng, Y. L., Jones, M., & Thimbleby, H. (1995a). Reducing information overload: A comparative study of hypertext systems. IEEE Colloquium on Information Overload, 95(223), 6/1–6/5.
Thuring, M., Hannemann, J., & Haake, J. M. (1995). Hypermedia and cognition: Designing for comprehension. Communications of the ACM, 38(8), 57–66. doi:10.1145/208344.208348 Tripp, S. D., & Roby, W. (1990). Orientation and disorientation in a hypertext lexicon. Journal of Computer-Based Instruction, 17(4), 120–124. Weikle, B., & Hadadian, A. (2003). Can assistive technology help us to not leave any child behind? Preventing School Failure, 47(4), 181–186. doi:10.1080/10459880309603365 Zhu, E. (1999). Hypermedia interface design: The effects of number of links and granularity of nodes. Journal of Educational Multimedia and Hypermedia, 8(3), 331–359.
Key Terms and Definitions

Assistive Technology: Assistive technology is “any item, piece of equipment, or product system whether acquired off the shelf, modified, or customized that is used to increase, maintain or improve functional capabilities of individuals with disabilities” (Public Law 100-407, 1988).
Hypertext: Hypertext is a collection of text that can be linked to other text in an unlimited non-linear fashion.
Hypermedia: Hypermedia is a combination of networks of nodes, including information (e.g., text, graphics, video, sound, etc.), for the purpose of facilitating access to, and manipulation of, the information encapsulated by the data.
Disorientation: Disorientation refers to the experience of users not knowing where they are within hypermedia and not knowing how to move to a desired location.
Cognitive Load: Cognitive load refers to the load on working memory during instruction.
Section 3
Software and Devices
The chapters in Section 3 focus on the use of assistive technology tools in conjunction with a multi-sensory environment or multi-sensory pedagogy. The purpose is to address the essential characteristics of assistive technology implementation, such as software and devices, embedded in inclusive settings, and their relevance to practitioners’ collective and individual responsibilities in this area. Little is known about the coupling of software and devices to the related research. Section 3 pursues new research directions that augment the benefits of assistive technology tools in inclusive education. The chapters in Section 3 mainly attempt to develop promising approaches to implementing software and devices to answer the following research questions: (a) Who should use software or devices as assistive technology for intervention? (b) How should “using software or devices as assistive technology” be operationalized and measured? (c) What intervention or staff development program should be conducted to decrease the prevalence of malfunctioning software or devices during assistive technology implementation? (d) How should the best match between software and devices and students with disabilities be defined? Thus, the major themes of Section 3 are research, implementation, intervention, and assessment. In conclusion, the chapters in Section 3 assert that “Technology makes things easier for everyone. Assistive technology makes things possible for individuals with disabilities.”
Chapter 8
Multi-Sensory Environments and Augmentative Communication Tools Cynthia L. Wagner Lifeworks Services, USA Jennifer Delisi Lifeworks Services, USA
Abstract

This chapter discusses the use of augmentative communication tools in conjunction with the use of a multi-sensory environment. Though little has been written about this pairing, the authors discuss related literature, the history of their program’s use, the emerging communicators for whom they notice a great benefit, and the challenges of implementation. The purpose of this chapter is to open the discussion about the relationship between the two, to examine some of the related research, and to propose new research directions that could benefit adults who face communication challenges due to sensory issues. The focus is on the issues faced by adults with developmental disabilities and autism.
Introduction

Lifeworks is a non-profit organization that helps people with disabilities live fuller lives integrated into the flow of community experience. Lifeworks provides the tools clients need to build the lives they want to live through employment at area businesses, customized support services, and social enrichment opportunities. Our goal is to support clients in achieving their communication goals using the tools that suit them best. DOI: 10.4018/978-1-61520-817-3.ch008
Every day we take in sensory information through sight, sound, taste, smell, and touch. Some people with developmental disabilities, or those on the autism spectrum, face sensory challenges. Sensory sensitivity makes it difficult to communicate if you cannot focus or attend to detail. Integration of the senses allows a person to take in what is going on around them and communicate effectively. The authors implemented a program that addresses sensory needs alongside communication goals. This chapter is based on multi-sensory environments (MSEs), which were introduced in 2007 in our
day program setting for adults with developmental disabilities, autism, and/or traumatic brain injuries. We will discuss basic sensory needs, how they can be addressed in an MSE, and how addressing them relates to better use of augmentative communication tools. Helping people to fulfill their sensory needs and communicate to their full potential empowers them to achieve their hopes and dreams.
Background

Communication difficulties can be caused by physical impairments, cognitive impairments, and/or sensory impairments. Physical impairments can limit the expressive communication skills of a person with communication challenges, for example by preventing them from physically producing certain sounds. These impairments can also limit use of augmentative communication tools, because the person may have difficulty pointing to objects, manipulating their hands to form words in sign language, or accessing a communication device through alternative access methods such as switches or a head mouse. For some people with developmental disabilities and autism, the physical impairments complicating communication may not be visible. Cognitive impairments can affect language acquisition in multiple ways. For people with Down syndrome (DS), short-term memory may be a concern (Iglesia, Buceta, & Campos, 2005). Short-term memory is what we use to initially store new verbal vocabulary, navigate a new communication device, or remember the meaning of new picture symbols. Another concern is the processing of language. Research “suggests that participants with DS have a deficit in verbal processing” (Iglesia et al., 2005, p. 201). This has also been discussed for individuals with other cognitive impairments. Much of the new vocabulary we acquire comes from things we have heard others say. Motor speech deficits, such as apraxia, can cause difficulty with multiple types
of production issues. Koul, Schlosser, and Sancibrian (2001) discuss motor issues specifically in relation to people with autism, but such issues affect individuals with other disorders as well. When looking at selection options for communication devices, for example, “The movement of the body part or body-part extension (e.g., the headstick) must be sufficiently controllable so that only a single item is activated with each depression” (Beukelman & Mirenda, 1992, p. 58). Finally, sensory impairments prevent us from acquiring all the information that the environment presents. Iglesia et al. (2005) state that “if more senses are engaged in receiving the information (e.g., sight, hearing), the recall of story details will be facilitated” (p. 199). The opposite is true as well: when fewer senses are engaged in receiving the information, we take in fewer details. These details could be facial expressions that denote sarcasm, inflections that distinguish questions from statements, or word endings that mark tense. This is the case not just for visual and auditory impairments, but also for those who have Central Auditory Processing Disorder, those not taking in enough of a particular sense, and those who take in too much of one sense. Physical and cognitive impairments affecting communication are those that have traditionally been addressed through speech therapy. As more has become known about the nature of sensory impairments and how they relate to communication, clinicians have been better able to address these needs. Studies investigating the relationship between sensory impairments and language acquisition are underway, but this is a complex issue. Each diagnosis (such as DS or autism) appears to involve sensory issues that affect language impairments (when present), but isolating each of these issues and accounting for how they present in each individual is a challenge.
In terms of augmentative communication tools with these types of diagnoses, Beukelman and Mirenda
(1992) found that “examining the requirements (sensory, motor, cognitive, and language) and the effects (rate, accuracy, and fatigue) of various access options is still limited” (p. 67). Seventeen years later we have a few more studies, but not as many as necessary.
Sensory

All behavior is accompanied by an autonomic nervous system reaction. The sympathetic, parasympathetic, and reticular activating systems make up the autonomic nervous system, located in the brain stem. The sympathetic system is responsible for “fright, flight or fight” arousal; the body reacts to adrenaline release by sweating, dilating the pupils, and increasing heart rate and respiration. The parasympathetic system works with the sympathetic system to balance arousal levels. Together they work toward the “just right” combination that allows for doing and learning. The reticular activating system is responsible for sleep/wake cycles and modulates sensitivity to sensory signals depending on their importance for survival. The limbic system is located just above the brain stem and is responsible for the emotional component of behavior. Past experiences create a memory that sets the basic mood for present behavior and reactions to events; new sensory input is interpreted by comparing it to past experiences. These systems work together, unconsciously preparing a person to flee or, on a conscious cognitive level, supporting higher function (Messbauer, 2005). How we respond to our environment depends on how we take in and process sensations from receptors throughout the body. Vestibular sensations, closely associated with auditory sensations, arise from the inner ear, inform the brain of movement, and influence all other systems. Proprioceptive sensations arise from receptors in the tendons and ligaments around our joints and tell the brain where our body parts are. Proprioception helps us feel grounded and secure, modulating the vestibular system.
Tactile sensations are located in the skin and are made up of light touch and pressure touch. Light touch activates the autonomic nervous system, eliciting a sympathetic response. Pressure touch occurs through joint and muscle sensations, eliciting a parasympathetic response. People can react differently when these sensations are processed and perceived as too much or too little. Many of our clients experience some kind of sensory processing sensitivity. “When the brain is not processing sensory input well, it usually is not directing behavior effectively. Without good sensory integration, learning is difficult and the individual often feels uncomfortable about himself, and cannot easily cope with ordinary demands and stresses” (Ayres, 1985, p. 51). When people are uncomfortable in their bodies, they can become defensive. This defensiveness may be, for example, vestibular, visual, auditory, or tactile in nature. What sometimes appears to be a learning or behavioral issue may actually be a sensory processing issue. Gravitational insecurity can be caused by poor processing of vestibular sensation; it can make a person unsure of where their body is in space, leading to fear, clumsiness, and difficulty with social relationships. Visual perception problems can cause someone to become lost easily and decrease their willingness to be in strange places or try new things (Ayres, 1985). Auditory defensiveness reduces the verbal communication perceived by the listener. A defensive person may not actually cover his or her ears or look away, but may focus peripherally or on another object. We have observed that being tactilely defensive may prevent a person from touching a communication device. Interestingly, some defensive behaviors may be present only at certain times or in certain situations. Overall, defensiveness can make a person focus so much on avoidance behaviors that they are unable to attend to communication.
It decreases a person’s interaction with people and objects, thereby limiting the opportunities to practice communication skills. When we can help a client relax and
decrease their defensiveness, they are better able to increase integration of senses and awareness of their environment. This can ultimately lead to increased communication skills.
Communication and Developmental Disabilities

Only a few of the individuals we currently serve at Lifeworks had access to a wide variety of augmentative communication tools while in the public school system. Some of our clients lived in state hospitals; others were in school programs that were just beginning to include students with special needs in the public schools. Access to augmentative communication tools is always limited to what is available to that individual, what the supporting staff know about, and what has been invented at the point that people are seeking solutions. New pieces of technology, software, and techniques are being developed all the time, but new solutions for an individual are not always being sought out. Because of the move to community integration, communication requirements for people with disabilities changed dramatically.

Adoption of a community-referenced approach to instruction has a direct impact on the quantity and quality of the participation and communication opportunities that are available. Suddenly, the individual needs to order food in a restaurant, cheer for the basketball team, ask for help at the library, greet the school secretary when bringing the attendance list, chat with co-workers at break time – the list of natural opportunities for things to say and people to say them to becomes endless (Beukelman & Mirenda, 1992, pp. 254-255).

Some of the communication challenges people are facing now will not occur in 30 years. In the coming years, the majority of individuals being served in day programs will be those who have always lived at home and have always gone to
public school. That being said, the continual invention of new ways to communicate through assistive technology, and developments in treatment through MSEs and other techniques that aid life-long learning, will ensure that there will always be individuals who require more opportunities and tools than were available to them while in school. There have been studies of the communication skills of people with developmental disabilities and autism; some of these studies have included adults or have studied them exclusively. There are both positive findings and interesting discoveries:

Despite the variety of participants and methodologies used, research findings are broadly consistent. First, people with intellectual disabilities can and do acquire basic pragmatic language skills, although more subtle aspects of conversational competence are less commonly displayed. Second, the communicative environments of children and adults with intellectual disabilities appear to inhibit the acquisition and display of pragmatic skills (Hatton, 1998, p. 79).

Pragmatic skills are “the knowledge of how communication works” (Beukelman & Mirenda, 1992, p. 337). There are some drawbacks to the available literature in this area. “Relatively few studies have investigated the pragmatic language use of adults with intellectual disabilities” (Hatton, 1998, p. 84). The studies that have been done with adults with cognitive impairments are relatively few, and in each the sample size is small. Iglesia et al. (2005) discuss this in reference to studies of people with DS, but it applies to other groups as well. More research needs to be done so that intervention success can be measured. This should not, however, disqualify the already published studies showing that adults with cognitive impairments do increase their communication skills in adulthood.
There are also no comparative data from the population without disability against which to judge the conversational competence of people with intellectual disabilities…(making) it difficult to determine the extent to which ‘incompetent’ conversations are due to the incompetence of people with intellectual disabilities, the stereotypes of people without disabilities, or the interactions between the two (Hatton, 1998, p. 87).

At Lifeworks, we began using the term “emerging communicator” to identify adults who were developing communication skills later than the developmental norms. Others in the fields of speech therapy and augmentative communication use the term with a slight difference. Dowden (1999, as cited in UW Augcomm, n.d.) defines an emerging communicator as “An individual who does not yet have any reliable means of symbolic communication, although he/she typically has non-symbolic communication. This communication, for example using gestures and facial expressions, can be very useful with highly familiar partners, but it tends to be limited to the ‘here and now’ or rely heavily on the partner’s shared knowledge.” Adults in our program generally fall into two categories: those who have some good communication skills and those with quite limited skills. A third group, emerging communicators, is often overlooked. They have quite limited skills (for a variety of reasons), but display hints of a greater ability than we had previously helped them access. This differs from Dowden’s (1999, as cited in UW Augcomm, n.d.) definition in that they may be communicating using sign language, picture symbols, augmentative communication devices, and/or speech, but with a limited vocabulary. Sometimes the individual has a moderate vocabulary and ability to communicate, but is ready to add more. These opportunities for new language development occur quite a bit later than the developmental norms. For example, some people in our program
have expanded their vocabulary verbally or with the support of augmentative communication tools. When time in the MSE is paired with the appropriate communication tools, a person’s communication skills may expand, and there may be an increase in the types of conversations in which the person participates. When this occurs at 40 years of age for a person with a developmental disability, it is an exciting development. We do fall in line with some of Dowden’s (1999, as cited in UW Augcomm, n.d.) thinking, which states that “some emerging communicators fall within this category because they do not yet have access to appropriate AAC strategies and technologies.” If we expand the concept of technologies to include access to the equipment in an MSE, as well as qualified therapists and staff who work in an interdisciplinary model, then this brings our concepts of “emerging communicator” closer together. The skill development emerging communicators show may include increases in vocabulary, sentence length, pragmatics, and/or social-communication skills.
Multi-Sensory Environments

Multi-sensory environments used at Lifeworks evolved from the European Snoezelen concept. As stated by Messbauer (2008) for the American Association of Multi-Sensory Environments:

It is a dedicated room that attempts to block out noise, control space, temperature and lighting. It brings multi-sensory equipment together in one place to stimulate the senses, promoting pleasure and feelings of well-being. It can be utilized as part of the learning or treatment experience or for leisure and relaxation. It is controlled sensory input, especially designed to promote choice, interaction, and relationships through planned stimulation of the senses. It relieves stress, anxiety and pain. MSEs have been shown to help with autism, brain injury, challenging behaviors, dementia, developmental disabilities, mental illness, PTSD, special education. It aims to maximize a person’s potential to focus, and then act on this change through an adaptive response to their environment. Multi-Sensory Environments help change behavior, increase focus and attention, and add to feelings of positive self-esteem and well-being (Messbauer, 2008).

Equipment used in the MSE works to alert the brain and create a memorable experience. The solar projector throws images on the wall that can be changed by replacing the image wheels to alter arousal levels. A six-foot-tall bubble tube with changing lights provides intensity and vibration with auditory and tactile opportunities. A two-hundred-strand light spray with color changes engages the tactile sense and provides proprioceptive input when laid across someone’s lap. A vibro-acoustic recliner or mat uses sound vibration to relax and help define body awareness. Mirrors provide multiple imagery and add to the complexity of the room. A motorized mirror ball can intensify visual and vestibular input to the nervous system. A Catherine-type wheel, which is choice driven, can be used for evaluation, exploration, and changing arousal level. Power links to various pieces of equipment are used to encourage interaction and shape behavior. The equipment in the MSE is introduced slowly, one piece at a time, for client evaluation, self-direction, and motivation (Messbauer, 2005).

Lifeworks began by establishing an MSE in one of our centers. We brought our clients into the MSE individually and in small groups. In tracking our statistics, we were able to determine that behavioral incidents in the center decreased by 50% overall, regardless of whether the clients had been in the MSE or not. Our clients and staff enjoyed days that were calmer and allowed for more productive programming. This transformation enabled us to write grants to help fund MSEs at four additional sites.
The MSE has a well-documented calming effect. In a study of 15 children with traumatic brain injury, the MSE evoked a relaxation response as well as decreased heart rate and agitated behaviors (Hotz et al., 2006). A contribution to a sense of control, as well as a calming response, has been noted in children with Rett syndrome (Lotan & Shaprio, 2005). In addition to decreasing stress, MSEs have been used to help decrease self-injurious behaviors and agitation. A statistically significant reduction in self-injury was found following MSE exposure in adults with severe or profound mental retardation and mental illness (Singh et al., 2004). In a group study of 24 participants with moderate to severe dementia, greater independence in activities of daily living was observed, as well as reduced agitation and apathy (Staal et al., 2005). One study did find carryover for two of three participants in post-session engagement, as well as in the daily frequency of challenging behaviors on days following their MSE and OT sessions (Kaplan et al., 2006). There is much anecdotal success with behavioral improvements using the MSE; there are, however, few published studies demonstrating statistical significance. The MSE has also been beneficial in increasing communication. The sensory systems—auditory, vestibular, proprioceptive, tactile, and visual—develop interdependently. The auditory and vestibular systems work closely together. Speech and language depend upon the integration of auditory sensations within the vestibular system (Ayres, 1985, p. 63). A number of studies have shown sensory integration to be significant in developing language skills. Ayres and Mailloux (1981) found a consistent increase in the rate of growth in language comprehension and expression in children receiving occupational therapy sensory integration treatment.
In a study of children attending school for remedial education, language was found to be the primary deficit area, a finding that acknowledges the interrelationship among auditory, visual, somatosensory, motor, and language skills. Integration of the sensory and motor systems weighs heavily in the development of academic skills (Kruger et al., 2001). Using the MSE contributes to integration of the senses through pleasurable sensory experiences. The client chooses which piece of equipment to use as a mode of sensory input, as well as how much input they receive. By stimulating the visual, vestibular, and auditory systems, clients can increase their ability to communicate their needs. The neuroplastic quality of the brain allows for growth in areas that have been damaged or not previously accessed. A person is encouraged to use the MSE to reduce stress, promote pleasure, control their own environment, and support motivation, creating an environment in which they will be more accepting of treatment, thereby making that treatment more effective (Messbauer, 2005).
Augmentative Communication

Augmentative communication tools include gesture and sign language, picture symbols, and voice-output devices. They can range from quite simple to quite complex.

The work of Christopher Nolan, Anne McDonald, and others who use ‘low technology’ (i.e., non-electronic) systems reminds us that the ultimate goal of an AAC intervention is not to find a technological solution to the communication problem, but to enable the individual to efficiently and effectively engage in a variety of interactions (Beukelman & Mirenda, 1992, p. 7).

Tool selection is based on many factors and is usually assessed by a speech and language pathologist. Selection is based on the individual’s motor, cognitive, visual, and auditory skills. All of these skills are impacted by sensory challenges, as stated earlier. The development of communication skills within the MSE is similar to the process of learning
to communicate as a young child. “Initially, self-directed child behaviours are treated by caregivers as having a communicative intent, introducing the child to the notion that language can be used to do things” (Hatton, 1998, p. 81). In the MSE, moving towards an object indicates enjoyment or attraction. This can be built upon by using a switch that activates that piece of equipment. Later, by using sign language, picture symbols, or voice-output devices to request “more” of the desired activity, the more integrated individual has moved from a self-directed behavior to a communication that can be understood by others. As this integrated motion and intent of communication is practiced, it leads to further integration and natural development of skills. Generalization can occur within the room, and more communication skills can be developed. This type of intervention (starting with items of high interest, though not in an MSE) was studied by Koul, Schlosser, and Sancibrian (2001) in individuals with Rett syndrome. The study found that “individuals with Rett syndrome acquire initial lexical items with an opaque symbol-referent relationship more readily if the referents are of high interest” (Koul et al., 2001, pp. 164-165). Further support for this concept comes from Beukelman and Mirenda (1992): “There is simply no doubt about it: The availability of genuine and motivating communication opportunities in integrated and inclusive settings is at least as important to the success of a communication intervention as is the availability of an appropriate access system” (p. 258). Koul et al. (2001) also state that “the relatively ‘unnatural’ behavioral approaches offer the advantage of eliminating any distracters and making the linguistic stimuli highly salient” (p. 165). The concept of distractibility and the impact of stimuli were also discussed by Kruger, Kruger, Hugo and Campbell (2001).
In reference to problems with attention during their study, they
wrote, “this fact suggests that attention skills are also important in academic performance and for language ability, central auditory processing, and sensory integration” (Kruger et al., 2001, p. 96). MSEs have the advantage that the therapist can manipulate the room to provide the optimum amount of stimuli for that individual, ensuring that the focus is on the items of interest and distractions are eliminated. Cognitive skills such as choice-making, reasoning, and planning often require us to communicate with others, take in information before using it to problem-solve, and then communicate our conclusions back to others. Once a person’s body is working more efficiently because their sensory needs have been addressed, and they have gained the appropriate communication skills through verbal or augmentative communication tools, they are able to participate more actively in these types of higher cognitive tasks. Addressing both sensory and communication needs in this way requires a team approach. The team needs to address the needs of the whole person, in as many environments as possible. This enables everyone to get a more complete picture of both the person’s skills and the areas that need to be developed. Team members may include an occupational therapist, speech therapist, music therapist, physical therapist, parents or guardian, staff/teachers, social worker, and, most importantly, the individual. In day programs for adults with developmental disabilities, this team meets annually. Combining MSEs with the use of augmentative communication tools often requires more frequent communication among team members. This team approach has been examined by others, but not in terms of combining MSEs and augmentative communication tools. In the context of treating people with CAPD, language disorders, sensory integration dysfunction, and learning disabilities, an interdisciplinary approach is:
The most favored approach... Unfortunately, it requires more funds and skilled human resources than are presently available... An effective, resource-efficient, transdisciplinary model for helping children with CAPD, language disorders, LD, and sensory integration dysfunction will aid in providing an evaluation and intervention program that may be easily implemented using existing resources (Kruger et al., 2001, p. 87).

Despite the different disciplines being examined in the interdisciplinary research, many of the complications and issues are the same. Kruger et al. (2001) discuss how the “individual perspectives” of different specialists lead them to provide “isolated and inefficient treatment” (p. 97). Pena and Quinn (2003) state that “effective collaboration between teachers and SLPs can have positive benefits for children with language impairment in daily communicative events and academic achievement” (p. 53), and that “for groups that function as teams, collaboration is a dynamic learning process” (p. 61). Each team member at Lifeworks interacts with the individual in different circumstances, requiring different vocabulary and different social-communication events, and they see the individual working on these skills in different environments. Each has valuable input, and each can positively impact the communication development of the individual as they progress. We are all learning as part of the process, especially since certain team members are more experienced with particular types of augmentative communication tools. The therapists bring in cutting-edge treatment techniques, while the individual, the direct care staff, and family or guardian often implement similar strategies in the creative fashion required for day-to-day use of these tools. A device lending library, or access to one, is a fundamental piece of this approach.
Sometimes adults with developmental disabilities have difficulty obtaining timely approval for devices through medical assistance. Others have a
slower learning curve and need more opportunity to practice with a device before being able to prove it is a good choice for them. Also, when working in an MSE, they may need multiple devices in a short period of time. Lending libraries enable more flexibility during this time of learning, and prevent the purchase of a device that would be only a short-term investment. Lifeworks has a range of devices in its lending library, including voice-output devices, alternative access methods such as switches, and software such as programs for making picture symbols and displays. We are grateful to our donors and those who have given us grants to purchase this equipment. When we do not have something in our own library, we access outside lending libraries, and have greatly appreciated their support. Finally, some of the therapy work we are able to do at Lifeworks is possible because we are free from the confines of working within a direct billing system, such as medical assistance or insurance billing. Though coverage is available for adults with developmental disabilities and autism to see a speech therapist, it is difficult to obtain funding for other traditional therapies. Donations and grants have enabled us to equip our MSEs and our device lending library, and to fund a portion of our therapy staff salaries. In addition, this type of interdisciplinary team approach is difficult to fund when the therapists are contracted for individual or group treatment. Looking at communication, we need to start with the senses as integrated as possible, which sometimes requires multiple therapists working together at different points in the process. To make reimbursement for these services possible, the entire system will need to change to accommodate these treatment techniques.
Though there are more costs up front, in the long run, people who can communicate their needs more effectively will draw less on the health care system and can be more active members of society.
Future Research Directions

More research is necessary to support the advantages of access to augmentative communication tools for adult emerging communicators. There is some research supporting the use of MSEs to integrate the senses; however, more needs to be done regarding how they specifically impact communication skills. Such research would validate using the MSE in this manner, making reimbursement for these services easier. Ultimately, additional documentation will support increased access and improved techniques for individuals with communication challenges.
Conclusion

This chapter puts forward the idea that use of an MSE to decrease defensiveness in the body can promote integration of the senses and lead a person to be in a better position to communicate their wants and needs. We have also noted that adults with developmental disabilities or autism can sometimes be overlooked as emerging communicators. Shifting this view will increase their access to new tools and techniques for enhancing communication skills. “People with severe disabilities can live, work, play, communicate, and form relationships with a wide variety of people in their communities, schools, and workplaces, and they deserve to be provided with opportunities to do so” (Beukelman & Mirenda, 1992, p. 254).
References

UW Augcomm. (n.d.). Augmentative and alternative communication at the University of Washington, Seattle. Retrieved October 5, 2009, from http://depts.washington.edu/augcomm/00_general/glossary.htm
Ayres, A. J. (1985). Sensory integration and the child. Los Angeles, CA: Western Psychological Services.

Ayres, A. J., & Mailloux, Z. (1981). Influence of sensory integration procedures on language development. The American Journal of Occupational Therapy, 35(6), 383–390.

Beukelman, D. R., & Mirenda, P. (1992). Augmentative and alternative communication: Management of severe communication disorders in children and adults. Baltimore, MD: Paul H. Brookes Publishing Co., Inc.

Hatton, C. (1998). Pragmatic language skills in people with intellectual disabilities: A review. Journal of Intellectual & Developmental Disability, 23(1), 79–100. doi:10.1080/13668259800033601

Hotz, G. A., Castelblanco, A., Lara, I. M., Weiss, A. D., Duncan, R., & Kuluz, J. W. (2006). Snoezelen: A controlled multi-sensory stimulation therapy for children recovering from severe brain injury. Brain Injury, 20(8), 879–888. doi:10.1080/02699050600832635

Iglesia, J., Buceta, M., & Campos, A. (2005). Prose learning in children and adults with Down syndrome: The use of visual and mental image strategies to improve recall. Journal of Intellectual & Developmental Disability, 30(4), 199–206. doi:10.1080/13668250500349391

Kaplan, H., Clopton, M., Kaplan, M., Messbauer, L., & McPherson, K. (2006). Snoezelen multi-sensory environments: Task engagement and generalization. Research in Developmental Disabilities, 27, 443–455. doi:10.1016/j.ridd.2005.05.007

Koul, R., Schlosser, R., & Sancibrian, S. (2001). Effects of symbol, referent, and instructional variables on the acquisition of aided and unaided symbols by individuals with autism spectrum disorders. Focus on Autism and Other Developmental Disabilities, 16(3), 162–169. doi:10.1177/108835760101600304
Kruger, R., Kruger, J., Hugo, R., & Campbell, N. (2001). Relationship patterns between central auditory processing disorders and language disorders, learning disabilities, and sensory integration dysfunction. Communication Disorders Quarterly, 22(Winter), 87–98. doi:10.1177/152574010102200205 Lotan, M., & Shaprio, M. (2005). Management of young children with Rett Disorder in the controlled multi-sensory (Snoezelen) environment. Brain & Development, 27, 88–94. doi:10.1016/j. braindev.2005.03.021 Messbauer, L. (2005). The art and science of multisensory environments. Presentation at workshop in Queens, NY. Messbauer, L. (2008). What is a multi-sensory or Snoezelen room? American Association of Multi Sensory Environments. Retrieved from http:// www.aamse.us/faq.php Pena, E., & Quinn, R. (2003). Developing effective collaboration teams in speech-language pathology: A case study. Communication Disorders Quarterly, 24(2), 53–63. doi:10.1177/15257401 030240020201 Singh, N., Lancioni, G. E., Winton, A. S. W., Molina, E., Sage, M., Brown, S., & Groeneweg, J. (2004). Effects of Snoezelen room, activities of daily living skills training, and vocational skills training on aggression and self-injury by adults with mental retardation and mental illness. Research in Developmental Disabilities, 25, 285–293. doi:10.1016/j.ridd.2003.08.003 Staal, J. A., Sacks, A., Matheis, R., Calia, T., Hanif, H., Collier, L., & Kofman, E. (2005, July). The effects of Snoezelen (Multi-Sensory Behavior Therapy) and psychiatric care on agitation, apathy, and activities of daily living in dementia patients on a short term geriatric psychiatric inpatient unit. Poster session presented at the Alzheimer’s Association International Conference, Washington, DC.
KEY TERMS AND DEFINITIONS

Augmentative Communication Tools: Picture symbols, sign language and gesture, and voice output devices that can be used to assist communication.

Autism: A developmental brain disorder characterized by impaired social interaction and communication, and by restricted and repetitive behavior. The autism spectrum includes Asperger's Syndrome, Rett Syndrome, Childhood Disintegrative Disorder, and Pervasive Developmental Disorder-Not Otherwise Specified.

Developmental Disability: A lifelong disability due to cognitive and/or physical impairments beginning early in life.
Multi-Sensory Environment: A designated space designed to alert or calm the senses.

Proprioception: Sensory information received from joints and muscles telling the body about pressure, movement, and changes in position in space.

Sensory Issues: Difficulty taking in and interpreting sights, sounds, touch, taste, and movement.

Vestibular: Input from the inner ear telling the body about balance, changes in gravity, movement around the body, as well as movement of the body itself.
Chapter 9
Using Software to Deliver Language Intervention in Inclusionary Settings

Mary Sweig Wilson, Laureate Learning Systems, Inc., USA
Jeffrey Pascoe, Laureate Learning Systems, Inc., USA
ABSTRACT

Language intervention focusing on syntax is an essential component of programs designed to meet the educational needs of children with language disabilities, as it provides a foundation for improved communication and literacy. Yet there are challenges to providing individualized syntax intervention on a daily basis in inclusionary settings. Assistive technology in the form of language intervention software provides one means of addressing these challenges. This chapter describes the background, rationale, and use of software designed to provide receptive syntax intervention to build sentence comprehension and use in preschool and elementary school children with disabilities. The software is also appropriate for at-risk students in districts providing early intervening services in a response-to-intervention model, as well as for English language learners. Included is an overview of advances in linguistic theory and research that have dramatically increased our understanding of language and how it is acquired by typically and atypically developing children, and that informed the curricular design of the software described. The results of field testing under naturalistic conditions in classrooms, where regular use of the software was associated with accelerated language development, are also reviewed.
DOI: 10.4018/978-1-61520-817-3.ch009

INTRODUCTION

Since the landmark passage of Public Law 94-142 (Education of All Handicapped Children Act) in 1975, society has supported the belief that all children are entitled to a public education. The Individuals with Disabilities Education Act (IDEA, 2004) is the direct descendant of PL 94-142 and the current law ensuring services to children with disabilities. The law governs how states and public agencies provide early intervention and special education to the millions of eligible children from birth to age 21. Students from 3 to 21 receive special education and related services through Part B of IDEA. The
most important mandate of Part B is that students with disabilities are entitled to a free appropriate public education (FAPE) in the least restrictive environment (LRE). While the word "inclusion" does not appear in the legislation, LRE clearly means that, to the extent possible, children with disabilities should be educated with their neurotypically developing peers. As a society, we do not believe that children with disabilities should be isolated from their peers, yet despite this general commitment to integration, many schools encounter difficulties in meeting the needs of children with disabilities in the general education classroom. Among these difficulties are regular educators' feelings that they lack the skills to serve special needs students in their classrooms (Hanson, Horn, Sandall, Beckman, Morgan, Marquaart, Darnwell, & Chou, 2001). Children with disabilities are most successfully included when regular and special educators work together on developing classroom teaching strategies (Goodman & Williams, 2007; McCormick, Won, & Yogi, 2003). One promising approach to meeting the needs of students with disabilities in the general education classroom is the use of assistive technology (AT) in the form of language intervention software. Before adopting software for classroom use, however, speech-language pathologists and special educators must be sure that the programs will deliver research-based intervention that can address a student's language acquisition needs. This chapter will describe the background, rationale, and use of software designed to provide receptive syntax intervention to build sentence comprehension and use in preschool and elementary school children with disabilities.
The programs are also appropriate for at-risk students in districts providing early intervening services in a response-to-intervention (RTI) model, as well as for students who are English language learners.
BACKGROUND

Advances in linguistic theory and psycholinguistic research over the past quarter century have dramatically increased our understanding of language and how it is acquired by typically and atypically developing children. Children all over the world learning any one of thousands of different languages do so in a remarkably similar manner. First words emerge, word combinations occur, and syntax is mastered at about the same age regardless of the language or culture. What exactly is the nature of the human biological endowment that enables very young children to acquire their first language on such a strikingly consistent timetable? Since its inception (Chomsky, 1955; 1957), generative grammar theory has tried to explain this phenomenon (see Chomsky, 2004 for a brief review). A fundamental assertion emerging from this work is that the rapidity and uniformity of first language acquisition is possible because human infants are born with an innate language faculty (Universal Grammar) that drives and shapes the course of language development (Hauser, Chomsky, & Fitch, 2002). Although this premise was in doubt fifty years ago, today it is accepted, with discussion centered only on the precise nature of this innate endowment (Boeckx & Piattelli-Palmarini, 2005; Jenkins, 2004; Laka, 2009). Because our inborn human language capacity orchestrates language acquisition, neurotypically developing children need only language exposure to acquire language, at least insofar as acquisition of the formal grammar component (vocabulary and syntax) of language is concerned. The grammar of a language is composed of the lexicon (the "dictionary" of lexical items/words in the language) and the syntactic computational system that assembles lexical items into sentences. The important distinction here is that, while the ability to use words for communication in social settings (i.e., pragmatics) is developed through communicative interaction, acquisition of the grammar of a
language is accomplished through listening; it is dependent upon receptive language input (Pinker, 1994; Radford, 1990; Wexler, 1998). Problems with acquiring the grammatical component of language are characteristic of a broad range of children with language impairments, regardless of etiology. For example, while the communication profiles of children with Autism Spectrum Disorders, specific language impairment, Down Syndrome, and deafness may differ, their patterns of language acquisition and deficits are similar (Geurts & Embrechts, 2008; Tager-Flusberg & Calkins, 1990; Tager-Flusberg, 2004). The LanguageLinks®: Syntax Assessment & Intervention and Prepositions! programs (Wilson & Fox, 2007a; 2007b) are designed to assess and train the syntax forms that these children typically struggle with and yet need in order to become competent communicators. In designing and developing LanguageLinks and Prepositions!, the goal was to use evidence-based instructional strategies to deliver a syntax assessment and intervention curriculum based on current linguistic theory, language acquisition research, and clinical research. Here we will review the theoretical and research bases of the LanguageLinks and Prepositions! programs, present field-testing data demonstrating their effectiveness, and review their use in instructional programs.
Universal Grammar Principles and Parameters

Linguists and biologists believe the innate Universal Grammar that humans are born with is composed of principles that are not dependent upon language input, and a small set of parameters that vary in a binary fashion across languages (Baker, 2001; Hornstein, Nunes, & Grohmann, 2005). Universal principles unite all languages. They do not have to be learned because they are an invariant component of the genetically endowed language faculty and consequently are known without language experience. One important
universal principle is the Structure Dependence Principle, which holds that all grammatical operations are structure dependent. Regardless of language, all syntactic operations are sensitive to the grammatical structure of the sentences to which they apply. For example, in English we form a yes/no question by interrogative inversion:

She is working at home. → Is she working at home?
Children will love this game. → Will children love this game?
Small children who play with dolls are fun to watch. → Are small children who play with dolls fun to watch?

In forming a question from a statement, we do not simply move the second word to the front of the sentence, as it might appear from the first two examples above; rather, the operation is structure dependent. In the case of a standard yes/no question, we move the auxiliary (is, are) or modal (will, might, should) in front of the subject phrase. Unlike universal principles, which require no language experience, parameters do require language input, or primary linguistic data, for their setting. Since all parameters have two possible settings, children need language input to select the proper setting. A fixed set of parameters accounts for most of the syntactic variation among human languages (Atkinson, 1992; Baker, 2001; Chomsky, 1981; Crain, 1991; Leonard & Loeb, 1988; Radford, 1990; 2004; Roeper & Williams, 1987; Wexler, 1998). Parameters determine such things as word order in a language and whether question words (e.g., who, what, how) move to the front of a sentence (they do in English; they do not in Chinese). The Minimalist Program (Chomsky, 1995) provides a framework for much of the current linguistic research concerning universal principles and parameters. Important in the Minimalist Program is the concept of heads. The head of a phrase is the key word that determines the properties of the phrase. The universal Headedness Principle stipulates that every phrase must have a head;
when two elements combine, one becomes the head. Two parameters that determine word order in a language are associated with this principle. English follows a pattern of subject-verb-object (SVO). This order is determined by two different parameters. The Head-Directionality Parameter determines whether the head of a phrase comes before or after its complement (Object). In English, the head comes before its complement(s): we say "hit the ball," where "hit" is the Verb head and "the ball" is its Complement, so English is a "head-first" language. The Specifier-Head (or subject-side) Parameter determines whether specifiers (subjects) come before or after the head of the phrase. English is a "specifier-first" language in that specifiers, or subjects, come before their head. For example, in English we say "The boy hit the ball," where "The boy" is the specifier and comes before the Verb "hit." These two parameters and their settings determine word order in all languages. The acquisition of language competence can thus be viewed as a matter of "setting" grammatical parameters through exposure to appropriate receptive language input, combined with the learning of a lexicon. For children with language disorders, however, exposure to the language surrounding them (primary linguistic data) alone is clearly not adequate. Understanding parameters and the receptive language experiences that "trigger" or "set" them can lead to intervention strategies that are more effective because they target the specific linguistic experiences that may optimize or correct the process of language acquisition on a fundamental (versus symptomatic) level.
This suggests that the most successful language intervention should emphasize linguistic input that is likely to interact with innate factors shaping language acquisition, and is likely to “set” the grammatical parameters of the child’s native language (Atkinson, 1992; Hyams, 1986; Lightfoot, 1991; Roeper & Williams, 1987; Roeper, 2007).
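To make the role of these binary settings concrete, the word-order consequences of the head-directionality and specifier-first parameters can be sketched in code. This is our own illustration, not part of the chapter's software; the function name and representation are hypothetical.

```python
# Illustrative sketch (ours, not the authors'): how the two binary
# word-order parameters described in the text determine the order of
# Subject (specifier), Verb (head), and Object (complement).

def basic_word_order(head_first: bool, specifier_first: bool) -> str:
    """Return the basic word order implied by the two parameter settings."""
    head_complement = ["V", "O"] if head_first else ["O", "V"]
    order = ["S"] + head_complement if specifier_first else head_complement + ["S"]
    return "".join(order)

# English is head-first and specifier-first, giving SVO.
print(basic_word_order(head_first=True, specifier_first=True))   # SVO
# A head-final, specifier-first language (e.g., Japanese) gives SOV.
print(basic_word_order(head_first=False, specifier_first=True))  # SOV
```

Two binary parameters yield only four basic orders here, which is of course a simplification; the point is that a small number of innate switches, each set by language input, can account for gross word-order variation across languages.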
The Lexicon

In the Minimalist model, the lexicon (the mental dictionary of lexical items, or words, with their linguistic properties) has taken on a greater role in grammar than it had in earlier generative grammar theory. Each representation of a word in the lexicon consists not only of phonological and semantic properties (sound and meaning), but also syntactic features such as categorial membership (i.e., whether it is a noun, verb, determiner, etc.), inflectional behavior (e.g., how it is marked for number, person, and gender), and, in the case of verbs, syntactic argument structure (e.g., run requires only one argument, a subject: "The girl runs"; kiss requires two arguments, a subject and an object: "The father kisses the baby"; and give typically requires three arguments: "The girl gives the baby a toy"). In other words, the Minimalist Program assumes that a complete lexical entry includes the specific roles a word can play in the structure of language and the appropriate form of that word in a given grammatical context. Unlike in past generative grammar theories, lexical entries are now posited to enter the grammatical computational system, or sentence-forming process, already marked with syntactic features (Abraham, Epstein, Thráinsson, & Zwart, 1996; Chomsky, 1995).

Developing an early core lexicon is an important step in the acquisition of language. Many think of word learning as simply a process of linking a word's sound to its meaning. The acquisition of word meaning, however, describes only part of what a child is learning, even in the single-word stage of language development. Contemporary linguistic theory emphasizes that the child must also be learning the syntactic features of words in relation to the parameter settings of the language being acquired, that is, the grammatical options that distinguish one language from another. Further, children are learning a great deal about the inflectional properties of the language they are acquiring. That includes such things as how a language marks number agreement of subjects and verbs (e.g., "The boy_ runs / The boys run_") and how time is expressed (e.g., "The boy is playing / The boy played"). That some grammatical learning occurs during the single-word stage is evidenced by the rapid progression of syntactic competence: typically, at about 12 months a child will begin to produce isolated words with no evidence of grammatical marking. Within another six months or so, however, the child will begin to produce forms such as the Determiner "No" (e.g., "No shoe"), the progressive Verb marker -ing (e.g., "running"), and the Genitive or Possessive 's (e.g., "the boy's ball"). There is evidence that by this time a number of crucial parameters have already been set. Hirsh-Pasek, Golinkoff, and colleagues showed that when children as young as sixteen months (still in the one-word stage) were presented with televised scenes of Big Bird tickling Cookie Monster and vice versa, and then were told, "Oh look! Big Bird is tickling Cookie Monster!" (or vice versa), they preferentially attended to the appropriate visual stimulus (Hirsh-Pasek, Golinkoff, Fletcher, DeGaspe-Beaubien, & Cauley, 1985; Hirsh-Pasek & Golinkoff, 1996). This finding demonstrated that both word order parameters had already been set. Further evidence that these parameters are fixed by the time typically developing children enter the two-word stage is provided by the fact that the word order of their utterances adheres to the word order of their native language from the outset, which in the case of English is SVO (Fodor, 2009; Radford, 1990). Within the Principles and Parameters Theory, the lexicon is divided into Lexical Category words (e.g., nouns, verbs, adjectives) and Functional Category words and forms (e.g., determiners, tense, complementizers) that serve essentially grammatical functions.
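The kind of information a lexical entry carries on this view (sound, meaning, category, inflectional features, and argument structure) can be sketched as a simple data structure. The class and field names below are our own illustration, not part of any described software.

```python
# A minimal sketch of a lexical entry carrying the information described
# in the text: sound, meaning, category, inflectional features, and (for
# verbs) argument structure. Field names are our own illustration.

from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    form: str                 # phonological form ("sound")
    meaning: str              # semantic gloss ("meaning")
    category: str             # categorial membership: N, V, D, T, C, ...
    features: dict = field(default_factory=dict)  # e.g., number, person, tense
    arguments: int = 0        # for verbs: number of required arguments

lexicon = {
    "run":  LexicalEntry("run",  "RUN",  "V", arguments=1),  # "The girl runs"
    "kiss": LexicalEntry("kiss", "KISS", "V", arguments=2),  # subject + object
    "give": LexicalEntry("give", "GIVE", "V", arguments=3),  # "The girl gives the baby a toy"
}

print(lexicon["give"].arguments)  # 3
```

The design point mirrors the theory: word learning is not just pairing form with meaning but also filling in the syntactic fields of each entry.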
Adger (2003) says that one way to think about Functional Categories "...is that they erect a syntactic skeleton above lexical categories which serves to hold together the various syntactic relations that take place in the phrase" (p. 165). The Functional Categories include determiners, tense (in earlier work called inflection or INFL), and complementizers:

• Determiners are associated with nouns and are so called because they specify (or determine) that to which a Noun expression refers. Determiners include, for example, articles (a, the), prenominal determiners (this, that, these, those), and pronouns (I, you, me, his, her). A Determiner Phrase (DP) is headed by a determiner.
• Tense is associated with verbs and refers to elements that inflect verbs for tense and agreement. Tense includes, for example, the regular past tense -ed, infinitival to, auxiliary be, and third person singular -s. A Tense Phrase (TP) includes a verb and its inflectional elements.
• Complementizers include words such as that, if, and whether, which serve to introduce and characterize complement clauses in several ways. Also included are various operations involved in the formation of questions (e.g., Interrogative Reversals and Wh-Questions).
Figure 1 lists Functional Category examples of determiners, tense, and complementizers. Even those not familiar with the current linguistic distinction between Functional and Lexical Categories will immediately recognize that the forms listed in Figure 1 are especially problematic for students with language impairments as well as for English language learners. The list also provides some explanation of grammatical errors.

Table 1.

  N          V
  Ball       Roll
    comes to be replaced by
  DP         TP
  The ball   is rolling

For
example, children with language impairments sometimes persist in use of accusative case pronouns in the subject position, which requires nominative case ("Her" vs. "She"). You will note that tense is responsible for checking or valuing nominative case. Children who are not correctly inflecting Verbs thus may use the default English accusative case in the subject position, saying, "Him big." Once they are tense marking (e.g., including copular "be" in this case), they will begin to correctly use nominative case, as in "He is big." This has clinical implications in that work on nominative case clearly should be preceded by work on one or more Tense elements. If nouns, verbs, and adjectives belong to the Lexical Category and determiners, tense, and complementizers belong to the Functional Category, where do prepositions fit in? Prepositions present a challenge to the simple notion of a Lexical-Functional dichotomy. On the one hand, prepositions are typically regarded as a fourth Lexical Category. Consistent with this categorization is the observation that many prepositions, as with Lexical items in general but not Functional items, have intrinsic semantic content that makes an important contribution to sentence meaning. In other ways, however, prepositions have more in common with the Functional Categories, and there is a growing tendency to regard them as such (e.g., Baker, 2003; Littlefield, 2005; Moro, 2008). There are numerous arguments for this view, just a few of which are mentioned here. For example, unlike the Lexical Categories, which are always growing via the addition of new Nouns (computer, cell phone), Verbs (snowboarding, faxing), and Adjectives (groovy, spacey), prepositions are more like Functional Categories in that they comprise a closed class; there are relatively few of them (roughly 50 in English, and far fewer in most other languages) and there is little or no tendency to coin new ones.
Some have argued that this could to some extent be attributable to the limited set of relationships available to be encoded, but there certainly would seem to
be room for coining at least a few new prepositions now and then, particularly since many of them are currently used to indicate rather diverse relationships (e.g., on the floor/ceiling/spot; on time; on topic; out of the house; out of time; out of kindness). Also unlike Lexical items, prepositions do not accept derivational affixes. Nouns, verbs, and adjectives can move from one Lexical Category to another by adding an appropriate derivational affix (e.g., -ize, -able, -ish, -ity...; pressure - pressurize; book - bookish), but prepositions cannot; they are always prepositions. The fact that prepositions do not conform to all of the typical features of items in either the Lexical or Functional Categories has led some researchers to suggest that the category of prepositions ought to be divided according to the relative proportion of a preposition's lexical and functional features (Corver & van Riemsdijk, 2001; Littlefield, 2005). In this scheme one would classify semantically rich prepositions as being Lexical, and other prepositions that serve primarily syntactic roles as being Functional. Supporting the validity of this division is evidence from studies of the language of individuals with aphasia (e.g., Froud, 2001), as well as from analyses of children's first language acquisition (e.g., Littlefield, 2006). Unlike in earlier versions of generative grammar, in current linguistic theory Functional Category words as well as Lexical Category words can serve as the head of a phrase. The concept of heads is very important in linguistic theory. The universal Headedness Principle states that every syntactic structure is a projection of a Head word. Here again we see the importance of the lexicon in current thinking. In forming sentences, the lexicon is key. Those of us who serve children with language disorders should be greatly encouraged by this emphasis on the lexicon. The lexicon, after all, is learned.
Figure 1. Functional categories with examples

The learning of Functional Category lexical forms is a component of syntax mastery. Therefore, by emphasizing receptive mastery of Functional Category lexical forms we can facilitate syntax acquisition. While many of a child's earliest multi-morpheme utterances may consist of bare noun and verb phrases, Functional Categories are apparent from the time typically developing children enter the two-word stage (Bohnacker, 1997; Brown, 1973; Engle, 1978; Spaulding, 1980). Indeed, one clinical marker for children with language impairments is the absence or relative infrequency of Functional Category elements in their speech (Leonard, 1998; Trantham & Pedersen, 1976). As the Functional Categories are acquired, the hierarchical nature of sentences emerges. Although
children in the early word combination stage may produce bare noun and verb phrases, these do not exist in adult English. In sentences generated by competent language users, nouns combine or merge with determiners and become Determiner Phrases. This is true even if there is no overt determiner in a phrase. Similarly, verbs combine or merge with tense elements and become Tense Phrases. Hence, the example in Table 1 (functional elements in bold). This developmental step generally does not proceed smoothly for children with language disorders. In fact, one certain conclusion that can be drawn from the research is that Functional
Categories are especially problematic for children with language disorders (Bedore & Leonard, 1998; Leonard, 1995, 1998; Leonard, Camarata, Pawtowska, Brown, & Camarata, 2006; Rice, 1998; Rice, Wexler, & Cleave, 1995; Rice, Wexler, & Hershberger, 1998; Roeper & Seymour, 1994; Seymour, Roeper, & deVilliers, 2003; Wilson, 2000; Wilson & Pascoe, 1999).
Generating Sentences

Linguists working in the Minimalist Program have made tremendous progress in advancing our understanding of language and its acquisition. In this section we will discuss how linguists describe the generation of sentences. This linguistic view may seem far removed from our subjective experience of producing language, but in fact it provides new insights into the production and comprehension of sentences in all languages. The model also provides direction to scientists working on the biology of language. For example, neuropsychologists have shown that the distinction between the neural circuitry used to produce nouns and that used to produce verbs demonstrates that lexical entries code not only semantic information but grammatical properties as well (Caramazza & Shapiro, 2004). Additionally, studies of adults with Broca's aphasia have revealed very specific syntactic deficits resulting from brain lesions. These deficits are adequately described in terms of the syntactic formulation and comprehension of sentences within the Minimalist Program (Grodzinsky, 2004; 2006). In earlier generative grammar theory, Phrase Structure and Transformational Rules were the mechanisms proposed for sentence generation. In the Minimalist Program, the Computational system of human language (CHL) generates sentences from a lexical array in a principled, economical fashion. Two necessary components in sentence generation and comprehension are the lexicon and the syntactic computational system. The first step in generating a sentence is to get the words from the lexicon that will make up the sentence. Linguists
say that we first make a copy of each lexical item that will be used in the sentence from the lexicon and indicate how many times each will be used. In the sentence "He is hitting the ball" the lexical array, or numeration, would be:

the₁ he₁ is₁ hitting₁ ball₁

Once a lexical numeration has been copied from the lexicon, the syntactic computational system combines words using two operations:

• Merge combines elements in a binary fashion.
• Move copies and then repositions words and/or phrases.

More recently, linguists have begun referring to these two operations as External Merge (Merge) and Internal Merge (Move) because in the case of Merge new material is brought into the structure, while in the case of Move material is repositioned (Chomsky, 2002; 2009; Radford, 2009). Using these operations, the computational system builds sentence structures that can be interpreted for sound and meaning. Unlike in earlier versions of generative grammar, where sentences were built from the top down, within the Minimalist model sentences are built from the bottom up. To generate the sentence "He is hitting the ball," the computational system would first Merge "ball" with "the." When two elements are combined, one becomes the head that dominates the structure. For example, when "ball" combines or Merges with "the" to form the phrase "the ball," the head is the determiner "the." When a noun Merges with a determiner, the Functional Category determiner becomes the head. The phrase has the following structure (Figure 2). This is commonly diagramed with abbreviations for determiner, noun, and phrase (Figure 3). With the Merge of "the" with "ball," we now have a Determiner Phrase (DP) whose head is "the." The next step would be to Merge the DP
Figure 2.
Figure 3.
"the ball," which is the complement, with the verb "hitting" to form (Figure 4). Then the subject, or specifier, "he" would be Merged with "hitting the ball" to form the verb phrase (Figure 5). "He" starts out in the specifier position of the verb phrase (VP), where the verb assigns it the Thematic Role of Agent. Although "he" starts in the specifier of the verb phrase, it will later be copied and moved into the specifier position of the tense phrase. The next step would be to Merge the VP with auxiliary "is," the tense element in the sentence (Figure 6). Now we need to Move "he" into the tense specifier position. We have to do this in order to have the Nominative Case feature valued by the tense element. You will recall that it is the tense element that checks for Nominative Case. Unlike in many other languages, case in English is only overtly marked on pronouns. Nouns are also marked, but the marking is covert in English.
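The bottom-up derivation just described can be sketched as a series of nested binary Merge operations. This is a toy model of our own; the tuple representation and labels are a simplification of the actual theory.

```python
# Toy bottom-up derivation of "He is hitting the ball" using binary Merge,
# following the steps in the text. A syntactic object is represented as a
# (label, left, right) tuple; labels and ordering are our simplification.

def merge(label, a, b):
    """Combine two syntactic objects into a phrase with the given label."""
    return (label, a, b)

dp = merge("DP", "the", "ball")   # Merge determiner with noun -> "the ball"
vp = merge("VP", "hitting", dp)   # Merge verb with its DP complement
vp = merge("VP", "he", vp)        # Merge subject "he" in the VP specifier
tp = merge("TP", "is", vp)        # Merge the tense element "is"
tp = merge("TP", "he", tp)        # Move (Internal Merge): copy "he" to the
                                  # TP specifier so Nominative Case is valued

print(tp)
```

Note that every step is strictly binary and the structure grows from the bottom up, exactly as described; the final line models Move as a copy of existing material rather than the introduction of new material.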
A simplified diagram of the sentence "He is hitting the ball" follows (Figure 7). In current linguistic theory, even though there is no complementizer in the sentence "He is hitting the ball," it has the structure shown in Figure 8. The syntactic structures generated by Merge and Move must be interpreted into sound and meaning. The syntactic component interfaces with the external sound, or Articulatory-Perceptual, system via Phonetic Form (PF); it interfaces with the external meaning, or Conceptual-Intentional, system via Logical Form (LF). An important characteristic of these syntactic structures, then, is that the Phonetic Form representation can contain only the sound information necessary to decode or encode a sentence, and the Logical Form representation can contain only semantic information. This is because our cognitive system can interpret only meaningful information at Logical Form.

Figure 4.

Figure 5.

Figure 6.

Figure 7.

Figure 8.

You will recall that in the Minimalist model lexical items enter the syntactic computational system with all their grammatical features. Some of these features are interpretable at the Conceptual-Intentional system interface and some are not. As an example, consider the sentence "He is big." The pronoun "he" has the following features: 3rd person, masculine, singular, nominative case. The first three features are valued, but the case feature is unvalued. The copula "is," in contrast,
enters the derivation carrying the valued feature present tense and unvalued person and number features. Grammatical features that come into the derivation valued are viewed as interpretable to the Conceptual-Intentional system. They have meaning. Unvalued features cannot be interpreted and play no role in semantic interpretation. Agreement involves having the tense element (“is”) value the unvalued case on the subject so it can be spelled
out at the PF interface as the nominative case pronoun "he." Similarly, the unvalued person and number features on the verb "be" will be valued to reflect agreement with the 3rd person singular subject, "he," which means it will be spoken as "is." The case feature on the pronoun and the person and number features on the verb are deleted immediately after these operations, making them "invisible" to the syntactic and semantic components (Radford, 2009). The Minimalist Program is still in its infancy, but the insights into language and language acquisition it has provided have inspired the development of promising new approaches to language intervention. In the next section we will discuss the instructional research bases for LanguageLinks and Prepositions!
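The valuation-and-deletion sequence described above can be sketched with a toy feature dictionary. This is our own simplification (the dictionary representation, feature names, and `agree` function are invented for illustration, not part of any linguistic formalism's official notation): unvalued features enter as `None`, Agree copies values between the tense probe and the subject goal, and features valued during the derivation are then deleted on the semantic side.

```python
# Toy illustration of feature valuation (Agree) for "He is big."
# The dict encoding and names are our own assumptions for exposition.

subject = {"person": 3, "gender": "masc", "number": "sg", "case": None}
copula = {"tense": "present", "person": None, "number": None}

def agree(probe, goal):
    """Value the probe's unvalued person/number features from the goal,
    and the goal's unvalued case feature from the tense probe."""
    probe["person"] = goal["person"]
    probe["number"] = goal["number"]
    goal["case"] = "nominative"

agree(copula, subject)

# Spell-out at PF: 3rd person singular present "be" -> "is";
# nominative 3rd person masculine singular pronoun -> "he".
assert subject["case"] == "nominative"
assert (copula["person"], copula["number"]) == (3, "sg")

# Features valued during the derivation are deleted, leaving them
# "invisible" to the semantic component:
for f in ("person", "number"):
    del copula[f]
del subject["case"]
```

After deletion, only the features that entered the derivation already valued (and hence interpretable) remain.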
Instructional research

Linguistic theory should guide the choice of content in any language intervention plan, but how that content is delivered should be driven by what we have learned from research into the effectiveness of various instructional methods. While pragmatic competence in social situations revolves around expressive use of language, research has shown that language (vocabulary and syntax) is acquired through listening, not speaking. Language input provides the data necessary for lexical learning and for triggering parameter setting. Pinker (1994, p. 280) stated this succinctly: "It is not surprising that grammar development does not depend on overt practice, because actually saying something aloud, as opposed to listening to what other people say, does not provide the child with information about the language he or she is trying to learn." Critically, then, receptive language training, whether in the realm of vocabulary or syntax, should play a central role in any intervention plan for children with language impairments regardless of etiology. While the ultimate goal may be to develop communicative competence, that goal cannot be reached without first establishing language competence.

Studies have validated the receptive approach to developing language competence. Research has shown that receptive procedures are in fact more effective than expressive imitation procedures in language intervention and can produce gains in production as well as comprehension (Courtright & Courtright, 1976; 1979; Zimmerman & Pike, 1972; Zimmerman & Rosenthal, 1974). The well-established learning principles of behavior analysis (Holland & Skinner, 1961) provide a foundation for instructional design in all of Laureate's language assessment and intervention programs, including LanguageLinks and Prepositions!. The programs also use principles of explicit, or discrete trial, instruction, which employs carefully controlled instruction and stimulus presentation. Over the past thirty years, research has demonstrated that explicit instruction is effective in teaching a variety of language skills (Justice, Chow, Capellini, Flanigan, & Colton, 2003; Maurice, Green, & Luce, 1996; Wilson, 1977).

The language intervention programs also include several kinds of instructional support in training. When pretrial instruction is included, the target stimulus is presented and the target language is spoken before the student is asked to respond. Cueing to the Correct Response (CCR) is also provided on lower training levels. This consists of a variety of visual and auditory attention-focusing techniques, such as an animated character or an arrow appearing above the correct response target. In addition, two kinds of instructional feedback are used in the programs. Even after CCR has been faded, it is still provided following an incorrect response or if no response is made; this always occurs in the earliest vocabulary training programs and is gradually faded as the student advances in syntax.
The student is then given a second chance to respond. The second kind of
feedback is Knowledge of the Correct Response (KCR). In KCR, the learner is always told the correct answer, either as part of the reinforcement sequence following a correct response or as informational feedback following an incorrect response. In all cases, at the end of each trial the learner receives informational feedback indicating the correct response. In our own research, we have found that training using feedback alone was effective (Wilson & Fox, 1983). The effectiveness of these procedures has also been demonstrated across a range of computer-administered instructional programs (Gilman, 1969; Tait, Hartley, & Anderson, 1973; Wilson & Fox, 1981), including Laureate's language development software (e.g., Finn, Futernick, & MacEachern, 2005; Gale, Crofford, & Gillam, 1999; Gillam, Crofford, Gale, & Hoffman, 2001; Gillam & Loeb, 2005; Gillam, Loeb, Hoffman, Bohman, Champlin, Thibodeau, Widen, Brandl, & Friel-Patti, 2008; Miller, 1993).

The use of computer-based language intervention software offers many advantages to clinicians, educators, parents, and administrators. Software programs can provide the highly structured interactions needed to illustrate the formal aspects of language. Additionally, computers provide a cost-efficient delivery system for individualized language intervention. Children can use language intervention software in classrooms and homes and thereby receive individualized services beyond those delivered by a speech-language pathologist. Children also enjoy working with properly designed educational software. One investigation found that three- to six-year-old children with Autism Spectrum Disorder were more attentive and motivated when using a computer, and actually learned and retained more vocabulary than they did during one-on-one instruction with a teacher (Moore & Calvert, 2000). Most importantly, research has shown that language intervention software works.
Significantly improved language development and communication skills have
been documented when regular use of language intervention software was added to the ongoing special education curriculum in a typical classroom setting. Moreover, using language intervention software with non-professional adult assistance, children with special needs can make language gains comparable to those seen during individual language therapy with a speech-language pathologist (Gale, Crofford, & Gillam, 1999; Gillam & Loeb, 2005; Gillam, Loeb, Hoffman et al., 2008; Howard, 1986; Schery & O’Connor, 1995; Steiner & Larson, 1991; Wilson & Fox, 1983; 1986).
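The trial structure described in this section, pretrial instruction, CCR cueing, a second response opportunity after an error, and KCR feedback closing every trial, can be outlined as a simple loop. The sketch below is a hypothetical reconstruction of that logic, not Laureate's actual code; every function and parameter name is invented for illustration.

```python
# Hypothetical outline of one discrete trial with the supports described
# in the text (names are our own; not Laureate's implementation).

def present_stimulus_and_speak_target(target):
    print(f"[pretrial] presenting and speaking '{target}'")

def cue_correct_response(target):
    print(f"[CCR] arrow/character highlights '{target}'")

def give_kcr(target):
    print(f"[KCR] the correct answer is '{target}'")

def run_trial(get_response, target, pretrial=False, ccr_active=False):
    """Run one trial; return True if the first response was correct."""
    if pretrial:                          # pretrial instruction, when included
        present_stimulus_and_speak_target(target)
    if ccr_active:                        # CCR on lower training levels
        cue_correct_response(target)
    first = get_response()
    correct = first == target
    if not correct:                       # CCR returns after an error...
        cue_correct_response(target)
        get_response()                    # ...with a second chance to respond
    give_kcr(target)                      # KCR ends every trial
    return correct
```

For example, `run_trial(lambda: "ball", "ball")` reinforces on the first try, while a wrong first response triggers cueing and a second attempt before the closing KCR.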
Applying theory and research

To acquire a language, children must be exposed to primary linguistic data (i.e., language input). Based on this input, they must learn the lexicon, set parameters, and become competent users of the computational system to generate sentences. Children with language disorders can experience difficulties with any or all of these linguistic processes. As discussed earlier, receptive language training is best suited to developing a lexicon, setting parameters, and establishing syntactic competence. As such, receptive language intervention should be an essential component in all programs for children with language disorders until they have mastered grammar. For busy clinicians and educators, finding time to provide evidence-based receptive language intervention is difficult. That's where software can really help.

In typically developing children, determiners, tense, and prepositions begin appearing in the early two-word stage. Once a child with a language disorder has entered the two-word stage, targeted training on Functional Category forms and prepositions should be provided. Learning determiner, tense, and preposition forms is a critical step in the mastery of syntax. To facilitate the acquisition of these forms, children with language disorders must be systematically exposed to sentences that feature them in highly salient contexts. The LanguageLinks: Syntax Assessment & Intervention
and Prepositions! programs are designed to help children with syntactic deficits achieve language competence using this approach. They are the first comprehensive syntax intervention programs to be based on current linguistic theory and instructional research and to have field test data supporting their use. Figure 9 lists the grammatical forms trained in Levels 1-6 of LanguageLinks, presented in the developmental order in which they are trained. Each Level in LanguageLinks contains six Modules, each of which trains two or three grammatically contrasting determiner or tense forms. The LanguageLinks system takes children with language impairments from the early two-word development stage through the mastery of a broad range of syntactic forms in the determiner and tense categories.

In addition to determiner and tense forms, prepositions play an important role in early syntax development. Learning prepositions is an essential step in language mastery, since prepositions often make an essential contribution to sentence meaning by signifying relative temporal and spatial relationships of many kinds, as well as relations involving cause, purpose, manner, means, viewpoint, and much more. Fundamentally, prepositions serve to indicate a relationship between elements in a sentence, with one of these elements being the prepositional complement or object (Quirk, Greenbaum, Leech, & Svartvik, 1985). This being the case, prepositions require a complement. Thus, an important step in syntax acquisition involves learning prepositions and their use with complements in prepositional phrases. Prepositions! Sterling Edition was designed to teach ten spatial prepositions (in, on, under, in front of, in back of, next to, above, below, behind, and between) and their use in sentences, a necessary step in the mastery of syntax and toward school success. Spatial, or locative, prepositions are especially important in early language development.
Semantically, these are used to express concepts of location or position. As such, knowledge of spatial
prepositions plays an important role in ostensive word learning and is critical to commenting on the position of objects in the environment. Prepositions enter the lexicon early in the word combination stage. The prepositions "in" and "on" are typically cited as the two earliest developing spatial prepositions; these were among the 14 grammatical morphemes studied in Brown's classic book A First Language: The Early Stages (1973). Many spatial prepositions consist of a single word (in, on) and are classified as simple, while others consist of a two- or three-word sequence (next to, in front of) and are classified as complex. The earliest prepositions to be acquired are simple ones, although other simple prepositions develop after children have learned some complex forms. For example, "behind" develops later than "in back of" (Stemach & Williams, 1988). The six Modules in Prepositions! train 10 essential prepositions in a variety of contexts. Like all the other Sterling Edition language intervention programs, LanguageLinks and Prepositions! both use an expert Optimized Intervention® system to automatically deliver both assessment and intervention based on student responses.
Optimized Intervention®

Optimized Intervention efficiently assesses students and then enters them into training at an appropriate level. This system was originally inspired by methodology developed by the Software Technology Branch of the National Aeronautics and Space Administration (NASA) at the Johnson Space Center (Way, 1993). This group had developed software to train space shuttle astronauts that incorporated many useful features. In particular, the software was able to codify the knowledge and skills of professionals in order to present customized lesson content, evaluate progress during a lesson, and revise the curriculum based on individual patterns of strengths and weaknesses. In the 1990s, representatives from NASA and
Figure 9. LanguageLinks: syntax assessment & intervention levels and modules
a panel of special educators from the Center for Special Education Technology and the Council for Exceptional Children identified the emerging language problems of children with disabilities as a critical problem in special education that might productively be addressed using NASA’s methodology. Subsequently, Laureate Learning Systems was invited to enter into a Technology Transfer Agreement with NASA. Since that time, Laureate has developed and field-tested a long series of Optimized Intervention systems for language intervention. Critical to this extended endeavor was the support of the National Institutes of Health, including Small Business Innovation Research (SBIR) awards from the National Institute on Deafness and Other Communication Disorders (NIDCD) and the National Institute on Child Health and Human Development (NICHD).2
The Optimized Intervention systems in Laureate's Sterling Edition software are the culmination of these research and development efforts. The systems use artificial intelligence methodology to select appropriate training material and to adjust instructional support in relation to emerging skills and competencies, resulting in highly individualized and efficient language instruction. The systems also feature extensive data collection and reporting capabilities, thereby greatly simplifying the process of tracking student progress and generating reports detailing areas of strength and weakness. Each Sterling Edition language intervention program has an Optimized Intervention system uniquely designed to test and train its curricular targets in developmental order. All programs begin by probe testing the target words, concepts, or syntactic forms in developmental
order to ascertain the appropriate place to begin training. Once training begins, Optimized Intervention determines what material a student needs to work on and how much instructional support the student may require to make progress. When using the Optimized Intervention activity in LanguageLinks and Prepositions!, probe testing to determine where to begin training on a form ends after the third error. Testing continues through all 10 stimuli for a form if the student makes two or fewer errors. Even if a student achieves a score of 80% or higher on one form, it still goes into training if the student fails to demonstrate knowledge of the other form(s) in the Module. Since the forms in a Module present a grammatical contrast, we believe it is important for students to be exposed to all the contrasting forms. Students must be able to discriminate the contrasts in any form family. This also serves to rule out the possibility that a response bias (e.g., always choosing just one of the forms in a family) could be misconstrued as knowledge of the form. For example, in the Module that tests and trains the Me/You contrast, most children who do not know the forms will make errors on both, but some children always choose the item in the foreground ("Find the hot chocolate for you") while others always choose the object that the speaker on the screen is holding ("Find the hot chocolate for me"). These latter two groups of children will score 100% on one of the forms, yet clearly do not understand the contrast. Optimized Intervention training continues until a student has demonstrated mastery over all forms in a Module. If the student continues to fail to reach Criterion for a given form or forms in a Module, training on that Module is postponed. Training resumes after the student has gone through the other Modules on the Level in the case of LanguageLinks, or the remaining Modules in the program in the case of Prepositions!
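The probe rules just described reduce to a compact decision procedure: testing on a form stops at the third error, continues through all ten stimuli at two or fewer errors, and a Module enters training unless every form in it meets criterion. The sketch below is our reconstruction of that logic; the thresholds come from the text, while the function names are invented for illustration and are not Laureate's API.

```python
# Reconstruction of the probe-testing logic described in the text
# (illustrative; not Laureate's code).

def probe_form(responses, n_stimuli=10, stop_after_errors=3):
    """Present up to n_stimuli items; stop after the third error.
    Returns (items_presented, number_correct)."""
    presented = correct = errors = 0
    for ok in responses[:n_stimuli]:
        presented += 1
        if ok:
            correct += 1
        else:
            errors += 1
            if errors >= stop_after_errors:
                break
    return presented, correct

def module_enters_training(form_scores, criterion=0.8):
    """A Module trains as a unit: even a form at or above criterion is
    trained if any contrasting form in the Module falls below it."""
    return any(score < criterion for score in form_scores)

# A response bias (e.g., always choosing "me") yields 100% on one form
# and 0% on its contrast, so the Module still enters training:
assert module_enters_training([1.0, 0.0]) is True
```

This captures why a perfect score on a single form cannot be taken as knowledge of the form: the Module-level check requires criterion on all contrasting forms.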
The power of the Optimized Intervention system, combined with its ease of use, means that speech-language pathologists and other professionals can confidently recommend the use of Sterling Edition programs in classrooms, thereby increasing the amount of individualized language intervention services provided in inclusionary settings. Optimized Intervention assures that the program content is delivered in a sound progression and manner. The extensive data collection and reporting capabilities of the programs ensure that the recommending professional can review in detail a student's performance within and across sessions. Increasing the amount of individualized language services provided means that students will meet their goals more quickly. While increasing services by providing one-on-one professional treatment on a daily basis is often prohibitively expensive, that is not the case with computer-delivered services. With LanguageLinks and Prepositions!, services can be delivered on a daily basis to all students who could benefit from the training the system provides, supplying the intensive receptive language intervention needed to establish language competence while freeing the professional to work on additional important goals.
Field testing research on LanguageLinks and Prepositions!

In 2005, a field study was conducted in the Medford, Massachusetts, Public Schools Early Education Program using prototype Modules from what would later become LanguageLinks and Prepositions! (Finn, Futernick, & MacEachern, 2005). Current linguistic theory and research highlight the importance of syntactic competence, but mastery of syntax is especially problematic for children with language impairments. Given the syntax deficits of students with language impairments, it was hypothesized that use of syntax intervention software designed to train Functional Category determiner and tense forms as well as prepositions would result in greater increases in language scores than use of software designed for vocabulary and concept building.
In the Medford study, subjects were 22 preschool children (5 females, 17 males) with initial ages of 3;0 to 4;10 (Mean=4;0). They had been classified as having language impairments prior to enrollment. Subjects were from five classes led by three different teachers. All classrooms included typically developing peers in addition to the children on IEPs. In one classroom, the only special education students were those with an Autism Spectrum Disorder diagnosis. The other classrooms had a mixture of children with Specific Language Impairments, Pervasive Developmental Disorders, and Developmental Disorders. All were receiving speech-language pathology services as part of their program. The language status of each subject was evaluated using the Comprehensive Assessment of Spoken Language (CASL) (Carrow-Woolfolk, 1999). Standard scores on the core tests (Basic Concepts, Syntax Construction, Pragmatic Judgment) for subjects’ age levels were determined and Core Composite (CC) standard scores were calculated. Subjects were matched based on age, CC score, and classroom, and then randomly assigned to the experimental or control group. Classroom computers were set up to run the software. Subjects in the experimental group used prototype Modules from the LanguageLinks and Prepositions! syntax intervention system. Those in the control group used other Laureate programs designed to train vocabulary and categorization. Teachers were asked to use the appropriate software with each subject for approximately 15 minutes per day, several times per week if possible. Children’s interest level and attention span were to be taken into account, however, and no child was to be compelled to participate. Software use continued for 12 weeks. After this, subjects were once again evaluated using the CASL. Children’s CC standard scores before and after software use were analyzed using a two-way (group x trials) mixed design analysis of variance. 
All but two children had improved CC standard scores at the end of the 12-week study. Overall
gains in scores averaged 7.045 ± 1.58 points (mean ± SEM). This increase was significant (Trials, F(1,20)=22.6, p