Student Satisfaction and Learning Outcomes in E-Learning: An Introduction to Empirical Research

Sean B. Eom, Southeast Missouri State University, USA
J.B. Arbaugh, University of Wisconsin Oshkosh, USA
Senior Editorial Director: Kristin Klinger
Director of Book Publications: Julia Mosemann
Editorial Director: Lindsay Johnston
Acquisitions Editor: Erika Carter
Development Editor: Hannah Ablebeck
Production Editor: Sean Woznicki
Typesetters: Natalie Pronio, Jennifer Romanchak, Milan Vracarich Jr. and Keith Glazewski
Print Coordinator: Jamie Snavely
Cover Design: Nick Newcomer
Published in the United States of America by Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue, Hershey PA 17033
Tel: 717-533-8845; Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com/reference

Copyright © 2011 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Student satisfaction and learning outcomes in e-learning: an introduction to empirical research / S. Eom and J.B. Arbaugh, editors.
p. cm.
Includes bibliographical references and index.
Summary: “This book familiarizes prospective researchers with processes and topics for conducting research in e-learning, addressing Theoretical Frameworks, Empirical Research Methods and Tutorial, Factors Influencing Student Satisfaction and Learning Outcomes, and Other Applications of Theory and Method”--Provided by publisher.
ISBN 978-1-60960-615-2 (hardcover) -- ISBN 978-1-60960-616-9 (ebook)
1. Computer-assisted instruction. 2. Distance education. 3. Learning, Psychology of. I. Eom, Sean B. II. Arbaugh, J. B., 1962-
LB1044.87.S848 2011
371.33’44678--dc22
2010040622
British Cataloguing in Publication Data A Cataloguing in Publication record for this book is available from the British Library. All work contributed to this book is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
Table of Contents
Preface................................................................................................................................................ xiv

Section 1
Theoretical Frameworks

Chapter 1
Multi-Disciplinary Studies in Online Business Education: Observations, Future Directions, and Extensions........................................................................................................... 1
J. B. Arbaugh, University of Wisconsin Oshkosh, USA

Chapter 2
Learning and Satisfaction in Online Communities of Inquiry............................................................... 23
Zehra Akyol, Canada
D. Randy Garrison, University of Calgary, Canada
Section 2
Empirical Research Methods and Tutorial

Chapter 3
A Review of Research Methods in Online and Blended Business Education: 2000-2009.................... 37
J. B. Arbaugh, University of Wisconsin Oshkosh, USA
Alvin Hwang, Pace University, USA
Birgit Leisen Pollack, University of Wisconsin Oshkosh, USA

Chapter 4
An Introduction to Path Analysis Modeling Using LISREL................................................................. 57
Sean B. Eom, Southeast Missouri State University, USA

Chapter 5
Testing the DeLone-McLean Model of Information System Success in an E-Learning Context......... 82
Sean B. Eom, Southeast Missouri State University, USA
James Stapleton, Southeast Missouri State University, USA
Chapter 6
An Introduction to Structural Equation Modeling (SEM) and the Partial Least Squares (PLS) Methodology........................................................................................... 110
Nicholas J. Ashill, American University of Sharjah, United Arab Emirates

Chapter 7
Using Experimental Research to Investigate Students’ Satisfaction with Online Learning.................. 130
Art W. Bangert, Montana State University, USA

Chapter 8
Student Performance in E-Learning Environments: An Empirical Analysis Through Data-Mining.......................................................................................................................... 149
Constanta-Nicoleta Bodea, Academy of Economic Studies, Romania
Vasile Bodea, Academy of Economic Studies, Romania
Ion Gh. Roşca, Academy of Economic Studies, Romania
Radu Mogos, Academy of Economic Studies, Romania
Maria-Iuliana Dascalu, Academy of Economic Studies, Romania

Chapter 9
How to Design, Develop, and Deliver Successful E-Learning Initiatives........................................... 195
Clyde Holsapple, University of Kentucky, USA
Anita Lee-Post, University of Kentucky, USA

Section 3
Factors Influencing Student Satisfaction and Learning Outcomes

Chapter 10
Quality Assurance in E-Learning......................................................................................................... 231
Stacey McCroskey, Online Adjunct Professor, USA
Jamison V. Kovach, University of Houston, USA
Xin (David) Ding, University of Houston, USA
Susan Miertschin, University of Houston, USA
Sharon Lund O’Neil, University of Houston, USA

Chapter 11
Measuring Success in a Synchronous Virtual Classroom.................................................................... 249
Florence Martin, University of North Carolina Wilmington, USA
Michele A. Parker, University of North Carolina Wilmington, USA
Abdou Ndoye, University of North Carolina Wilmington, USA

Chapter 12
Factors Influencing User Satisfaction with Internet-Based E-Learning in Corporate South Africa..... 267
Craig Cadenhead, University of Cape Town, South Africa
Jean-Paul Van Belle, University of Cape Town, South Africa
Chapter 13
Student Personality and Learning Outcomes in E-Learning: An Introduction to Empirical Research.............................................................................................................................. 294
Eyong B. Kim, University of Hartford, USA

Chapter 14
A Method for Adapting Learning Objects to Students’ Preferences.................................................... 316
Ana Sanz Esteban, University Carlos III of Madrid, Spain
Javier Saldaña Ramos, University Carlos III of Madrid, Spain
Antonio de Amescua Seco, University Carlos III of Madrid, Spain
Section 4
Other Applications of Theory and Method

Chapter 15
Understanding Graduate Students’ Intended Use of Distance Education Platforms........................... 340
María del Carmen Jiménez-Munguía, Universidad de las Américas Puebla, México
Luis Felipe Luna-Reyes, Universidad de las Américas Puebla, México

Chapter 16
Online Project-Based Learning: Students’ Views, Concerns and Suggestions.................................... 357
Erman Yukselturk, Middle East Technical University, Turkey
Meltem Huri Baturay, Kırıkkale University, Turkey

Chapter 17
Students’ Perception, Interaction, and Satisfaction in the Interactive Blended Courses: A Case Study........................................................................................................................................ 375
Bünyamin Atici, Firat University, Turkey
Yalın Kılıç Türel, Firat University, Turkey

Compilation of References............................................................................................................... 392
About the Contributors.................................................................................................................... 430
Index.................................................................................................................................................... 438
Detailed Table of Contents
Preface................................................................................................................................................ xiv

Section 1
Theoretical Frameworks

The first section, Theoretical Frameworks, introduces readers to emerging methodological and theoretical perspectives for effective empirical e-learning research. The two chapters in the book’s first section present a case for increased use of multi-course, multi-disciplinary studies and provide an overview and application of an increasingly influential model of e-learning effectiveness, the Community of Inquiry framework.

Chapter 1
Multi-Disciplinary Studies in Online Business Education: Observations, Future Directions, and Extensions........................................................................................................... 1
J. B. Arbaugh, University of Wisconsin Oshkosh, USA

This chapter argues that research in online teaching and learning in higher education should take a multi-disciplinary orientation, especially in settings whose curricula are drawn from several disciplinary perspectives, such as business schools. The benefits of a multi-disciplinary approach include curriculum integration and enhanced communication and collective methodological advancement among online teaching and learning scholars from the disciplines that comprise the integrated curricula. After reviewing multi-disciplinary studies in business education published to date, the chapter concludes with recommendations for advancing research in this emerging stream. The primary recommendations include using academic discipline as a moderating variable, incorporating samples composed of faculty and/or undergraduate students, and developing more comprehensive measures of student learning.

Chapter 2
Learning and Satisfaction in Online Communities of Inquiry............................................................... 23
Zehra Akyol, Canada
D. Randy Garrison, University of Calgary, Canada
The purpose of this chapter is to explain the capability of the Community of Inquiry (CoI) framework as a research model to study student learning and satisfaction. The framework identifies three elements (social, cognitive and teaching presence) that contribute directly to the success of an e-learning experience through the development of an effective CoI. It is argued that a CoI leads to higher learning and increased satisfaction. The chapter presents findings from two online courses designed using the CoI approach. Overall, the students in these courses had high levels of perceived learning and satisfaction as well as actual learning outcomes.
Section 2 Empirical Research Methods and Tutorial The second section of the book is titled Empirical Research Methods and Tutorial. Because empirical research in e-learning is our topic of interest, it seems particularly appropriate that research methods are the focus of the book’s second section. After a review of research methods employed to date in a relatively active discipline, the book’s second section chronicles and provides examples of several of the structural equation modeling techniques whose increased use was called for in the review chapter. This section also includes chapters that deal with higher order multivariate techniques, experimental designs, data mining, and action research in furthering our understanding of e-learning success.
Chapter 3
A Review of Research Methods in Online and Blended Business Education: 2000-2009.................... 37
J. B. Arbaugh, University of Wisconsin Oshkosh, USA
Alvin Hwang, Pace University, USA
Birgit Leisen Pollack, University of Wisconsin Oshkosh, USA

This review of the online teaching and learning literature in business education found growing sophistication in analytical approaches over the last 10 years. We believe researchers are uncovering important findings from the large number of predictors, control variables, and criterion variables examined. Scholars are employing appropriate and increasingly sophisticated techniques, such as structural equation models in recent studies (16 of them) within field settings. To increase methodological rigor, researchers need to consciously incorporate control variables that are known to influence criterion variables of interest so as to clearly partial out the influence of their predictor variables of interest. This will help address shortcomings arising from the inability to convince sample respondents such as instructors, institutional administrators, and graduate business students of the benefits versus the cost of a fully randomized design approach.

Chapter 4
An Introduction to Path Analysis Modeling Using LISREL................................................................. 57
Sean B. Eom, Southeast Missouri State University, USA
Over the past decade, we have seen a wide range of empirical research in the e-learning literature, and the use of multivariate statistical tools has been a staple of this research stream throughout. Path analysis modeling is one of four related multivariate statistical models: regression, path analysis, confirmatory factor analysis, and structural equation models. This chapter focuses on path analysis modeling for beginners using LISREL 8.70. Topics covered include foundational concepts, assumptions, and the steps of path analysis modeling. The major steps explained in this chapter consist of specification, identification, estimation, testing, and modification of models.

Chapter 5
Testing the DeLone-McLean Model of Information System Success in an E-Learning Context......... 82
Sean B. Eom, Southeast Missouri State University, USA
James Stapleton, Southeast Missouri State University, USA

This chapter has two important objectives: (a) introducing structural equation modeling to beginners, and (b) empirically testing the validity of the information system (IS) success model of DeLone and McLean (the DM model) in an e-learning environment, using LISREL-based structural equation modeling. The chapter briefly describes the prior literature on course delivery technologies and e-learning success, presents the research model tested and a discussion of the survey instrument, fully discusses the structural equation modeling process (specification, identification, estimation, testing, and modification of the model), and summarizes the test results. To build e-learning theories, untested conceptual frameworks must be tested and refined; yet there has been very little testing of these frameworks. This chapter is concerned with the testing of one such framework. There is abundant prior research that examines the relationships among information quality, system quality, system use, user satisfaction, and system outcomes, but this is the first study that focuses on testing the DM model in an e-learning context.

Chapter 6
An Introduction to Structural Equation Modeling (SEM) and the Partial Least Squares (PLS) Methodology........................................................................................... 110
Nicholas J. Ashill, American University of Sharjah, United Arab Emirates

Over the past 15 years, the use of Partial Least Squares (PLS) in academic research has enjoyed increasing popularity in many social sciences, including Information Systems, marketing, and organizational behavior. PLS can be considered an alternative to covariance-based SEM and has greater flexibility in handling various modeling problems in situations where it is difficult to meet the hard assumptions of more traditional multivariate statistics. This chapter focuses on PLS for beginners. Topics covered include foundational concepts in SEM, the statistical assumptions of PLS, a LISREL-PLS comparison, and reflective and formative measurement.

Chapter 7
Using Experimental Research to Investigate Students’ Satisfaction with Online Learning.................. 130
Art W. Bangert, Montana State University, USA
The use of experimental research for investigating the effectiveness of technology-supported instructional innovations in K-12 and higher education settings is fairly limited. The implementation of the No Child Left Behind Act (NCLB) of 2001 has renewed the emphasis on the use of experimental research for establishing evidence to support the effectiveness of instructional interventions and other school-based programs in K-12 and higher education contexts. This chapter discusses the most common experimental designs and the threats to internal validity of experimental procedures that must be controlled to ensure that the interventions or programs under investigation are responsible for changes in the dependent variables of interest. A study by Bangert (2008) is used to illustrate procedures for conducting experimental research, controlling potential threats to internal validity, and reporting results that communicate both practical and statistical significance.

Chapter 8
Student Performance in E-Learning Environments: An Empirical Analysis Through Data-Mining.......................................................................................................................... 149
Constanta-Nicoleta Bodea, Academy of Economic Studies, Romania
Vasile Bodea, Academy of Economic Studies, Romania
Ion Gh. Roşca, Academy of Economic Studies, Romania
Radu Mogos, Academy of Economic Studies, Romania
Maria-Iuliana Dascalu, Academy of Economic Studies, Romania

The aim of this chapter is to explore the application of data mining for analyzing the performance and satisfaction of students enrolled in an online two-year master’s degree programme in project management. This programme is delivered by the Academy of Economic Studies, the biggest Romanian university in economics and business administration, in parallel as an online programme and as a traditional one. The main data sources for the mining process are the survey gathering students’ opinions, the operational database with the students’ records, and data regarding student activities recorded by the e-learning platform. More than 180 students responded, and more than 150 distinct characteristics/variables per student were identified. Given the large number of variables, data mining is a recommended approach to analyzing these data. Clustering, classification, and association rules were employed to identify the factors explaining students’ performance and satisfaction, and the relationships between them. The results are very encouraging and suggest several future developments.

Chapter 9
How to Design, Develop, and Deliver Successful E-Learning Initiatives........................................... 195
Clyde Holsapple, University of Kentucky, USA
Anita Lee-Post, University of Kentucky, USA

The purposes of this chapter are three-fold: (1) to present findings on the success factors for designing, developing, and delivering e-learning initiatives, (2) to examine the applicability of Information Systems theories to the study of e-learning success, and (3) to demonstrate the usefulness of action research in furthering understanding of e-learning success. Inspired by issues and challenges experienced in developing an online course, a process approach for measuring and assessing e-learning success is advanced. This approach adopts an Information Systems perspective on e-learning success to address the question of how to guide the design, development, and delivery of successful e-learning initiatives.
The validity and applicability of the process approach to measuring and assessing e-learning success is demonstrated in empirical studies involving cycles of action research. Merits of this approach are discussed, and its contributions in paving the way for further research opportunities are presented.

Section 3
Factors Influencing Student Satisfaction and Learning Outcomes

The third section of the book examines particular influences on e-learning course outcomes in a variety of settings. These chapters examine factors such as learner dispositional and behavioral characteristics, quality assurance frameworks for e-learning effectiveness, and course content design and development, and their roles in shaping effective e-learning environments. To date, much of e-learning research has focused on asynchronous learning environments (exemplified by the Journal of Asynchronous Learning Networks) based in higher education settings. However, these are not the only contexts in which e-learning occurs. Therefore, we also address the alternative and potentially increasingly important settings of synchronous course delivery and corporate learning environments in this section.

Chapter 10
Quality Assurance in E-Learning......................................................................................................... 231
Stacey McCroskey, Online Adjunct Professor, USA
Jamison V. Kovach, University of Houston, USA
Xin (David) Ding, University of Houston, USA
Susan Miertschin, University of Houston, USA
Sharon Lund O’Neil, University of Houston, USA

Quality is a subjective concept, and as such, there are many criteria for assuring quality, including assessment practices based on industry standards and accreditation requirements. Most assessments, including quality assurance in e-learning, occur at three levels: individual course assessments, department or program assessments, and institutional assessments; frequently these levels cannot be distinctly delineated. While student evaluations are usually included within these frameworks, student views are but one variable in the quality assessment equation. To offer some plausible perspectives of how students view quality, this chapter provides an overview of quality assurance for online learning from the course, program, and institutional viewpoints, and reviews some of the key research related to students’ assessment of what constitutes quality in online courses.

Chapter 11
Measuring Success in a Synchronous Virtual Classroom.................................................................... 249
Florence Martin, University of North Carolina Wilmington, USA
Michele A. Parker, University of North Carolina Wilmington, USA
Abdou Ndoye, University of North Carolina Wilmington, USA

This chapter will benefit those who teach individuals using the synchronous virtual classroom (SVC). The SVC model will help instructors design online courses that incorporate the factors that students need to be successful. This model will also help virtual classroom instructors and managers develop
a systematic way of identifying and addressing the external and internal factors that might impact the success of their instruction. The strategies for empirically researching the SVC, which range from qualitative inquiry to experimental design, are discussed along with practical examples. This information will benefit instructors, researchers, non-profit and profit organizations, and academia.

Chapter 12
Factors Influencing User Satisfaction with Internet-Based E-Learning in Corporate South Africa..... 267
Craig Cadenhead, University of Cape Town, South Africa
Jean-Paul Van Belle, University of Cape Town, South Africa

This chapter examines the factors that influence user satisfaction with Internet-based learning in the South African corporate environment. An electronic survey was administered, and one hundred and twenty responses from corporations across South Africa were received. Only five of the thirteen factors were found to exert a statistically significant influence on learner satisfaction: instructor response towards the learners, instructor attitude toward Internet-based learning, the flexibility of the course, perceived usefulness, perceived ease of use, and the social interaction experienced by the learner in assessments. Interestingly, four of those five were also identified as significant in a similar Taiwanese study, which provides an interesting cross-cultural validation for the findings, even though our sample was different and smaller. Perhaps surprisingly, none of the six demographic variables exerted a significant influence. It is hoped that organisations and educational institutions can note and make use of the important factors in conceptualizing and designing their e-learning courses.
Chapter 13
Student Personality and Learning Outcomes in E-Learning: An Introduction to Empirical Research.............................................................................................................................. 294
Eyong B. Kim, University of Hartford, USA

Web-based courses are a popular format in the e-learning environment. Among students enrolled in Web-based courses, some students learn a lot while others do not. There are many possible reasons for the differences in learning outcomes (e.g., students’ learning styles, satisfaction, motivation, etc.). In the last few decades, students’ personality has emerged as an important factor influencing learning outcomes in a traditional classroom environment. Among different personality models, the Big-Five model of personality has been successfully applied to help understand the relationship between personality and learning outcomes. Because Web-based courses are becoming popular, the Big-Five model is applied here to find out whether students’ personality traits play an important role in Web-based course learning outcomes.

Chapter 14
A Method for Adapting Learning Objects to Students’ Preferences.................................................... 316
Ana Sanz Esteban, University Carlos III of Madrid, Spain
Javier Saldaña Ramos, University Carlos III of Madrid, Spain
Antonio de Amescua Seco, University Carlos III of Madrid, Spain
The development of information and communications technologies (ICT) in recent years has led to new forms of education, and consequently, e-learning systems. Several learning theories and styles define learning in different ways. This chapter analyzes these different learning theories and styles, as well as the main standards for creating content, with the goal of developing a proposal for structuring courses and organizing material that best fits students’ needs, in order to increase motivation and improve the learning process.
Section 4
Other Applications of Theory and Method

The fourth section of the book includes three chapters that deal with other applications of e-learning theory and method. The book’s final section extends the approach of alternative e-learning theory and environments through applying the Unified Theory of Acceptance and Use of Technology (UTAUT), project-based learning, and blended learning.

Chapter 15
Understanding Graduate Students’ Intended Use of Distance Education Platforms........................... 340
María del Carmen Jiménez-Munguía, Universidad de las Américas Puebla, México
Luis Felipe Luna-Reyes, Universidad de las Américas Puebla, México

The objective of this chapter is to use the Unified Theory of Acceptance and Use of Technology to better understand graduate students’ intended use of distance education platforms, using as a case a distance education platform of a Mexican university, the SERUDLAP system. Four constructs are hypothesized to play a significant role: performance expectancy, effort expectancy, social influence, and attitude toward using technology; the moderating factors were gender and voluntariness of use. Data for the study were gathered through an online survey with a response rate of about 41%. Results suggested that performance expectancy and attitude toward technology are factors that help us understand graduate students’ intended use of a distance education platform. Future research must consider the impact of factors such as previous experiences, age, and facilitating conditions in order to better understand students’ behavior.

Chapter 16
Online Project-Based Learning: Students’ Views, Concerns and Suggestions.................................... 357
Erman Yukselturk, Middle East Technical University, Turkey
Meltem Huri Baturay, Kırıkkale University, Turkey

This study integrated project-based learning (PBL) into an online environment and aimed to investigate the critical issues, dynamics, and challenges related to PBL from the perspectives of 49 students in an online course. The effect of PBL was examined qualitatively through an open-ended questionnaire, observations, and the submissions of students taking an online certificate course. According to the findings, students thought that an online PBL course supports their professional development with the provision of practical knowledge, enhanced project development skill, self-confidence, and research capability. This support is further augmented by the facilities of the online learning environment. Students
mostly preferred team work to individual work. Although students were generally satisfied with the course, they still had some suggestions for prospective students and instructors. The findings are particularly important for those who are planning to organize courses or activities that involve online PBL and for those who are about to take an online or face-to-face PBL course.

Chapter 17
Students’ Perception, Interaction, and Satisfaction in the Interactive Blended Courses: A Case Study........................................................................................................................................ 375
Bünyamin Atici, Firat University, Turkey
Yalın Kılıç Türel, Firat University, Turkey

Blended courses, which offer students and teachers several possibilities, such as becoming more interactive and more active, have become increasingly widespread in both K-12 and higher education settings. With the rise of cutting-edge technologies, institutions and instructors have embarked on creating new learning environments with a variety of new delivery methods. At the same time, designing visually impressive and attractive blended settings for students has become easier with extensive learning and content management systems (LMS, CMS, LCMS) such as Blackboard, WebCT, and Moodle, and virtual classroom environments (VLE) such as Adobe Connect, Dimdim, and WiZiQ. In this study, the authors aimed to investigate students’ perspectives on and satisfaction with the designed interactive blended learning settings and to find out the students’ views on both the synchronous and asynchronous interactive blended learning environment (IBLE).

Compilation of References............................................................................................................... 392
About the Contributors.................................................................................................................... 430
Index.................................................................................................................................................... 438
Preface
From its humble origins approximately 30 years ago (Hiltz & Turoff, 1978), e-learning may now be entering a golden age. In the United States, 4.6 million students took at least one online course during Fall 2008, a seventeen percent increase from the previous year. U.S. schools offering these courses have seen increases in demand for e-learning options, with 66 percent and 73 percent of responding schools reporting increased demand for new and existing online course offerings, respectively (Allen & Seaman, 2010). Similar reactions to e-learning are occurring across the globe. The European Union’s Lifelong Learning Programme will be investing much of its 7 billion euro budget between 2007 and 2013 in the development and enhancement of e-learning tools and open collaboration initiatives (European Commission, 2010). Institutions such as Ramkhamhaeng University in Thailand, the Indira Gandhi National Open University in India, and the Open University of Malaysia are adopting e-learning to help manage enrollments approaching 2 million students (Bonk, 2009). With e-learning beginning to expand beyond its historic roots in higher education to K-12 educational settings, and with populous nations such as China, India, and Indonesia only beginning to embrace e-learning, it appears that the demand for online learning will only increase in the future, and likely increase dramatically. But in spite of such potential promise for e-learning, support for delivering education through this medium is far from unanimous. Empirical studies suggest that online education is not a universal innovation applicable to all types of instructional situations. Online education can be a superior mode of instruction if it is targeted to learners with specific learning styles (visual and read/write) (Eom, Ashill, & Wen, 2006) or particular personality characteristics (Schniederjans & Kim, 2005), and if it is accompanied by timely, helpful instructor feedback of various types. Although cognitive and diagnostic feedback are important factors that improve perceived learning outcomes, metacognitive feedback can also induce students to become self-regulated learners. Recent meta-analytic studies (Means, Toyama, Murphy, Bakia, & Jones, 2009; Sitzmann, Kraiger, Stewart, & Wisher, 2006) also suggest that e-learning outcomes now equal, and in some cases surpass, those provided in classroom-based settings. However, concerns regarding this delivery medium’s effectiveness persist (Morgan & Adams, 2009; Sarker & Nicholson, 2005). Some question its appropriateness for the delivery of technically-oriented or skills-based content (Hayes, 2007; Kellogg & Smith, 2009; Marriott, Marriott, & Selwyn, 2004). Others bemoan a lack of direct contact between students and instructors (Haytko, 2001; Tanner, Noser, & Totaro, 2009; Wilkes, Simon, & Brooks, 2006). Still others associate the medium with for-profit universities, and therefore lump its use in with the practices of low standards and high-pressure marketing associated with some of those types of institutions (Bonk, 2009; Stahl, 2004). Still others believe that although the technology itself may be neither good nor bad, the bad or even non-existent training of many of those employed to teach using the medium likely guarantees a poor educational experience for learners and changes the
learner-instructor relationship in ways that are not always positive (Alexander, Perrault, Zhao, & Waldman, 2009; Kellogg & Smith, 2009; Liu, Magjuka, Bonk, & Lee, 2007).
THE OBJECTIVE OF THIS BOOK

One way to address such concerns is through researching the phenomenon to determine whether and under what conditions the use of the medium is most effective. However, concerns regarding the quality of research on e-learning have long persisted. From over-reliance upon single-course studies (Phipps & Merisotis, 1999), to a lack of randomized experimental designs (Bernard, Abrami, Lou, & Borokhovski, 2004), to incomplete and/or imprecise measures of student learning (Sitzmann, Ely, Brown, & Bauer, 2010), to more general concerns over methodological quality (Bernard et al., 2009), concerns regarding the rigor of research on e-learning are not new. These concerns about research quality are compounded by the fact that, although the number of e-learning instructors continues to increase, the number of scholars with a sustained record of research contributions on the topic remains comparatively small. For example, a recent review of the literature on e-learning in business education reported that fewer than 20 scholars were contributing intensively to this literature (three or more articles), and even this number was inflated in part because these scholars were collaborating with each other (Arbaugh et al., 2009). If a similarly skewed ratio of dedicated e-learning researchers to e-learning practitioners exists in other disciplines, it is evident that the field would greatly benefit from substantially increasing the number of researchers dedicated to examining this increasingly pervasive phenomenon.
THE AUDIENCE OF THIS BOOK

This book is for practitioners, managers, researchers, and graduate students in virtually every field of study. The application areas of e-learning are not limited to a specific academic area: e-learning is a worldwide, ongoing trend that is also being applied to educate the employees of non-academic organizations such as governments and for-profit and non-profit enterprises. Needless to say, libraries in universities and in for-profit and non-profit organizations around the world are also potential customers. We have therefore produced a book that introduces instructors, researchers, practicing managers, and graduate students in the e-learning community to research on satisfaction and learning outcomes in e-learning. Besides providing new instructors (who, in turn, could become new researchers) entering the e-learning realm with insights from previous research on effective instructional practices, why not offer a book that might help them examine and conduct their own work more thoroughly? It is our hope that new (and not so new) instructors, researchers, practicing managers, and graduate students will use the materials in this book to enter the increasingly fascinating field of research in online teaching and learning.
THE CONTRIBUTORS OF THIS BOOK

In compiling this book’s contents, we are particularly pleased to have both a multi-national and a multi-disciplinary group of contributors to the book’s chapters. We have authors from institutions
in Canada, Mexico, Romania, South Africa, Spain, Turkey, the United Arab Emirates, and the United States. These scholars represent fields such as adult education, computer science, distance education, economics, educational leadership, Information Systems, instructional technology, international management, marketing, and strategy. Considering the diverse backgrounds from which the theoretical and methodological perspectives used in e-learning research are drawn, we feel that incorporating the works of scholars from varied backgrounds not only informs the reader of the breadth of research conducted in this emerging field, but also affords the chapter authors the opportunity to bring the perspectives of this collection of works back to inform scholars in their respective disciplines.
THE STRUCTURE OF THIS BOOK

When one seeks to enter a new research field, familiarizing oneself with some of that field’s influential articles is a necessary starting point. However, being able to see those articles in the broader context of the field’s predominant theoretical and methodological influences and potential future directions can help scholars to determine where their expertise and skills can make the most appropriate contribution. Therefore, we organized the book’s chapters to familiarize prospective researchers with processes and topics for conducting research in e-learning. The book is divided into four sections: Theoretical Frameworks; Empirical Research Methods and Tutorial; Factors Influencing Student Satisfaction and Learning Outcomes; and Other Applications of Theory and Method. The first section, Theoretical Frameworks, introduces readers to emerging methodological and theoretical perspectives for effective empirical e-learning research. The two chapters in the book’s first section present a case for increased use of multi-course, multi-disciplinary studies and provide an overview and application of an increasingly influential model of e-learning effectiveness, the Community of Inquiry framework (Garrison, Anderson, & Archer, 2000). In chapter 1, Arbaugh argues that research in online teaching and learning in higher education should take a multi-disciplinary orientation, especially in settings whose curricula are drawn from several disciplinary perspectives, such as business schools. The benefits of a multi-disciplinary approach include curriculum integration and enhanced communication and collective methodological advancement among online teaching and learning scholars from the disciplines that comprise the integrated curricula. After reviewing multi-disciplinary studies in business education published to date, the chapter concludes with recommendations for advancing research in this emerging stream, including using academic discipline as a moderating variable, incorporating samples composed of faculty and/or undergraduate students, and developing more comprehensive measures of student learning. In chapter 2, Akyol and Garrison explain the capability of the Community of Inquiry (CoI) framework as a research model to study student learning and satisfaction. The framework identifies three elements (social, cognitive, and teaching presence) that contribute directly to the success of an e-learning experience through the development of an effective CoI. It is argued that a CoI leads to higher learning and increased satisfaction. The chapter presents findings from two online courses designed using the CoI approach. Overall, the students in these courses had high levels of perceived learning and satisfaction as well as actual learning outcomes. The second section of the book is titled Empirical Research Methods and Tutorial. Because empirical research in e-learning is our topic of interest, it seems particularly appropriate that research methods are the focus of the book’s second section. After a review of research methods employed to date in a
relatively active discipline, the book’s second section chronicles and provides examples of several of the structural equation modeling techniques whose increased use was called for in the review chapter. This section also includes chapters that deal with higher-order multivariate techniques, experimental designs, data mining, and action research. The first chapter in this section, chapter 3, reviews the online teaching and learning literature in business education and finds growing sophistication in analytical approaches over the last 10 years. We believe researchers are uncovering important findings from the large number of predictors, control variables, and criterion variables examined. Scholars are employing appropriate and increasingly sophisticated techniques such as structural equation models in recent field-based studies. To increase methodological rigor, researchers need to consciously incorporate control variables that are known to influence criterion variables of interest so as to clearly partial out the influence of their predictor variables of interest. This will help address shortcomings arising from the inability to convince sample respondents such as instructors, institutional administrators, and graduate business students of the benefits versus the cost of a fully randomized design approach. Chapter 4 is an introduction to path analysis modeling using LISREL. Over the past decade, we have seen a wide range of empirical research in the e-learning literature, and the use of multivariate statistical tools has been a staple of this research stream throughout. Path analysis modeling is one of four related multivariate statistical models: regression, path analysis, confirmatory factor analysis, and structural equation models. The chapter focuses on path analysis modeling for beginners using LISREL 8.70, covering foundational concepts, assumptions, and the major steps of path analysis modeling: specification, identification, estimation, testing, and modification of models. Chapter 5, “Testing the DeLone-McLean Model of Information System Success in an E-Learning Context,” has two important objectives: (a) introducing structural equation modeling to beginners, and (b) empirically testing the validity of the information system (IS) success model of DeLone and McLean (the DM model) in an e-learning environment using LISREL. The chapter presents the research model tested and a discussion of the survey instrument, fully discusses the structural equation modeling process (specification, identification, estimation, testing, and modification of the model), and summarizes the test results. To build e-learning theories, untested conceptual frameworks must be tested and refined; yet there has been very little testing of these frameworks. There is abundant prior research that examines the relationships among information quality, system quality, system use, user satisfaction, and system outcomes, but this is the first study that focuses on testing the DM model in an e-learning context. Chapter 6 is an introduction to Structural Equation Modeling (SEM) and the partial least squares (PLS) methodology; a generic sketch of the kind of model specification that chapters 4 through 6 all build on appears below.
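As a point of reference for these methods chapters, the specification step they share can be written compactly in standard LISREL notation (a textbook sketch for orientation, not an excerpt from the chapters themselves):

\eta = B\eta + \Gamma\xi + \zeta \quad \text{(structural model)}

y = \Lambda_y \eta + \varepsilon, \qquad x = \Lambda_x \xi + \delta \quad \text{(measurement models)}

Here \xi and \eta are the exogenous and endogenous latent variables, x and y are their observed indicators, and \Gamma and B hold the path coefficients to be estimated. Path analysis (chapter 4) is the special case in which every variable is directly observed, so the measurement equations drop out; full SEM (chapters 5 and 6) estimates the structural and measurement layers jointly.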
Over the past 15 years, the use of Partial Least Squares (PLS) in academic research has enjoyed increasing popularity in many social sciences, including Information Systems, marketing, and organizational behavior. PLS can be considered an alternative to covariance-based SEM and has greater flexibility in handling various modeling problems in situations where it is difficult to meet the hard assumptions of more traditional multivariate statistics. This chapter focuses on PLS for beginners. Topics covered include foundational concepts in SEM, the statistical assumptions of PLS, a LISREL-PLS comparison, and reflective and formative measurement. Chapter 7, “Using Experimental Research to Investigate Students’ Satisfaction with Online Learning,” discusses the most common experimental designs and threats to internal validity of experimental
procedures that must be controlled to ensure that the interventions or programs under investigation are responsible for changes in the dependent variables of interest. A study by Bangert (2008) is used to illustrate procedures for conducting experimental research, controlling potential threats to internal validity, and reporting results that communicate both practical and statistical significance. The use of experimental research for investigating the effectiveness of technology-supported instructional innovations in K-12 and higher education settings is fairly limited; however, the implementation of the No Child Left Behind Act (NCLB) of 2001 has renewed the emphasis on the use of experimental research for establishing evidence to support the effectiveness of instructional interventions and other school-based programs. Chapter 8 introduces data mining as an empirical tool for analyzing student performance in e-learning environments. The aim of this chapter is to explore the application of data mining for analyzing the performance and satisfaction of students enrolled in an online two-year master’s degree programme in project management. This programme is delivered by the Academy of Economic Studies, the biggest Romanian university in economics and business administration, in parallel as an online programme and as a traditional one. The main data sources for the mining process are the survey gathering students’ opinions, the operational database with the students’ records, and data regarding student activities recorded by the e-learning platform. More than 180 students responded, and more than 150 distinct characteristics/variables per student were identified. Given the large number of variables, data mining is a recommended approach to analyzing these data. Clustering, classification, and association rules were employed to identify the factors explaining students’ performance and satisfaction, and the relationships between them (a minimal sketch of this kind of mining pipeline appears at the end of this section’s introduction below). The results are very encouraging and suggest several future developments. Chapter 9 is the last chapter of the second section of the book. The purposes of this chapter are three-fold: (1) to present the authors’ findings on the success factors for designing, developing, and delivering e-learning initiatives, (2) to examine the applicability of Information Systems theories to the study of e-learning success, and (3) to demonstrate the usefulness of action research in furthering our understanding of e-learning success. Inspired by issues and challenges experienced in developing an online course, a process approach for measuring and assessing e-learning success is advanced. This approach adopts an Information Systems perspective on e-learning success to address the question of how to guide the design, development, and delivery of successful e-learning initiatives. The validity and applicability of this process approach is demonstrated in empirical studies involving cycles of action research. Merits of the approach are discussed, and its contributions in paving the way for further research opportunities are presented. The third section of the book is titled Factors Influencing Student Satisfaction and Learning Outcomes. After presenting prevailing theoretical and methodological perspectives, the book’s third section examines particular influences on e-learning course outcomes in a variety of settings.
These chapters examine factors such as learner dispositional and behavioral characteristics, quality assurance frameworks for e-learning effectiveness, course content design and development, and their roles in shaping effective e-learning environments. To date, much of e-learning research has focused on asynchronous learning environments (exemplified by the Journal of Asynchronous Learning Networks) based in higher education settings. However, these are not the only contexts in which e-learning occurs. Therefore, we also address the alternative and potentially increasingly important settings of synchronous course delivery and corporate learning environments in this section.
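As noted above, chapter 8’s mining workflow combines unsupervised and supervised steps over merged survey, registry, and platform-log data. The following is a minimal Python sketch of that kind of pipeline; the file name, column names, and model choices are hypothetical illustrations (using scikit-learn), not the chapter’s actual code.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical merged dataset: one row per student, mixing survey answers,
# operational records, and e-learning platform activity counts.
df = pd.read_csv("students.csv")
features = ["logins", "forum_posts", "assignments_done", "satisfaction"]

# Unsupervised step: cluster students into activity/satisfaction profiles.
df["profile"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df[features])

# Supervised step: explain final performance from the features plus the profile.
X_train, X_test, y_train, y_test = train_test_split(
    df[features + ["profile"]], df["passed"], test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, tree.predict(X_test)))

A shallow decision tree is chosen deliberately: its branches read as explicit rules about which behaviors separate stronger from weaker performers, which parallels the explanatory (rather than purely predictive) aim described for the chapter.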
Chapter 10 is concerned with quality assurance in e-learning. Quality is a subjective concept, and as such, there are many criteria for assuring quality, including assessment practices based on industry standards and accreditation requirements. Most assessments, including quality assurance in e-learning, occur at three levels: individual course assessments, department or program assessments, and institutional assessments; frequently these levels cannot be distinctly delineated. While student evaluations are usually included within these frameworks, student views are but one variable in the quality assessment equation. To offer some plausible perspectives of how students view quality, this chapter provides an overview of quality assurance for online learning from the course, program, and institutional viewpoints, and reviews some of the key research related to students’ assessment of what constitutes quality in online courses. Chapter 11 presents the synchronous virtual classroom (SVC) success model. The SVC model will help instructors design online courses that incorporate the factors that students need to be successful. This model will also help virtual classroom instructors and managers develop a systematic way of identifying and addressing the external and internal factors that might impact the success of their instruction. The strategies for empirically researching the SVC, which range from qualitative inquiry to experimental design, are discussed along with practical examples. This information will benefit instructors, researchers, non-profit and profit organizations, and academia. Chapter 12, “Factors Influencing User Satisfaction with Internet-Based E-Learning in Corporate South Africa,” examines the factors that influence user satisfaction with Internet-based learning in the South African corporate environment. An electronic survey was administered, and one hundred and twenty responses from corporations across South Africa were received. Only five of the thirteen factors were found to exert a statistically significant influence on learner satisfaction: instructor response towards the learners, instructor attitude toward Internet-based learning, the flexibility of the course, perceived usefulness, perceived ease of use, and the social interaction experienced by the learner in assessments. Interestingly, four of those five were also identified as significant in a similar Taiwanese study, which provides an interesting cross-cultural validation for the findings, even though the sample was different and smaller. Perhaps surprisingly, none of the six demographic variables exerted a significant influence. It is hoped that organisations and educational institutions can note and make use of the important factors in conceptualizing and designing their e-learning courses. Chapter 13 examines the relationships between student personality and e-learning outcomes. Among students enrolled in Web-based courses, some students learn a lot while others do not. There are many possible reasons for the differences in learning outcomes (e.g., students’ learning styles, satisfaction, motivation, etc.). In the last few decades, students’ personality has emerged as an important factor influencing learning outcomes in a traditional classroom environment. Among different personality models, the Big-Five model of personality has been successfully applied to help understand the relationship between personality and learning outcomes.
Because Web-based courses are becoming popular, the Big-Five model is applied to find out whether students’ personality traits play an important role in Web-based course learning outcomes. Chapter 14, “A Method for Adapting Learning Objects to Students’ Preferences,” analyzes the different learning theories and styles, as well as the main standards for creating content, with the goal of developing a proposal for structuring courses and organizing material that best fits students’ needs, in order to increase motivation and improve the learning process. The objective of this chapter is to analyze the different factors that influence student learning. To identify these factors, it was necessary to review different learning theories and different learning styles.
After that, the authors analyzed the role of teachers and their main responsibilities, as well as students’ learning process, in order to propose a pedagogical structure for an e-learning course. The relevant roles that both teaching content and e-learning play were also discussed. An active teacher who participates and creates high-quality content is necessary to prevent a sense of isolation, discouragement, and lack of motivation. Considering all these factors and the special features of each student as regards the way he or she learns, the chapter proposes a new method that facilitates teaching and adapts knowledge to the particular preferences of each student. The fourth section of the book includes three chapters that deal with other applications of e-learning theory and method. The book’s final section extends the approach of alternative e-learning theory and environments through applying the Unified Theory of Acceptance and Use of Technology (UTAUT) of Venkatesh et al. (2003), project-based learning, and blended learning. The objective of Chapter 15, “Understanding Graduate Students’ Intended Use of Distance Education Platforms,” is to apply the Unified Theory of Acceptance and Use of Technology to better understand graduate students’ intended use of distance education platforms, using as a case a distance education platform of a Mexican university, the SERUDLAP system. Four constructs are hypothesized to play a significant role: performance expectancy, effort expectancy, social influence, and attitude toward using technology; the moderating factors were gender and voluntariness of use. Data for the study were gathered through an online survey with a response rate of about 41%. Results suggested that performance expectancy and attitude toward technology are factors that help us understand graduate students’ intended use of a distance education platform. Future research must consider the impact of factors such as previous experiences, age, and facilitating conditions in order to better understand students’ behavior. Chapter 16 is concerned with the investigation of critical issues, dynamics, and challenges related to project-based learning (PBL) from the perspectives of 49 students in an online course. The effect of PBL was examined qualitatively through an open-ended questionnaire, observations, and the submissions of students taking an online certificate course. According to the findings, students thought that an online PBL course supports their professional development with the provision of practical knowledge, enhanced project development skill, self-confidence, and research capability. This support is further augmented by the facilities of the online learning environment. Students mostly preferred team work to individual work. Although students were generally satisfied with the course, they still had some suggestions for prospective students and instructors. The findings are particularly important for those who are planning to organize courses or activities that involve online PBL and for those who are about to take an online or face-to-face PBL course. Chapter 17 is the last chapter of the book. This chapter is a case study that examines students’ perceptions, interaction, and satisfaction in interactive blended courses. Blended courses, which offer students and teachers several possibilities, such as becoming more interactive and more active, have become increasingly widespread in both K-12 and higher education settings.
With the rise of cutting-edge technologies, institutions and instructors have embarked on creating new learning environments with a variety of new delivery methods. At the same time, designing visually impressive and attractive blended settings for students has become easier with extensive learning and content management systems (LMS, CMS, LCMS) such as Blackboard, WebCT, and Moodle, and virtual learning environments (VLE) such as Adobe Connect, Dimdim, and WiZiQ. In this study, we aimed to investigate students' perspectives on, and satisfaction with, the designed interactive blended learning settings, and to find out students' views on both the synchronous and asynchronous interactive blended learning environment (IBLE).
THE FUTURE OF E-LEARNING RESEARCH

Although this book examines numerous topics for which research has been conducted, there are several areas in which e-learning research is still in its infancy. To help steer prospective scholars in directions where they might provide immediate impact, we conclude this section with a brief discussion of some of these topics:
IMPACTS OF E-LEARNING BY AND ON INSTRUCTORS

There have been numerous studies of student reactions to e-learning and potential predictors of effective learner-related outcomes. However, studies of the other primary participants in e-learning environments, the instructors, have been slow in coming. Fortunately, we are beginning to see studies that focus on instructor motivations and reactions to e-learning (e.g., Connolly, Jones, & Jones, 2007; Coppola, Hiltz, & Rotter, 2002; Shea, 2007). As instructors continue to increase their knowledge and experience with e-teaching, there likely will be research opportunities for comparing the attitudes, motivations, and behaviors of novice versus experienced online instructors. Other instructor-related research opportunities may include consideration of changes in workplace environments, interactions with students outside of class, working relationships with colleagues, and relationships to their host institutions.
POTENTIAL INFLUENCES OF ACADEMIC DISCIPLINES ON E-LEARNING EFFECTIVENESS

Considering that many of the theoretical foundations of e-learning research have come from the communities of instructional technology, Information Systems, and educational theory, it is not surprising that the role that epistemological, sociological, and behavioral characteristics of academic disciplines may play in shaping effective e-learning environments has received limited research attention to date. However, recent studies of disciplinary effects in e-learning suggest that they may have distinct influences on both course design (Smith, Heindel, & Torres-Ayala, 2008) and student retention, attitudes, and performance (Arbaugh & Rau, 2007; Hornik, Sanders, Li, Moskal, & Dziuban, 2008). Such initial findings suggest that potential disciplinary effects in e-learning should be a focus of prospective researchers.
INCREASED GLOBAL COVERAGE AND CROSS-CULTURAL STUDIES

Although our book has a multi-national pool of contributors, regional attitudes toward e-learning are, thus far, not universally positive. For example, portions of the Middle East view e-learning and distance education with disdain (Rovai, 2009), as is indicated in part by studies from the region that resort to using prison inmates as research samples (Al Saif, 2007). As universities from other parts of the world collaborate to create branch campuses or joint ventures with institutions in the Middle East, assessing present attitudes toward the conduct of e-learning, and potential changes in those attitudes, could yield a productive stream of research. Also, although our book does not examine Asian e-learning settings,
a review of contributors to prominent e-learning journals such as Computers & Education suggests that Asian scholars and institutions will become increasingly influential in shaping e-learning research agendas. We certainly would welcome collaborations between scholars from these emerging regions and those where e-learning has become comparatively well established.
ISSUES IN E-LEARNING EMPIRICAL RESEARCH

As we enter what may be a golden age of e-learning, we have witnessed an increasing proportion of e-learning empirical research using highly sophisticated research tools such as structural equation modeling. A review of the major works of Kuhn (1970a), Kaplan (1964), Dubin (1969), and Cushing (1990) describes the process by which an academic discipline becomes established in terms of four steps:

1. Consensus building among a group of scientists about the existence of a body of phenomena that is worthy of scientific study (Kuhn, 1970a);
2. Empirical study of the phenomena to establish a particular fact or a generalization (Kaplan, 1964);
3. Articulation of theories to provide a unified explanation of established empirical facts and generalizations (Kuhn, 1970a); and
4. Paradigm building to reach a consensus on the set of elements possessed in common by practitioners of a discipline, such as shared commitments, shared values, and shared examples (exemplars) (Kuhn, 1970a).

More than 30 years ago, Keen identified three prerequisites that must be fulfilled for the management information systems (MIS) area to become a coherent and substantive field: defining the dependent variables, clarifying the reference disciplines, and building a cumulative research tradition. An important objective of this book is to clearly define the dependent variables in e-learning empirical research. The review of Arbaugh et al. (2009) suggests that the e-learning systems area has been building a cumulative research tradition through empirical and non-empirical research during the past decade. In our judgment, we are heading toward the stage of articulating e-learning theories to provide a unified explanation of established empirical facts and generalizations. To articulate e-learning theories, we need consensus building as to which dependent and independent variables are worthy of investigation. During the past decade, a large number of e-learning empirical studies were conducted to investigate the impacts of a great many factors. To provide a useful lesson for the e-learning community, let us use an example from decision support systems (DSS) and group support systems (GSS) empirical research. Eom summarized the state of DSS/GSS empirical research over the past decades this way (Eom, 2007, p. 436):

A previous study (Farhoomand 1987) shows an increasing proportion of empirically based DSS research. Nevertheless, this accelerating rate of DSS research publication and the steady transition from non-empirical to empirical studies have not resulted in DSS theory building. Some researchers abandoned their efforts to develop context-free DSS theories and suggested that future DSS research should focus on modeling the specific "real world" target environment. This environment is characterized in terms of organizational contexts, group characteristics, tasks, and EMS environments (Dennis et al. 1990-1991). Other empirical researchers continue their efforts to integrate the seemingly conflicting results of empirical experiments (Benbasat and Lim 1993). However, the considerable amount of empirical research in GDSS,
user interface/individual differences, and implementation has produced conflicting, inconsistent results due to methodological problems, the lack of a commonly accepted causal model, different measures of dependent variables, hardware and software designed under different philosophies, etc. (Benbasat et al. 1993; Dennis et al. 1988; Jarvenpaa et al. 1985; Pinsonneault and Kraemer 1989; Zigurs 1993)

This problem of empirical research in the DSS/GSS areas could well recur in e-learning empirical research: the four causes that produced inconsistent/conflicting empirical results in the DSS area are also present in e-learning empirical research. Some of the evident problems in e-learning empirical research stem from the comparison of apples and oranges. Occasionally, studies compare results based on samples of different subjects in terms of age and generation, gender, discipline, course level (undergraduate vs. graduate), demographics, socio-economic status, et cetera. Perceived e-learning outcomes and levels of satisfaction are the results of the interplay of many psychological, socio-economic, cultural, and other variables. In future e-learning empirical research, it may be prudent to focus on the specific "real world" target environment, seeking to develop context-specific mid-range theory rather than context-free e-learning theory. Models need to be parsimonious, yet complex enough to capture the reality of this world.

Measurement issues are another area of concern. These issues include the design of questionnaires and the choice of software. As we see more empirical research, there is a need to inform readers of the details of survey instruments and the lists of questions used; for example, some studies provide no information on their survey instruments (LaPointe & Gunawardena, 2004). Further, there is a need to develop standardized indicator variables. Certainly, the same group of students' responses may differ significantly when the students respond to the following two different questions, each intended to measure the learning outcomes of online education:

• I have learned a lot in this course (Peltier, Drago, & Schibrowsky, 2003).
• I feel that I learned more in online courses than in face-to-face courses (Eom, Ashill, & Wen, 2006).
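To make the concern concrete, the following is a minimal sketch in Python of how a researcher might check whether differently worded learning-outcome indicators actually behave as interchangeable measures, using inter-item correlations and Cronbach's alpha. The responses and item names are openly hypothetical (they loosely paraphrase the two items above plus a third filler item) and are not data from any study cited in this preface:

```python
import pandas as pd

# Hypothetical 1-7 Likert responses from the same six students; the column
# names are illustrative only, not data from any cited study.
items = pd.DataFrame({
    "learned_a_lot_in_course":    [6, 5, 7, 4, 6, 5],
    "learned_more_than_f2f":      [3, 5, 2, 4, 3, 6],
    "achieved_my_learning_goals": [6, 5, 6, 4, 5, 5],
})

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Internal-consistency estimate for a set of indicator items."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print(items.corr())           # low inter-item correlations flag non-equivalent wording
print(cronbach_alpha(items))  # a low alpha argues against pooling the items as one scale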
There are also software issues. For example, the two common approaches to SEM are the covariance-based approach used in LISREL, AMOS, and EQS, and the variance-based approach used in PLS-PC, PLS-Graph, SmartPLS, and XLSTAT-PLS. The fundamental differences between LISREL and PLS are reflected in their algorithms and optimum properties. Because of these differences, the same data set may produce different sets of results (unacceptable vs. acceptable) under the covariance-based and variance-based approaches.

Sean B. Eom
Southeast Missouri State University, USA

J.B. Arbaugh
University of Wisconsin Oshkosh, USA
REFERENCES

Al Saif, A. A. (2007). Prisoners' attitudes toward using distance education whilst in prisons in Saudi Arabia. Issues in Informing Science and Information Technology, 4, 125–131.
Alavi, M., & Leidner, D. E. (2001). Research commentary: Technology-mediated learning – a call for greater depth and breadth of research. Information Systems Research, 12(1), 1–10. doi:10.1287/isre.12.1.1.9720

Alexander, M. W., Perrault, H., Zhao, J. J., & Waldman, L. (2009). Comparing AACSB faculty and student online learning experiences: Changes between 2000 and 2006. Journal of Educators Online, 6.

Allen, I. E., & Seaman, J. (2010). Learning on demand: Online education in the United States 2009. Wellesley, MA: Babson Survey Research Group.

Arbaugh, J. B. (2005). How much does "subject matter" matter? A study of disciplinary effects in online MBA courses. Academy of Management Learning & Education, 4(1), 57–73.

Arbaugh, J. B., Godfrey, M. R., Johnson, M., Pollack, B. L., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12, 71–87. doi:10.1016/j.iheduc.2009.06.006

Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5(1), 65–94. doi:10.1111/j.1540-4609.2007.00128.x

Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, E., Tamim, R. M., & Surkes, M. A. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243–1289. doi:10.3102/0034654309333844

Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004). A methodological morass? How can we improve quantitative research in distance education. Distance Education, 25(2), 175–198. doi:10.1080/0158791042000262094

Bonk, C. J. (2009). The world is open: How Web technology is revolutionizing education. San Francisco, CA: Jossey-Bass.

Connolly, M., Jones, C., & Jones, N. (2007). New approaches, new vision: Capturing teacher experiences in a brave new online world. Open Learning, 22(1), 43–56. doi:10.1080/02680510601100150

Coppola, N. W., Hiltz, S. R., & Rotter, N. G. (2002). Becoming a virtual professor: Pedagogical roles and asynchronous learning networks. Journal of Management Information Systems, 18, 169–189.

Eom, S. B. (2007). The development of decision support systems research: A bibliometrical approach. Lewiston, NY: The Edwin Mellen Press.

Eom, S. B., Ashill, N., & Wen, H. J. (2006). The determinants of students' perceived learning outcome and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–236. doi:10.1111/j.1540-4609.2006.00114.x

European Commission. (2010). ICT results, educating Europe: Exploiting the benefits of ICT. Retrieved August 31, 2010, from http://cordis.europa.eu/ictresults/pdf/policyreport/INF%207%200100%20IST-R%20policy%20report-education_final.pdf
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 87–105. doi:10.1016/S1096-7516(00)00016-6

Hayes, S. K. (2007). Principles of finance: Design and implementation of an online course. MERLOT Journal of Online Teaching and Learning, 3, 460–465.

Haytko, D. L. (2001). Traditional versus hybrid course delivery systems: A case study of undergraduate marketing planning courses. Marketing Education Review, 11(3), 27–39.

Hiltz, S. R., & Turoff, M. (1978). The network nation: Human communication via computer. Reading, MA: Addison-Wesley.

Hornik, S., Sanders, C. S., Li, Y., Moskal, P. D., & Dziuban, C. D. (2008). The impact of paradigm development and course level on performance in technology-mediated learning environments. Informing Science, 11, 35–57.

Kellogg, D. L., & Smith, M. A. (2009). Student-to-student interaction revisited: A case study of working adult business students in online courses. Decision Sciences Journal of Innovative Education, 7(2), 433–454. doi:10.1111/j.1540-4609.2009.00224.x

LaPointe, D. K., & Gunawardena, C. N. (2004). Developing, testing and refining of a model to understand the relationship between peer interaction and learning outcomes in computer-mediated conferencing. Distance Education, 25(1), 83–106. doi:10.1080/0158791042000212477

Liu, X., Magjuka, R. J., Bonk, C. J., & Lee, S. (2007). Does sense of community matter? An examination of participants' perceptions of building learning communities in online courses. Quarterly Review of Distance Education, 8(1), 9–24.

Marriott, N., Marriott, P., & Selwyn, N. (2004). Accounting undergraduates' changing use of ICT and their views on using the internet in higher education – a research note. Accounting Education, 13(S1), 117–130. doi:10.1080/0963928042000310823

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education.

Morgan, G., & Adams, J. (2009). Pedagogy first: Making Web technologies work for soft skills development in leadership and management education. Journal of Interactive Learning Research, 20, 129–155.

Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25(3), 260–276. doi:10.1177/0273475303257762

Phipps, R., & Merisotis, J. (1999). What's the difference: A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC. Retrieved from www.ihep.com/PUB.htm

Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(4), 401–426. doi:10.2307/3250989
Rovai, A. P. (2009). The Internet and higher education: Achieving global reach. Cambridge, UK: Chandos Publishing.

Schniederjans, M. J., & Kim, E. B. (2005). Relationship of student undergraduate achievement and personality characteristics in a total Web-based environment: An empirical study. Decision Sciences Journal of Innovative Education, 3(2), 205–221. doi:10.1111/j.1540-4609.2005.00067.x

Shea, P. (2007). Bridges and barriers to teaching online college courses: A study of experienced online faculty in thirty-six colleges. Journal of Asynchronous Learning Networks, 11(2), 73–128.

Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7(1), 99–121. doi:10.1111/j.1540-4609.2008.00207.x

Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. N. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning & Education, 9, 169–191.

Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of Web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59, 623–664. doi:10.1111/j.1744-6570.2006.00049.x

Smith, G. G., Heindel, A. J., & Torres-Ayala, A. T. (2008). E-learning commodity or community: Disciplinary differences between online courses. The Internet and Higher Education, 11, 152–159. doi:10.1016/j.iheduc.2008.06.008

Stahl, B. C. (2004). E-teaching – the economic threat to the ethical legitimacy of education? Journal of Information Systems Education, 15, 155–162.

Tanner, J. R., Noser, T. C., & Totaro, M. W. (2009). Business faculty and undergraduate students' perceptions of online learning: A comparative study. Journal of Information Systems Education, 20(1), 29–40.

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of Information Technology: Toward a unified view. Management Information Systems Quarterly, 27(3), 425–478.

Wan, Z., Wang, Y., & Haggerty, N. (2008). Why people benefit from e-learning differently: The effects of psychological processes on e-learning outcomes. Information & Management, 45, 513–521. doi:10.1016/j.im.2008.08.003

Wilkes, R. B., Simon, J. C., & Brooks, L. D. (2006). A comparison of faculty and undergraduate students' perceptions of online courses and degree programs. Journal of Information Systems Education, 17, 131–140.
Section 1
Theoretical Frameworks
Chapter 1
Multi-Disciplinary Studies in Online Business Education: Observations, Future Directions, and Extensions

J. B. Arbaugh
University of Wisconsin Oshkosh, USA
ABSTRACT

This chapter argues that research in online teaching and learning in higher education should take a multi-disciplinary orientation, especially in settings whose curricula are drawn from several disciplinary perspectives such as business schools. The benefits of a multi-disciplinary approach include curriculum integration and enhanced communication and collective methodological advancement among online teaching and learning scholars from the disciplines that comprise the integrated curricula. After reviewing multi-disciplinary studies in business education published to date, the chapter concludes with recommendations for advancing research in this emerging stream. Some of the primary recommendations include the use of academic discipline as a moderating variable, more studies that incorporate samples comprised of faculty and/or undergraduate students, and the development of more comprehensive measures of student learning.
DOI: 10.4018/978-1-60960-615-2.ch001

INTRODUCTION

Over the past decade, the delivery of management education via the internet has become increasingly common, even among institutions accredited by
AACSB International (Alexander, Perrault, Zhao, & Waldman, 2009; Popovich & Neel, 2005). With increasing acceptance of this educational medium has come increased research attention, approaching 200 peer-reviewed articles on business disciplines during the last decade (Arbaugh, Godfrey, Johnson, Leisen Pollack, Niendorf, &
Wresch, 2009). However, because many of these articles employed research samples that examined fewer than five class sections within a single business discipline, their ability to inform business school educators and administrators regarding the design, development, and integration of a business curriculum is somewhat limited. When a business school considers the development, design, and delivery of an online degree program, one might expect that an integrated curriculum of well-designed courses that capture the intricacies of the differences and the interdependencies between business disciplines in a technologically sound manner would be an excellent starting point. However, because the business school is multi-disciplinary in orientation, there tends to be substantial variety in the development and delivery of business school curricula, particularly at the MBA level (Navarro, 2008; Rubin & Dierdorff, 2009). Considering that recent exploratory research suggests that there may be fundamental disciplinary-related differences in the design and conduct of online courses in business schools (Arbaugh, Bangert, & Cleveland-Innes, 2010), the need to examine online teaching and learning in business schools comprehensively rather than by individual silos becomes increasingly apparent if these schools are to provide a quality educational experience for increasingly demanding stakeholders (Julian & Ofori-Dankwa, 2006; O'Toole, 2009).
MAIN FOCUS OF THE CHAPTER

In this chapter, we discuss why the relative lack of work that comprehensively examines the business school curriculum in online teaching and learning is cause for concern, and articulate the potential problems that this lack of attention may create for business schools going forward. We also examine both epistemological and practical reasons for which disciplinary differences between components of the business school curriculum
matter in online and blended delivery, and why and how studies of online business education should reflect and better capture these differences. That discussion is followed by a report of the primary findings from multi-disciplinary studies in business education published to date. The chapter concludes with a discussion of potential implications for research specific to business education that could be extended to studies of online teaching and learning in other disciplines. Although this chapter explicitly examines the state of research on online teaching and learning within business schools, we hope that it also may stimulate scholars in other disciplines to consider their fields more comprehensively when designing and conducting research.
WHY SHOULD STUDIES OF ONLINE BUSINESS EDUCATION BE MULTI-DISCIPLINARY?

As the volume of research on online teaching and learning in business education has increased dramatically during the past ten years, scholars have begun to more actively disseminate these findings. Although the volume and quality of research in online and blended business education has increased dramatically, the rate of progress across the disciplines is rather uneven (Arbaugh et al., 2009). Disciplines such as Information Systems (IS) and Management have had relatively active research communities, but disciplines such as Finance and Economics have little published research (Arbaugh, 2010a). Worse yet, these scholars tend to communicate only within their particular discipline rather than engaging in cross-disciplinary dialogue with scholars from other business disciplines, let alone scholars engaged in the broader online teaching and learning literature. Although such an approach may ground a study within its respective field, it becomes particularly problematic for teaching because students typically receive at least some
exposure to a variety of disciplines within the business school curriculum even as one discipline's research progress lags the others'. Therefore, widely varying research depth across disciplines also likely will result in online and/or blended courses whose instructional quality across disciplines varies widely (Arbaugh, 2005a; Bryant, Kahle, & Schafer, 2005). The approach to researching online and blended business education employed to date has produced other negative consequences. In addition to the inconsistent research quality that results from a lack of cross-disciplinary dialogue, researchers in one discipline are left unaware of theoretical perspectives and conceptual frameworks from related disciplines that could help explain phenomena in their own discipline (Wan, Fang, & Neufeld, 2007). Because the portability of theories and methods of business disciplines to research in learning and education varies widely, research in the less portable disciplines is not likely to advance substantially absent such collaborative endeavors (Arbaugh, 2010a). Finally, those with responsibilities for coordinating and directing online degree programs in business schools have little evidence to guide them when making decisions regarding the comprehensive design, emphasis, and conduct of the subjects, or for assessing the effectiveness of their current offerings. Having addressed some of the negative consequences of the current general state of affairs, let us now present some positive reasons for developing cross-disciplinary research in online business education.
Business Disciplines Differ from Each Other Epistemologically and Behaviorally

With scholarly roots in sociology and history, researchers have been studying disciplinary differences in higher education for about forty years (Kuhn, 1970; Lohdahl & Gordon, 1972; Thompson, Hawkes, & Avery, 1969). Although much of
the disciplinary differences in higher education literature is devoted to identifying and examining sociological and behavioral differences across disciplines (Becher, 1994; Hativa & Marincovich, 1995; Lattuca & Stark, 1994; Shulman, 1987, 2005; Smeby, 1996), to date epistemological differences have been the characteristic most commonly adopted from this literature for use in other educational research. One of the more popular epistemological frameworks for distinguishing differences between academic disciplines was developed by Anthony Biglan (1973). Biglan's framework categorizes academic disciplines based on their positioning along three dimensions: 1) the existence of a dominant paradigm, 2) a concern with application, and 3) a concern with life systems. These dimensions have come to be operationalized as hard/soft, pure/applied, and life/non-life, respectively. Most of the subsequent research on Biglan's framework has focused on the paradigm dominance and emphasis on application dimensions. Hard disciplines are characterized by coalescence around dominant paradigms. In other words, scholars in these fields have general agreement regarding "how the world works." Conversely, soft disciplines are characterized by having multiple competing paradigms available as possible explanations of their phenomena of interest. Pure fields place more attention on knowledge acquisition, whereas application and integration receive stronger emphasis in applied fields. Although much of the research attention in the disciplinary differences literature has focused on these first two dimensions (Becher, 1994; Becher & Trowler, 2001; Neumann, 2001; Neumann, Parry, & Becher, 2002), the life/non-life dimension should not be ignored. This dimension may have particularly important implications for distinguishing disciplines in schools and colleges of business. Because the business school has been established as a professional school and therefore has focused on producing learners that have applied skills, for the most part its disciplines
have been considered to reside on the "applied" side of the Pure/Applied dimension (Biglan, 1973; Khurana, 2007; Trank & Rynes, 2003). As a relatively young area of study, the paradigm development of the business school's disciplines has been considered to be behind that of the "hard" disciplines, but there is increasing consensus that business disciplines vary in degrees of "hardness." "Hard, applied, non-life" disciplines, such as accounting and finance, call for progressive mastery of techniques in linear sequences based upon factual knowledge. Students in "hard, applied" disciplines are expected to be linear thinkers. Teaching activities generally are focused and instructive, with the primary emphasis being on the teacher informing the student (Neumann, 2001; Neumann et al., 2002). Emphasis on factual knowledge, and by extension examinations, extends from "hard, pure" to "hard, applied" disciplines, although problem solving will be emphasized more in the "hard, applied" disciplines. Conversely, teaching in "soft, applied, life" disciplines, such as management and marketing, tends to be more free-ranging, with knowledge-building being a formative process where teaching and learning activities tend to be constructive and reiterative, shaped by both practically honed knowledge and espoused theory. Students are expected to be more lateral thinkers than those in "hard" disciplines. As is the case in the field of education, scholars of educational practice in these disciplines often are encouraged to refer to class participants as learners rather than students (Dehler, Beatty, & Leigh, 2010; Whetten, Johnson, & Sorenson, 2009). In the softer disciplines, essays and group projects tend to predominate, and self-assessments are common. Because of the emphasis on developing practical skills, there is a greater need for constructive, informative feedback on assessments. Emphasis on widely transferrable skills generally will be greater in "soft, applied" disciplines than in "hard, applied" ones, as will reflective practice and lifelong learning.
Possessing characteristics of both of these disciplinary extremes, the "soft, applied, non-life" orientation of the information systems discipline provides elements of each. Like the harder, non-life disciplines, it places a strong emphasis on inanimate objects, such as software code and applications of technology. However, like the softer disciplines, it also places strong emphasis on group projects and discussion-based learning (Alavi, 1994; Benbunan-Fich & Hiltz, 2003). Such characteristics suggest that a particular challenge for IS instructors is balancing the roles of content expert and learning facilitator. Such epistemological and pedagogical variance across disciplines is sure to create challenges for organizing the disciplines into a unified curriculum, which is addressed in this chapter's next section.
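Before turning to those curricular challenges, the Biglan-style positions just described can be summarized compactly. The following Python sketch simply encodes the chapter's own classifications of representative business disciplines as a small lookup table; the dictionary and function names are illustrative, not part of Biglan's (1973) original framework:

```python
# Biglan-style positioning of representative business disciplines, as
# discussed above: (paradigm, application, life-system) dimensions,
# operationalized as hard/soft, pure/applied, and life/non-life.
BIGLAN_POSITION = {
    "accounting":          ("hard", "applied", "non-life"),
    "finance":             ("hard", "applied", "non-life"),
    "management":          ("soft", "applied", "life"),
    "marketing":           ("soft", "applied", "life"),
    "information systems": ("soft", "applied", "non-life"),
}

def paradigm_development(discipline: str) -> str:
    """'Hard' disciplines coalesce around a dominant paradigm; 'soft' ones do not."""
    return BIGLAN_POSITION[discipline][0]

print(paradigm_development("information systems"))  # -> 'soft'
```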
Challenges of the Integrated Business Curriculum

For online business education to be effective, it seems appropriate first to consider effectiveness from a programmatic or curricular perspective rather than from the perspective of individual disciplines. Business programs seek to deliver an integrated curriculum, albeit with slight variations across institutions, usually depending upon areas of faculty or institutional competency or specialized regional or student needs, particularly at the graduate level (Engwall, 2007; Julian & Ofori-Dankwa, 2006; Rubin & Dierdorff, 2009). Therefore, it behooves business schools to ensure to the greatest extent possible that the delivery of their online programs is of consistent quality across the curriculum. It does not benefit a business school if courses in only one or two disciplines of its online offerings are well designed and delivered. Therefore, research that conceptualizes and examines online business education in ways that consider multiple disciplines in the same study is particularly welcome. The previous discussion on epistemological and pedagogical differences between disciplines
suggests that courses in each should be designed differently for delivery in online and blended environments. Given the variety of disciplinary perspectives housed within a single field of study, the potential effects of disciplinary differences in online teaching and learning, and potential conflicts between preferred instructional approaches, are most likely to manifest themselves in a multi-disciplinary area of study such as business. However, such differences often are not factored into discussions of the design of online and blended courses in business education. Subject matter and disciplinary effects were considered in a somewhat general manner as part of early instructional design models (e.g., Jonassen, Tessmer, & Hannum, 1999; Reigeluth, 1983; Reiser & Gagne, 1983; Van Patten, Chao, & Reigeluth, 1986). However, such effects usually are either absent in contemporary theories of online learning, or at best mentioned as a sort of "black box" category. In fact, the contemporary instructional design literature often notes how discipline-related issues are to be left to "subject matter experts" (Dempsey & Van Eck, 2002; Merrill, 2001). Although some scholars are beginning to attempt to bridge the gaps between curriculum theorists and instructional designers to encourage the reintroduction of dialogue between these communities that was lost sometime during the 1960s (Petrina, 2004), progress toward such dialogue has been rather slow. Unfortunately, such deferment of consideration of the integration of discipline and design makes it convenient for designers and administrators to advocate similar designs for all online courses. This state of affairs is particularly troublesome for the development and delivery of online degree programs in business. What guidance does exist from the online business education literature typically calls for standardization of structure and organization of course content, requirements, and basic pedagogical operations as much as possible (Dykman & Davis, 2008a; 2008b), and that any design modifications be made on the basis of characteristics such as learner maturity,
technology, pedagogy, or content usage rather than disciplinary differences (Bocchi, Eastman, & Swift, 2004; Millson & Wilemon, 2008; Rungtusanatham, Ellram, Siferd, & Salik, 2004), which, given this chapter's prior discussion, should now be seen as somewhat problematic. One of the implications of such an orientation is a lack of fit between the design of online courses, course management systems, and disciplines' signature pedagogies (Smith, Heindel, & Torres-Ayala, 2008). For example, collaborative constructivism often is seen as an organizing framework for online course design in higher education, with emphasis on instructor-facilitated group activities (Garrison, Anderson, & Archer, 2000; Jonassen, Davidson, Collins, Campbell, & Haag, 1995). However, the applicability of such approaches appears to vary widely across disciplines in higher education (Murray & Renaud, 1995; Smart & Ethington, 1995). In a study of forty online MBA courses, Arbaugh and Rau (2007) found that disciplinary differences accounted for up to sixteen percent of the variance in student perceived learning and sixty-seven percent of the variance in satisfaction with the internet as an education delivery medium, and that interaction with the instructor and with fellow participants had differing effect sizes in predicting perceived learning. Although preliminary, these findings suggest the importance of accounting for disciplinary differences when developing online degree programs in business. It is understandable that program directors and course designers might desire an instructional design pattern that targets the center of a "hard-soft" continuum for the sake of maintaining program consistency. Although such an approach could work well for disciplines around the center of such a continuum, it can be particularly problematic for those at the extremes. Such an approach may result in prescriptions of course structure and activities that are not soft enough for the "soft" disciplines and not hard enough for the "hard" ones (Arbaugh, 2010a). However, in
light of the observations of epistemological and behavioral differences between "soft, applied" and "hard, applied" disciplines gleaned from the disciplinary differences literature, not to mention the issues created by the "life/non-life" dimension, it is questionable whether such standardization of course designs across disciplines should take place in an online business education curriculum, or whether such standardization is even desirable.
Methodological and Statistical Analysis Benefits

Finally, research on online teaching and learning in business education should be increasingly multi-disciplinary in nature for the practical methodological and statistical benefits such designs provide. Multi-disciplinary research samples by their nature tend to have comparatively larger sample sizes, which affords researchers opportunities to employ multivariate statistical techniques such as factor analysis (exploratory and/or confirmatory), PLS, SEM, and HLM that have been discussed in some of this book's other chapters. Also, larger samples facilitate the introduction of additional control variables due to increased statistical power. Considering that randomized experimental designs typically are not feasible for studies of business education, appropriate controls are vital for establishing the rigor necessary for the studies to produce valid and reliable evidence that can be used for designing and developing courses and programs. Besides the noted role of academic discipline, other potential control variables that warrant increased attention in research designs include participant prior experience with online teaching and/or learning (Anstine & Skidmore, 2005; Arbaugh, 2005a; 2005b; Jones, Moeeni, & Ruby, 2005; Liu, Magjuka, & Lee, 2006), student academic major (Daymont & Blau, 2008; Simmering, Posey, & Piccoli, 2009), course and/or instructor (Arbaugh, 2010b; Hwang & Arbaugh, 2006; Klein, Noe, & Wang, 2006; Williams, Duray, & Reddy, 2006), and participant demographics
(Alavi, Marakas, & Yoo, 2002; Benbunan-Fich & Hiltz, 2003). Finally, comparative studies of differing platforms and approaches to online delivery will benefit from the increased generalizability associated with relatively large, multi-disciplinary samples (Bernard, Abrami, Borokhovski, Wade, Tamim, Surkes, & Bethel, 2009). To illustrate at least some of these benefits, let us now examine the research published to date.
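Before turning to that research, the following minimal sketch illustrates, in Python, the kind of dummy-coded, control-rich model that larger multi-disciplinary samples make possible, in the spirit of the Arbaugh and Rau (2007) design discussed later in this chapter. The survey file and every column name are hypothetical assumptions, not variables from any study cited here:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level survey data; the file and all column names
# are illustrative assumptions, not data from any cited study.
df = pd.read_csv("online_mba_survey.csv")
# expected columns: perceived_learning, discipline, age, gender, prior_online_courses

# Dummy-code academic discipline (Finance as the referent category) and
# include the control variables that larger samples make affordable.
model = smf.ols(
    "perceived_learning ~ C(discipline, Treatment(reference='Finance'))"
    " + age + C(gender) + prior_online_courses",
    data=df,
).fit()
print(model.summary())  # discipline coefficients = differences from Finance courses
```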
FINDINGS AND CONCLUSIONS FROM PREVIOUS MULTI-DISCIPLINARY STUDIES OF ONLINE BUSINESS EDUCATION

Multi-disciplinary research in online business education uses a variety of designs, such as single-institution studies that examine courses from several disciplines, program-level studies that survey students about their collective experiences with online learning within a degree program, and broad-based institutional surveys. To date, these studies have focused largely on students and their perceptions of factors influencing online and/or blended course effectiveness.
Multi-Course and Cross-Disciplinary Studies

Considering that studies of instructors' initial single-course experiences with online business education only began to be published in the mid-1990s, it is perhaps surprising that multi-course, multi-discipline empirical studies were already appearing at the turn of the century. Among these was a series of studies by Arbaugh (2000a; 2000b; 2001) that sought to identify general predictors of students' perceived learning and their satisfaction with the internet as an educational delivery medium. Using samples that included courses in disciplines such as Accounting, Finance, Management, Information Systems, and Operations Management, these early studies suggested that these course outcomes
were highly associated with the extent to which learners perceived it to be easy to interact with others in the course, the extent to which the instructor encouraged interaction, the perceived flexibility of the delivery medium, and the extent to which the instructor engaged in behaviors that reduced social distance between him/herself and the learners, thereby noting characteristics of interaction between participants in both "soft" and "hard" disciplines. Neither student age nor gender predicted course outcomes. Perhaps not surprisingly, as participants gained online course experience, their satisfaction with taking courses online also increased. Although these studies support the idea proposed in conceptual models that instructors move from being information disseminators to discussion facilitators in the online environment, they also suggested that instructors were the most influential participants in the online classroom. Subsequent multi-disciplinary studies have branched into several research streams, including additional examination of topics such as participant interaction and the role of technology, and introducing topics such as disciplinary effects and student and instructor characteristics and behaviors, but the finding of the instructor as focal point in online business education remained a consistent theme in the research as the decade progressed (Arbaugh, 2010a; Kellogg & Smith, 2009). Although initial studies tended to be grounded largely in discipline-based theoretical perspectives, we have begun to see research actively building upon prior online business education research. For example, studies of epistemological teaching (objectivist vs. constructivist) and social learning (individual vs. group) dimensions by Arbaugh and Benbunan-Fich (Arbaugh & Benbunan-Fich, 2006; Benbunan-Fich & Arbaugh, 2006) were grounded directly upon Leidner and Jarvenpaa's (1995) seminal conceptual framework. Reflective of the emerging theme of the importance of the instructor, their empirical tests of this model, which used a sample of forty MBA courses from multiple disciplines, found
that courses designed in group-based objectivism, where group-oriented learning activities were incorporated with instructor-centered content delivery, had the highest levels of student perceived learning. Support for the principles of the instructor's facilitative course role and well-organized content recently has been provided in multi-disciplinary studies of Garrison, Anderson, and Archer's (2000) Community of Inquiry framework by Arbaugh and Hwang (2006) and Arbaugh (2008). Using structural equation modeling in a study that included fourteen courses and at least four distinct disciplines, Arbaugh and Hwang (2006) found empirical support that teaching presence comprises three distinctive components. Arbaugh (2008) found that teaching presence significantly predicted both perceived learning and delivery medium satisfaction. In this study of fifty-five courses from multiple business disciplines, teaching presence and cognitive presence were equally strong predictors of student learning, but social presence was three times as strong a predictor of delivery medium satisfaction as was teaching presence. Other multi-disciplinary work supporting the instructor's importance in online teaching and learning was provided by Peltier, Schibrowsky, and Drago (2007). This update of a previously developed framework (Peltier, Drago, & Schibrowsky, 2003) argued that learning quality in business education was a function of the pacing of course content, participant interaction, course structure, instructor mentoring, and content presentation quality. Although they found several significant relationships between the predictors in a sample consisting of students from eighteen courses in multiple disciplines, only the instructor-controlled activities of mentoring and the pacing of course content were strongly associated with learning quality. In spite of the previously discussed findings noting the importance of instructors, to date student characteristics have received much more research attention than have instructor
characteristics. The student characteristics most commonly examined have been demographically oriented variables such as age, gender, and prior experience with technology and online learning. Recent multi-disciplinary studies generally have found little relationship between student age or gender and online course outcomes in business education (Arbaugh, 2002, 2005b; Arbaugh & Rau, 2007; Klein et al., 2006; Williams et al., 2006). As multi-disciplinary studies have been able to draw from samples of students with more varied ranges of prior experience with online learning, there is increasing evidence of a prior experience–course outcome relationship (Arbaugh, 2005a; Arbaugh & Duray, 2002; Drago, Peltier, Hay, & Hodgkinson, 2005). However, the amount of prior online learning experience needed to produce that relationship may not be extensive. Analyzing data from students who had participated in up to seven online MBA courses, Arbaugh (2004) found that the most significant changes in student perceptions of the flexibility, interaction, course software, and general satisfaction with online courses occurred between the students' first and second online courses. He also found that there were no significant differences in students' perceived learning with subsequent course experiences. One of the most examined learner behavioral characteristics in multi-disciplinary studies is participants' interaction with other course participants. Consistent with the theme of the importance of the instructor, the findings of this research emphatically suggest that learner-instructor interaction is a strong predictor of student learning (Arbaugh, 2000b, 2005b; Arbaugh & Benbunan-Fich, 2007; Arbaugh & Hornik, 2006; Drago et al., 2005; Drago, Peltier, & Sorensen, 2002; Eom, Wen, & Ashill, 2006; Peltier et al., 2007) and delivery medium satisfaction (Arbaugh, 2000a, 2002, 2005b; Eom et al., 2006). In fact, results from multi-disciplinary studies suggest that learner-instructor interaction may be the primary variable for predicting online course learning outcomes in online graduate business education (Arbaugh &
Rau, 2007; Drago et al., 2002; Kellogg & Smith, 2009; Marks, Sibley, & Arbaugh, 2005). Although learner-learner interaction is deemed, or at least implied, to be a necessary element of online business courses, there is increasing evidence that the primacy of learner-learner interaction as a universal course design tactic may not hold for online business education. Some early studies found that learner-learner interaction was a stronger predictor of course outcomes than learner-instructor interaction (Arbaugh, 2002; Peltier et al., 2003), but recent studies have found that learner-instructor interaction is the stronger predictor (Arbaugh & Benbunan-Fich, 2007; Arbaugh & Rau, 2007; Marks et al., 2005). The progression of this research stream motivated Kellogg and Smith's (2009) study of the influences of learner-content, learner-instructor, and learner-learner interaction in their program's required MBA course in Data Analysis. They found that students reported learning most from independent study assignments and least from learner-learner interaction. Although it is possible that the relatively "hard" disciplinary nature of this course may have lent itself less readily toward collaborative learning approaches, this study certainly raises questions regarding whether the use of collaborative approaches is universally applicable across the online business curriculum. Interest in the nature and types of interaction in which students partake also motivated Hwang and Francesco's (2010) recent study of feedback-seeking channels in blended learning environments. These authors found that although these students actively sought feedback from fellow learners and their instructors, such behavior did not predict their learning performance. The primary positive predictor of learning performance in that study was the number of discussion forums in which a learner participated. However, participating intensely in such forums actually was negatively associated with learning performance. Considering that MBA students likely are much more appropriate audiences for collaborative approaches than are
undergraduate business students (Arbaugh, 2010c), such findings should give instructors reason to pause when contemplating the development of course assignments and activities.
Influences of Technology

Although there are emerging frameworks of effective online business education, multi-discipline studies also have drawn from established frameworks from business research. One such commonly used framework is the Technology Acceptance Model (TAM). Several multi-disciplinary studies have used the TAM as a grounding framework, either in its original form (Davis, 1989) or in the extended model (Venkatesh & Davis, 2000). Collectively, this research suggests that although the model had limited predictive power for novice online learners or early course management systems (Arbaugh, 2000b; Arbaugh & Duray, 2002), the TAM has emerged as a useful framework for explaining course management system usage and satisfaction with the Internet as an educational delivery medium (Arbaugh, 2005b; Landry, Griffeth, & Hartman, 2006; Saade, Tan, & Nebebe, 2008; Stoel & Lee, 2003). Davis and Wong (2007) found that perceived usefulness and ease of use had moderate effects on student intentions to use the CECIL system at the University of Auckland, but that student perceptions of flow and playfulness of the system (which, in turn, was highly influenced by the speed of the software) were stronger predictors of intentions to use. Arbaugh (2004) found that perceived usefulness and ease of use of Blackboard increased dramatically between the first and subsequent online course experiences. A recent comparative study of national differences by Saade and colleagues (2008) found that perceived usefulness explained behavioral intentions to use a web-based learning environment. However, although the full TAM model explained over 70 percent of the variance in behavioral intention among 120 Chinese undergraduate students, it
only explained 25 percent of the variance for 362 Canadian students.
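The TAM structure described above translates directly into a small path model: perceived ease of use feeds perceived usefulness, and both feed behavioral intention. The sketch below is a minimal, generic TAM specification using the Python package semopy and a hypothetical dataset of averaged construct scores; it is not the exact model estimated in any of the studies cited above:

```python
import pandas as pd
import semopy

# Hypothetical respondent-level data with one averaged Likert score per
# TAM construct; the file and column names are illustrative assumptions.
data = pd.read_csv("tam_responses.csv")  # columns: PEOU, PU, BI

# Classic TAM paths (Davis, 1989): ease of use -> usefulness,
# and both -> behavioral intention to use the system.
tam_spec = """
PU ~ PEOU
BI ~ PU + PEOU
"""
model = semopy.Model(tam_spec)
model.fit(data)
print(model.inspect())  # estimated path coefficients, standard errors, p-values
```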
Disciplinary Effects and Online Learning Outcomes

Reflecting the concerns expressed earlier in this chapter, studies that investigate potential disciplinary effects have been slow in coming (Arbaugh, 2005a; Grandzol & Grandzol, 2006). However, we are beginning to see some initial efforts to examine disciplinary effects in online business education. Reflecting the idea of the importance of instructors in online business education, early studies of disciplinary effects specific to business courses suggested that their effects on learning outcomes may not be as large as those of instructor experience and behaviors. Arbaugh (2005a) hypothesized that disciplines for which instructors could commonly obtain doctoral degrees would be more significantly associated with course outcomes. Surprisingly, he found no such "doctoral" effect, perhaps because the relatively early development of the MBA program's online offerings favored relatively experienced online instructors. Methodological issues also may have influenced Drago and colleagues' (2002) study of course effectiveness in eighteen MBA courses. They operationalized course content on the basis of its presentation and organization rather than by discipline. Although content was the primary predictor of learning (a possible precursor of Kellogg and Smith's (2009) findings), they also found that instructor effects were more likely to predict perceptions of overall course effectiveness. A subsequent study of a more mature online MBA program by Arbaugh and Rau (2007) found more pronounced disciplinary effects. Their study, which used dummy coding of disciplines with Finance as the referent variable, found that disciplinary effects explained 67 percent of the variance in student satisfaction with the educational delivery medium in a sample of forty online MBA courses. In a recent study
with a much larger sample, Hornik, Sanders, Li, Moskal, and Dziuban (2008) examined data from 13,000 students in 167 courses during 1997-2003. Included in this sample were undergraduate-level courses in information systems, along with courses in disciplines outside the business school such as the hard sciences, nursing, social sciences, and the humanities. Hornik and colleagues found that student grades were higher and withdrawals were lower for subjects with high paradigm development ("hard" disciplines) than for those with low paradigm development ("soft" disciplines, including information systems), and that these differences were most pronounced in advanced-level courses. Conversely, in a recent study that examined disciplinary relationships to the three presences in the Community of Inquiry framework, Arbaugh and colleagues (2010) found that MBA students in courses from "hard" business disciplines reported significantly lower scores on cognitive presence than students in "softer" courses. This progression of studies suggests that disciplinary effects may become more pronounced with more mature learners and as both online learners and instructors gain experience with the delivery medium.
Classroom Comparison Studies

Comparison studies of online and classroom-based courses are quite common in studies of business education (Arbaugh et al., 2009). Therefore, it is not surprising that there are some multi-disciplinary comparison studies. In one of the first of these studies, Dacko (2001) compared online and full-time MBA student emphases on skill development. In a prescient study of "hard" and "soft" disciplines, he found that full-time students were more likely to perceive a greater emphasis being placed on oral communication and interpersonal skills, and that online students were more likely to perceive a greater emphasis being placed upon analytical and decision-making skills. Both groups believed that their respective perceived emphases were
what the program should be emphasizing. In their comparison of 31 online and 27 classroom-based MBA courses, Drago and Peltier (2004) found that in spite of the online courses having on average more than twice the enrollment of the classroom-based courses, class size positively predicted course structure and instructor support for online courses, but negatively predicted them in the classroom courses. It was unclear whether these findings could be attributed to differences in instructor practice, differing student populations in the two mediums, or other factors. More recent comparative multi-discipline studies have employed methodologies of increasing rigor. Sitzmann, Kraiger, Stewart, and Wisher's (2006) meta-analysis of 96 studies found that web-based instruction was 6 percent more effective than classroom instruction for teaching declarative knowledge. They found that the two methods essentially were equal for teaching procedural knowledge, and learners generally were equally satisfied with both methods as education delivery mediums. However, because only eight of these studies directly addressed business education, generalizing these conclusions to business disciplines may be premature. Another multi-disciplinary comparison study, by Klein, Noe, and Wang (2006), focused upon undergraduate business education in blended learning environments. Klein and colleagues found that blended learners with high learning goal orientation who saw the environment as enabling rather than as a barrier had higher motivation to learn, which in turn was associated with course outcomes. They also found that learners had more control, were required to take a more active role in their learning, and had greater motivation to learn in blended environments. Although the number of multi-disciplinary comparative studies in business education is relatively limited, their findings support the premise that, at worst, there generally is no difference in learning outcomes between the two delivery mediums.
Multi-Disciplinary Studies in Online Business Education
Program-Level Studies in Online Business Education

Other studies that have examined online business education beyond single-disciplinary perspectives include student surveys of perceptions and experiences with a degree program in its entirety and multi-institutional studies. Although these types of studies may not yield insights regarding particular disciplines, they do provide benefits such as identifying patterns of strong practice across institutions and allowing students and administrators to see benefits and potential problems of the program from the perspectives of newly admitted students' initial expectations or of those who have completed many or even all of its requirements. Such a perspective also allows for consideration of technological and pedagogical changes over time and provides the opportunity to assess whether the collective parts of the program result in an integrated whole. In one early study, Terry (2001) used archival data to compare attrition rates between online and classroom offerings over a three-year period. As a potential initial indication of disciplinary differences in business programs, he found that quantitatively-oriented online courses had substantially higher attrition rates than their classroom-based equivalents or online courses in other disciplines. Subsequent program-level empirical studies of students have tended to rely upon survey data, focus on students at the graduate level, and study students at all stages of their progress through the MBA program. The flexibility of the learning format, networking opportunities, and virtual teaming skills are consistent themes among the students in these studies. Bocchi, Eastman, and Swift's (2004) study of incoming cohorts to Georgia's Web MBA program found that AACSB accreditation and perceived flexibility were drivers of students' choice of program. Reflective of the multi-course studies reviewed earlier in the chapter, this study also suggested that instructors would be key players in their program's retention of students, and therefore encouraged them to provide regular and structured interaction, communicate expectations clearly in their courses, and help students develop group facilitation skills. Program flexibility also was a key selection factor for the mid-career professionals in Grzeda and Miller's (2009) study. However, in addition to wanting to acquire the traditional benefits of the MBA (knowledge of business, broader networks, and increased salary and promotion opportunities), they also wanted to be exposed to new ways of thinking about the world. Kim, Liu, and Bonk's (2005) study of mid-program online MBA students found that these students generally were positive about their educational experiences. They valued the program's flexible learning environment, appreciated the closer interaction with instructors, and welcomed the opportunity to cultivate virtual teaming skills. However, they noted difficulties communicating with peers, the desire for more interaction with instructors, and the lack of real-time feedback as program challenges. Multi-institutional studies are the least common of these types of studies. One study in this category was a collection of case studies compiled by Alavi and Gallupe (2003) during Fall 1999. Narrowing their focus to five early-adopting exemplars, Alavi and Gallupe observed that these programs were implemented to support specific organizational strategies rather than as “add-on” features, and that the programs were supported by an internal culture of innovation. They also found that these programs required high levels of faculty training and ongoing student support, and that the resources required to develop and implement these initiatives were usually underestimated, with actual requirements running as much as two to three times the original estimates. Other institutional-level studies suggest that there are numerous close followers of these relatively early adopters. Popovich and Neel (2005) found that schools accredited by AACSB International were rapidly following in the footsteps of these early adopters. They found that by 2002, at least 120 schools had degree programs delivered at least partially online, and nearly half of these had been started since Alavi and Gallupe's study. However, Ozdemir, Altinkemer, and Barron (2008) found that this initial population of online learning providers in the United States tended to be lower-tier programs in less densely populated states, and that these schools were much more likely to offer graduate rather than undergraduate programs online. Other institutional studies have focused on the instructor rather than the program level. One primary conclusion from these studies is that instructors are largely self-trained for online teaching (Perreault, Waldman, Alexander & Zhao, 2002). More recent faculty studies have found that the perceived usefulness and rigor of online education are stronger predictors of acceptance of online education than is perceived ease of use of the technology (Gibson, Harris, & Colaric, 2008; Parthasarathy & Smith, 2009). Overall, faculty appear to become more satisfied with online teaching as they gain experience, and instructor concerns regarding online teaching diminished substantially during the first decade of the 21st century (Alexander et al., 2009). In fact, some faculty have adopted online teaching in part because it is perceived to reflect positively on their institution's reputation with external constituents (Parthasarathy & Smith, 2009).
FUTURE RESEARCH DIRECTIONS FOR MULTI-DISCIPLINARY STUDIES

More Use of Discipline as a Moderating Variable

The first recommendation that emerges from a multi-disciplinary consideration of online teaching and learning is that disciplinary effects need to be examined more directly in future studies. For example, given the widely varying extent to which the various disciplines encourage learner-learner interaction (Arbaugh & Rau, 2007; Kellogg & Smith, 2009), it is possible that the equivocal findings regarding its importance may be explained by disciplinary differences. Courses in “hard” disciplines will likely require more emphasis on learner-instructor interaction; therefore, the importance of learner-learner interaction would be diminished (Arbaugh, 2010a). At a minimum, this suggests that those conducting multi-course studies need to emphasize accounting for potential disciplinary effects when designing their studies.
Further Study of Additional Potential Moderating Effects

To date, empirical studies of online management education have almost exclusively examined direct effects, typically the relationships between potential influencers and course outcomes. With the exceptions of Klein and colleagues' (2006) examination of the extent to which course delivery mediums moderate the relationship between perceptual factors and student motivation to learn and Benbunan-Fich and Arbaugh's (2006) study of the interaction of knowledge construction and group collaboration on learning perception, moderating relationships between variables have been almost completely ignored in online management education research. Besides the previously discussed characteristics such as academic discipline, online blends, and program level, other potential moderators that researchers might consider in multi-disciplinary studies are the influences of the particular learning technology (Arbaugh, 2005b; Martins & Kellermanns, 2004) or emerging theories of online learning effectiveness such as the Community of Inquiry (Garrison & Arbaugh, 2007; Heckman & Annabi, 2006). Such study of potential moderating effects could lay a foundation for the development of mid-range and discipline-specific theories of online learning.
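To make the idea of a moderation test concrete, the following is a minimal sketch of how a disciplinary moderating effect could be examined with an interaction term in an ordinary regression. The data file and variable names (perceived_learning, ll_interaction, discipline) are hypothetical illustrations, not drawn from any study cited here; studies in this literature often use SEM or hierarchical approaches instead.

```python
# Hedged sketch: does discipline moderate the effect of learner-learner
# interaction on perceived learning? File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("course_outcomes.csv")  # hypothetical multi-course dataset

# Direct-effects model: learner-learner interaction plus a discipline factor.
direct = smf.ols(
    "perceived_learning ~ ll_interaction + C(discipline)", data=df
).fit()

# Moderated model: the * operator adds both main effects and their product.
# A significant interaction coefficient would indicate that the effect of
# learner-learner interaction differs across disciplines.
moderated = smf.ols(
    "perceived_learning ~ ll_interaction * C(discipline)", data=df
).fit()

print(moderated.summary())
# F-test of whether the interaction terms jointly improve on the direct model.
print(moderated.compare_f_test(direct))
```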
Additional Conceptual and Operational Clarification of Blended Learning

Although business education scholars have a somewhat hidden history of contributing to the literature on blended learning (Arbaugh et al., 2009), until very recently management education research generally failed to be explicit regarding whether a course was purely online or blended (Hwang & Arbaugh, 2009; Hwang & Francesco, 2010; Klein et al., 2006). This can be attributed in part to the fact that the literature on blended learning generally has lacked precision in operationalizing the construct (Bonk & Graham, 2006; Garrison & Kanuka, 2004; Picciano & Dziuban, 2007). This lack of specificity in denoting the degree of blending within courses clearly has limited the business education literature's ability to determine the conditions under which online or blended learning is most appropriate (Kellogg & Smith, 2009). Increasing the conceptual and operational precision of just what constitutes a blended learning environment in such courses would allow researchers to address questions of optimal blends by content, discipline, and/or program level through comparison studies, much in the manner that fully online and fully classroom-based courses have been studied (Bernard et al., 2009; Sitzmann et al., 2006).
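One way to increase operational precision is simply to record and report the proportion of each course delivered online and classify courses against published thresholds. The sketch below uses the proportion-of-content-online categories popularized by the Allen and Seaman surveys (0 percent traditional, 1-29 percent web-facilitated, 30-79 percent blended/hybrid, 80 percent or more online); the function itself is a hypothetical illustration, and researchers could substitute discipline-specific cut points.

```python
# Sketch: making "degree of blending" operationally explicit by classifying
# a course against the Allen & Seaman proportion-online thresholds.
def classify_delivery(pct_online: float) -> str:
    """Classify a course by the percentage of its content delivered online."""
    if not 0 <= pct_online <= 100:
        raise ValueError("pct_online must be between 0 and 100")
    if pct_online == 0:
        return "traditional"        # no online content
    if pct_online < 30:
        return "web-facilitated"    # 1-29% online
    if pct_online < 80:
        return "blended"            # 30-79% online
    return "online"                 # 80%+ online

print(classify_delivery(45))  # -> blended
```

Recording this percentage per course would let comparison studies treat blending as a measured variable rather than a binary label.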
More Studies of Faculty

Although many of the studies reviewed in this chapter suggest the importance of faculty in online and blended learning environments, very few studies in business education have directly employed samples of faculty. Encouraging faculty to study their colleagues could be both motivating and educational. Faculty have long been considered a ‘neglected resource’ in distance education (Dillon & Walsh, 1992). Although the reflective narrative account has been a common approach for sharing knowledge with fellow faculty in all business disciplines, and faculty motivations for teaching online are beginning to receive research attention, studies examining the effects of faculty characteristics, such as age, gender, ethnicity, usage behaviors, and/or skill level in business education, are essentially non-existent (Arbaugh et al., 2009). This is partly because many published studies use the same instructor or a small number of instructors, thereby limiting opportunities for examining variance in instructor characteristics and behaviors (Anstine & Skidmore, 2005). As was noted earlier, reducing such limitations is a primary benefit of multi-disciplinary studies. In addition to the impact of faculty on online courses, scholars also might consider studying the impact of online learning on faculty. Online learning does not just change the culture of the classroom; it also changes the culture and conditions of the faculty work environment. Although online faculty work environments are beginning to receive research attention in higher education (Shea, 2007), the impact of online learning on faculty job satisfaction, organizational commitment, and psychological and physical well-being has yet to be considered in the specific context of business schools (Arbaugh, Desai et al., 2010). Beyond faculty members' instructional roles, issues such as changes in work practices and environment, interaction with other faculty, and institutional commitment are topics that warrant further investigation.
More Rigorous Studies Using Undergraduate Samples

A recent survey of higher education institutions in the USA found that approximately 4.6 million students had taken at least one online course during 2008, and that 82 percent of these students were undergraduates (Allen & Seaman, 2010). However, many of the cross-disciplinary studies reviewed in this chapter have been comprised of MBA student samples. This focus of multi-disciplinary work at the graduate level suggests that perspectives from undergraduate students are underrepresented in the online and blended business education literature. Although this may reflect the fact that students are widely distributed across colleges and majors at the undergraduate level, the sheer dominance of undergraduates as online learners makes it difficult to argue that the proportion of research conducted in business education to date is representative of the actual practice of online education in business schools (Arbaugh, 2010c). Although the number of exemplary multi-course studies of undergraduate business students is increasing (e.g., Gratton-Lavoie & Stanley, 2009; Hwang & Francesco, 2010; Klein et al., 2006; Nemanich et al., 2009; Martins & Kellermanns, 2004), scholars affiliated with business schools that are making inroads into online and blended delivery of undergraduate education have an opportunity to influence the entirety of the curriculum by pursuing the research opportunities provided by these initiatives.
More Scholars Studying the Topic

Although there has been a boom in the volume of research on online management education, this literature still lacks a sizeable body of dedicated scholars studying the phenomenon. As this chapter demonstrates, the number of scholars making consistent contributions through multi-disciplinary studies is comparatively small. However, the positive side of this present state of the literature is that a new entrant could become somewhat established in the field relatively quickly. The rise of educational research journals in business over the last decade, such as Academy of Management Learning & Education (AMLE) and the Decision Sciences Journal of Innovative Education (DSJIE), as highly regarded research outlets should provide legitimacy for more business scholars to conduct research that examines the development and delivery of educational content. Why not invest that attention toward an emerging phenomenon that increasingly pervades our educational experience, such as online and blended learning?
More Comprehensive and Objective Measures of Learning

Perceptual measures of course outcomes such as Alavi's (1994) measure of perceived learning and Arbaugh's (2000a) measure of delivery medium satisfaction have allowed management education researchers to design multi-course, multi-instructor, and multi-discipline studies, thereby increasing the external validity of research findings. However, such measures do not allow for objective assessment of student performance. Of even greater concern are the results of a recent meta-analysis suggesting that such learner self-assessment items may measure affective rather than cognitive outcomes (Sitzmann, Ely, Brown, & Bauer, 2010). However, reliance upon seemingly objective measures such as course grades likely will not fully address the issue of measuring learning outcomes. Besides the need to adjust course grades to reflect differences across instructors (Arbaugh, 2005a), course-specific grades generally cannot be generalized across disciplines. Therefore, it behooves scholars to seek cross-disciplinary and multi-dimensional objective outcome measures to supplement the perceptual measures that this field has relied upon to date (Arbaugh, Desai, et al., 2010; Armstrong & Fukami, 2010; Carrell, 2010).
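As a concrete illustration of one such adjustment, grades can be standardized within instructor (or within course) before being pooled across a multi-disciplinary sample, so that each student's score reflects standing relative to that instructor's own grading distribution. This is a minimal sketch under that assumption; the column names and data are hypothetical, and standardization addresses only instructor-level grading differences, not the deeper cross-disciplinary comparability problem noted above.

```python
# Minimal sketch: adjusting course grades for instructor differences by
# standardizing within instructor. All data shown here are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "instructor": ["A", "A", "A", "B", "B", "B"],
    "grade":      [88, 92, 95, 70, 75, 80],
})

# z-score within each instructor's own grade distribution, so that scores
# become comparable across instructors with different grading norms.
df["grade_z"] = df.groupby("instructor")["grade"].transform(
    lambda g: (g - g.mean()) / g.std()
)
print(df)
```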
CONCLUSION

Multi-disciplinary and program-level research in online and blended business education has progressed substantially from its beginnings in the mid-1990s. Such studies exert a primary influence upon future research directions for the field. Multi-disciplinary studies are increasingly pointing to a particular type of collaboration – that between students and the instructor – as being particularly predictive of outcomes in online business education. Although this certainly supports the design framework for “hard” disciplines presented in this chapter, it also suggests that the “co-creation” model often advocated in the mainstream online learning literature may not be fully applicable even in the “softer” business disciplines. In short, business education instructors may need to be both “sages on the stage” and “guides by the side” (Arbaugh, 2010b). Therefore, in addition to looking to the instructor for course leadership as a content expert (Arbaugh & Hwang, 2006; Liu et al., 2005), students also may be looking to them for leadership in navigating the “hidden curriculum” of online learning (Anderson, 2002). In spite of these noteworthy contributions, there remain numerous research questions and issues that will be best addressed by a continuing emphasis on multi-disciplinary studies in business education. The potential influences of academic discipline, degree of blending, instructor characteristics, and program level on learning outcomes are but a few of the issues best addressed in multi-disciplinary settings. Therefore, the call for additional researchers presented in this chapter is both necessary and useful. With such an increase in research activity, we hope that initiatives in business education will encourage online learning researchers in other disciplines to examine their work more from a curricular rather than exclusively from a course or disciplinary level.
REFERENCES

Alavi, M. (1994). Computer-mediated collaborative learning: An empirical evaluation. Management Information Systems Quarterly, 18, 159–174. doi:10.2307/249763
Alavi, M., & Gallupe, R. B. (2003). Using information technology in learning: Case studies in business and management education programs. Academy of Management Learning & Education, 2, 139–153. Alavi, M., Marakas, G. M., & Yoo, Y. (2002). A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13, 404–415. doi:10.1287/isre.13.4.404.72 Alexander, M. W., Perreault, H., Zhao, J. J., & Waldman, L. (2009). Comparing AACSB faculty and student online learning experiences: Changes between 2000 and 2006. Journal of Educators Online, 6(1). Retrieved February 1, 2009, from http://www.thejeo.com/Archives/Volume6Number1/Alexanderetalpaper.pdf Allen, I. E., & Seaman, J. (2010). Learning on demand: Online education in the United States, 2009. Wellesley, MA: Babson Survey Research Group. Anderson, T. (2002). The hidden curriculum of distance education. Change, 33(6), 28–35. doi:10.1080/00091380109601824 Anstine, J., & Skidmore, M. (2005). A small sample study of traditional and online courses with sample selection adjustment. The Journal of Economic Education, 36, 107–127. Arbaugh, J. B. (2000a). Virtual classroom characteristics and student satisfaction in Internet-based MBA courses. Journal of Management Education, 24, 32–54. doi:10.1177/105256290002400104 Arbaugh, J. B. (2000b). How classroom environment and student engagement affect learning in Internet-based MBA courses. Business Communication Quarterly, 63(4), 9–26. doi:10.1177/108056990006300402
Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction and learning in Web-based courses. Business Communication Quarterly, 64(4), 42–54. doi:10.1177/108056990106400405
Arbaugh, J. B. (2010c). Do undergraduates and MBAs differ online? Initial conclusions from the literature. Journal of Leadership & Organizational Studies, 17, 129–142. doi:10.1177/1548051810364989
Arbaugh, J. B. (2002). Managing the on-line classroom: A study of technological and behavioral characteristics of Web-based MBA courses. The Journal of High Technology Management Research, 13, 203–223. doi:10.1016/S1047-8310(02)00049-4
Arbaugh, J. B., Bangert, A., & Cleveland-Innes, M. (2010). Subject matter effects and the community of inquiry (CoI) framework: An exploratory study. The Internet and Higher Education, 13, 37–44. doi:10.1016/j.iheduc.2009.10.006
Arbaugh, J. B. (2004). Learning to learn online: A study of perceptual changes between multiple online course experiences. The Internet and Higher Education, 7(3), 169–182. doi:10.1016/j.iheduc.2004.06.001 Arbaugh, J. B. (2005a). How much does subject matter matter? A study of disciplinary effects in Web-based MBA courses. Academy of Management Learning & Education, 4, 57–73. Arbaugh, J. B. (2005b). Is there an optimal design for on-line MBA courses? Academy of Management Learning & Education, 4, 135–149. Arbaugh, J. B. (2008). Does the community of inquiry framework predict outcomes in online MBA courses? International Review of Research in Open and Distance Learning, 9, 1–21. Arbaugh, J. B. (2010a). Online and blended business education for the 21st century: Current research and future directions. Oxford, UK: Chandos Publishing. Arbaugh, J. B. (2010b). Sage, guide, both, or neither? An exploration of instructor roles in online MBA courses. Computers & Education, 55, 1234–1244. doi:10.1016/j.compedu.2010.05.020
Arbaugh, J. B., & Benbunan-Fich, R. (2006). An investigation of epistemological and social dimensions of teaching in online learning environments. Academy of Management Learning & Education, 5, 435–447. Arbaugh, J. B., & Benbunan-Fich, R. (2007). Examining the influence of participant interaction modes in Web-based learning environments. Decision Support Systems, 43, 853–865. doi:10.1016/j.dss.2006.12.013 Arbaugh, J. B., Desai, A. B., Rau, B. L., & Sridhar, B. S. (2010). A review of research on online and blended learning in the management discipline: 1994-2009. Organization Management Journal, 7(1), 39–55. doi:10.1057/omj.2010.5 Arbaugh, J. B., & Duray, R. (2002). Technological and structural characteristics, student learning and satisfaction with Web-based courses: An exploratory study of two MBA programs. Management Learning, 33, 231–247. doi:10.1177/1350507602333003 Arbaugh, J. B., Godfrey, M. R., Johnson, M., Leisen Pollack, B., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12(2), 71–87. doi:10.1016/j.iheduc.2009.06.006
Arbaugh, J. B., & Hornik, S. C. (2006). Do Chickering and Gamson's seven principles also apply to online MBAs? Journal of Educators Online, 3(2). Retrieved September 1, 2006, from http://www.thejeo.com/ Arbaugh, J. B., & Hwang, A. (2006). Does teaching presence exist in online MBA courses? The Internet and Higher Education, 9(1), 9–21. doi:10.1016/j.iheduc.2005.12.001 Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5(1), 65–95. doi:10.1111/j.1540-4609.2007.00128.x Armstrong, S. J., & Fukami, C. V. (2010). Self-assessment of knowledge: A cognitive or affective measure? Perspectives from the management learning and education community. Academy of Management Learning & Education, 9, 335–341. Becher, T. (1994). The significance of disciplinary differences. Studies in Higher Education, 19, 151–161. doi:10.1080/03075079412331382007 Becher, T., & Trowler, P. R. (2001). Academic tribes and territories (2nd ed.). Berkshire, UK: Society for Research into Higher Education & Open University Press. Benbunan-Fich, R., & Arbaugh, J. B. (2006). Separating the effects of knowledge construction and group collaboration in Web-based courses. Information & Management, 43, 778–793. doi:10.1016/j.im.2005.09.001 Benbunan-Fich, R., & Hiltz, S. R. (2003). Mediators of the effectiveness of online courses. IEEE Transactions on Professional Communication, 46(4), 298–312. doi:10.1109/TPC.2003.819639
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, E., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243–1289. doi:10.3102/0034654309333844 Bickel, R. (2007). Multilevel analysis for applied research. New York, NY: Guilford. Biglan, A. (1973). The characteristics of subject matter in different academic areas. The Journal of Applied Psychology, 57(3), 195–203. doi:10.1037/h0034701 Bocchi, J., Eastman, J. K., & Swift, C. O. (2004). Retaining the online learner: Profile of students in an online MBA program and implications for teaching them. Journal of Education for Business, 79, 245–253. doi:10.3200/JOEB.79.4.245-253 Bonk, C. J., & Graham, C. R. (Eds.). (2006). The handbook of blended learning: Global perspectives, local designs. San Francisco, CA: Pfeiffer. Bryant, S. M., Kahle, J. B., & Schafer, B. A. (2005). Distance education: A review of the contemporary literature. Issues in Accounting Education, 20, 255–272. doi:10.2308/iace.2005.20.3.255 Carrell, L. J. (2010). Thanks for asking: A (red-faced?) response from communication. Academy of Management Learning & Education, 9, 300–304. Dacko, S. G. (2001). Narrowing skill development gaps in Marketing and MBA programs: The role of innovative technologies for distance learning. Journal of Marketing Education, 23, 228–239. doi:10.1177/0273475301233008 Davis, F. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. Management Information Systems Quarterly, 13, 319–340. doi:10.2307/249008
Davis, R., & Wong, D. (2007). Conceptualizing and measuring the optimal experience of the e-learning environment. Decision Sciences Journal of Innovative Education, 5, 97–126. doi:10.1111/j.1540-4609.2007.00129.x Daymont, T., & Blau, G. (2008). Student performance in online and traditional sections of an undergraduate management course. Journal of Behavioral and Applied Management, 9, 275–294. Dehler, G. E., Beatty, J. E., & Leigh, J. S. A. (2010). From good teaching to scholarly teaching: Legitimizing management education and learning scholarship. In Wankel, C., & DeFillippi, R. (Eds.), Being and becoming a management education scholar (pp. 95–118). Charlotte, NC: Information Age Publishing. Dempsey, J. V., & Van Eck, R. N. (2002). Instructional design online: Evolving expectations. In Reiser, R. A., & Dempsey, J. V. (Eds.), Trends and issues in instructional design and technology (pp. 281–294). Upper Saddle River, NJ: Merrill Prentice-Hall. Dillon, C. L., & Walsh, S. M. (1992). Faculty: The neglected resource in distance education. American Journal of Distance Education, 6(3), 5–21. doi:10.1080/08923649209526796 Drago, W., & Peltier, J. (2004). The effects of class size on the effectiveness of online courses. Management Research News, 27(10), 27–41. doi:10.1108/01409170410784310 Drago, W., Peltier, J., Hay, A., & Hodgkinson, M. (2005). Dispelling the myths of online education: Learning via the information superhighway. Management Research News, 28(6/7), 1–17. doi:10.1108/01409170510784904 Drago, W., Peltier, J., & Sorensen, D. (2002). Course content or the instructor: Which is more important in online teaching? Management Research News, 25(6/7), 69–83. doi:10.1108/01409170210783322
Dykman, C. A., & Davis, C. K. (2008a). Online education forum part two – teaching online versus teaching conventionally. Journal of Information Systems Education, 19, 157–164. Dykman, C. A., & Davis, C. K. (2008b). Online education forum part three – a quality online educational experience. Journal of Information Systems Education, 19, 281–289. Engwall, L. (2007). The anatomy of management education. Scandinavian Journal of Management, 23, 4–35. doi:10.1016/j.scaman.2006.12.003 Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students' perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–235. doi:10.1111/j.1540-4609.2006.00114.x Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 87–105. doi:10.1016/S1096-7516(00)00016-6 Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues and future directions. The Internet and Higher Education, 10(3), 157–172. doi:10.1016/j.iheduc.2007.04.001 Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education, 7, 95–105. doi:10.1016/j.iheduc.2004.02.001 Gibson, S. G., Harris, M. L., & Colaric, S. M. (2008). Technology acceptance in an academic context: Faculty acceptance of online education. Journal of Education for Business, 83, 355–359. doi:10.3200/JOEB.83.6.355-359
Grandzol, J. R., & Grandzol, C. J. (2006). Best practices for online business education. International Review of Research in Open and Distance Learning, 7(1), 1–17. Gratton-Lavoie, C., & Stanley, D. (2009). Teaching and learning of principles of microeconomics online: An empirical assessment. The Journal of Economic Education, 40(2), 3–25. doi:10.3200/JECE.40.1.003-025 Grzeda, M., & Miller, G. E. (2009). The effectiveness of an online MBA program in meeting mid-career student expectations. Journal of Educators Online, 6(2). Retrieved November 10, 2009, from http://www.thejeo.com/Archives/Volume6Number2/GrzedaandMillerPaper.pdf Hativa, N., & Marincovich, M. (Eds.). (1995). New directions for teaching and learning - disciplinary differences in teaching and learning: Implications for practice. San Francisco, CA: Jossey-Bass. Heckman, R., & Annabi, H. (2006). How the teacher's role changes in online case study discussions. Journal of Information Systems Education, 17, 141–150. Hwang, A., & Arbaugh, J. B. (2006). Virtual and traditional feedback-seeking behaviors: Underlying competitive attitudes and consequent grade performance. Decision Sciences Journal of Innovative Education, 4, 1–28. doi:10.1111/j.1540-4609.2006.00099.x Hwang, A., & Arbaugh, J. B. (2009). Seeking feedback in blended learning: Competitive versus cooperative student attitudes and their links to learning outcome. Journal of Computer Assisted Learning, 25, 280–293. doi:10.1111/j.1365-2729.2009.00311.x Hwang, A., & Francesco, A. M. (2010). The influence of individualism-collectivism and power distance on use of feedback channels and consequences for learning. Academy of Management Learning & Education, 9, 243–257.
Jonassen, D., Davidson, M., Collins, M., Campbell, J., & Haag, B. B. (1995). Constructivism and computer-mediated communication in distance education. American Journal of Distance Education, 9(2), 7–26. doi:10.1080/08923649509526885 Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional design. Mahwah, NJ: Erlbaum. Jones, R., Moeeni, F., & Ruby, P. (2005). Comparing Web-based content delivery and instructor-led learning in a telecommunications course. Journal of Information Systems Education, 16, 265–271. Julian, S. D., & Ofori-Dankwa, J. C. (2006). Is accreditation good for the strategic decision making of traditional business schools? Academy of Management Learning & Education, 5, 225–233. Kellogg, D. L., & Smith, M. A. (2009). Student-to-student interaction revisited: A case study of working adult business students in online courses. Decision Sciences Journal of Innovative Education, 7, 433–456. doi:10.1111/j.1540-4609.2009.00224.x Khurana, R. (2007). From higher aims to hired hands: The social transformation of American business schools and the unfulfilled promise of management as a profession. Princeton, NJ: Princeton University Press. Kim, K.-J., Liu, S., & Bonk, C. J. (2005). Online MBA students' perceptions of online learning: Benefits, challenges and suggestions. The Internet and Higher Education, 8, 335–344. doi:10.1016/j.iheduc.2005.09.005 Klein, H. J., Noe, R. A., & Wang, C. (2006). Motivation to learn and course outcomes: The impact of delivery mode, learning goal orientation, and perceived barriers and enablers. Personnel Psychology, 59, 665–702. doi:10.1111/j.1744-6570.2006.00050.x
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago, IL: University of Chicago Press. Landry, B. J. L., Griffeth, R., & Hartman, S. (2006). Measuring student perceptions of Blackboard using the technology acceptance model. Decision Sciences Journal of Innovative Education, 4, 87–99. doi:10.1111/j.1540-4609.2006.00103.x Lattuca, L. R., & Stark, J. S. (1994). Will disciplinary perspectives impede curricular reform? The Journal of Higher Education, 65, 401–426. doi:10.2307/2943853 Leidner, D. E., & Jarvenpaa, S. L. (1995). The use of information technology to enhance management school education: A theoretical view. Management Information Systems Quarterly, 19, 265–291. doi:10.2307/249596 Liu, X., Magjuka, R. J., & Lee, S. (2006). An empirical examination of sense of community and its effects on students' satisfaction, perceived learning outcome, and learning engagement in online MBA courses. International Journal of Instructional Technology & Distance Learning, 3(7). Retrieved September 1, 2006, from http://www.itdl.org/Journal/Jul_06/article01.htm Lodahl, J. B., & Gordon, G. (1972). The structure of scientific fields and the functioning of university graduate departments. American Sociological Review, 37, 57–72. doi:10.2307/2093493 Marks, R. B., Sibley, S., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education, 29, 531–563. doi:10.1177/1052562904271199 Martins, L. L., & Kellermanns, F. W. (2004). A model of business school students' acceptance of a Web-based course management system. Academy of Management Learning & Education, 3, 7–26.
Merrill, M. D. (2001). Components of instruction toward a theoretical tool of instructional design. Instructional Science, 29, 291–310. doi:10.1023/A:1011943808888 Millson, M. R., & Wilemon, D. (2008). Educational quality correlates of online graduate management education. Journal of Distance Education, 22(3), 1–18. Murray, H. G., & Renaud, R. D. (1995). Disciplinary differences in teaching and learning: Implications for practice. New Directions for Teaching and Learning, 64, 31–39. doi:10.1002/tl.37219956406 Navarro, P. (2008). The core curricula of top-ranked U.S. business schools: A study in failure? Academy of Management Learning & Education, 7, 108–123. Nemanich, L., Banks, M., & Vera, D. (2009). Enhancing knowledge transfer in classroom versus online settings: The interplay among instructor, student, content, and context. Decision Sciences Journal of Innovative Education, 7, 123–148. doi:10.1111/j.1540-4609.2008.00208.x Neumann, R. (2001). Disciplinary differences and university teaching. Studies in Higher Education, 26, 135–146. doi:10.1080/03075070120052071 Neumann, R., Parry, S., & Becher, T. (2002). Teaching and learning in their disciplinary contexts: A conceptual analysis. Studies in Higher Education, 27, 405–417. doi:10.1080/0307507022000011525 O'Toole, J. (2009). The pluralistic future of management education. In Armstrong, S. J., & Fukami, C. V. (Eds.), The SAGE handbook of management learning, education, and development (pp. 547–558). London, UK: SAGE Publications. Ozdemir, Z. D., Altinkemer, K., & Barron, J. M. (2008). Adoption of technology-mediated learning in the U.S. Decision Support Systems, 45, 324–337. doi:10.1016/j.dss.2008.01.001
Parthasarathy, M., & Smith, M. A. (2009). Valuing the institution: An expanded list of factors influencing faculty adoption of online education. Online Journal of Distance Learning Administration, 12(2). Retrieved October 15, 2009, from http://www.westga.edu/~distance/ojdla/summer122/parthasarathy122.html Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25, 260–276. doi:10.1177/0273475303257762 Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The interdependence of the factors influencing the perceived quality of the online learning experience: A causal model. Journal of Marketing Education, 29, 140–153. doi:10.1177/0273475307302016 Perreault, H., Waldman, L., Alexander, M., & Zhao, J. (2002). Overcoming barriers to successful delivery of distance-learning courses. Journal of Education for Business, 77, 313–318. doi:10.1080/08832320209599681 Popovich, C. J., & Neel, R. E. (2005). Characteristics of distance education programs at accredited business schools. American Journal of Distance Education, 19, 229–240. doi:10.1207/s15389286ajde1904_4 Reigeluth, C. M. (Ed.). (1983). Instructional design theories and models: An overview of their current status. Hillsdale, NJ: Erlbaum. Reiser, R. A., & Gagne, R. M. (1983). Selecting media for instruction. Englewood Cliffs, NJ: Instructional Technology. Rubin, R. S., & Dierdorff, E. C. (2009). How relevant is the MBA? Assessing the alignment of required curricula and required managerial competencies. Academy of Management Learning & Education, 8, 208–224.
Rungtusanatham, M., Ellram, L. M., Siferd, S. P., & Salik, S. (2004). Toward a typology of business education in the Internet age. Decision Sciences Journal of Innovative Education, 2, 101–120. doi:10.1111/j.1540-4609.2004.00040.x Saade, R. G., Tan, W., & Nebebe, F. (2008). Impact of motivation on intentions in online learning: Canada vs. China. Issues in Informing Science and Information Technology, 5, 137–147. Shea, P. (2007). Bridges and barriers to teaching online college courses: A study of experienced online faculty in thirty-six colleges. Journal of Asynchronous Learning Networks, 11(2), 73–128. Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–22. Shulman, L. S. (2005). Signature pedagogies in the professions. Daedalus, 134(3), 52–59. doi:10.1162/0011526054622015 Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7, 99–121. doi:10.1111/j.1540-4609.2008.00207.x Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. N. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning & Education, 9, 169–191. Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of Web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59, 623–664. doi:10.1111/j.1744-6570.2006.00049.x Smart, J. C., & Ethington, C. A. (1995). Disciplinary and institutional differences in undergraduate education goals: Implications for practice. New Directions for Teaching and Learning, 64, 49–57. doi:10.1002/tl.37219956408
Smeby, J.-C. (1996). Disciplinary differences in university teaching. Studies in Higher Education, 21(1), 69–79. doi:10.1080/03075079612331381467 Smith, G. G., Heindel, A. J., & Torres-Ayala, A. T. (2008). E-learning commodity or community: Disciplinary differences between online courses. The Internet and Higher Education, 11, 152–159. doi:10.1016/j.iheduc.2008.06.008 Stoel, L., & Lee, K. H. (2003). Modeling the effect of experience on student acceptance of Web-based course software. Internet Research: Electronic Networking Applications and Policy, 13, 364–374. doi:10.1108/10662240310501649 Terry, N. (2001). Assessing enrollment and attrition rates for the online MBA. T.H.E. Journal, 28(7), 64–68. Thompson, J. D., Hawkes, R. W., & Avery, R. W. (1969). Truth strategies and university organization. Educational Administration Quarterly, 5(2), 4–25. doi:10.1177/0013131X6900500202 Trank, C. Q., & Rynes, S. L. (2003). Who moved our cheese? Reclaiming professionalism in business education. Academy of Management Learning & Education, 2, 189–205. Van Patten, J., Chao, C., & Reigeluth, C. (1986). A review of strategies for sequencing and synthesizing instruction. Review of Educational Research, 56, 437–471. doi:10.3102/00346543056004437 Venkatesh, V., & Davis, F. D. (2000). Theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46, 186–204. doi:10.1287/mnsc.46.2.186.11926 Wan, Z., Fang, Y., & Neufeld, D. J. (2007). The role of information technology in technology-mediated learning: A review of the past for the future. Journal of Information Systems Education, 18, 183–192.
Whetten, D. A., Johnson, T. D., & Sorenson, D. L. (2009). Learning-centered course design. In Armstrong, S. J., & Fukami, C. V. (Eds.), The SAGE handbook of management learning, education, and development (pp. 254–270). London, UK: Sage. Williams, E. A., Duray, R., & Reddy, V. (2006). Teamwork orientation, group cohesiveness, and student learning: A study of the use of teams in online distance education. Journal of Management Education, 30, 592–616. doi:10.1177/1052562905276740
KEY TERMS AND DEFINITIONS

“Applied” Disciplines: Academic disciplines where the primary focus is the application of knowledge.
“Hard” Disciplines: Academic disciplines where general consensus has emerged amongst scholars regarding the field's dominant paradigms.
“Life” Disciplines: Academic disciplines that study living things.
Multi-Disciplinary Studies: Research studies that are comprised of samples that include courses from more than one academic discipline.
“Non-Life” Disciplines: Academic disciplines that study non-living things.
“Pure” Disciplines: Academic disciplines where the primary focus is the creation and acquisition of knowledge.
“Soft” Disciplines: Academic disciplines characterized by multiple competing paradigms as possible explanations of their phenomena of interest.
Chapter 2
Learning and Satisfaction in Online Communities of Inquiry
Zehra Akyol, Canada
D. Randy Garrison, University of Calgary, Canada
ABSTRACT

The purpose of this chapter is to explain the capability of the Community of Inquiry (CoI) framework as a research model to study student learning and satisfaction. The framework identifies three elements (social, cognitive, and teaching presence) that contribute directly to the success of an e-learning experience through the development of an effective CoI. It is argued that a CoI leads to higher learning and increased satisfaction. The chapter presents findings from two online courses designed using the CoI approach. Overall, the students in these courses had high levels of perceived learning and satisfaction, as well as actual learning outcomes.
INTRODUCTION

Online learning has reached a point where it has been accepted as an important alternative or enhancement to traditional face-to-face education. The changing needs and expectations of 21st century students and the advances in communication technologies are the main reasons for this development. However, there are still concerns about the quality of online learning programs, which raises the
question of how to evaluate the success of online learning. The literature points out two variables that have been studied extensively: learning and satisfaction. In order to increase the effectiveness of online learning programs, researchers have been exploring factors and issues affecting students’ learning and satisfaction in online environments as well as developing and applying strategies and theories to enhance their learning and satisfaction. In this chapter, an overview of the CoI framework as one promising theory to achieve higher levels
of learning and satisfaction is introduced along with supporting research.
BACKGROUND

An important line of research regarding learning online has been the exploration of the challenges and factors affecting the success of students' learning experiences. For example, Mingming and Evelyn (1999) found eleven factors significantly related to students' perceived learning:

• instructor-student interaction,
• instructor-student communication,
• instructor evaluation,
• instructor responses,
• student-student interaction,
• student-student communication,
• online discussion,
• written assignments,
• learning style,
• prior computer competency, and
• time spent on a course.
However, the most influential factors were students' perceived interaction with their instructor, followed by online discussion. Similarly, Eom, Wen and Ashill (2006) examined several factors, from course structure to self-motivation, as potential determinants of perceived learning outcomes and satisfaction in asynchronous online learning courses. The results showed that only two of them, learning style and instructor feedback, affected perceived learning outcomes. In terms of satisfaction with an online learning experience, however, there is less consensus. Researchers have identified a wide range of variables associated with satisfaction (Lin & Overbaugh, 2007; Martz, Reddy & Sangermano, 2004; Sahin, 2007; Sun, Tsai, Finger, Chen & Yeh, 2008). The common theme is that instructor support and interaction contribute significantly to learner satisfaction. Similarly, it has been shown that small
group interaction (Driver, 2002) or collaborative interaction (Jung, Choi, Lim & Leem, 2002; So & Brush, 2008) created higher levels of social presence and satisfaction. Researchers have also begun to investigate the relationship between students' perceived learning and satisfaction and a sense of community. This coincides with an increasing emphasis on community building in online learning environments. The research of Rovai (2002) provided evidence for the relationship between sense of community and perceived learning. He concluded that online learners who have a stronger sense of community feel less isolated and have greater satisfaction with their academic programs, which in turn results in fewer dropouts. Parenthetically, this link between satisfaction and retention was also found by Schreiner (2009). Harvey, Moller, Huett, Godshalk and Downs (2007) also investigated whether a stronger sense of community would lead to increased learning and productivity in asynchronous environments. They found that more peer interactions, as expressed by community comments, resulted in higher learning as evidenced by higher grades. In other words, learning occurred within the teams as they worked together to complete their projects. Many other studies have also confirmed the impact of community on students' learning and satisfaction in online environments (e.g., Ertmer & Stepich, 2004; Shea, 2006; Shea, Li, & Pickett, 2006; Liu, Magjuka, Bonk & Lee, 2007). Considering all the previous studies, the evidence suggests that a community of inquiry approach may lead to higher levels of learning and satisfaction. This is reinforced by Palloff and Pratt (2005), who indicate that creating and sustaining a community for online learning enhances student satisfaction and learning through community involvement. The potential of the Community of Inquiry (CoI) framework developed by Garrison, Anderson and Archer (2000) is derived from its ability to provide a comprehensive look at how
learning occurs in online learning contexts. Most of the above variables influencing learning and satisfaction are taken into consideration by the elements of a community of inquiry. Garrison and Cleveland-Innes (2004) claim that when all three elements of a learning community (social, cognitive, and teaching presence) are integrated harmoniously in a way that supports critical discourse and reflection, then satisfaction and success result. Other studies using the CoI framework also provide evidence for the impact of social, cognitive and teaching presence on learning and satisfaction (e.g., Richardson & Swan, 2003; Shea, Pickett & Pelz, 2003, 2004; Akyol & Garrison, 2008, 2011; Shea & Bidjerano, 2009). The purpose of this chapter is to explore the inherent capability of the CoI framework to enhance students' learning and satisfaction by focusing on the role of each element (social, cognitive and teaching presence). In arguing that developing an effective community of inquiry leads to higher learning and increased satisfaction, the findings from two online courses designed by applying the CoI framework will be presented.
COMMUNITY OF INQUIRY FRAMEWORK

An online learning community is valuable as it serves social needs as well as enhancing student satisfaction and learning through community involvement (Palloff & Pratt, 2005). The CoI framework embraces collaborative constructivist approaches as the basis of inquiry. The framework is comprised of three elements essential to this purpose: social presence, cognitive presence, and teaching presence. Together, the three elements support purposeful discourse and reflection (Garrison & Vaughan, 2008). The role of each presence in achieving high levels of learning and satisfaction is discussed next.
Social Presence

Garrison (2009) defines social presence as “the ability of participants to identify with the community (e.g., course of study), communicate purposefully in a trusting environment, and develop inter-personal relationships by way of projecting their individual personalities” (p. 352). The role of social presence is to establish relationships and a sense of belonging in order to create the climate for open communication and support critical thinking in a community of learners (Garrison & Anderson, 2003). The research of Shea and Bidjerano (2009) also suggests that it is crucial to assist learners to gain comfort and confidence in the online discussion format in order to foster cognitive activity. Similarly, Harvey et al. (2007) emphasize the social aspect of asynchronous communication for both the quality of the learning experience and the quality of group work. The students in their study reported building self-confidence through independent and collaborative research, and pride in their teams' efforts and outcomes resulted in a sense of satisfaction among members of the learning communities. Moreover, the students also expressed a desire for more collaborative work (Harvey et al., 2007). Many other studies have also provided evidence for the relationship between social presence and learning and/or satisfaction (e.g., Gunawardena & Zittle, 1997; Picciano, 2002; Tu & McIsaac, 2002; Richardson & Swan, 2003; Swan & Shih, 2005; Akyol & Garrison, 2008, 2011; Boston, Diaz, Gibson, Ice, Richardson & Swan, 2009). The more comfortable students feel and the stronger their sense of belonging to a group, the higher the levels of satisfaction and learning that can be expected. In this regard, instructional design that supports the development of social presence is crucial (Swan & Shih, 2005; Shea & Bidjerano, 2009). The CoI framework helps instructional designers and online instructors by illuminating the way social presence develops and how it interacts with the other presences. Social presence in an online learning environment usually starts with more
affective expression (i.e., self-disclosure) and evolves to group cohesion through continuous open communication (Akyol & Garrison, 2008).
Teaching Presence

Instructors play key roles in students' learning in both traditional face-to-face and online learning environments. However, in online learning students may need more instructional guidance from the instructor (Akyol, Garrison, & Ozden, 2009). In a recent study conducted by Paechter, Maier and Macher (2010), it was found that students experience the instructor's support and expertise as being especially important for the acquisition of knowledge and skills, and for course satisfaction in online learning. Anderson (2004) identifies the qualities that define an excellent online teacher. First, he proposes having sufficient technical skills to navigate and contribute effectively within the online learning context. Next, he emphasizes developing a sense of trust and safety so that learners will not feel uncomfortable and constrained in posting their thoughts and comments. Finally, an effective online teacher must have resilience, innovativeness, and perseverance. It is clear that teaching online represents a new challenge that requires a new set of responsibilities and roles. The teaching presence construct defined within the context of the CoI framework speaks to these qualities. Teaching presence addresses the design, facilitation, and direction of cognitive and social processes to support and enhance a meaningful learning experience. Teaching presence includes the possibility that any participant could assume the associated responsibilities. As such, Garrison and Anderson (2003) emphasize sharing the roles and responsibilities of a teacher among students. This has been supported by the students in studies by Rourke and Anderson (2002) and Akyol, Garrison and Ozden (2009). Teaching presence has a regulatory and mediating role in creating an effective community of inquiry by balancing social and cognitive processes congruent with the intended learning outcomes and the needs and capabilities of the learners (Garrison & Anderson, 2003). This critical role has been confirmed by other research (e.g., Shea et al., 2006; Brook & Oliver, 2007). It has been found that instructors who develop strong practices in terms of establishing a reason and context for communication, enabling communication, supporting communication, and moderating communication are likely to support community development (Brook & Oliver, 2007). Overall, students value frequent feedback from the instructor and find it important for improving the quality of online learning. In addition to the relationship between sense of community and teaching presence, the relationship between teaching presence and students' learning and satisfaction has also been evidenced in the literature (Baker, 2004; Shea, Pickett, & Pelz, 2003, 2004; Akyol & Garrison, 2008).
Cognitive Presence

The ultimate purpose of an educational community of inquiry is to create an intellectual environment that supports sustained critical discourse and higher-order knowledge acquisition and application (Garrison & Anderson, 2003). To a large degree, social and teaching presence facilitate the creation of a community for the purpose of sustaining cognitive presence through practical inquiry. As Harvey et al. (2007) indicated, it is difficult to imagine that learners would engage in substantive and rich conversations without the feelings of acceptance that a community provides. The community provides the required emotional and leadership support through social and teaching presence in order to develop high levels of cognitive presence. The CoI framework is a process model. This is perhaps best reflected within the cognitive presence element. Cognitive presence comprises the progressive phases of practical inquiry leading to the resolution of a problem or dilemma. It is developmental in nature, starting with a triggering event and aiming to reach resolution. In order to achieve deep and meaningful learning, it is important to engage learners in the inquiry process. Cognitive presence is described as a condition of higher-order thinking and learning (Garrison & Anderson, 2003). Akyol and Garrison (2011) revealed this close relationship when they found high levels of cognition (i.e., integration) as well as high levels of perceived learning, satisfaction, and actual grades. Engaging students in high levels of cognitive inquiry requires skillful marshalling of teaching and social presence (Shea & Bidjerano, 2009). In order to develop an effective community of inquiry in an online learning environment, the integration of the elements should be designed, facilitated, and directed based on the purpose, participants, and technological context of the learning experience (Akyol & Garrison, 2008). Establishing social presence is one of the first and most important challenges for instructors and instructional designers, as it is a precondition to establishing a collaborative learning experience. Keeping this in mind, special efforts must be made to allow participants to introduce themselves in the first session; through the use of chat rooms, collaborative assignments, and discourse, social presence can be sustained over time (Garrison & Anderson, 2003). When group cohesion and trust are strong, the transition through the phases of cognitive presence will also be easier. Garrison and Anderson (2003) also suggest dividing the group into smaller groups for discussion to support cognitive presence and social presence. Learning activities and assessment should be congruent with the learning outcomes to enhance cognitive presence. Most of these practical guidelines are associated with the design and organization aspect of teaching presence. Throughout the course, the facilitating discourse and direct instruction aspects of teaching presence can be shared between the instructor and the students, depending on the level of the students. This strategy encourages students to
monitor and manage their learning by increasing their metacognitive awareness and learning how to learn (Garrison & Anderson, 2003). The value of the CoI framework for studying perceived learning and satisfaction is demonstrated in the study described next.
RESEARCH
In this section, we present the results of a study that investigated students' learning and satisfaction. The study used the CoI framework to design and develop two online courses.
Methodology
The purpose of this research was to examine students' learning and satisfaction levels in online communities of inquiry. The main research question guiding this study was whether the community of inquiry approach can create an effective online learning environment that supports high levels of learning and satisfaction. The context of the research was a graduate-level online course offered in the fall and spring terms at a large research-based university. In both courses, learning activities, strategies and assessment techniques were all developed to reflect social, cognitive and teaching presence. Both the instructor and the topic of the courses were the same; only the duration of the two courses differed. The major assignments in both courses were article critiques and peer reviews, weekly online discussions, and prototype course redesign projects. The instructor modeled how to facilitate the discussions in the first online discussion; in the remaining weeks, the students facilitated and directed the discussions, both to take more responsibility for their learning and to distribute teaching presence between the instructor and the students. The instructor's modeling of effective facilitation also encouraged the development of cognitive presence and social presence in both online courses.
Table 1. The indicators of categories of the CoI elements

Element              Categories               Indicators
Social Presence      Open Communication       Learning climate/risk-free expression
                     Group Cohesion           Group identity/collaboration
                     Personal/Affective       Self projection/expressing emotions
Teaching Presence    Design & Organization    Setting curriculum & methods
                     Facilitating Discourse   Shaping constructive exchange
                     Direct Instruction       Focusing and resolving issues
Cognitive Presence   Triggering Event         Sense of puzzlement
                     Exploration              Information exchange
                     Integration              Connecting ideas
                     Resolution               Applying new ideas
For example, social presence was created by a warm welcome from the instructor in the first synchronous meeting (through Elluminate) and was reinforced via students' home pages and collaborative activities throughout the course. Cognitive presence was created and sustained when students felt comfortable expressing and sharing their ideas in order to construct the knowledge and skills needed for their article critique assignment and course redesign prototype project. There were 36 students (12 males and 24 females) enrolled in the two online courses. The data used to explore learning and satisfaction in the community of inquiry developed in each online course were obtained through transcript analysis of the online discussions and the CoI Survey (Arbaugh, Cleveland-Innes, Diaz, Garrison, Ice, Richardson, Shea, & Swan, 2008). Transcript analysis was applied in order to code and explore posting patterns of social presence, teaching presence and cognitive presence based on the category indicators defined in the CoI framework (Garrison & Anderson, 2003). Social presence was analyzed in the transcripts by coding for affective expression, open communication and group cohesion. Teaching presence was coded for design and organization, facilitating discourse, and direct instruction. Cognitive presence was coded using the indicators of the four phases of the Practical Inquiry model: triggering
event, exploration, integration and resolution (see Table 1). In total, eight weeks of discussions in the fall course and four weeks of discussions in the spring course were analyzed. The first author, with a research assistant, conducted the transcript analysis of the fall term course after receiving training and conducting pilot coding. Initial inter-rater reliability was .75 for the pilot coding. The researchers coded the transcripts separately and then actively discussed coding differences in order to reach a consensus. This negotiated coding strategy increases reliability by allowing refinement of the coding scheme and controlling for simple errors (Garrison, Cleveland-Innes, Koole & Kappelman, 2006). One hundred percent agreement was reached on each online discussion in the fall term course. After gaining experience with the discussions in the fall course, the first author analyzed the discussions in the spring course alone. The Community of Inquiry (CoI) survey was administered at the end of each course to assess students' perceptions of each constituent element (presence) of the CoI framework as well as their perceived learning and satisfaction. Cronbach's alpha was 0.94 for teaching presence, 0.91 for social presence, and 0.95 for cognitive presence (Arbaugh et al., 2008). Thirty students (15 from each course) completed the survey. In addition to students' self-reports of learning and satisfaction, their final grades were also used in the research to provide a better view of learning and satisfaction in the online communities of inquiry.
Table 2. The percentages of the messages that include the indicators of the CoI elements

                     Social Presence   Teaching Presence   Cognitive Presence
Fall term course     94.1%             53.8%               90.2%
Spring term course   90.2%             57.1%               79.4%
The final grades comprised the article critique assignment (25%), online discussion activity (25%), and the course redesign prototype project (50%).
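The two reliability figures reported above, percent agreement between coders and Cronbach's alpha for the survey scales, are straightforward to compute. The following is a minimal sketch in Python, not the authors' code; the category codes and item columns are hypothetical toy data.

```python
import numpy as np
import pandas as pd

def percent_agreement(coder_a, coder_b):
    """Share of messages the two coders assigned to the same category."""
    a, b = np.asarray(coder_a), np.asarray(coder_b)
    return np.mean(a == b)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = (k/(k-1)) * (1 - sum of item variances / variance of scale total)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical example: two coders' codes for ten messages, and a toy
# three-item Likert scale answered by six respondents.
codes_a = ["TE", "EX", "EX", "IN", "IN", "IN", "RE", "EX", "IN", "TE"]
codes_b = ["TE", "EX", "IN", "IN", "IN", "IN", "RE", "EX", "IN", "TE"]
print(percent_agreement(codes_a, codes_b))  # 0.9 (9 of 10 messages match)

survey = pd.DataFrame({"tp1": [4, 5, 3, 4, 5, 4],
                       "tp2": [4, 4, 3, 5, 5, 4],
                       "tp3": [5, 5, 2, 4, 5, 3]})
print(cronbach_alpha(survey))
```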
Findings
The results of the transcript analysis indicated that a community of inquiry developed in each course. Table 2 shows the percentages of the messages that included the indicators of social presence, teaching presence and cognitive presence. This result could be attributed to the instructional design: applying the CoI approach in both courses to the design of the instructional strategies, methods, and learning activities supported the development of social presence, teaching presence and cognitive presence. Akyol, Vaughan and Garrison (in press) conducted further analysis to see whether there were developmental differences between the courses due to the difference in course duration. They found significant differences on the categories of each presence, generally favoring the course with the longer duration (i.e., it takes time to reach higher levels of inquiry). However, this is not the focus of this paper. The focus here was on whether a community of inquiry developed in each course, which was confirmed by the transcript analysis results summarized in Table 2. The primary question here is the level of perceived learning and satisfaction developed in each course. The descriptive analysis of the CoI Survey also showed that students' perceptions of each presence were high in both courses,
confirming the transcript analysis finding that students could sense each element of the CoI (see Table 3). The transcript analysis also showed that some categories of the CoI elements occurred more frequently than others in both courses. The open communication category of social presence (occurring as continuing a thread, quoting from and referring to others' messages, asking questions, complimenting, or expressing agreement/disagreement) was coded most frequently in both courses. In terms of cognitive presence, the integration phase was the most frequently reached cognitive level in both courses, which is contrary to most previous studies (e.g., Garrison, Anderson & Archer, 2000; McKlin, Harmon, Evans & Jones, 2002; Meyer, 2003; Pawan, Paulus, Yalcin & Chang, 2003; Vaughan & Garrison, 2005; Kanuka, Rourke & Laflamme, 2007). The frequency of teaching presence categories differed between the two courses: in the fall term online course the direct instruction category was coded most frequently, whereas facilitating discourse was highest in the spring term online course. This difference could be attributed to the difference in course duration, in that students in the spring term online course might have needed more time to perform direct instruction activities such as sharing and injecting knowledge from diverse sources. In both courses, students' perceived learning and satisfaction were also high, indicating that students agreed that they learned a lot in the course and were satisfied with it. Consistent with the high perception of learning, students' grades were also high in both online courses. Students' final grades in both courses were essentially identical.
Table 3. Students’ perceptions of each presence and learning and satisfaction in online courses Social Presence
Teaching Presence
Cognitive Presence
Perceived Learning
Satisfaction
Fall term course
3.94
4.15
4.07
4.2
4.47
Spring term course
4.06
4.63
4.24
4.67
4.87
The means were 94.2 for the fall term online course and 92.1 for the spring term online course. This also indicates that students successfully completed their article critique assignments, attended the online discussions, and applied their course redesign ideas and the knowledge they gained to their projects. Overall, these results suggest that a community of inquiry is associated with the level of students' perceived learning and satisfaction. As Akyol and Garrison (2011) indicate, the strength of the CoI framework is its emphasis on collaborative constructivist approaches for designing online learning environments to achieve deep and meaningful learning. The study of Benbunan-Fich and Arbaugh (2006) also confirmed the ascendancy of collaborative constructivist approaches: the authors found evidence to suggest that group collaboration and knowledge construction can potentially improve students' perceived learning and final grades. The findings here also provide evidence for the importance of a community of inquiry in achieving student satisfaction and learning outcomes in an e-learning environment (Akyol & Garrison, 2011).
FUTURE RESEARCH DIRECTIONS
Notwithstanding the considerable research that has reported a relationship between satisfaction and perceived learning, there is a need for additional research that focuses on large-scale studies conducted within a comprehensive theoretical framework. The framework that has shown considerable promise in exploring the complexities
of e-learning is the CoI framework (Garrison & Arbaugh, 2007). Perhaps the first challenge is to better understand the role that a sense of community plays in student satisfaction and the quality of learning outcomes. Another topic worthy of further research is understanding the dynamics of a community of inquiry that engage learners and ensure that they achieve the intended learning outcomes (Akyol & Garrison, 2008). With regard to perceived and actual learning, much more research needs to be done on cognitive presence and on understanding the pedagogical challenges of ensuring that learners complete the inquiry cycle. Metacognition must also play an important role in the inquiry process and should be studied. Finally, it has been shown that social presence plays an important mediating function between teaching presence and cognitive presence (Garrison, Cleveland-Innes & Fung, 2010; Shea & Bidjerano, 2009). More research is needed to fully understand the nature of this relationship and the importance of social presence in a community of inquiry. In terms of using the CoI framework to conduct large-scale studies, a group of researchers has developed the instrument reported previously, which can be administered to measure each of the presences of a community of inquiry efficiently and validly (Arbaugh et al., 2008). This instrument can be used to design large-scale studies that explore a wide range of topics relevant to e-learning across institutions and disciplines. Such studies will be essential if we are to understand the factors that contribute to the success of online course delivery systems and to guide institutions in the adoption of e-learning approaches.
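To illustrate what pooled, multi-institutional use of such an instrument involves computationally, here is a hedged sketch of scoring survey responses into the three presence scales. The file name, item-column prefixes (tp, sp, cp), and institution column are assumptions for illustration, not part of the published instrument.

```python
import pandas as pd

# Hypothetical pooled sample: one row per respondent, Likert items in
# columns tp1..., sp1..., cp1..., plus an institution identifier.
df = pd.read_csv("coi_responses.csv")

scores = pd.DataFrame({
    "teaching_presence": df.filter(regex=r"^tp\d+$").mean(axis=1),
    "social_presence": df.filter(regex=r"^sp\d+$").mean(axis=1),
    "cognitive_presence": df.filter(regex=r"^cp\d+$").mean(axis=1),
})

# Cross-institution (or cross-discipline) comparisons then reduce to groupbys.
print(scores.join(df["institution"]).groupby("institution").mean())
```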
CONCLUSION
The main emphasis of the CoI framework is to create an effective community that enhances and supports learning and satisfaction. Building a learning community is valuable as it serves social needs as well as enhancing student satisfaction and learning through community involvement (Palloff & Pratt, 2005). Previous studies confirmed the effectiveness of the CoI framework for developing productive learning communities (Akyol & Garrison, 2008, 2009, 2011; Vaughan & Garrison, 2005). Shea and Bidjerano (2009) also emphasize that epistemic engagement, in which students are collaborative knowledge builders, is well articulated and extended through the CoI framework. We suggest that instructional designers and instructors can apply the CoI framework and approach to designing effective online learning environments for increased learning and satisfaction. However, one main consideration that should be taken into account is that all three presences are interrelated, and the establishment of one presence contributes to the establishment of the others (Shea & Bidjerano, 2009; Akyol & Garrison, 2008). Therefore, it is crucial that all the presences are considered in concert and in balance to support a collaborative community of inquiry.
REFERENCES
Akyol, Z., & Garrison, D. R. (2008). The development of a community of inquiry over time in an online course: Understanding the progression and integration of social, cognitive and teaching presence. Journal of Asynchronous Learning Networks, 12(3), 3–22.
Akyol, Z., & Garrison, D. R. (2011). Understanding cognitive presence in an online and blended community of inquiry: Assessing outcomes and processes for deep approaches to learning. British Journal of Educational Technology, 42(2), 233–250. doi:10.1111/j.1467-8535.2009.01029.x
Akyol, Z., Garrison, D. R., & Ozden, M. Y. (2009). Online and blended communities of inquiry: Exploring the developmental and perceptional differences. International Review of Research in Open and Distance Learning (IRRODL), 10(6), 65–83.
Akyol, Z., Vaughan, N., & Garrison, D. R. (in press). The impact of course duration on the development of a community of inquiry. Interactive Learning Environments.
Anderson, T. (2004). Teaching in an online learning context. In T. Anderson & F. Elloumi (Eds.), Theory & practice of online learning (pp. 173–194). Retrieved January 10, 2010, from http://cde.athabascau.ca/online_book/contents.html
Arbaugh, J. B., Cleveland-Innes, M., Diaz, S., Garrison, D. R., Ice, P., & Richardson, J. (2008). Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. The Internet and Higher Education, 11, 133–136. doi:10.1016/j.iheduc.2008.06.003
Baker, J. D. (2004). An investigation of relationships among instructor immediacy and affective and cognitive learning in the online classroom. The Internet and Higher Education, 7(1), 1–13. doi:10.1016/j.iheduc.2003.11.006
Benbunan-Fich, R., & Arbaugh, J. B. (2006). Separating the effects of knowledge construction and group collaboration in learning outcomes of Web-based courses. Information & Management, 43(6), 778–793. doi:10.1016/j.im.2005.09.001
Boston, W., Diaz, S. R., Gibson, A. M., Ice, P., Richardson, J., & Swan, K. (2009). An exploration of the relationship between indicators of the community of inquiry framework and retention in online programs. Journal of Asynchronous Learning Networks, 13(3), 67–83.
Brook, C., & Oliver, R. (2007). Exploring the influence of instructor actions on community development in online settings. In Lambropoulos, N., & Zaphiris, P. (Eds.), User-centered design of online learning communities. Hershey, PA: Idea Group.
Driver, M. (2002). Exploring student perceptions of group interaction and class satisfaction in the Web-enhanced classroom. The Internet and Higher Education, 5(1), 35–45. doi:10.1016/S1096-7516(01)00076-8
Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students' perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–235. doi:10.1111/j.1540-4609.2006.00114.x
Ertmer, P. A., & Stepich, D. A. (2004). Examining the relationship between higher-order learning and students' perceived sense of community in an online learning environment. Proceedings of the 10th Australian World Wide Web Conference, Gold Coast, Australia.
Garrison, D. R. (2009). Communities of inquiry in online learning: Social, teaching and cognitive presence. In Howard, C. (Ed.), Encyclopedia of distance and online learning (2nd ed., pp. 352–355). Hershey, PA: IGI Global.
Garrison, D. R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. London, UK: Routledge/Falmer. doi:10.4324/9780203166093
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87–105. doi:10.1016/S1096-7516(00)00016-6
Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. The Internet and Higher Education, 10(3), 157–172. doi:10.1016/j.iheduc.2007.04.001
Garrison, D. R., & Cleveland-Innes, M. (2004). Critical factors in student satisfaction and success: Facilitating student role adjustment in online communities of inquiry. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Into the mainstream - volume 5 in the Sloan-C series (pp. 29–38). Needham, MA: Sloan Center for Online Education.
Garrison, D. R., Cleveland-Innes, M., & Fung, T. S. (2010). Exploring causal relations among teaching, cognitive and social presence: A holistic view of the community of inquiry framework. The Internet and Higher Education, 13(1-2), 31–36. doi:10.1016/j.iheduc.2009.10.002
Garrison, D. R., Cleveland-Innes, M., Koole, M., & Kappelman, J. (2006). Revisiting methodological issues in the analysis of transcripts: Negotiated coding and reliability. The Internet and Higher Education, 9(1), 1–8. doi:10.1016/j.iheduc.2005.11.001
Gunawardena, C. N., & Zittle, F. (1997). Social presence as a predictor of satisfaction within a computer mediated conferencing environment. American Journal of Distance Education, 11(3), 8–25. doi:10.1080/08923649709526970
Harvey, D., Moller, L. A., Huett, J. B., Godshalk, V. M., & Downs, M. (2007). Identifying factors that affect learning community development and performance in asynchronous distance education. In Luppicini, R. (Ed.), Online learning communities (pp. 169–187). Charlotte, NC: Information Age Publishing.
Hong, K.-S. (2002). Relationships between students' and instructional variables with satisfaction and learning from a Web-based course. The Internet and Higher Education, 5(3), 267–281. doi:10.1016/S1096-7516(02)00105-7
Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of different types of interaction on learning achievement, satisfaction and participation in Web-based instruction. Innovations in Education and Teaching International, 39(2), 153–162. doi:10.1080/14703290252934603
Kanuka, H., Rourke, L., & Laflamme, E. (2007). The influence of instructional methods on the quality of online discussion. British Journal of Educational Technology, 38(2), 260–271. doi:10.1111/j.1467-8535.2006.00620.x
Lin, S. Y., & Overbaugh, R. C. (2007). The effect of student choice of online discussion format on tiered achievement and student satisfaction. Journal of Research on Technology in Education, 39(4), 399–415.
Liu, X., Magjuka, R. J., Bonk, C. J., & Lee, S.-H. (2007). Does sense of community matter? An examination of participants' perceptions of building learning communities in online courses. Quarterly Review of Distance Education, 8(1), 9–24.
Martz, B., Reddy, V. K., & Sangermano, K. (2004). Looking for indicators of success for distance education. In Howard, C., Schenk, K., & Discenza, R. (Eds.), Distance learning and university effectiveness: Changing educational paradigms for online learning (pp. 144–160). Hershey, PA: Information Science Publishing. doi:10.4018/9781591401780.ch007
McKlin, T., Harmon, S. W., Evans, W., & Jones, M. G. (2002). Cognitive presence in Web-based learning: A content analysis of students' online discussions. American Journal of Distance Education, 15(1), 7–23.
Meyer, K. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55–65.
Mingming, J., & Evelyn, T. (1999). A study of students' perceived learning in a Web-based online environment. In Proceedings of WebNet 99 World Conference on WWW and Internet, Honolulu, Hawaii.
Paechter, M., Maier, B., & Macher, D. (2010). Students' expectations of, and experiences in e-learning: Their relation to learning achievements and course satisfaction. Computers & Education, 54(1), 222–229. doi:10.1016/j.compedu.2009.08.005
Palloff, R. M., & Pratt, K. (2005). Collaborating online: Learning together in community. San Francisco, CA: Jossey-Bass.
Pawan, F., Paulus, T. M., Yalcin, S., & Chang, C. F. (2003). Online learning: Patterns of engagement and interaction among in-service teachers. Language Learning & Technology, 7(3), 119–140.
Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21–40.
Richardson, J. C., & Swan, K. (2003). Examining social presence in online courses in relation to students' perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1), 68–88.
Rourke, L., & Anderson, T. (2002). Using peer teams to lead online discussion. Journal of Interactive Media in Education, 1.
Rovai, A. P. (2002). Sense of community, perceived cognitive learning, and persistence in asynchronous learning networks. The Internet and Higher Education, 5(4), 319–332. doi:10.1016/S1096-7516(02)00130-6
Sahin, I. (2007). Predicting student satisfaction in distance education and learning environments. Turkish Online Journal of Distance Education, 8(2), 113–119. Retrieved September 23, 2009, from http://tojde.anadolu.edu.tr/tojde26/pdf/article_9.pdf
Schreiner, L. A. (2009). Linking student satisfaction with retention. Retrieved January 19, 2010, from https://www.noellevitz.com/NR/rdonlyres/A22786EF-65FF-4053-A15A-CBE145B0C708/0/LinkingStudentSatis0809.pdf
Shea, P. (2006). A study of students' sense of learning community in online environments. Journal of Asynchronous Learning Networks, 10(1), 35–44.
Shea, P., & Bidjerano, T. (2009). Community of inquiry as a theoretical framework to foster epistemic engagement and cognitive presence in online education. Computers & Education, 52(3), 543–553. doi:10.1016/j.compedu.2008.10.007
Shea, P., Li, C. S., & Pickett, A. (2006). A study of teaching presence and student sense of learning community in fully online and Web-enhanced college courses. The Internet and Higher Education, 9(3), 175–190. doi:10.1016/j.iheduc.2006.06.005
Shea, P. J., Pickett, A. M., & Pelz, W. E. (2003). A follow-up investigation of teaching presence in the SUNY learning network. Journal of Asynchronous Learning Networks, 7(2), 61–80.
Shea, P. J., Pickett, A. M., & Pelz, W. E. (2004). Enhancing student satisfaction through faculty development: The importance of teaching presence. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Into the mainstream - volume 5 in the Sloan-C series (pp. 39–59). Needham, MA: Sloan Center for Online Education.
So, H.-J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education, 51(1), 318–336. doi:10.1016/j.compedu.2007.05.009
Sun, P.-C., Tsai, R. J., Finger, G., Chen, Y.-Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education, 50(4), 1183–1202. doi:10.1016/j.compedu.2006.11.007
Swan, K., & Shih, L. F. (2005). On the nature and development of social presence in online course discussions. Journal of Asynchronous Learning Networks, 9(3), 115–136.
Tu, C. H., & McIsaac, M. (2002). The relationship of social presence and interaction in online classes. American Journal of Distance Education, 16(3), 131–150. doi:10.1207/S15389286AJDE1603_2
Vaughan, N., & Garrison, D. R. (2005). Creating cognitive presence in a blended faculty development community. The Internet and Higher Education, 8(1), 1–12. doi:10.1016/j.iheduc.2004.11.001
KEY TERMS AND DEFINITIONS
A Community of Inquiry: A community where individual experiences and ideas are recognized and discussed in light of societal knowledge, norms, and values.
Cognitive Presence: The extent to which learners are able to construct and confirm meaning through sustained reflection and discourse (Garrison, Anderson, & Archer, 2001).
Online Learning: A method of learning delivered by using asynchronous and synchronous communication technologies.
Perceived Learning: Self-evaluation of the amount of learning that students gained.
Satisfaction: An affective outcome indicating positive feelings and attitudes towards the quality of learning and the learning environment.
Social Presence: The ability of participants to identify with the community (e.g., course of study), communicate purposefully in a trusting environment, and develop inter-personal relationships by way of projecting their individual personalities.
Teaching Presence: The design, facilitation, and direction of cognitive and social processes for the purpose of realizing personally meaningful and educationally worthwhile learning outcomes.
Section 2
Empirical Research Methods and Tutorial
Chapter 3
A Review of Research Methods in Online and Blended Business Education: 2000-2009
J. B. Arbaugh University of Wisconsin Oshkosh, USA Alvin Hwang Pace University, USA Birgit Leisen Pollack University of Wisconsin Oshkosh, USA
ABSTRACT
This review of the online teaching and learning literature in business education found growing sophistication in analytical approaches over the last 10 years. The authors of this chapter believe researchers are uncovering important findings from the large number of predictors, control variables, and criterion variables examined. Scholars are employing appropriate and increasingly sophisticated techniques, such as structural equation models (used in 16 recent studies), within field settings. To increase methodological rigor, researchers need to consciously incorporate control variables that are known to influence the criterion variables of interest, so as to clearly partial out the influence of their predictor variables of interest. This will help address shortcomings arising from the inability to convince sample respondents such as instructors, institutional administrators, and graduate business students of the benefits versus the costs of a fully randomized design approach.
DOI: 10.4018/978-1-60960-615-2.ch003
INTRODUCTION
As blended and online teaching and learning become increasingly ubiquitous in business education, the pace of research examining these phenomena has accelerated dramatically during the last decade (Arbaugh, Godfrey, Johnson, Leisen Pollack, Niendorf, & Wresch, 2009). However, for the findings of this research to appropriately inform the practice of online business education, the studies should be methodologically rigorous. Recent reviews and studies of online teaching and learning research suggest that the general methodological quality of this research stream varies widely and often is lacking (Bernard, Abrami, Lou, & Borokhovski, 2004; Bernard, Abrami, Lou, Borokhovski, Wade, Wozney, Wallet, Fiset, & Huang, 2004; Means, Toyama, Murphy, Bakia, & Jones, 2009). Concerns about methodological quality led Bernard et al. (2009) to include a quality check in their recent meta-analysis of comparative studies of differing types of distributed education. Such reviews have under-reported studies of business education, in part because business education scholars often have not used experimental research designs with random assignment of subjects in designing their studies. However, because online business education scholars typically examine actual online courses, where administrators and students rather than the researchers control the composition of the research samples, such designs often are not feasible. As the flexibility of the delivery medium is one of the prime attractions of online business courses for part-time MBA students whose priority is continuing in their jobs while getting their education (Arbaugh, 2005b; Dacko, 2001; Millson & Wilemon, 2008), it is unlikely that any institution would willingly randomly assign a student to an online or classroom-based section for the convenience of researchers. Thus, researchers who typically do not have the option of subject randomization but who are able to identify important background influences that could affect their subjects may consciously
include such influences as covariates along with their independent variables of interest within their field research designs. The use of such covariates is the only real practical design alternative in the field, as most university administrators would reject a fully randomized design, especially administrators from the institutions where the majority of online instruction in business education is taught, master's comprehensive-level institutions (Alexander, Perrault, Zhao, & Waldman, 2009; Popovich & Neel, 2005), whose primary focus is on meeting education needs rather than research priorities. This field characteristic requires business education researchers to incorporate research designs that provide the advantages of randomized experiments as much as possible but without compromising student access or program/course offerings.
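As a concrete illustration of this covariate strategy, the sketch below contrasts a naive model with one that partials out background influences before interpreting the delivery-format effect. It is a generic example, not any reviewed study's analysis; the file and variable names (exam_score, delivery_format, gpa, age, prior_online_courses) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("course_records.csv")  # hypothetical field data

# Without covariates: the format effect is confounded with whoever
# self-selects into online sections.
naive = smf.ols("exam_score ~ C(delivery_format)", data=df).fit()

# With covariates: the format coefficient is now net of known
# background influences.
adjusted = smf.ols(
    "exam_score ~ C(delivery_format) + gpa + age + prior_online_courses",
    data=df,
).fit()
print(adjusted.summary())
```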
MAIN FOCUS OF THE CHAPTER
The purpose of this chapter is to examine and assess the state of research methods used in studies of online and blended learning in the business disciplines, with the intent of assessing the field, recommending adjustments to current research approaches, and identifying opportunities for meaningful future research. We review research from the business disciplines of Accounting, Economics, Finance, Information Systems (IS), Management, Marketing, and Operations/Supply Chain Management over the first decade of the 21st century. It is our hope that the review will help those interested in evidence-based course design and delivery identify exemplary studies and help future scholars raise the overall quality of this research in the business disciplines. We also hope that online teaching and learning scholars in other disciplines might use our approach to conduct methods reviews in their respective fields, thereby helping to inform the broader research community of appropriate research design and conduct.
Table 1. Terms used in the literature search

Disciplines: Management; Finance; Accounting; Marketing; Information Systems; Operations/Supply Chain Management; Economics
Search terms for "on-line": Blended; Mediated; Technology-mediated; Distance; On-line; Virtual; Web-based; E; Cyberspace; Computer Based; Computer Assisted; Distributed
Search terms for "learning": Education; Learning; Teaching; Instruction
RESEARCH METHODS REVIEW PROTOCOL
This paper is developed from a subset of a broader literature review of articles in business education that examined virtual learning environments, including both fully and partially online course content and participant interaction. A comprehensive search for peer-reviewed articles pertaining to "on-line learning" in business courses that were published after January 1, 2000, was conducted between September 2006 and November 2009. Databases examined in the review included ABI/Inform, Business Full Text, Business Source Elite, and Lexis/Nexis Business. Terminology used in the search is provided in Table 1. To supplement this review, the primary learning and education journals for each business discipline dating back to 2000, as identified in the journals database recently published in Academy of Management Learning & Education, were included in the review (Whetten, 2008). Given our interests in possible alternatives to exclusive dependence upon randomized experiments, we cast a wide net to include studies that examined virtual learning environments where the course content and participant interaction is conducted at least partially online. This protocol identified 120 articles that empirically examined online and/or blended learning in business and management education.
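To make the search protocol concrete, the vocabularies in Table 1 can be crossed mechanically into phrase queries. The short sketch below is illustrative only, with abridged term lists.

```python
from itertools import product

# Abridged from Table 1; the full protocol crosses every term pair.
online_terms = ["blended", "distance", "on-line", "virtual", "web-based", "e"]
learning_terms = ["education", "learning", "teaching", "instruction"]

queries = [f'"{o} {l}"' for o, l in product(online_terms, learning_terms)]
print(len(queries))   # 24 phrase queries from these abridged lists
print(queries[:3])    # ['"blended education"', '"blended learning"', '"blended teaching"']
```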
OBSERVATIONS ON RESEARCH METHODS AND ANALYTICAL APPROACHES
Research Designs Used
Two design types dominate our research sample: survey-based studies and quasi-experimental comparative designs. Sixty-four of the studies used a survey-based data collection approach. Forty-three studies used quasi-experimental designs, with nearly all of these studies comprising comparisons of online class sections or students with classroom-based sections or students. Twenty-eight of the quasi-experimental studies used surveys to collect at least part of their data. Pure experimental designs were used in four studies, and interviews, archival studies, and multiple methods were the approaches used in three studies each.
Sample Sizes and Response Rates
Table 2 shows the number of studies in various sample size categories for each year of the review. As the table shows, the number of studies published annually increased sharply in 2002, and that increased activity continued throughout the decade. Many of the studies have at least moderately large sample sizes, with 74 studies having samples larger than 100. These larger samples also began to appear in earnest in 2002.
Table 2. Number of articles with sample sizes by year of publication

Year    1-50   51-100   101-200   201-500   501-1000   1000+   Total
2000      1       4        -         -          -         -       5
2001      -       -        1         1          -         -       2
2002      4       1        5         4          1         -      15
2003      1       3        -         4          1         -       9
2004      4       2        2         6          1         1      16
2005      4       3        2         1          4         1      15
2006      1       1        4         4          5         2      17
2007      2       3        2         1          3         -      11
2008      1       1        6         4          1         -      13
2009      1       3        3         3          1         -      11
Total    19      21       25        28         17         4     114
This suggests that securing appropriate statistical power became less of a concern as the decade progressed. Although the number of studies and the size of their samples increased throughout the decade, surprisingly there was a small but persistent stream of published studies with samples of less than 50. It should be noted, however, that some of these more recent studies are of faculty rather than students (i.e., Connally, Jones, & Jones, 2007; Liu, Bonk, Magjuka, Lee, & Su, 2005; Yukselturk & Top, 2005-2006). However, only seven studies mentioned attempts to control for non-response bias. Although we realize that many of the studies used entire class sections and therefore did not address non-response issues, the preponderance of survey-based studies in our sample suggests that non-response bias could be a concern.
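One common check for non-response bias, offered here as a generic sketch rather than a procedure from any reviewed study, is to compare early and late respondents on key variables, treating late respondents as a proxy for non-respondents. The data file, wave variable, and comparison variable below are hypothetical.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_waves.csv")  # hypothetical: one row per respondent

early = df[df["wave"] == 1]["gpa"]  # responded to the first mailing
late = df[df["wave"] == 2]["gpa"]   # responded only after follow-up

# Welch's t-test; a non-significant difference eases (but does not
# eliminate) concern about non-response bias on this variable.
t, p = stats.ttest_ind(early, late, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.3f}")
```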
Types of Variables Examined – Criterion, Predictor, and Control Variables
The articles' criterion variables were classified into five broad categories: (1) course performance variables (e.g., exam score, GPA), (2) psychological variables (e.g., attitudes, perceptions, satisfaction), (3) course delivery variables (e.g., online, on-site), (4) student skills (e.g., computing skills), and (5) other criterion variables.
The complete listing of the categories and measures is presented in Table 3. A prominently researched category of criterion variables pertains to course performance. These include course grade and exam scores. Here, much research has accumulated comparing traditional versus online course formats on one or more course performance measures. Echoing a conclusion common among other researchers, Anstine and Skidmore (2005) found no statistically significant difference in learning outcomes between the two formats in their direct comparison of exam scores for online versus traditional face-to-face formats of an MBA-level statistics course. Gratton-Lavoie and Stanley (2009) noted that students enrolled in an online version of an introductory economics course performed significantly better on the final exam than students enrolled in a hybrid course; no significant performance differences were found on the two mid-term exams. The largest category of criterion variables comprises measures of psychological constructs. For example, Klein, Noe and Wang (2006) found that students' motivation to learn was significantly higher for hybrid than for face-to-face courses.
Table 3. Categories of criterion variables and measures

Course performance variables: academic performance; assignment/exam/test/quiz scores; actual learning; content knowledge; course completion; course evaluation questions; course points; disappearance from class; drop rate; gain score; entrance exam; exam questions; exercises; final score; learning outcomes; overall learning; participation grade; pass rate; postings; skill demonstration; skill development; student learning; student withdrawal; team performance. Sample studies: Anstine & Skidmore (2005); Gratton-Lavoie & Stanley (2009).

Psychological variables: attitude toward course; attitude toward discussion design format; attitude toward instructor; attitude toward learner-learner interaction; attitude toward interaction; attitude towards medium; attitude towards PowerPoint/audio/video; attitude toward system; attitude toward technology; attitude toward virtual teams; cognitive effort; cognitive learning; excitement; intention to use; learning style; motivation; satisfaction; satisfaction with course; satisfaction with delivery medium; perceived ease of use; perceived flexibility; perceived interaction; perceived learning; perceived quality of learning experience; perceived usefulness; perception of Blackboard; perception of digital library; perception of distance learning; perception of quality of online learning; perceptions of skills (cognitive, affective, interactive); value of team processes; 6DVs & composite of 4DVs. Sample studies: Arbaugh & Rau (2007); Benbunan-Fich & Hiltz (2003); Klein, Noe, & Wang (2006); Martins & Kellermanns (2004); Webb, Gill, & Poe (2005).

Course delivery variables: contact; course instrumentality; course structure; delivery medium; effective learning support; group behavior; group cohesiveness; group interactivity; instructor access; instructor performance; instructor support; interaction effectiveness; interaction outside class; interaction process; interaction with instructor; presentation of course material; sense of community; social interaction; social presence; student-to-student interaction; system usage; usage behavior; usage of Blackboard; use of communication. Sample studies: Berry (2002); Drago & Peltier (2004); Larson (2002).

Student skill variables: analytical skills; computing skills; leadership skills; management topics/skills; playfulness with computers. Sample studies: Al-Shammari (2005); Chen & Jones (2007).

Other variables: assignment integrative complexity; ambiguity; clarity; effectiveness; future use of technology; medium as barrier/enabler; programming; study time efficiency; theory; usefulness of experiential learning; program effectiveness.

Note: Items in each category are listed in alphabetical order.
Table 4. Categories of predictor variables and measures

Course performance variables: academic achievement; academic level; course grade; GPA; online assessment; prior knowledge; prior learning; prior performance in class work; self-tests. Sample studies: Hwang & Arbaugh (2009).

Psychological variables: attitude toward computing, IT; attitude toward online instruction relative to classroom instruction; attitude toward subject; cognitive absorption; cognitive presence; cognitive style; focused immersion; heightened enjoyment; individualism/collectivism; intention to use; involvement; learning approaches; learning goal orientation; learning style; motivation; perceived ease of use; perceived incentive to use; perceived interaction; perceived skill development; perceived knowledge; perceived usefulness of the degree; perceived usefulness of online environment; perceived technology effectiveness; perception of effectiveness of medium; perception of own learning style; personality; satisfaction with experience; self-efficacy; self-motivation; social pressure; temporal disassociation; Wonderlic personality. Sample studies: Martins & Kellermanns (2004).

Course delivery variables: audio summaries; availability of lecture notes; awareness of system capabilities; BlackBoard features; bulletin boards; chat summaries; classroom demeanor; classroom dynamics; class section size; coach involvement; consistency of participation; content; course activity mode; course design characteristics; course flexibility; course format; content items; course participation; course site usage; course structure; course topic areas; delivery medium; design of environment for interaction; direct instruction; ease of use; ease of interaction; emphasis on interaction; facilitation effectiveness; facilitating discourse; faculty encouragement to use; flexibility; functionality; group activity; group cohesiveness; group projects; group size; hit consistency; in-class feedback seeking; individual projects; instructional design & organization; instructor; instructor facilitation; instructor feedback; instructor feedback timeliness; instructor feedback quality; instructor immediacy; instructor online experience; instructor presence; instructor role; intensity of participation; interaction; interaction difficulty; interface; learner-learner interaction; learner-instructor interaction; learner-system interaction; media variety; name recognition; onsite meeting; out-of-class feedback seeking; participant behavior; participation in discussion boards; peer feedback; peer encouragement; positive & negative feedback behavior; program flexibility; prior online course experience; provided flexibility; provided helpfulness; quality of learning technology; quality of materials; site hits; social presence; social prompting; software platform; student engagement; support features; system usage; usefulness of lecture notes; use of CMS content; use of computer-assisted; virtual team experiences; student dyads; teaching approach; teaching presence; team composition; team conference participation; teamwork orientation; technology tools; telepresence; time spent on chat; threaded discussion; total hits on site. Sample studies: Eom, Wen, & Ashill (2006); Hansen (2008); Peltier, Schibrowsky, & Drago (2007); Webb, Gill, & Poe (2005).

Student skill variables: computer literacy; experience level; preference for verbal versus written; student ability; students' technical skills. Sample studies: Gratton-Lavoie & Stanley (2009); Medlin, Vannoy, & Dave (2004).

Demographics & other variables: academic discipline; age; children at home; commute time to university; convenience; empathy; full-time versus part-time student; gender; importance of program features; industry employed in; level of effort; location; marital status; nationality, international student; parents' education; semester; reasons for choosing online; reasons for pursuing the degree; times participated in study; weekly work hours. Sample studies: Gratton-Lavoie & Stanley (2009); Grzeda & Miller (2009).

Note: Items in each category are listed in alphabetical order.
Another psychological construct that has been studied is student satisfaction. Arbaugh and Rau (2007) investigated the effects of a variety of course structure and participant behavior variables on student satisfaction with the delivery medium. They concluded that structure variables such as media variety and group projects significantly and positively affected delivery medium satisfaction. They also found that learner-learner interaction was negatively related to satisfaction. A third category of psychological variables involves students' perceptions of course elements. For instance, perceived learning was investigated by Benbunan-Fich and Hiltz (2003), who found no significant differences in perceived learning between online, face-to-face, and hybrid course formats. Webb, Gill and Poe (2005) found that perceptions of interaction vary significantly by delivery medium, with perceived increases in interaction for courses with more online discussion components. Martins and Kellermanns (2004) found that students perceived a web-based course management system to be more useful if they were presented with incentives to use the system and if they received faculty and peer encouragement to use it. Students also perceived such a system to be easier to use if they were aware of the capabilities of the system, had technical support, and had prior experience with computers and the Web. Course delivery variables represented the third category of criterion variables. These
included measures of course structure, delivery medium, and a host of measures that assessed components of interaction processes. Larson (2002) found that instructor involvement had a positive effect on interactivity in an online strategic marketing course. He also found that interaction quantity was negatively related to interaction quality. Berry (2002) found that face-to-face and virtual team interactions were equally effective in producing group cohesiveness, satisfactory group interaction processes, and satisfactory group outcomes. Student skill variables represented the fourth main category of criterion variables. For example, along with perceptions of course effectiveness, perceived learning, and course satisfaction, Chen and Jones (2007) also tested for differences in reported analytical and computer skills between a classroom-based and a blended learning section of an MBA-level Accounting course. They found that students in the blended learning section reported greater improvement in analytical and computer skills. Predictor variables were classified into five broad categories: (1) course performance variables (e.g., GPA), (2) psychological variables (e.g., attitudes, perceptions, satisfaction), (3) course delivery variables (e.g., online, on-site), (4) student skills (e.g., computing skills), and (5) demographic and other variables. The complete listing of the categories and measures is presented in Table 4.
Course performance variables were largely previous academic performance measures of respondents, such as prior knowledge, intelligence, and/or GPA (Cheung & Kan, 2002; Kim & Schniederjans, 2004; Murphy & Tyler, 2005; Potter & Johnston, 2006; Weber & Lennon, 2007). The psychological variables category is quite varied, including student attitudes and numerous measures of student perceptions. These included perception measures of educational technology (Arbaugh, 2005b; Davis & Wong, 2007; Johnson, Hornik, & Salas, 2008; Landry, Griffeth & Hartman, 2006), learner motivation (Eom, Wen, & Ashill, 2006; Klein et al., 2006; Saade, 2007), and participant interaction, among others (Arbaugh & Benbunan-Fich, 2006; 2007; Peltier, Schibrowsky, & Drago, 2007). These attitudes and perceptions do influence criterion variables. For example, Martins and Kellermanns (2004) found students' attitudes toward the web-based system to positively affect their intention to use the system. Another important predictor category is course delivery. In fact, course delivery characteristics are the most commonly studied predictor variables in our review sample. Here, a significant amount of research has accumulated, with many studies comparing traditional versus online course formats on a variety of factors. Most prominently, the two formats have been evaluated for differences in student performance. For example, Webb, Gill and Poe (2005) found that students performed better on multiple learning outcomes in online/partially online course formats. Hansen (2008) found that the online class format led to better performance on a business plan assignment when compared to the traditional classroom format. However, he found no significant differences in test scores for the two formats. A less common approach is to compare different technologies for delivering education online. For example, Alavi, Marakas, and Yoo (2002) found that cohorts of adult learners reported higher learning gains using an e-mail/listserv-based system than those using a test version of a "next generation" integrated course management system. Peltier, Schibrowsky
and Drago (2007) found that student perceptions of the quality of their online experience were significantly influenced by the course structure and course content. Course content perceptions were in turn significantly influenced by various interaction variables and lecture delivery quality. In their investigation of factors affecting satisfaction with an online course, Eom, Wen, and Ashill (2006) found course structure, interaction, instructor feedback, and instructor facilitation to be important course delivery influences. Demographic characteristics of students also have been studied as predictor variables. For example, Gratton-Lavoie and Stanley (2009) noted significant differences in the age, gender, marital status and number of children of respondents who enrolled in online versus hybrid classroom formats of an introductory economics course, with the online format selected by significantly more older students, students with children, and women. It is likely that the most significant finding from this review regarding control variables is their relative lack of use. At least half of the studies reviewed included no control variables whatsoever. The most commonly used control variables were type of student and gender (31 studies each), intelligence and/or prior academic performance (25 studies), prior experience with online learning or technology (18 studies), course design characteristics (17 studies), system usage (8 studies), and work position/experience (8 studies). Although some studies included rigorous controls for a variety of conditions (e.g., Anstine & Skidmore, 2005; Arbaugh, 2005a, 2005b; Arbaugh & Benbunan-Fich, 2007; Klein et al., 2006; Webb et al., 2005), with some even using important demographic and other variables as primary predictors of interest, the collective body of work suggests that many of the findings have not ruled out the influence of common control variables and the consequent alternative explanations that remain unaccounted for.
Table 5. Number of articles with statistical techniques by year of publication (2000-2009). Technique categories: descriptive statistics only; chi-square, t-tests, ANOVA; multivariate techniques (MANOVA, regression, EFA, etc.); SEM and CFA; HLM. [The individual cell values of this table could not be recovered from the source.]
Statistical Techniques
Table 5 shows a breakout of the articles by the statistical techniques used. Several interesting observations emerge from this table. First, the use of multivariate techniques such as multiple regression analysis has been a staple of the research stream. The use of this highly reliable testing technique early in the research area may be a little surprising to some, but in hindsight it may reflect the fact that the field has attracted researchers who were trained in such analytical techniques, which are common in many business disciplines, and those scholars simply brought these techniques with them to design and analyze studies of online teaching and learning. Nevertheless, despite the sophistication of some researchers in this area, nearly one-fourth of the published studies reported only descriptive statistics, and this number did not decline over the decade. Even though some research questions are best addressed by descriptive statistics, there is room to debate the pervasiveness of analytical rigor across the field when there is still a sizeable proportion of studies that employed only basic descriptive statistics. On a positive note, there is a growing
number of studies, although still small, that have used highly sophisticated statistical techniques such as structural equation models and hierarchical linear models in recent years.
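For readers unfamiliar with these models, the logic of a simple recursive structural model can be sketched with two regression stages. This is an illustrative mediation example, not a model from any reviewed study, and the file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("coi_survey.csv")  # hypothetical survey data

# Hypothetical path model: teaching presence -> cognitive presence ->
# perceived learning, with a direct teaching-presence path as well.
m1 = smf.ols("cognitive_presence ~ teaching_presence", data=df).fit()
m2 = smf.ols("perceived_learning ~ cognitive_presence + teaching_presence",
             data=df).fit()

a = m1.params["teaching_presence"]   # path: teaching -> cognitive
b = m2.params["cognitive_presence"]  # path: cognitive -> learning
print("indirect effect (a*b):", a * b)
print("direct effect:", m2.params["teaching_presence"])
```

Dedicated SEM software additionally estimates all paths simultaneously and reports overall model fit, which is what distinguishes the multi-stage model testing noted above from piecewise regression.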
SUMMARY ASSESSMENT OF THE METHODOLOGICAL STATE OF THE FIELD
Overall, it appears that there is a robust variety of characteristics being examined in studies of online business education, and the methodological and analytical rigor with which the studies are being conducted continues to improve. Sample sizes are increasing, and although there may still be a larger-than-expected proportion of studies that exclusively relied on basic descriptive statistics, more analytically rigorous techniques have an increasing presence. What is particularly encouraging from the review is that it is evident that the peer review process is allowing the cream to rise to the top. The particularly methodologically and analytically rigorous studies are being found in journals such as American Economic Review (Brown & Liedholm, 2002), Information Systems Research
(Alavi et al., 2002; Santhanam, Sasidharan, & Webster, 2008), Journal of Economic Education (Anstine & Skidmore, 2005; Gratton-Lavoie & Stanley, 2009), Personnel Psychology (Klein et al., 2006), Information & Management (Benbunan-Fich & Arbaugh, 2006; Lu, Yu, & Liu, 2003; Saade & Bahli, 2005) and Academy of Management Learning & Education (Arbaugh, 2005a, 2005b; Arbaugh & Benbunan-Fich, 2006; Martins & Kellermanns, 2004). In short, excellent work in this area is published in highly regarded outlets.
FUTURE RESEARCH DIRECTIONS
Our review shows a field that is increasing in sophistication. This is seen in the sampling breadth, research designs and analytical approaches described here. For example, across the 120 studies, sample sizes have ranged from a low of 27 students in 2000 to a high of 1,244 students in 2004 (Drago & Peltier, 2004), with a sample mean of 261 students. Sample sizes in the hundreds have become increasingly common since 2005. The samples were fairly well spread across disciplines, with 28 studies indicating multi-disciplinary samples and the rest drawn from accounting, economics, finance, information systems, management, marketing, and other business majors. Thus, in addition to growing sample sizes, respondents have come from all the major business disciplines. This helps to increase the generalizability of findings in the field. Also, the range of predictors has included a fairly wide set of demographic, attitudinal and other respondent characteristics, and recent criterion variables have focused on student learning performance such as test and exam results (Hwang & Arbaugh, 2009; Johnson et al., 2008). Despite the growing depth in studies over the last 10 years, the field could benefit from further methodological and analytical rigor, as described below.
More Faculty-Based Samples
First, research on how faculty and administrators may facilitate online versus face-to-face learning performance is still lacking. We could find only seven studies in the online teaching and learning area that included responses from faculty in recent years, with only two faculty studies having samples of over 100 (Connolly et al., 2007; Gibson, Harris, & Colaric, 2008; Liu et al., 2005; Popovich & Neel, 2005). These studies with faculty responses have focused on faculty acceptance of online technology and their perceptions (Gibson et al., 2008) rather than on their roles in directly facilitating the learning performance of students. Future studies could explore and test the role of faculty and administrators in facilitating online learning success.
Greater Use of HLM Techniques (Or At Least Test to See Whether Their Use Is Appropriate)
In terms of research design, the most common approach was a cross-sectional survey of student perceptions and attitudes, either alone or together with some student records, such as class grades and/or online participation, for analysis. Few studies used pre-post survey designs. Only 22 studies relied solely on student or exam grades plus some form of student records (e.g., types of classes, previous grades, etc.) without collecting additional student survey data. The most common statistical analyses were multiple regression procedures to examine relationships between predictors and criterion variables of interest and/or analysis of variance procedures to compare differences in mean values among sub-samples. Seventy of the 120 studies used either multiple regression or analysis of variance procedures. Sixteen studies used structural equation models in multi-stage model testing or to determine the significance of factor structures. Some studies used exploratory factor analysis procedures
to develop factors from survey items. Also, some were descriptive papers that required no statistical procedures, with a few studies presenting frequency distributions or correlation matrices without further statistical analyses. Clearly, the use of some form of multiple regression or analysis of variance is common across the reviewed studies. Less common is the use of structural equation models, where factors can be tested and multi-stage models built from the data to examine fit. An analytical approach that is clearly missing from studies is hierarchical linear modeling (HLM). While this approach is increasingly common in discipline-based business research (Hitt, Beamish, Jackson, & Mathieu, 2007; Morgeson & Hofmann, 1999), it has yet to be used in online business education research. Given the number of multi-course studies in our review (45 had five or more class sections), there is the potential for unexplained nesting effects. Students in such studies are nested within courses, and there may be course-specific effects that are not accounted for when analyses are limited to traditional multivariate approaches. It would be understandable if HLM techniques were not being used because intra-class correlation coefficients (ICCs) were too low to warrant their use. However, we found only one study that even checked the ICCs of its classes (Alavi et al., 2002). Therefore, we encourage future researchers to examine possible nesting effects and, at minimum, to calculate the ICC in their multi-class section studies (Bickel, 2007). This will help researchers determine whether the use of HLM techniques is warranted.
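As a concrete illustration of this recommendation, the following is a minimal sketch of an ICC check, assuming Python with pandas and statsmodels; the data file and column names (grade, section) are hypothetical placeholders for a long-format dataset with one row per student.

import pandas as pd
import statsmodels.formula.api as smf

def section_icc(df: pd.DataFrame) -> float:
    # Fit an intercept-only multilevel model with a random intercept per
    # class section, then form ICC(1) = between / (between + within).
    fit = smf.mixedlm("grade ~ 1", df, groups=df["section"]).fit()
    between = fit.cov_re.iloc[0, 0]   # between-section variance
    within = fit.scale                # residual (within-section) variance
    return between / (between + within)

df = pd.read_csv("multi_section_study.csv")  # hypothetical file
print(f"ICC = {section_icc(df):.3f}")

A non-trivial ICC (rules of thumb in the multilevel literature often start around .05) would signal that section-level nesting effects deserve explicit modeling rather than a single pooled regression.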
Further Identification and Use of Relevant Control Variables
In addition to possible nesting effects, Anstine and Skidmore (2005) raised an important concern: the need to include control variables that account for background effects (e.g., demographic factors, class design factors, etc.) which may not be of primary interest to a study but nevertheless could impact learning outcomes. There were 62 studies that did not report any statistical control variables. Researchers should bear this caution in mind in their research designs. The most common control variables were of a demographic nature, such as age and gender. Class structure variables, such as prior instructor online course experience, class section size, types of assignments, and use of media, were also used as control variables in some studies. We expect future studies to pay increasing attention to the need for control variables, as this will improve the comparability of findings in the field. Besides variables measuring student age, gender, and prior knowledge, some researchers believe that controls for prior learner online course experience, effort expended, and the likelihood that learners would use online courses as a substitute for classroom learning are particularly warranted (Anstine & Skidmore, 2005; Arbaugh, 2005a; Brown & Liedholm, 2002). These may prove to be useful control variables as researchers move toward predictor variables that are more abstract in nature but which could help explain student effort and motivation in online learning success. For example, recent studies have examined the role of competitive attitudes (Hwang & Arbaugh, 2009) and cultural variables, such as individualism-collectivism (Johnson et al., 2008), in motivating online learning participation and success. By controlling for factors such as prior online experience and individual preference for the online environment, researchers could uncover additional influences of more abstract factors on online learning success.
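The following sketch shows the basic mechanics of entering such controls, again assuming Python with statsmodels; every column name (score, competitive, age, gender, prior_online, effort) is a hypothetical stand-in for the kinds of variables discussed above, not a variable from any cited study.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_survey.csv")  # hypothetical file

# Predictor of interest alone.
base = smf.ols("score ~ competitive", df).fit()

# Same predictor with demographic and experience controls, so its
# coefficient is estimated net of these background effects.
ctrl = smf.ols(
    "score ~ competitive + age + C(gender) + prior_online + effort", df
).fit()

print(base.params["competitive"], ctrl.params["competitive"])

Comparing the two coefficients (and the models' fit statistics) indicates how much of the apparent effect of the focal predictor was carried by the background variables.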
Greater Consideration of Randomized Design, or Reasonable Alternatives
Another research design consideration relates to the lack of traditional randomized experimental designs in current studies. About one-third of the studies were comparison studies of classroom,
online, and/or blended learning settings. However, all these studies were quasi-experimental in design because students were not randomly assigned to the delivery medium in their classes. This is a concern that has been raised by some educational researchers (Bernard et al., 2004; Means et al., 2009). Although a fully randomized experimental design is believed to be ideal for comparing the learning efficacy of traditional face-to-face classes versus online classes, there are practical issues that researchers have to consider, and there are possible work-around options to address some of the concerns with not having fully randomized experimental designs in online learning research. On the practical side, it is no secret that the flexibility of the online delivery medium is a prime attraction for students who participate in online business courses, particularly at the MBA level (Arbaugh, 2005b; Dacko, 2001; Millson & Wilemon, 2008). The nature of part-time MBA programs, with their focus on full-time working professionals who usually have clear preferences about class timing and delivery medium, makes it unlikely that these students would volunteer to be randomly assigned to a face-to-face versus an online learning environment at the request of researchers. Also, course administrators, who have student preferences and needs as primary considerations, are unlikely to allow researchers a free hand in randomized designs in the field. What other options do researchers have in addressing the argument for randomized experimental design in an online learning environment? We have to go back to the nature and assumptions of randomized experimental design for some insight. The strength and popularity of randomized experimental design rest on the foundation of internal validity: changes in outcomes can be traced to treatment effects while taking into consideration other possible influences in the design background through the subject randomization process. Although experimental researchers could
control their subjects in laboratories for randomization and treatment assignment purposes, field researchers often do not have this luxury, for the reasons mentioned above. The way to address potential background effects that may affect subject treatment is to identify such effects in the initial design and consciously capture them as covariates along with the predictor variables of interest, which further underscores our call for more intentional incorporation of control variables into studies in this field. By consciously designing and capturing both types of variables, researchers can examine the comparative impact of both predictors and covariates of interest and so account for background effects. For example, in examining the impact of online learning usage on learning outcomes, researchers have often included covariates such as prior experience with technology, participant demographics (gender, age, intelligence, etc.), and other variables that were important but not the primary interest of the studies (Anstine & Skidmore, 2005; Arbaugh, 2005a; Gratton-Lavoie & Stanley, 2009; Liu, Magjuka, & Lee, 2006). Although no design can fully account for all background effects, the use of covariates should account for the major known ones. Experimental designs and field study designs play different roles in uncovering knowledge across many disciplines. Even experimental researchers have acknowledged these roles (Mook, 1983), and certainly we should not ignore the usefulness of covariates where complete randomization of subjects is not possible. If researchers restrict themselves to the randomized experimental design approach alone, then the process of uncovering the impact of face-to-face versus online learning is likely to be much slower, as researchers will have to depend on the good fortune of having persuaded students to participate in randomization studies. Far from doing a service to the discovery of how different learning environments may impact students, the
sole use of a fully randomized experimental approach would greatly diminish the number of studies, sample sizes, and the consequent generalizability of findings in this area (Arbaugh & Rau, 2007).
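One workable form of this covariate strategy is an ANCOVA-style model: the non-randomized treatment factor (delivery mode) is entered together with the known background covariates. The sketch below assumes Python with statsmodels; the file and variable names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("delivery_comparison.csv")  # hypothetical file

# `mode` is a categorical delivery medium (e.g., f2f / online / blended);
# age, prior_gpa, and tech_experience stand in for background covariates.
fit = smf.ols("exam ~ C(mode) + age + prior_gpa + tech_experience", df).fit()
print(anova_lm(fit, typ=2))  # mode effect, adjusted for the covariates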
Greater Consideration of Varieties of Blended Learning Models
Another issue for consideration is the growing pervasiveness of online and blended learning delivery mediums (Allen, Seaman, & Garrett, 2007; Clark & Mayer, 2008). While experimental design studies may help identify differences in impact between traditional face-to-face and online learning environments, the growing acceptance of blended learning structures makes it difficult to clearly trace face-to-face versus online learning effects with an experimental design approach. This is because students no longer participate solely in either face-to-face or online learning environments; they are involved in both types of environments in the same course. Instead of thinking in terms of "either/or" student assignment to the face-to-face versus the online learning environment, such blended environments point to the usefulness of considering influences from both types of learning environments. This analysis is achieved not through randomized experimental design but through the conscious capture of both types of influences and testing of their comparative impact on learning outcomes. An example is the Hwang and Arbaugh (2009) study, where reported face-to-face feedback interactions were compared against discussion board participation frequency for their comparative impact on test results. Thus, the question of influence from either the face-to-face or the online learning environment may be superseded by the question of the comparative impact of both environments, due to the growth of the blended learning medium.
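In analytical terms, the comparative-impact question reduces to entering measures of both environments as predictors in a single model, loosely in the spirit of the Hwang and Arbaugh (2009) design. A minimal sketch, with hypothetical variable names:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("blended_course.csv")  # hypothetical file

# Standardize the two participation measures so their coefficients are
# directly comparable in magnitude.
for col in ("f2f_feedback", "board_posts"):
    df[col] = (df[col] - df[col].mean()) / df[col].std()

fit = smf.ols("test_score ~ f2f_feedback + board_posts", df).fit()
print(fit.params)  # comparative impact of the two environments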
CONCLUSION
This review has shown a deepening of research in the online teaching and learning area, with numerous studies testing the impact of online learning approaches and student attitudes on learning success. While we do know that some online learning approaches have contributed to learning performance, there is still more to be done. First, we need to more consistently include control variables to account for common background influences, such as demographic variables, that could impact online learning performance. This is especially important when the only research design option in some environments is a quasi-experimental approach, if one is possible at all. Including control variables will help alleviate concerns about background effects that could best be accounted for in a true experimental design. Also, researchers should start looking at the nesting effects of classes, or of whole courses across different sections of student responses: the arena of hierarchical linear models. Researchers should also continue their efforts to develop more sophisticated multi-stage models that can test the comparative influences of many variables within a sample through structural equation models. This will move researchers away from traditional two-stage multiple regression models; although multiple regression models are still important, they can be subsumed in models that involve more than two stages. Despite the still somewhat limited number of empirical studies in this area, there is a beginning set of results that can be used in exploring best practices for designing online learning environments. These may include the types of students that could benefit from such environments, the role of attitudes in motivating student success, and the importance of regular participation in succeeding in such learning environments. There is more to be done, but we are beginning to understand how online teaching may be designed for student learning success.
REFERENCES
Al-Shammari, M. (2005). Assessing the learning experience in a business process reengineering (BPR) course at the University of Bahrain. Business Process Management Journal, 11(1), 47–62. doi:10.1108/14637150510578728
Alavi, M., Marakas, G. M., & Yoo, Y. (2002). A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13, 404–415. doi:10.1287/isre.13.4.404.72
Alexander, M. W., Perrault, H., Zhao, J. J., & Waldman, L. (2009). Comparing AACSB faculty and student online learning experiences: Changes between 2000 and 2006. Journal of Educators Online, 6(1). Retrieved February 1, 2009, from http://www.thejeo.com/Archives/Volume6Number1/Alexanderetalpaper.pdf
Allen, I. E., Seaman, J., & Garrett, R. (2007). Blending in: The extent and promise of blended learning in the United States. Needham, MA: Sloan-C.
Anstine, J., & Skidmore, M. (2005). A small sample study of traditional and online courses with sample selection adjustment. The Journal of Economic Education, 36, 107–127.
Arbaugh, J. B. (2005a). How much does subject matter matter? A study of disciplinary effects in Web-based MBA courses. Academy of Management Learning & Education, 4, 57–73.
Arbaugh, J. B. (2005b). Is there an optimal design for on-line MBA courses? Academy of Management Learning & Education, 4, 135–149.
Arbaugh, J. B., & Benbunan-Fich, R. (2006). An investigation of epistemological and social dimensions of teaching in online learning environments. Academy of Management Learning & Education, 5, 435–447.
Arbaugh, J. B., & Benbunan-Fich, R. (2007). Examining the influence of participant interaction modes in Web-based learning environments. Decision Support Systems, 43, 853–865. doi:10.1016/j.dss.2006.12.013
Arbaugh, J. B., Godfrey, M. R., Johnson, M., Leisen Pollack, B., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12(2), 71–87. doi:10.1016/j.iheduc.2009.06.006
Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5, 65–95. doi:10.1111/j.1540-4609.2007.00128.x
Benbunan-Fich, R., & Arbaugh, J. B. (2006). Separating the effects of knowledge construction and group collaboration in Web-based courses. Information & Management, 43, 778–793. doi:10.1016/j.im.2005.09.001
Benbunan-Fich, R., & Hiltz, S. R. (2003). Mediators of the effectiveness of online courses. IEEE Transactions on Professional Communication, 46(4), 298–312. doi:10.1109/TPC.2003.819639
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, E., Tamim, R. M., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79, 1243–1289. doi:10.3102/0034654309333844
Bernard, R. M., Abrami, P. C., Lou, Y., & Borokhovski, E. (2004). A methodological morass? How can we improve quantitative research in distance education? Distance Education, 25(2), 175–198. doi:10.1080/0158791042000262094
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., & Wozney, L. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74, 379–439. doi:10.3102/00346543074003379
Berry, R. W. (2002). The efficacy of electronic communication in the business school: Marketing students' perceptions of virtual teams. Marketing Education Review, 12(2), 73–78.
Bickel, R. (2007). Multilevel analysis for applied research: It's just regression! New York, NY: Guilford Press.
Brown, B. W., & Liedholm, C. E. (2002). Can Web courses replace the classroom in principles of microeconomics? The American Economic Review, 92(2), 444–448. doi:10.1257/000282802320191778
Chen, C. C., & Jones, K. T. (2007). Blended learning vs. traditional classroom settings: Assessing effectiveness and student perceptions in an MBA accounting course. Journal of Educators Online, 4(1), 1–15.
Cheung, L. L. W., & Kan, A. C. N. (2002). Evaluation of factors related to student performance in a distance-learning business communication course. Journal of Education for Business, 77, 257–263. doi:10.1080/08832320209599674
Clark, R. C., & Mayer, R. E. (2008). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (2nd ed.). San Francisco, CA: Pfeiffer.
Connolly, M., Jones, C., & Jones, N. (2007). New approaches, new vision: Capturing teacher experiences in a brave new online world. Open Learning, 22(1), 43–56. doi:10.1080/02680510601100150
Dacko, S. G. (2001). Narrowing skill development gaps in marketing and MBA programs: The role of innovative technologies for distance learning. Journal of Marketing Education, 23, 228–239. doi:10.1177/0273475301233008
Davis, R., & Wong, D. (2007). Conceptualizing and measuring the optimal experience of the e-learning environment. Decision Sciences Journal of Innovative Education, 5, 97–126. doi:10.1111/j.1540-4609.2007.00129.x
Drago, W., & Peltier, J. (2004). The effects of class size on the effectiveness of online courses. Management Research News, 27(10), 27–41. doi:10.1108/01409170410784310
Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students' perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4, 215–235. doi:10.1111/j.1540-4609.2006.00114.x
Gibson, S. G., Harris, M. L., & Colaric, S. M. (2008). Technology acceptance in an academic context: Faculty acceptance of online education. Journal of Education for Business, 83, 355–359. doi:10.3200/JOEB.83.6.355-359
Gratton-Lavoie, C., & Stanley, D. (2009). Teaching and learning of principles of microeconomics online: An empirical assessment. The Journal of Economic Education, 40(2), 3–25. doi:10.3200/JECE.40.1.003-025
Grzeda, M., & Miller, G. E. (2009). The effectiveness of an online MBA program in meeting mid-career student expectations. Journal of Educators Online, 6(2). Retrieved November 10, 2009, from http://www.thejeo.com/Archives/Volume6Number2/GrzedaandMillerPaper.pdf
Hansen, D. E. (2008). Knowledge transfer in online learning environments. Journal of Marketing Education, 30, 93–105. doi:10.1177/0273475308317702
Hitt, M. A., Beamish, P. W., Jackson, S. E., & Mathieu, J. E. (2007). Building theoretical and empirical bridges across levels: Multilevel research in management. Academy of Management Journal, 50, 1385–1399.
Hwang, A., & Arbaugh, J. B. (2009). Seeking feedback in blended learning: Competitive versus cooperative student attitudes and their links to learning outcome. Journal of Computer Assisted Learning, 25, 280–293. doi:10.1111/j.1365-2729.2009.00311.x
Johnson, R. D., Hornik, S., & Salas, E. (2008). An empirical examination of factors contributing to the creation of successful e-learning environments. International Journal of Human-Computer Studies, 66, 356–369. doi:10.1016/j.ijhcs.2007.11.003
Kim, E. B., & Schniederjans, M. J. (2004). The role of personality in Web-based distance education courses. Communications of the ACM, 47(3), 95–98. doi:10.1145/971617.971622
Klein, H. J., Noe, R. A., & Wang, C. (2006). Motivation to learn and course outcomes: The impact of delivery mode, learning goal orientation, and perceived barriers and enablers. Personnel Psychology, 59, 665–702. doi:10.1111/j.1744-6570.2006.00050.x
Landry, B. J. L., Griffeth, R., & Hartman, S. (2006). Measuring student perceptions of Blackboard using the technology acceptance model. Decision Sciences Journal of Innovative Education, 4, 87–99. doi:10.1111/j.1540-4609.2006.00103.x
Larson, P. D. (2002). Interactivity in an electronically delivered marketing course. Journal of Education for Business, 77, 265–245. doi:10.1080/08832320209599675
Liu, X., Bonk, C. J., Magjuka, R. J., Lee, S., & Su, B. (2005). Exploring four dimensions of online instructor roles: A program level case study. Journal of Asynchronous Learning Networks, 9(4), 29–48.
Liu, X., Magjuka, R. J., & Lee, S. (2006). An empirical examination of sense of community and its effects on students' satisfaction, perceived learning outcome, and learning engagement in online MBA courses. International Journal of Instructional Technology & Distance Learning, 3(7). Retrieved September 15, 2006, from http://www.itdl.org/Journal/Jul_06/article01.htm
Lu, J., Yu, C.-S., & Liu, C. (2003). Learning style, learning patterns, and learning performance in a WebCT-based MIS course. Information & Management, 40, 497–507. doi:10.1016/S0378-7206(02)00064-2
Martins, L. L., & Kellermanns, F. W. (2004). A model of business school students' acceptance of a Web-based course management system. Academy of Management Learning & Education, 3, 7–26.
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education.
Millson, M. R., & Wilemon, D. (2008). Educational quality correlates of online graduate management education. Journal of Distance Education, 22(3), 1–18.
Mook, D. G. (1983). In defense of external invalidity. The American Psychologist, 38(4), 379–387. doi:10.1037/0003-066X.38.4.379
Morgeson, F. P., & Hofmann, D. A. (1999). The structure and function of collective constructs: Implications for multilevel research and theory development. Academy of Management Review, 24, 249–265. doi:10.2307/259081
Murphy, S. M., & Tyler, S. (2005). The relationship between learning approaches to part-time study of management courses and transfer of learning to the workplace. Educational Psychology, 25, 455–469. doi:10.1080/01443410500045517
Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The interdependence of the factors influencing the perceived quality of the online learning experience: A causal model. Journal of Marketing Education, 29, 140–153. doi:10.1177/0273475307302016
Popovich, C. J., & Neel, R. E. (2005). Characteristics of distance education programs at accredited business schools. American Journal of Distance Education, 19, 229–240. doi:10.1207/s15389286ajde1904_4
Potter, B. N., & Johnston, C. G. (2006). The effect of interactive online learning systems on student learning outcomes in accounting. Journal of Accounting Education, 24, 16–34. doi:10.1016/j.jaccedu.2006.04.003
Saade, R. G. (2007). Dimensions of perceived usefulness: Toward enhanced assessment. Decision Sciences Journal of Innovative Education, 5, 289–310. doi:10.1111/j.1540-4609.2007.00142.x
Santhanam, R., Sasidharan, S., & Webster, J. (2008). Using self-regulatory learning to enhance e-learning-based information technology training. Information Systems Research, 19, 26–47. doi:10.1287/isre.1070.0141
Webb, H. W., Gill, G., & Poe, G. (2005). Teaching with the case method online: Pure versus hybrid approaches. Decision Sciences Journal of Innovative Education, 3, 223–250. doi:10.1111/j.1540-4609.2005.00068.x
Weber, J. M., & Lennon, R. (2007). Multi-course comparison of traditional versus Web-based course delivery systems. Journal of Educators Online, 4(2), 1–19.
Whetten, D. A. (2008). Introducing AMLE's educational research databases. Academy of Management Learning & Education, 7, 139–143.
Yukselturk, E., & Top, E. (2005-2006). Reconsidering online course discussions: A case study. Journal of Educational Technology Systems, 34(3), 341–367. doi:10.2190/6GQ8-P7TX-VGMR-4NR4
ADDITIONAL READING
Arbaugh, J. B. (2000a). Virtual classroom characteristics and student satisfaction in Internet-based MBA courses. Journal of Management Education, 24, 32–54. doi:10.1177/105256290002400104
Arbaugh, J. B. (2000b). Virtual classrooms versus physical classrooms: An exploratory study of class discussion patterns and student learning in an asynchronous Internet-based MBA course. Journal of Management Education, 24, 213–233. doi:10.1177/105256290002400206
Arbaugh, J. B. (2000c). How classroom environment and student engagement affect learning in Internet-based MBA courses. Business Communication Quarterly, 63(4), 9–26. doi:10.1177/108056990006300402
Arbaugh, J. B. (2000d). An exploratory study of the effects of gender on student learning and class participation in an Internet-based MBA course. Management Learning, 31, 533–549. doi:10.1177/1350507600314006
Arbaugh, J. B. (2001). How instructor immediacy behaviors affect student satisfaction and learning in Web-based courses. Business Communication Quarterly, 64(4), 42–54. doi:10.1177/108056990106400405
Arbaugh, J. B., & Duray, R. (2002). Technological and structural characteristics, student learning and satisfaction with Web-based courses: An exploratory study of two MBA programs. Management Learning, 33, 231–247. doi:10.1177/1350507602333003
Arbaugh, J. B., & Hwang, A. (2006). Does "teaching presence" exist in online MBA courses? The Internet and Higher Education, 9, 9–21. doi:10.1016/j.iheduc.2005.12.001
Baugher, D., Varanelli, A., & Weisbord, E. (2003). Student hits in an Internet-supported course: How can instructors use them and what do they mean? Decision Sciences Journal of Innovative Education, 1, 159–179. doi:10.1111/j.1540-4609.2003.00016.x
Bryant, K., Campbell, J., & Kerr, D. (2003). Impact of web based flexible learning on academic performance in information systems. Journal of Information Systems Education, 14(1), 41–50.
Cao, J., Crews, J. M., Lin, M., Burgoon, J. K., & Nunamaker, J. F., Jr. (2008). An empirical investigation of virtual interaction in supporting learning. The DATABASE for Information Systems, 39(3), 51–68.
Conaway, R. N., Easton, S. S., & Schmidt, W. V. (2005). Strategies for enhancing student interaction and immediacy in online courses. Business Communication Quarterly, 68(1), 23–35. doi:10.1177/1080569904273300
Dineen, B. R. (2005). TeamXchange: A team project experience involving virtual teams and fluid team membership. Journal of Management Education, 29, 593–616. doi:10.1177/1052562905276275
Drago, W., Peltier, J., & Sorensen, D. (2002). Course content or the instructor: Which is more important in on-line teaching? Management Research News, 25(6/7), 69–83. doi:10.1108/01409170210783322
Friday, E., Friday-Stroud, S. S., Green, A. L., & Hill, A. Y. (2006). A multi-semester comparison of student performance between multiple traditional and online sections of two management courses. Journal of Behavioral and Applied Management, 8(1), 66–81.
Gagne, M., & Shepherd, M. (2001). A comparison between a distance and a traditional graduate accounting class. T.H.E. Journal, 28(9), 58–63.
Hay, A., Peltier, J. W., & Drago, W. A. (2004). Reflective learning and online management education: A comparison of traditional and online MBA students. Strategic Change, 13(4), 169–182. doi:10.1002/jsc.680
Hayes, S. K. (2007). Principles of Finance: Design and implementation of an online course. Journal of Online Learning and Teaching, 3, 460–465.
Heckman, R., & Annabi, H. (2005). A content analytic comparison of learning processes in online and face-to-face case study discussions. Journal of Computer-Mediated Communication, 10(2), article 7. Retrieved January 4, 2007, from http://jcmc.indiana.edu/vol10/issue2/heckman.html
Heckman, R., & Annabi, H. (2006). How the teacher's role changes in on-line case study discussions. Journal of Information Systems Education, 17, 141–150.
Hornik, S., & Tupchiy, A. (2006). Culture's impact on technology-mediated learning: The role of horizontal and vertical individualism and collectivism. Journal of Global Information Management, 14(4), 31–56. doi:10.4018/jgim.2006100102
Hwang, A., & Arbaugh, J. B. (2006). Virtual and traditional feedback-seeking behaviors: Underlying competitive attitudes and consequent grade performance. Decision Sciences Journal of Innovative Education, 4, 1–28. doi:10.1111/j.1540-4609.2006.00099.x
Kellogg, D. L., & Smith, M. A. (2009). Student-to-student interaction revisited: A case study of working adult business students in online courses. Decision Sciences Journal of Innovative Education, 7, 433–456. doi:10.1111/j.1540-4609.2009.00224.x
Kock, N., Verville, J., & Garza, V. (2007). Media naturalness and online learning: Findings supporting both the significant- and no-significant-difference perspectives. Decision Sciences Journal of Innovative Education, 5, 333–355. doi:10.1111/j.1540-4609.2007.00144.x
Lane, A., & Porch, M. (2002). Computer Aided Learning (CAL) and its impact on the performance of non-specialist accounting undergraduates. Accounting Education, 11, 217–233. doi:10.1080/09639280210144902
Liu, X., Magjuka, R. J., & Lee, S. (2008). The effects of cognitive thinking styles, trust, conflict management on online students' learning and virtual team performance. British Journal of Educational Technology, 39, 829–846. doi:10.1111/j.1467-8535.2007.00775.x
Malhotra, N. K. (2002). Integrating technology in marketing education: Perspective for the new millennium. Marketing Education Review, 12(3), 1–5.
Marks, R. B., Sibley, S., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education, 29, 531–563. doi:10.1177/1052562904271199
McDowall, T., & Jackling, B. (2006). The impact of computer-assisted learning of academic grades: An assessment of students' perceptions. Accounting Education, 15, 377–389. doi:10.1080/09639280601011065
Morgan, G., & Adams, J. (2009). Pedagogy first: Making web technologies work for soft skills development in leadership and management education. Journal of Interactive Learning Research, 20, 129–155.
Navarro, P. (2000). Economics in the cyberclassroom. The Journal of Economic Perspectives, 14, 119–132. doi:10.1257/jep.14.2.119
Navarro, P., & Shoemaker, J. (1999). The power of cyberlearning: An empirical test. Journal of Computing in Higher Education, 11(1), 29–54. doi:10.1007/BF02940841
Navarro, P., & Shoemaker, J. (2000a). Performance and perceptions of distance learners in cyberspace. American Journal of Distance Education, 14(2), 15–35. doi:10.1080/08923640009527052
Navarro, P., & Shoemaker, J. (2000b). Policy issues in the teaching of economics in cyberspace: Research design, course design, and research results. Contemporary Economic Policy, 18, 359–366. doi:10.1111/j.1465-7287.2000.tb00032.x
Nemanich, L., Banks, M., & Vera, D. (2009). Enhancing knowledge transfer in classroom versus online settings: The interplay among instructor, student, content, and context. Decision Sciences Journal of Innovative Education, 7, 123–148. doi:10.1111/j.1540-4609.2008.00208.x
Parikh, M., & Verma, S. (2002). Using Internet technologies to support learning: An empirical analysis. International Journal of Information Management, 22, 27–46. doi:10.1016/S0268-4012(01)00038-X
Parthasarathy, M., & Smith, M. A. (2009). Valuing the institution: An expanded list of factors influencing faculty adoption of online education. Online Journal of Distance Learning Administration, 12(2). Retrieved October 15, 2009, from http://www.westga.edu/~distance/ojdla/summer122/parthasarathy122.html
Peltier, J. W., Drago, W., & Schibrowsky, J. A. (2003). Virtual communities and the assessment of online marketing education. Journal of Marketing Education, 25, 260–276. doi:10.1177/0273475303257762
Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25, 401–426. doi:10.2307/3250989
Saade, R., & Bahli, B. (2005). The impact of cognitive absorption on perceived usefulness and perceived ease of use in online learning: An extension of the technology acceptance model. Information & Management, 42, 317–327. doi:10.1016/j.im.2003.12.013
Sautter, P. (2007). Designing discussion activities to achieve desired learning outcomes: Choices using mode of delivery and structure. Journal of Marketing Education, 29, 122–131. doi:10.1177/0273475307302014
Schniederjans, M. J., & Kim, E. B. (2005). Relationship of student undergraduate achievement and personality characteristics in a total web-based environment: An empirical study. Decision Sciences Journal of Innovative Education, 3, 205–221. doi:10.1111/j.1540-4609.2005.00067.x
Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7, 99–121. doi:10.1111/j.1540-4609.2008.00207.x
Terry, N. (2000). The effectiveness of virtual learning in economics. Journal of Economics and Economic Education Research, 1, 93–99.
Williams, E. A., Duray, R., & Reddy, V. (2006). Teamwork orientation, group cohesiveness, and student learning: A study of the use of teams in online distance education. Journal of Management Education, 30, 592–616. doi:10.1177/1052562905276740
Yoo, Y., Kanawattanachai, P., & Citurs, A. (2002). Forging into the wired wilderness: A case study of a technology-mediated distributed discussion-based class. Journal of Management Education, 26, 139–163. doi:10.1177/105256290202600203
KEY TERMS AND DEFINITIONS
Control Variables: Research variables that may predict criterion variables but are not included in the set of predictor variables of primary interest.
Criterion (Dependent) Variables: Research variable(s) whose outcome is associated with changes in predictor variables; typically a variable of primary interest in a research study.
Hierarchical Linear Modeling (HLM): A statistical technique that controls for effects of nesting of variables within specific contexts, such as courses, institutions, or academic disciplines.
Multiple Regression Analysis: A statistical technique designed to predict a criterion variable using a set of predictor variables.
Predictor (Independent) Variables: Research variables that are associated with, and often assumed to be the cause of, changes in criterion variables.
Quasi-Experimental Comparative Designs: Research designs where at least one treatment group and one control group are compared on variable(s) of interest but study participants are not randomly assigned to a group.
Randomized Experimental Designs: Research designs consisting of at least one treatment group and one control group where study participants are randomly assigned to each group.
Structural Equation Modeling (SEM): A statistical approach that allows for measurement of relationships between observed variables using groupings of latent variables. The approach also allows researchers to consider potential effects of measurement error.
Virtual Learning Environments: Educational settings where the dissemination of course content and interaction between course participants is conducted at least partially online.
Chapter 4
An Introduction to Path Analysis Modeling Using LISREL Sean B. Eom Southeast Missouri State University, USA
ABSTRACT
Over the past decade, there has been a wide range of empirical research in the e-learning literature, and the use of multivariate statistical tools has been a staple of the research stream throughout. Path analysis modeling is one of four related multivariate statistical models: regression, path analysis, confirmatory factor analysis, and structural equation models. This chapter focuses on path analysis modeling for beginners using LISREL 8.70. Topics covered in this chapter include foundational concepts, assumptions, and the steps of path analysis modeling. The major steps in path analysis modeling explained in this chapter consist of specification, identification, estimation, testing, and modification of models.
DOI: 10.4018/978-1-60960-615-2.ch004

INTRODUCTION
Tremendous advances in information technology and the changing demographic profile of the student population have allowed colleges and universities to offer Internet-based courses as a way to meet the ever-increasing demand for higher and continuing education. In the early stage of e-learning systems development, the focus
of research was on the non-empirical dimensions of e-learning systems. E-learning systems include learning management systems, course management systems, and virtual learning environments. There is a wide range of free and/or open source learning management systems (e.g., eFront) and course management systems (e.g., Dokeos, ILIAS, Moodle, etc.). Many well-known virtual learning environments are available to facilitate the creation of virtual classrooms (e.g., Blackboard, WebCT, FirstClass, Desire2Learn,
CyberExtension, It's Learning, WebTrain, etc.). Some universities have developed their own custom learning environments for creating and managing e-learning systems. Furthermore, they have spent heavily to constantly update their online instructional resources, computer labs, and library holdings. It is now evident that the technology itself may no longer be an impediment. The distance learning system can be viewed as several human and non-human entities interacting together via computer-based instructional systems to achieve the goals of education, including perceived learning outcomes and student satisfaction. During the past decade, the volume of research in online and blended business education has increased dramatically. The most common e-learning research streams across business disciplines were outcome comparison studies with classroom-based learning and studies examining potential predictors of course outcomes (Arbaugh et al., 2009). The Dimensions and Antecedents of VLE Effectiveness framework introduced by Piccoli, Ahmad, and Ives (2001) contributed to developing new empirical research models. User satisfaction is the overall measure of the student's perceived level of fulfillment in the online course. The review of e-learning empirical research indicates that numerous quantitative research methods have been utilized. They include categorical data analysis using the chi-square test and multivariate data analysis techniques, including analysis of covariance (ANCOVA), general linear model multivariate analysis of covariance (MANCOVA), conjoint analysis, canonical correlation analysis, discriminant analysis, multiple regression analysis, path analysis, factor analysis (confirmatory vs. exploratory), and structural equation modeling (SEM) using PLS-Graph, SmartPLS, LISREL, AMOS, and EQS. Moreover, qualitative research methods have been applied in e-learning empirical research to examine the effects of various factors or variables on student satisfaction and learning outcomes. Qualitative research methods include
action research, case study research, the grounded theory approach, ethnographic research, etc.
MAIN FOCUS
The use of multivariate statistical techniques has been a staple of the e-learning empirical research stream throughout the decade. This may reflect the fact that the field has drawn in researchers who have been trained in analytical techniques common in many business disciplines, and those scholars simply brought these techniques with them to design and analyze studies of online learning. Moreover, a growing number of studies have used highly sophisticated statistical techniques such as structural equation models and hierarchical models in recent years (Arbaugh, Hwang, & Pollack, 2010). This chapter focuses on covariance-based path analysis modeling using LISREL 8.70. Structural equation modeling (SEM) is "a comprehensive statistical approach to testing hypotheses about relations among observed and latent variables" (Hoyle, 1995). SEM methodology is used to test four types of theoretical models: regression, path, confirmatory factor, and structural equation models. LISREL is capable of modeling all four. All four models can be tested by following the same five steps: specification, identification, parameter estimation, testing, and modification. To complement this chapter on path modeling, several other chapters are concerned with path modeling applications, an introduction to SEM using PLS graph, and SEM applications. The remainder of this chapter is organized into the following sections:
• Foundational concepts: observed and latent variables, dependent and independent variables, and regression models
• Assumptions
• Path modeling steps: specification, identification, parameter estimation, testing, and modification
• Summary

Figure 1. A classification of SEM models
FOUNDATIONAL CONCEPTS
As shown in Figure 1, the family of SEM techniques consists of regression, path, confirmatory factor, and structural equation models. All of these models can be classified by three criteria: the observability of variables, the existence of mediating variables, and the need for path analysis with latent variables. There are already abundant sources of reference on path analysis modeling. This chapter is an introduction to path analysis modeling for beginners. With such readers in mind, this section discusses several foundational concepts to help them better understand the concepts and procedures of path analysis modeling.
Observed Variables and Latent Variables
Observed variables are variables that can be measured or observed directly. For example, to establish a theoretical relationship between e-learning system quality and user satisfaction, a set of survey questions was designed. Each of the six questions below is considered an observed variable or indicator variable. Latent variables are not observed or measured directly. Rather, they are measured indirectly through several observed variables that can be measured using surveys, tests, interviews, etc. For example, system quality and user satisfaction are latent variables (constructs, or factors) that are indirectly inferred
from questions 1 through 3 and questions 4 through 6, respectively.
System Quality
1. The e-learning system is easy to use.
2. The e-learning system is user-friendly.
3. The e-learning system provides interactive features between users and the system.
User Satisfaction
4. The academic quality was on par with face-to-face courses I've taken.
5. I would recommend this course to other students.
6. I would take an on-line course at this university again in the future.

Dependent Variables and Independent Variables
Both observed and latent variables can be classified as either exogenous/independent or endogenous/dependent variables. The former comes from the Greek words "exo" and "gen", which mean "outside" and "production", referring to uncontrollable variables coming from outside of the model; therefore, their values are given. Endogenous or dependent variables are the opposite of exogenous variables, and their values depend on exogenous or independent variables. User satisfaction may be dependent on system quality and other exogenous variables. A beginner in structural equation modeling may encounter many different terms referring to dependent and independent variables.
• The independent variables are denoted as "x". They are also known as:
  ◦ Exogenous variables
  ◦ Explanatory variables
  ◦ Predictor variables
  ◦ Cause variables
• The dependent variables are the variables to be predicted and are denoted as "y". They are also known as:
  ◦ Endogenous variables
  ◦ Response variables
  ◦ Effect variables

Multiple Regression Analysis
From a historical viewpoint, the family of SEM techniques consists of regression, path, confirmatory factor, and structural equation models, in chronological order of development. Each successive model is capable of modeling more complex relationships among variables and builds on the previous models. For example, path models use regression analysis and correlation coefficients. Structural equation models use all previous models (regression, path, and confirmatory factor) to model complex relationships among latent variables. In order to understand path models, it is necessary to understand multiple regression models. Therefore, this section briefly introduces regression models. Multiple regression analysis is a statistical technique to define the linear relationship between a dependent variable and multiple independent variables. The regression model can be classified as either the simple regression model or the multiple regression model. The simple regression model involves only one independent variable and one dependent variable. The multiple regression model has one dependent variable and multiple independent variables. A probabilistic regression model includes an error term. The simple regression model can be written as:

y = β0 + β1x + є

where β0 = the population intercept, β1 = the population slope, and є = an error term.
In most cases of regression analyses, the sample data are used to estimate the population intercept and slope. Therefore, using the sample data, the equation for the simple regression line is as follows:

y = b0 + b1x

where b0 = the sample intercept and b1 = the sample slope. The multiple regression model can be written as:

y = β0 + β1x1 + β2x2 + … + βkxk + є

where
β0 = the population intercept,
β1 = the partial regression coefficient for independent variable x1,
β2 = the partial regression coefficient for independent variable x2,
βk = the partial regression coefficient for independent variable xk,
k = the number of independent variables, and
є = an error term (also known as the disturbance).
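For readers who want to see the estimation mechanics, the sketch below fits the multiple regression model by ordinary least squares, assuming Python with NumPy and statsmodels; the data are simulated purely for illustration, and the generating coefficients are arbitrary.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)   # first independent variable
x2 = rng.normal(size=n)   # second independent variable
# True population model: y = 1.0 + 0.5*x1 + 0.3*x2 + error
y = 1.0 + 0.5 * x1 + 0.3 * x2 + rng.normal(scale=0.8, size=n)

X = sm.add_constant(np.column_stack([x1, x2]))  # column of 1s for b0
fit = sm.OLS(y, X).fit()
print(fit.params)  # sample estimates b0, b1, b2 of the betas above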
PATH ANALYSIS MODELING AND ITS ASSUMPTIONS
Multiple regression models examine the relationship between several directly observable independent variables and a directly observable dependent variable. It is not possible to estimate the relationships among dependent variables using multiple regression models. For example, in Figure 4, system use and user satisfaction are endogenous variables that depend on the values of system quality and information quality. Regression models allow us to estimate the relationship between a single dependent variable (system use) and multiple independent variables (system quality and information quality). However, when researchers are interested in testing theoretical relationships among observed variables as depicted in Figure 5, which include relationships among dependent variables and mediating variables (system use and user satisfaction), a path analysis model must be used. There is no stand-alone software package for path analysis modeling alone; structural equation modeling packages are available to do the whole range of structural equation modeling, including path analysis. Readers are referred to the following:
• http://www.mvsoft.com/index.htm (EQS software)
• http://www.mvsoft.com/eqsBooks.htm (EQS books)
• http://www.spss.com/amos/ (AMOS software)
• http://www.ssicentral.com/ (LISREL software)
A path model tests theoretical relationships among multiple variables when the following conditions are met. First, there is temporal ordering of variables. Second, covariation or correlation exists among variables. Third, variables are quantitative (interval or ratio level). Path analysis modeling assesses the causal contribution of directly observable variables to other directly observable variables. Unlike structural equation models that are concerned with latent variables, path analysis models examine the causal contribution of directly observable variables.
Since path models are a logical extension of regression models, path models and regression models share the same assumptions.
1. Linearity: Path models assume that the relationships among variables are linear. Consequently, regression and path analyses cannot be applied if the relationship between two variables is nonlinear, such as a curvilinear relationship, which is characterized by curved lines.
2. Normality: All variables in path models are assumed to be normally distributed, with some important characteristics. The normal distribution is a continuous, symmetrical distribution about its mean, and it is asymptotic to the horizontal axis.
3. Normally distributed disturbances: Disturbances (a.k.a. residuals), usually denoted as ei or єi, are also normally distributed random variables. The disturbance is the difference between the actual value of the dependent variable (y) and the predicted value ŷ at a given value of the independent variable x. The disturbance includes the effects of all unknown factors that influence the value of y. The sum of the residuals is zero; therefore, the mean or expected value of the ei is also zero. This assumption will always hold because the regression line is estimated by the least-squares method.
4. Constant variance: Each of the disturbances along the regression line has a constant variance regardless of the value of x. See Dielman (1996, pp. 82-83) for a detailed discussion of this assumption and others.
5. Independence: The disturbances are independent. This means that the disturbance/residual variables are not correlated with any variable in the model and that a residual plot should not display any systematic pattern (see the diagnostic sketch below).
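The sketch below illustrates how these assumptions can be screened on fitted residuals, assuming Python with statsmodels and SciPy on simulated data; it is a diagnostic illustration only, not part of the LISREL workflow described later.

import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 2.0 + 0.6 * x + rng.normal(size=300)
X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

print(fit.resid.mean())                # assumption 3: residual mean ~ 0
print(stats.shapiro(fit.resid))        # assumption 3: normal disturbances
print(het_breuschpagan(fit.resid, X))  # assumption 4: constant variance
# Assumptions 1 and 5 (linearity, independence) are usually judged from
# a residuals-versus-fitted plot, which should show no systematic pattern.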
Sample Size
Sample size is an important consideration in ensuring that the purpose of the study can be fully accomplished. The number of parameters is an important input in deciding the sample size needed to assess the significance of model effects. Kline (1998) suggested the following rules of thumb (a small helper implementing them appears after the list):
• Ideal sample size = the number of parameters × 20 or more
• Adequate sample size = the number of parameters × 10 or more
• Insufficient sample size = the number of parameters × 5 or less
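A small Python helper encoding these rules of thumb (purely a convenience function, not part of LISREL):

def kline_sample_size(n_parameters: int) -> dict:
    # Thresholds implied by Kline's (1998) parameter-based rules.
    return {
        "ideal (20+ per parameter)": 20 * n_parameters,
        "adequate (10+ per parameter)": 10 * n_parameters,
        "insufficient (5 or fewer per parameter)": 5 * n_parameters,
    }

# Example: a path model with 11 free parameters.
print(kline_sample_size(11))  # ideal 220, adequate 110, insufficient <= 55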
The types of parameters in path analysis and structural equation models include the following:
• Path coefficients
• Equation error variances
• Correlations among the independent variables
• Independent variable variances
The number of each type of parameter can be counted from the specified path model. The model identification section shows an example of how to count the number of each type of parameter. According to Schumacker and Lomax (2010, p. 211), "In traditional multivariate statistics, the rule of thumb is 20 subjects per variables (20:1). The rule of thumb used in structural equation modeling vary from 100, 200, to 500 or more subjects per study, depending on model complexity and cross-validation requirements." Hair, Black, Babin, and Anderson (2010) suggested the following:
• Minimum sample size of 100 for a model with five or fewer constructs.
• Minimum sample size of 150-300 for a model with seven or fewer constructs.
• Minimum sample size of 500 for a model with a large number of constructs.
PATH MODELING STEPS
There are several steps involved in path modeling: specification, identification, estimation, testing, and modification.
Specification
Model specification is the first step: identifying all relevant variables in the model and specifying the proper relationships among them. Typically, before model specification, a complete survey of the literature on the specific research area must be conducted to identify major issues for further investigation. Identifying all relevant variables specifically means including only necessary variables and excluding extraneous ones. Needless to say, either including extraneous variables or excluding essential variables will lead to a path model that may not truly reflect the causal relationships among variables. Moreover, it can affect the path coefficients significantly, and therefore the model results can be unreliable. Building a parsimonious model with a few substantively meaningful paths is also an important consideration in path modeling. A theoretical model in structural equation modeling (SEM) with all paths specified is of limited interest. This is the case of a saturated model, which will indicate little difference between the sample variance-covariance matrix and the reproduced implied covariance matrix (Schumacker & Lomax, 2004).
Figure 2 demonstrates common path diagram symbols. An observed variable is enclosed by either a rectangle or a square. A measurement error in an observed variable is denoted by an oval shape with an attached unidirectional arrow pointing to that observed variable. Single-headed arrows indicate unidirectional (recursive) paths. A reciprocal (non-recursive) path between variables can be drawn by using two arrows with opposite directions. Double-headed or curved arrows denote correlations between variables.
Figure 2. Path model symbols
Figure 3 demonstrates all possible hypothetical models among three variables (x1, x2, and y), taken from Schumacker and Lomax (2004). The path model examines the relationships among observed variables only. However, structural equation models include not only observed variables but also latent variables (constructs or factors), which are measured indirectly by combining several observed variables. All variables, whether latent or observed, can be classified as either exogenous or endogenous variables. The term "exogenous" comes from the Greek words "exo" and "gen", meaning "outside" and "production". Exogenous variables are the variables with no causal links (arrows) leading to them from other variables in the model. Exogenous variables are derived from outside a system and are therefore not controllable (independent). Endogenous variables originate from within a system; they are controllable and affected by exogenous variables. A unidirectional path is drawn from an exogenous variable to an endogenous variable.
An Example of Specification
To further demonstrate various aspects of path modeling, we introduce a simple path analysis model to help readers understand basic concepts, modeling procedures, and the interpretation of LISREL outputs. To conduct a path analysis, the following questions were selected from a survey instrument containing multiple questions. The survey questionnaire was developed using a 7-point Likert scale. Students responded to each statement based on their level of agreement, ranging from strongly disagree to strongly agree: 1 (strongly disagree), 2 (disagree), 3 (somewhat disagree),
Figure 2. Path model symbols
4 (neutral), 5 (somewhat agree), 6 (agree), and 7 (strongly agree):

• System Quality: The system is user-friendly.
• Information Quality: The system provides information that is exactly what you need.
• System Use: I frequently use the system.
• User Satisfaction: Overall, I am satisfied with the system.
• Learning Outcome: I feel that online learning is equal to the quality of traditional classroom learning.
In the absence of latent variables in the model, two analysis techniques are available (see Figure 1): regression analysis and path analysis. The presence of mediating variables in the specified model determines which of the two techniques applies, and Figure 1 shows that the path model is the appropriate one here. In the bivariate regression model of Figure 4, there is no mediating variable: there are two exogenous (independent) variables on the left-hand side and two endogenous variables on the right-hand side. The path analysis model (Figure 5) contains two mediators (mediating variables): system use and user satisfaction. In the path analysis model, system quality influences system use, which in turn influences user satisfaction and e-learning outcomes. System use is said to be a mediating variable between system quality and e-learning outcomes. User satisfaction serves as a mediator between information quality and e-learning outcomes, between system quality and e-learning outcomes, and between system use and e-learning outcomes.

Our model in Figure 5 examines the relationships among five observed variables. The two independent variables are system quality and information quality. The two mediating variables are system use and user satisfaction. The dependent variable is e-learning outcomes. Using the hypothetical three-variable model of Schumacker and Lomax (2004), with two independent variables (x1 and x2) and one dependent variable (y), five different models can be specified (Figure 3). The first three models are built on the assumption that x1 influences x2; the two models at the bottom assume that x1 does not influence x2. Under the further assumption that x2 influences x1, several additional models are possible. Despite the numerous possible theoretical models, model specification must be guided by theory grounded in the literature review.
Figure 3. Possible three-variable path models (Source: Schumacker & Lomax, 2004, pp. 154-155)
Identification

The next step, after the specification of a path model, is to decide whether the model is identified. The identifiability of a path model can be determined by comparing the number of parameters to be estimated (unknowns) with the number of distinct values in the covariance matrix (knowns). If the number of knowns equals the number of unknowns, the model is "just identified." If the number of unknowns is less than the number of knowns, the model is "over-identified." The last case, an "under-identified" model, occurs when the number of unknowns exceeds the number of knowns (see Table 1). The term determination is often used interchangeably with identification; accordingly, the terms just-determination, under-determination, and over-determination appear with or without hyphens. It is critical to avoid building an under-identified model: if a path model is under-identified, a unique path solution cannot be computed. A model is just identified if the number of parameters equals the number of distinct values in the covariance matrix; the output will then indicate that the path analysis is a saturated model, with chi-square = 0 and degrees of freedom = 0. Figure 5 illustrates the case of a just-identified model.
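To make the counting rule concrete, here is a minimal illustrative sketch in Python (not part of the LISREL workflow; the function name is ours). For p observed variables, the number of knowns is p(p + 1)/2.

def classify_identification(unknowns: int, p: int) -> str:
    # Compare free parameters (unknowns) with the distinct
    # values in the covariance matrix (knowns) for p observed variables.
    knowns = p * (p + 1) // 2
    if unknowns == knowns:
        return "just identified"
    if unknowns < knowns:
        return "over identified"
    return "under identified"

# The chapter's five-variable path model (Figure 5) is just identified,
# so its parameter count must equal 5 * 6 / 2 = 15.
print(classify_identification(15, 5))   # just identified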
Table 1. Three cases of model identification problems

Model Identification Type    Parameters to be Estimated (Unknowns) vs. Distinct Values in the Covariance Matrix (Knowns)
Under identified             Unknowns > Knowns
Just identified              Unknowns = Knowns
Over identified              Unknowns < Knowns

The model's paths and the equivalent SIMPLIS relationship equations, as entered in the command language file, are:

sq -> su us
iq -> su us
su -> us outcome
us -> outcome

Relationships
outcome = su us
us = sq iq su
su = sq iq
Figure 7. LISREL windows application screen
The command file is saved with an SPL extension. LISREL offers two types of user interface: drop-down menus and a graphical user interface (GUI). The GUI has 12 buttons; to run this command language file (introtopathanalysisdatafromfile.spl), the user clicks the fifth button from the left, labeled "L". Figure 8 shows the contents of the data file (jimsurvey.dat). The data file does not include variable names. It can be stored in the same directory as the command language file or at a different location (Figure 9).
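As a hedged illustration of what "no variable names" means in practice, a raw data file such as jimsurvey.dat could be inspected outside LISREL as follows (assuming the file is whitespace-delimited; the column order sq, iq, su, us, outcome is our assumption for illustration only):

import numpy as np

# jimsurvey.dat holds one respondent per row with no header line,
# so the column order must be supplied by the analyst.
data = np.loadtxt("jimsurvey.dat")
columns = ["sq", "iq", "su", "us", "outcome"]  # assumed order

print(data.shape)                   # expect (674, 5)
print(np.cov(data, rowvar=False))   # the covariance matrix LISREL fits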
Testing and Modification

Model testing tests the fit of the correlation matrix against the theoretical causal model built by researchers on the basis of the extant literature. Running a path analysis model produces: (1) the path analysis model file with an SPL extension, (2) the output file with an OUT extension, and (3) the path model graphical file with a PTH extension. The output file contains a wide range of useful information, including multiple fit indices available to test the model (see Figure 10).
The LISREL Output File

The output file contains the following information:

• Covariance Matrix
• LISREL Estimates using Maximum Likelihood
  ◦ Structural equations
  ◦ Reduced form equations
• Covariance Matrix of Independent Variables
• Covariance Matrix of Latent Variables
• Goodness of Fit Statistics
Figure 8. Path analysis model editing screen
Figure 9. Data file for the path analysis model
Prior to examining the LISREL estimates, it makes sense to examine the goodness-of-fit statistics first. If the fit statistics do not indicate good fit, the researcher may modify the model. As Figure 11 shows, the goodness-of-fit statistics include an extensive array of fit indices, which can be categorized into six subgroups of statistics that may be used to determine model fit. For a very good overview of LISREL goodness-of-fit statistics, readers are referred to Byrne (1998, pp. 109-119) and Hooper, Coughlan, and Mullen (2008). In regard to the choice of indices to assess model fit, Byrne (1998, p. 118) states:

Figure 10. Path analysis output

Having worked your way through this smorgasbord of goodness-of-fit measures, you are likely feeling totally overwhelmed and wondering what you do with all this information! Although you certainly do not need to report the entire set of fit indices, such an array can give you a good sense of how well your model fits the sample data. But how does one choose which indices are appropriate in the assessment of model fit? Unfortunately, this choice is not a simple one, largely because particular indices have been shown to operate somewhat differently given the sample size, estimation procedure, model complexity, violation of the underlying assumptions of multivariate normality and variable independence, or any combination thereof.
There seems to be agreement among SEM researchers that it is not necessary to report every goodness-of-fit statistic from the path analysis output (Figure 11). For SEM beginners, choosing a set of fit indices for assessing model fit is not an easy task, owing to the complexity and the multiple dimensions involved in the choice of good indices (Tanaka, 1993). Although there are no agreed-upon golden rules, readers are referred to the appendix of Chapter 5 of this book (Testing the DeLone-McLean Model of Information System Success in an E-Learning Context) for a brief discussion of several indices that have been frequently reported, and recommended for reporting, in the literature (Boomsma, 2000; Crowley & Fan, 1997; Hayduk, Cummings, Boadu, Pazderka-Robinson, & Boulianne, 2007; Hooper, Coughlan, & Mullen, 2008; Kline, 2005; McDonald & Ho, 2002).

Figure 11. Goodness of fit statistics from path analysis output
LISREL Parameter Estimates

Path analysis using LISREL estimates the coefficients of a set of linear structural equations. The path analysis output presents two sets of results: structural equations and reduced form equations. The structural equations are composed of independent (cause) variables and dependent (effect) variables. A single-equation regression model or a bivariate regression model can also be analyzed by LISREL path analysis; the outputs from single and bivariate regression analyses include only structural equations and the estimated relationships between the effect variables and the cause variables. Our model's output, however, lists both types of equations (structural form equations and reduced form equations), with the estimated relationships for each, because the path model includes structural form equations that define the relationships among the cause variables. In the model, system use and user satisfaction are endogenous variables, and they are also intervening variables. If the covariance among the measurement errors of the intervening variables equaled zero, we could apply ordinary least squares (OLS). However, structural equation models assume that such covariance exists, so LISREL estimates all structural coefficients simultaneously rather than separately.

Since our model is tested on a sample size of 674, the chi-square statistic is not a good measure of goodness of fit: chi-square nearly always rejects the model when large samples are used (Bentler & Bonnet, 1980). The RMSEA is the second fit statistic reported in the LISREL program. A cut-off value close to .05 indicates a close fit, and values up to 0.08 are considered to represent reasonable errors of approximation (Jöreskog & Sörbom, 1993). The LISREL output provides two corresponding sections: the structural equations, which consist of all the equations including the mediating variables (system use and user satisfaction), and the reduced form equations, which show only the effects of the exogenous (independent) variables on the endogenous variables.
Structural Equations

The term structural equations refers to the simultaneous equations in a model, also known as a multi-equation model: the equations that contain mediating variables. A mediating variable functions as the dependent (response) variable in one equation and, at the same time, as an independent (predictor) variable in another equation. The structural equations output shows the direct effects of all exogenous variables and mediating variables, and it also illustrates the effects of the endogenous variables on each other. The output below shows that system quality and information quality have no direct effects on the e-learning outcomes; their influence is carried by the mediating variables. The use of the e-learning system is positively influenced by system quality and information quality. User satisfaction is positively influenced by system quality, information quality, and system use. The perceived learning outcomes are positively influenced by system use and user satisfaction.
Path Coefficients

To interpret the output from LISREL, one must understand the path coefficient (path weight): the standardized partial regression coefficient for an independent variable. In Figure 12, each estimated path coefficient appears in front of the * preceding its variable name. To demonstrate how to interpret the LISREL output, we use the first line of the LISREL output:

su = 0.15*sq + 0.27*iq, Errorvar. = 0.79, R² = 0.22
     (0.040)    (0.039)              (0.043)
      3.85       6.91                 18.32
The e-learning system use (su) is expected to increase by 0.15 on average if students' perception of the quality of the e-learning system (sq), measured by "the system is user-friendly," increases by one unit on the 7-point Likert scale while the other variable (iq) remains fixed.
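Read as a prediction rule, the estimated equation behaves as follows (a minimal Python sketch using the rounded coefficients from Figure 12; the function name is ours):

def predicted_su(sq: float, iq: float) -> float:
    # su = 0.15*sq + 0.27*iq (rounded LISREL estimates)
    return 0.15 * sq + 0.27 * iq

# Raising sq by one Likert point while holding iq fixed shifts the
# predicted system use by the path coefficient, 0.15.
print(predicted_su(5, 4) - predicted_su(4, 4))   # 0.15 (up to float rounding)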
Standard Errors of the Estimate

The number enclosed in parentheses below each path coefficient is the standard error of the estimate: the standard deviation of the residuals (errors) for the regression model. Residual analysis tests whether the regression line is a good fit to the data (Black, 2008). The residuals of regression/path models are defined as the differences between the observed y values and the predicted values, ŷ. Because the sum of all residuals is always zero, measuring the variability of y requires computing the standard error of the estimate (the standard deviation of the residuals for the regression model) by first finding the sum of
Figure 12. Structural equations and reduced form equations
squared errors (residuals), SSE. The final step in finding the standard error of the estimate (Se) is to divide SSE by the degrees of freedom for error in the model and take the square root:

• Residual = y − ŷ
• SSE = Σ(y − ŷ)²
• Se = √(SSE / (n − k − 1))

where n = number of observations and k = number of independent variables.
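A short numerical sketch of these three steps (illustrative Python with made-up toy data, not the chapter's survey data):

import numpy as np

def standard_error_of_estimate(y, y_hat, k):
    residuals = np.asarray(y) - np.asarray(y_hat)   # Residual = y - y-hat
    sse = np.sum(residuals ** 2)                    # SSE = sum of squared residuals
    n = len(residuals)
    return np.sqrt(sse / (n - k - 1))               # Se = sqrt(SSE / (n - k - 1))

# Toy example: 5 observations, 2 independent variables (k = 2).
y     = [3.0, 4.0, 5.0, 6.0, 7.0]
y_hat = [3.2, 3.9, 5.1, 5.8, 7.0]
print(standard_error_of_estimate(y, y_hat, k=2))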
t-Values and Their Interpretation

The numbers below the standard errors of the estimate are the t-values: the ratios obtained by dividing the path coefficients by their standard errors. In Figure 12, the first t-value (3.85) is the ratio of the estimate (path coefficient) to its standard error. Note that .15/.040 = 3.75, not the 3.85 shown in Figure 12. The discrepancy arises because the rounded values printed in the output are not the values used to produce the t-value; the unrounded path coefficient could have been, say, .15335 or .14997. A high t-value indicates that the path coefficient is non-zero (significant). What is the threshold value of t? In structural equation modeling, the rule is to use a critical value. AMOS uses critical ratio (C.R.) values (Byrne, 2010); LISREL uses the T value.
Figure 13. Covariance matrix of independent variables
However, this T must not be confused with the Student t distribution. In inferential statistics, when the population standard deviation (σ) is unknown and the population is normally distributed, the population mean is estimated using the Student t distribution, which was developed by William S. Gosset. In SEM, including path analysis modeling, the T value typically used is T > 1.96. Traditionally these critical values are called t-values, but they are in fact z critical values; this is simply a historical artifact of the field (Eom, 2004). A hypothesis test for the regression coefficient of each independent variable determines whether the coefficient is zero or not. The null hypothesis is ß1 = 0; the alternative hypothesis is ß1 ≠ 0. The same hypothesis is developed and tested for each independent variable. Because the alternative hypothesis contains ≠, this is a two-tailed test. With the specified Type I error rate (α) = .05, the rejection region lies in the two tails of the standard normal distribution (2.5% in each tail). The value 1.96 indicates that the Type I error rate, alpha (α), is 5% across the two tails of the distribution curve; the critical z value is zα/2 = ±1.96, and z = ±1.96 covers 95% of the area under the standard normal distribution. Using the z value of 1.96, all the path coefficients in the model are significant, with a 5% probability of making a Type I error (rejecting a true null hypothesis).
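The decision rule can be reproduced in a few lines (an illustrative sketch; scipy's normal quantile function supplies the z critical value the text describes):

from scipy.stats import norm

alpha = 0.05
z_critical = norm.ppf(1 - alpha / 2)   # 1.9599... ~= 1.96 for a two-tailed test

# First path of Figure 12: coefficient 0.15, standard error 0.040.
t_value = 0.15 / 0.040                 # 3.75 from rounded values (printed as 3.85)
print(abs(t_value) > z_critical)       # True: reject H0 that the coefficient is 0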
Coefficient of Multiple Determination (R²)

R² is a measure of fit for the regression model: "the proportion of variation of the dependent variable (y), accounted for by the independent variables in the regression models" (Black, 2008). The value of R² ranges from 0 to 1.

R² = regression sum of squares (SSR) / total sum of squares (SST)

Total sum of squares (SST) = explained sum of squares (SSR) + unexplained sum of squares (SSE)
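Expressed as a computation (an illustrative sketch; any observed and predicted vectors can be substituted):

import numpy as np

def r_squared(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    sst = np.sum((y - y.mean()) ** 2)   # total sum of squares
    sse = np.sum((y - y_hat) ** 2)      # unexplained sum of squares
    return 1 - sse / sst                # equivalently SSR / SST

print(r_squared([3, 4, 5, 6, 7], [3.2, 3.9, 5.1, 5.8, 7.0]))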
Path Diagram

LISREL 8.70 produces a path diagram as shown in Figure 14. The path diagram provides important information. The double-headed arrow in Figure 14 shows that the covariance between system quality (sq) and information quality (iq) is 1.02; the variances of sq and iq are 1.42 and 1.5, respectively. The same information can also be obtained from the covariance matrix of independent variables (Figure 13). The single-headed arrows carry the path coefficients from the structural equations in Figure 12. For example, the first structural equation defines the relationship between su and the two independent variables:

su = .15*sq + .27*iq, Errorvar. = .79
Figure 14. Path diagram
These path coefficients are displayed in Figure 14. The right-hand side of the path diagram also includes the error variance for each endogenous variable (.79, .51, and 3.00). The bottom of the path diagram shows four fit indices. The p-value is a way of testing hypotheses, also known as the observed significance level: it is the smallest value of α at which the null hypothesis can be rejected.
Reduced Form Equations

The reduced form equations of a path model result from algebraically rearranging the independent and dependent variables so that each dependent (endogenous) variable appears on the left side of one equation and only independent (exogenous) variables appear on the right side. Reduced form equations express each endogenous variable in terms of the total causal effects of the two exogenous variables; the effects of the mediating variables (system use and user satisfaction) on the effect variables are therefore not shown. The first effect variable (system use) is explained by the two cause variables (system quality and information quality). Because no mediating variable enters this equation, the equation for system use (su) appears under both the structural equations and the reduced form equations with identical path coefficients.
Effect Decomposition

The second effect variable (user satisfaction) is explained by the total effects of the two cause variables, with the direct effect of the mediating variable (system use) folded in. The total effects are computed by combining the direct and indirect effects of the two cause variables:

• The direct effect of system quality on user satisfaction is .39 (taken from the second structural equation).
• The indirect effect of system quality on user satisfaction is computed by multiplying the path coefficients along the compound path, in this case system quality → system use → user satisfaction: .15 * .13 = .0195.
• The total causal effect of system quality on user satisfaction is therefore .39 + .0195 = .4095, shown in Figure 12 as .41.
The total effects on e-learning outcomes consist of several indirect paths from the two cause variables. The indirect effects from information quality are decomposed as follows:

• Information quality → system use → e-learning outcome: .27 * .2 = .054
• Information quality → user satisfaction → e-learning outcome: .44 * .54 = .2376
• Information quality → system use → user satisfaction → e-learning outcome: .27 * .13 * .54 = .018954
• Total indirect effect from information quality: .054 + .2376 + .018954 = .310554

The indirect effects from system quality are decomposed as follows:

• System quality → system use → e-learning outcome: .15 * .2 = .03
• System quality → user satisfaction → e-learning outcome: .39 * .54 = .2106
• System quality → system use → user satisfaction → e-learning outcome: .15 * .13 * .54 = .01053
• Total indirect effect from system quality: .03 + .2106 + .01053 = .25113
SUMMARY

The family of path modeling techniques consists of regression analysis, path analysis, factor analysis, and structural equation modeling. This chapter has focused on path analysis modeling using LISREL 8.70. SEM methodology builds on the three related multivariate statistical techniques (regression, path, and confirmatory factor models), and LISREL is capable of modeling all four techniques through the steps of specification, identification, parameter estimation, testing, and modification. A simple path analysis model was introduced to investigate the relationships among information quality, system quality, system use, user satisfaction, and e-learning outcomes. This chapter has demonstrated the various aspects of path modeling to help readers understand basic concepts, modeling procedures, and the interpretation of LISREL outputs.
REFERENCES

Arbaugh, J. B., Godfrey, M. R., Johnson, M., Pollack, B. L., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12, 71–87. doi:10.1016/j.iheduc.2009.06.006

Arbaugh, J. B., Hwang, A., & Pollack, B. L. (2010). A review of research methods in online and blended business education: 2000-2009. In Eom, S. B., & Arbaugh, J. B. (Eds.), Student satisfaction and learning outcomes in e-learning: An introduction to empirical research. Hershey, PA: IGI Global.

Bentler, P. M. (1990). Comparative fit indices in structural models. Psychological Bulletin, 107, 238–246. doi:10.1037/0033-2909.107.2.238

Bentler, P. M., & Bonnet, D. C. (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88(3), 588–606. doi:10.1037/0033-2909.88.3.588

Black, K. (2008). Business statistics for contemporary decision making (5th ed.). Hoboken, NJ: Wiley.

Bollen, K. A. (1989). A new incremental fit index for general structural models. Sociological Methods & Research, 17, 303–316. doi:10.1177/0049124189017003004

Boomsma, A. (2000). Reporting analyses of covariance structures. Structural Equation Modeling, 7(3), 461–483. doi:10.1207/S15328007SEM0703_6
Byrne, B. M. (1998). Structural equation modeling with LISREL, PRELIS, and SIMPLIS: Basic concepts, applications, and programming. Mahwah, NJ: Lawrence Erlbaum.

Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming (2nd ed.). New York, NY: Routledge Academic.

Crowley, S. L., & Fan, X. (1997). Structural equation modeling: Basic concepts and applications in personality assessment research. Journal of Personality Assessment, 68(3), 508–531. doi:10.1207/s15327752jpa6803_4

Diamantopoulos, A., & Siguaw, J. A. (2000). Introducing LISREL. London, UK: Sage Publications.

Dielman, T. E. (1996). Applied regression analysis for business and economics (2nd ed.). Belmont, CA: Wadsworth Publishing Company.

Eom, S. B. (2004). Personal communication with Richard Lomax, by e-mail, in regard to the use of the T value in structural equation modeling.

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Hayduk, L., Cummings, G. G., Boadu, K., Pazderka-Robinson, H., & Boulianne, S. (2007). Testing! Testing! One, two three – testing the theory in structural equation models! Personality and Individual Differences, 42(2), 841–850. doi:10.1016/j.paid.2006.10.001

Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. The Electronic Journal of Business Research Methods, 6(1), 53–60.

Hoyle, R. H. (1995). The structural equation modeling approach: Basic concepts and fundamental issues. In Hoyle, R. H. (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 1–15). Thousand Oaks, CA: Sage Publications.
Hoyle, R. H., & Panter, A. T. (1995). Writing about structural equation models. In Hoyle, R. H. (Ed.), Structural equation modeling: Concepts, issues, and applications (pp. 158–176). Thousand Oaks, CA: Sage Publications.

Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. doi:10.1080/10705519909540118

Jöreskog, K. G., & Sörbom, D. (1989). LISREL 7 user's reference guide. Chicago, IL: SPSS Publications.

Jöreskog, K. G., & Sörbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Hillsdale, NJ: Lawrence Erlbaum Associates Publishers.

Kline, R. B. (1998). Principles and practice of structural equation modeling. New York, NY: Guilford Press.

Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York, NY: The Guilford Press.

MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130–149. doi:10.1037/1082-989X.1.2.130

McDonald, R. P., & Ho, M.-H. R. (2002). Principles and practice in reporting statistical equation analyses. Psychological Methods, 7(1), 64–82. doi:10.1037/1082-989X.7.1.64

Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(4), 401–426. doi:10.2307/3250989
Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers.

Schumacker, R. E., & Lomax, R. G. (2010). A beginner's guide to structural equation modeling (3rd ed.). New York, NY: Routledge, Taylor and Francis Group.

Tanaka, J. S. (1993). Multifaceted conceptions of fit in structural equation models. In Bollen, K. A., & Long, J. S. (Eds.), Testing structural equation models (pp. 10–39). Newbury Park, CA: Sage Publications.

Tanaka, J. S., & Huba, G. J. (1984). Confirmatory hierarchical factor analyses of psychological distress measures. Journal of Personality and Social Psychology, 46, 621–635. doi:10.1037/0022-3514.46.3.621
ADDITIONAL READING

Byrne, B. M. (1994). Structural equation modeling with EQS and EQS/Windows. Thousand Oaks, CA: Sage Publications.

Byrne, B. M. (2010). Structural equation modeling with AMOS: Basic concepts, applications, and programming (2nd ed.). New York, NY: Routledge Academic.

Carmeli, A., & Gefen, D. (2005). The relationship between work commitment models and employee withdrawal intentions. Journal of Managerial Psychology, 20(1/2), 63–86. doi:10.1108/02683940510579731

Christensen, T. E., Fogarty, T. J., & Wallace, W. A. (2002). The association between the directional accuracy of self-efficacy and accounting course performance. Issues in Accounting Education, 17(1), 1–26. doi:10.2308/iace.2002.17.1.1

DeShields, O. W., Jr., Kara, A., & Kaynak, E. (2005). Determinants of business student satisfaction and retention in higher education: Applying Herzberg's two-factor theory. International Journal of Educational Management, 19(2/3), 128–139.

Dow, K. E., Wong, J., Jackson, C., & Leitch, R. A. (2008). A comparison of structural equation modeling approaches: The case of user acceptance of information systems. Journal of Computer Information Systems, 48(4), 106–115.

Harvey, P., Harris, K. J., & Martinko, M. J. (2008). The mediated influence of hostile attributional style on turnover intentions. Journal of Business and Psychology, 22(4), 333–343. doi:10.1007/s10869-008-9073-1

Loehlin, J. C. (2004). Latent variable models: An introduction to factor, path, and structural equation analysis (4th ed.). Mahwah, NJ: L. Erlbaum Associates.

Maruyama, G. M. (1998). Basics of structural equation modeling. Thousand Oaks, CA: Sage Publications.

Montgomery, A. L., Li, S., Srinivasan, K., & Liechty, J. C. (2004). Modeling online browsing and path analysis using clickstream data. Marketing Science, 23(4), 579–595. doi:10.1287/mksc.1040.0073

Olobatuyi, M. E. (2006). A user's guide to path analysis. Lanham, MD: University Press of America.

Plice, R. K., & Reinig, B. A. (2007). Aligning the information systems curriculum with the needs of industry and graduates. Journal of Computer Information Systems, 48(1), 22–30.

Poon, J. M. L. (2003). Situational antecedents and outcomes of organizational politics perceptions. Journal of Managerial Psychology, 18(1/2), 138–155. doi:10.1108/02683940310465036
Surhone, L. M., Timpledon, M. T., & Marseken, S. F. (Eds.). (2010). Path analysis (statistics): Structural equation model, analysis of covariance, latent variable model. Betascript Publishing.

Xenikou, A., & Simosi, M. (2006). Organizational culture and transformational leadership as predictors of business unit performance. Journal of Managerial Psychology, 21(6), 566–579. doi:10.1108/02683940610684409

Yousef, D. A. (2002). Job satisfaction as a mediator of the relationship between role stressors and organizational commitment: A study from an Arabic cultural perspective. Journal of Managerial Psychology, 17(4), 250–266. doi:10.1108/02683940210428074
KEY TERMS AND DEFINITIONS

Coefficient of Correlation: A measure of the linear correlation of two variables; linear correlation measures the strength of the relationship between two numerical variables. For quantitative data, the Pearson product-moment correlation coefficient, r, is used to measure the strength of the linear correlation. Quantitative data are often referred to as metric data, which include both ratio-level and interval-level data. The value of r ranges from -1 through 0 to +1, representing a perfect negative correlation (r = -1), no correlation (r = 0), and a perfect positive correlation (r = +1).

Covariance: A measure of the joint variability of two variables. A positive (negative) covariance value indicates an increasing (decreasing) linear relationship between two quantitative variables. It is defined as Cov(x, y) = SSxy/(n - 1) = Σ(xi - x̄)(yi - ȳ)/(n - 1), where n is the sample size. Covariance is a necessary input for computing the coefficient of correlation between two quantitative variables: the value of r is the covariance of the two variables (x, y) divided by the product of the standard deviation of variable x and the standard deviation of variable y. Understanding the concepts of variance and covariance is very important for understanding path analysis modeling, since a variance-covariance matrix is used to test the fit of the correlation matrix against the causal research models.

Exogenous Variables: The term comes from the Greek words "exo" and "gen," meaning "outside" and "production," and refers to uncontrollable variables coming from outside the model; their values are therefore given. Endogenous (dependent) variables are the opposite of exogenous variables: their values depend on the exogenous (independent) variables. User satisfaction, for example, may depend on system quality and other exogenous variables. A beginner in structural equation modeling will encounter many different terms for dependent and independent variables. The independent variables, denoted "x," are also known as exogenous, explanatory, predictor, or cause variables. The dependent variables, the variables to be predicted, are denoted "y" and are also known as endogenous, response, or effect variables.

Model Estimation: Estimates the parameters of theoretical models using estimation procedures such as maximum likelihood (ML), generalized least squares (GLS), and unweighted least squares (ULS). LISREL uses the ML procedure to generate the structural equations and reduced form equations, the covariance matrix of independent variables, the covariance matrix of latent variables, and the goodness-of-fit statistics.

Model Identification: Decides whether the model is identifiable. The identifiability of a path model can be determined by comparing the number of parameters to be estimated (unknowns) with the number of distinct values in the covariance matrix (knowns). If the number of knowns equals the number of unknowns, the model is just identified. If the number of unknowns is less than the number of knowns, the model is over-identified. The last case, an under-identified model, occurs when the number of unknowns exceeds the number of knowns.
The term determination is often used interchangeably with identification; accordingly, the terms just-determination, under-determination, and over-determination appear with or without hyphens. It is critical for path analysis modelers to avoid building under-identified models.

Model Specification: The first step of path analysis modeling. It is concerned with identifying all relevant variables in the research model and specifying the proper relationships among them. Identifying all relevant variables specifically means including only necessary variables and excluding extraneous ones. Needless to say, either including extraneous variables or excluding essential variables will lead to a path model that may not truly reflect the causal relationships among variables. Moreover, it can affect the path coefficients significantly, and the model results can therefore be unreliable. It is also important in this step to build a parsimonious model with a few substantively meaningful paths.

Model Testing: Tests the fit of the correlation matrix against the theoretical causal model built by researchers on the basis of the extant literature. The output file contains a wide range of useful information, including multiple fit indices available to test the model. The goodness-of-fit statistics include an extensive array of fit indices. Choosing specific indices for assessing model fit is a complex task, because each index is designed to suit specific path/structural equation models, depending on sample size, estimation procedure, model complexity, violation of the underlying assumptions of multivariate normality, variable independence, and so on.

Path Coefficients: Standardized regression coefficients. The path coefficient, also known as the path weight, shows the strength of the effect of an independent variable on a dependent variable in the path model. The LISREL output provides two different sets of path coefficients: structural equations and reduced form equations. The term structural equations refers to the simultaneous equations in a model, also known as a multi-equation model: the equations that contain mediating variables. A mediating variable functions as the dependent (response) variable in one equation and, at the same time, as an independent (predictor) variable in another equation. Structural equation outputs show the direct and indirect effects of all exogenous variables and mediating variables, and they also illustrate the effects of the endogenous variables on each other. The reduced form equations of a path model result from algebraically rearranging the independent and dependent variables so that each dependent (endogenous) variable appears on the left side of one equation and only independent (exogenous) variables appear on the right side; they express the endogenous variables in terms of the total causal effects of the exogenous variables, so the effects of the mediating variables on the effect variables are not shown.

t-Values: The numbers below the standard errors of the estimate; each is the ratio obtained by dividing a path coefficient by its standard error. A high t-value indicates that the path coefficient is non-zero (significant). In LISREL, the rule is to use a critical value of T; this T must not be confused with the Student t distribution. In SEM, the t value typically used is t > 1.96. Traditionally these critical values are called t-values, but they are z critical values.
Chapter 5
Testing the DeLone-McLean Model of Information System Success in an E-Learning Context

Sean B. Eom, Southeast Missouri State University, USA
James Stapleton, Southeast Missouri State University, USA
ABSTRACT

This chapter has two important objectives: (a) to introduce structural equation modeling for beginners, and (b) to empirically test the validity of the information system (IS) success model of DeLone and McLean (the DM model) in an e-learning environment, using LISREL-based structural equation modeling. The following section briefly describes the prior literature on course delivery technologies and e-learning success. The next section presents the research model tested and discusses the survey instrument. The structural equation modeling process is then fully discussed, including specification, identification, estimation, testing, and modification of the model. The final section summarizes the test results. To build e-learning theories, untested conceptual frameworks must be tested and refined; nevertheless, there has been very little testing of these frameworks. This chapter is concerned with the testing of one such framework. There is abundant prior research examining the relationships among information quality, system quality, system use, user satisfaction, and system outcomes, but this is the first study that focuses on testing the DM model in an e-learning context.

DOI: 10.4018/978-1-60960-615-2.ch005
INTRODUCTION

During the past decades, we have seen an increase in empirical research to identify the success factors of e-learning systems. The majority of e-learning research has focused on two research streams: (a) outcome comparison studies with classroom-based learning, and (b) studies examining potential predictors of e-learning success or e-learning outcomes (Arbaugh et al., 2009). The quality of e-learning empirical research has also improved substantially during the past two decades. Many frameworks for e-learning education in business have been developed or adopted from other disciplines. To build e-learning theories, those untested conceptual frameworks must be tested and refined; nevertheless, there has been very little testing of these frameworks. This chapter is concerned with the testing of one such framework: the information system (IS) success model of DeLone and McLean (the DM model).

This chapter has two important objectives: (a) to introduce structural equation modeling for beginners, and (b) to empirically test the validity of the DM model in an e-learning environment. The primary focus of this book is on providing an introduction to e-learning empirical research. To that end, several chapters in the book include tutorials on structural equation modeling techniques, such as path analysis and structural equation modeling using partial least squares; this chapter complements those chapters by providing a basic introduction to structural equation modeling. The second objective of this chapter is to empirically test the validity of the DM model in an e-learning environment. The DM model is one of the most widely recognized IS success models, based on a systematic review of 180 studies with over 100 measures. The DM model has been empirically tested using structural equation modeling in a quasi-voluntary IS use context (Rai, Lang, & Welker, 2002) and in a mandatory information system context (Iivari, 2005). The study of Rai et al. concluded that the DM model has explanatory power, and therefore that the model has merit for explaining IS success. The study of Iivari concluded that perceived system quality and information quality are significant predictors of user satisfaction; however, it failed to support a positive association between system use and user satisfaction. Our study is the first empirical test of the DM model in an e-learning context. E-learning systems and information systems share some common dependent variables; nevertheless, the two types of systems differ in terms of the outputs of the systems (independent variables).

The rest of this chapter is organized into several sections. The following section briefly describes the prior literature on course delivery technologies and e-learning success; the course delivery technologies are part of a comprehensive array of dependent variables that affect the success of e-learning systems. The next section presents the research model to be tested, and the survey instrument is discussed in the section after that. Structural equation modeling (SEM) methodology is then fully discussed, including model specification, model identification, model estimation, and model testing and modification. The final section summarizes the test results.
COURSE DELIVERY TECHNOLOGIES AND E-LEARNING SUCCESS

A review of the past two decades of e-learning systems research identified three dimensions: human (students, instructors, and the interaction among them), design (course contents, course structures, etc.), and e-learning systems, including technologies. In each dimension, researchers have identified many indicator variables. For example, students can be further sub-classified into sub-dimensions such as learning styles, intelligence, self-efficacy, motivation, and self-regulated learning behaviors. For a review of the impact of the human and design dimensions on e-learning success,
Figure 1. Information systems success model. Source: DeLone and McLean 1992, P.87
readers are referred to Arbaugh et al. (2009). The technological dimension of e-learning success factors includes many information systems tools, such as Web 2.0 technologies, push technologies, blogs, and wikis, to name a few. Readers are referred to Arbaugh et al. (2009) for a review of the various empirical studies examining the impact of these tools on e-learning success.
The DeLone and McLean Model

The research model we tested is the model of information systems (IS) success created by DeLone and McLean (1992). Based on a review of 180 empirical studies, DeLone and McLean presented a more integrated view of the concept of information systems success and formulated a more comprehensive model of it (Figure 1). The model is built on six major categories of success measures, which are interrelated and interdependent: system quality, information quality, use, user satisfaction, individual impact, and organizational impact. Seddon (1997) presented and justified a re-specified and slightly extended version of the DeLone and McLean model, because the inclusion of both variance and process interpretations in the original model could be confusing. Seddon demonstrated that the term "IS use" in the DM model can be interpreted in three different ways: as a variable that proxies for the benefits from use, as the dependent variable, and as an event in a process leading to individual or organizational impact. The extended model clarified the meaning of IS use, introduced four new variables (expectations, consequences, perceived usefulness, and net benefits to society), and reassembled the links among the variables.
The Updated Model of Information Systems Success

The original IS success model of DeLone and McLean (1992) was updated by adding one more construct, service quality, and by combining individual and organizational impacts into a single construct of net benefits (DeLone & McLean, 2003). The addition of the service quality construct is justified by the changing role of the information systems organization, which now has the dual role of information provider and service provider. The role of information provider is to produce information for the end users (the entire organization and management); the role of service provider is to offer support for end-user developers. To correctly understand the significance of this construct in the IS success model, we need to understand the dual role of end users as both users of the system and developers of systems. The sample SERVQUAL instrument items used as indicators of the service quality construct include the following dimensions:
Figure 2. The e-learning success model and sample metrics of Holsapple and Lee-Post. Source: Holsapple and Lee-Post, 2006. p.71
• Tangibility: The information systems department has up-to-date hardware and software.
• Reliability: The information systems department should be able to perform services dependably and accurately.
• Responsiveness: The information systems department staff should be able to give prompt service to users.
• Assurance: The information systems department staff should be able to do their jobs well, inspiring confidence and trust.
• Empathy: The information systems department staff have users' best interests at heart, giving individualized service with a caring attitude.
SERVQUAL is a multi-item scale developed to assess customer perceptions of service quality in service and retail businesses. Kettinger and Lee (1994) were among the first to adapt SERVQUAL to the IS context. The updated model of IS success was later adapted to the e-learning context (Holsapple & Lee-Post, 2006).
The E-Learning Success Model of Holsapple and Lee-Post

As introduced in the review, a large volume of empirical studies has been conducted to investigate the success factors of e-learning systems. Holsapple and Lee-Post attempted to formulate a holistic and comprehensive model for assessing e-learning initiatives: a model to guide the design, development, and delivery of successful e-learning initiatives. The e-learning success model of Holsapple and Lee-Post (Figure 2) uses a process approach to measuring and assessing success, and it includes success metrics developed specifically for the e-learning context. In this process approach, the overall success of an e-learning initiative depends on the attainment of success at each of the three stages of e-learning systems development: design, delivery, and outcome analysis. Their model shows that system design, system delivery, and system outcomes are interdependent, as indicated by the unidirectional single-headed arrows and bi-directional double-headed arrows. Using the action research approach, they tested the model through the development and implementation of an online version of an undergraduate quantitative methods course. Holsapple and Lee-Post further suggested validating their success model with empirical studies.
SURVEY INSTRUMENT

Wang, Wang, and Shee (2007) first explored whether traditional IS success models can be extended to investigating e-learning systems' success in organizational contexts. Based on DeLone and McLean's (2003) updated IS success model, Wang et al. developed and validated a multi-dimensional model for assessing e-learning systems' success (ELSS) from the perspective of the employee (e-learner). Their study conceptualized the construct of e-learning systems' success, provided an empirical validation of the construct and its underlying dimensionality, and developed a standardized instrument with desirable psychometric properties for measuring e-learning systems' success. Using the proposed ELSS instrument, the current study attempts to extend the use of DeLone and McLean's (2003) model to university-based e-learning environments.

The survey instrument consisted of 35 items using a seven-point Likert scale ranging from "strongly disagree" to "strongly agree." In addition, respondents were asked six demographic questions. The population was undergraduate and graduate students enrolled in an online course at a large university located in the Midwestern United States. Invitations to complete the survey were administered online at log-in to 2,156 unique students. Of those invited, 809 students volunteered responses, with 674 surveys complete and usable, for a response rate of 31.3%.
SAMPLE SIZE IN STRUCTURAL EQUATION MODELING

Sample size is an important consideration in ensuring that the purpose of the study can be fully accomplished. The number of parameters is an important input for deciding the sample size needed to assess the significance of model effects. Kline (1998) suggested the following:

• Ideal sample size = the number of parameters × 20 or more
• Adequate sample size = the number of parameters × 10 or more
• Insufficient sample size = the number of parameters × 5 or less

The types of parameters in path analysis and structural equation models include the following:

• Path coefficients
• Equation error variances
• Correlations among the independent variables
• Independent variable variances
The number of each type of parameter can be counted from the specified path model; the model identification section below shows an example of how to count them. According to Schumacker and Lomax (2010, p. 211), "In traditional multivariate statistics, the rule of thumb is 20 subjects per variable (20:1). The rules of thumb used in structural equation modeling vary from 100, 200, to 500 or more subjects per study, depending on model complexity and cross-validation requirements."
Hair et al. (Hair, Black, Babin, & Anderson, 2010) suggested the following:

• Minimum sample size of 100 for models with five or fewer constructs.
• Minimum sample size of 150-300 for models with seven or fewer constructs.
• Minimum sample size of 500 for models with a large number of constructs.

These heuristics are easy to apply once the parameters have been counted, as the sketch below illustrates.
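A small illustrative Python sketch applying both sets of heuristics (the 42 free parameters are the count derived in the identification section of this chapter):

def kline_thresholds(num_parameters: int) -> dict:
    # Kline's (1998) heuristics as summarized above.
    return {
        "ideal (20x)": 20 * num_parameters,
        "adequate (10x)": 10 * num_parameters,
        "insufficient (5x or less)": 5 * num_parameters,
    }

n = 674                       # usable responses in this study
print(kline_thresholds(42))   # {'ideal (20x)': 840, 'adequate (10x)': 420, ...}
print(n >= 10 * 42)           # True: adequate by Kline's rule
print(n >= 500)               # True: meets Hair et al.'s most demanding minimum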
By any measure, our sample size of 674 completed responses with no missing data is considered to be more than adequate.
SEM METHODOLOGY

All structural equation models (regression, path, confirmatory factor, and structural equation models) follow a series of steps: specification, identification, estimation, testing, and modification of the model (Schumacker & Lomax, 2010).
Model Specification

Model specification is the first step: identifying all of the relevant variables in the model and specifying the proper relationships among them. Typically, before model specification, a complete survey of the literature on the specific research area must be conducted to identify the major issues for further investigation. Identifying all relevant variables specifically means including only necessary variables and excluding extraneous ones. Figure 3 shows the research model with common path diagram symbols; Figure 4 is the LISREL-produced basic conceptual diagram of the DeLone-McLean model of information system success. An observed variable is enclosed by either a rectangle or a square. A measurement error in an observed variable is denoted by an oval with an attached unidirectional arrow pointing to that observed variable. Single-headed arrows indicate unidirectional (recursive) paths. A reciprocal (non-recursive) path between variables can be drawn using two arrows pointing in opposite directions. Double-headed or curved arrows denote correlations between variables. Each observed variable is contacted by two arrowheads. For example, the first observed variable (q1), measured by questionnaire item 1, has two arrows, at left and right: the left-hand arrow indicates a measurement error, and the right-hand arrow indicates that q1 is an indicator associated with the system quality construct. The models in Figures 3 and 4 examine the relationships among five constructs. The two independent constructs are system quality and information quality. The two mediating constructs are system use and user satisfaction. The dependent construct is e-learning outcomes.
System Quality and Information Quality

The IS success model (DeLone & McLean, 1992; DeLone & McLean, 2003) and the e-learning success model (Holsapple & Lee-Post, 2006) posit that the success of IS and e-learning systems depends on the intervening variables (user satisfaction and system use), which in turn depend on the quality of information, system, and service. The technology acceptance model (TAM), developed in the IS area, has emerged as a useful model for explaining e-learning system usage and satisfaction (Landry, Griffeth, & Hartman, 2006). The TAM defines the relationships between system use (the dependent construct) and perceived usefulness and perceived ease of use (two independent constructs); it theorizes that system use is determined by perceived usefulness and perceived ease of use. The TAM has been extended by many other researchers; the unified theory of acceptance and use of technology (UTAUT) is one such extension. The TAM postulates that perceived usefulness and ease of use determine an individual's intention to use a
Figure 3. Research model
Figure 4. LISREL produced basic conceptual diagram of the DeLone-McLean model of information system success
system, which in turn determines actual system use. The theory posits that four key constructs directly determine usage intention and behavior (Venkatesh, Morris, Davis, & Davis, 2003). Moreover, gender, age, experience, and voluntariness of use are posited to moderate the impact of the four key constructs on usage intention and behavior (Venkatesh & Davis, 2000; Venkatesh & Morris, 2000; Venkatesh, Morris, Davis, & Davis, 2003). Arbaugh (2005) found that perceived usefulness and ease of use of Blackboard significantly predicted student satisfaction with the Internet as an educational delivery medium.
System Use

System use has been considered a factor that influences system success over the past decades and has been used by a number of researchers (DeLone & McLean, 1992; DeLone & McLean, 2003; Holsapple & Lee-Post, 2006; Rai, Lang, & Welker, 2002). Consequently, we hypothesize that system use will be positively related to e-learning system success and e-learner satisfaction.
User Satisfaction and E-Learning Outcomes There is abundant prior research that examines the relationships between user satisfaction and individual impact (Doll & Torkzadeh, 1988; Livari, 2005; Rai, Lang, & Welker, 2002). A study of Eom et al. (2006) examined the determinants of students’ satisfaction and their perceived learning outcomes in the context of university online courses. Their study found that user satisfaction is a significant predictor of learning outcomes.
Identification

The next step, after the specification of the structural equation model, is to decide whether the model is identifiable. The identifiability of the structural equation model can be determined by comparing the number of parameters to be estimated (unknowns) with the number of distinct values in the covariance matrix (knowns). If the number of knowns equals the number of unknowns in the path model, the model is called "just identified." If the number of unknowns is less than the number of knowns, the model is "over identified." The last case, an "under identified" model, occurs when the number of unknowns exceeds the number of knowns. The term determination is often used interchangeably with identification, so the terms just determination, under determination, and over determination also appear, with or without hyphens. It is critical to avoid building an under identified model: if a path model is under identified, a unique path solution cannot be computed. A just identified model, in which the number of parameters equals the number of distinct values in the covariance matrix, is a saturated model; the output will report chi-square = 0 and degrees of freedom = 0. In our model, the number of distinct values in the covariance matrix is 153 (17*18/2). The number of free parameters to be estimated is as follows:

• Factor loadings – 14 (Figure 4 shows 17 observed variables and their factor loadings. Factor loadings in SEM outputs are standardized solutions. To produce standardized factor loadings that use the same scale of measurement, some parameters (q11, q15, and q17) must be fixed.)
• Measurement error variances – 17 (Figure 4 contains 17 observed variables, each with a measurement error indicated by an arrow. Unlike regression or path models, SEM models these measurement errors of the observed variables.)
• Measurement error covariances – 0 (By definition, measurement errors are not allowed to correlate.)
• Latent independent construct variances – 0
• Latent independent construct covariances – 1
• Structural coefficients – 7 (There are 7 arrows that define the relationships among the structural (latent) variables in Figure 4.)
• Equation prediction error variances – 3 (Each of the mediating latent variables (system use and user satisfaction) and the dependent variable (system outcomes) has a prediction error term in its equation.)
• Equation prediction error covariances – 0
The number of distinct values in the covariance matrix is p(p+1)/2 = [17(17+1)]/2 = 9*17 = 153, where p is the number of observed variables. The number of free parameters is 42. Therefore, the degrees of freedom (df) of the model are computed as [p(p+1)/2] minus the number of free parameters: df = 153 – 42 = 111. This model is over identified because the total number of distinct values in the covariance matrix (153) is greater than the number of free parameters to be estimated (42).
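Because this identification arithmetic is purely mechanical, it can be verified with a few lines of code. The following Python sketch is our illustration, not part of the chapter's LISREL workflow; the parameter counts are taken from the bullet list above.

# Degrees-of-freedom check for the path model described above.
p = 17                                   # observed variables
knowns = p * (p + 1) // 2                # distinct values in the covariance matrix
free_params = (14     # factor loadings (3 of 17 fixed for scaling)
               + 17   # measurement error variances
               + 0    # measurement error covariances
               + 0    # latent independent construct variances
               + 1    # latent independent construct covariance
               + 7    # structural coefficients
               + 3    # equation prediction error variances
               + 0)   # equation prediction error covariances
df = knowns - free_params
print(knowns, free_params, df)           # 153 42 111
assert knowns > free_params              # over identified model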
Estimation

Model identification is followed by model estimation, the step in which the parameters of the theoretical model are estimated. Figure 4 shows the first user-interface display of LISREL 8.2, which has only three menus – file, view, and help. The LISREL command file must be created first by clicking the file menu. Figure 5 also shows drop-down menus with several choices. By clicking file, you begin to build a SIMPLIS (SIMPLe LISrel) file. LISREL 8 allows the user to use two different command languages (the LISREL command language and the SIMPLIS command language) to build an input file. The SIMPLIS command language is easy to learn and use, and it became available with LISREL version 8 (Jöreskog & Sörbom, 1993). The eight sets of hypotheses were tested using LISREL 8.70. The research model consists of two independent
constructs (system quality and information quality), two mediating constructs (system use and user satisfaction), and one dependent construct (system outcomes). Each construct is defined by several indicator variables. Table 1 shows the latent variables and their associated indicator variables.
The SIMPLIS Command Language Syntax

The SIMPLIS command language syntax consists of the following:

• Title
• Variables
• Observed variables
• Raw data, covariance matrix, or correlation matrix
• Sample size
• Latent variables
• Relationships
• End of problem
The title is optional and can span one or more lines. Variables are used to define acronyms to be used later in the observed variables section. Data for LISREL 8 can be entered as one of the following types: raw data, a covariance matrix, or a correlation matrix. Raw data can be placed directly in the input file; Figure 5 shows only the first and last part of the data, with the rest omitted. It is recommended that the raw data be stored in an external file and referenced with the following command line:

Raw data from File filename

The next section specifies the sample size. A total of 674 valid, unduplicated responses were used to fit the path analysis model. The sample size can be given in any of several formats:

• Sample size 674
• Sample size = 674
• Sample size: 674
Figure 5. The IS success model
The next section defines the relationships. Its first part defines the relationships between observed (indicator) variables and latent variables. For example, system quality comprises 4 indicator variables (q1, q2, q4, and q5):

• q1 q2 q4 q5 = sysq, or
• q1 – q5 = sysq
This is followed by defining the relationships between latent variables, which can be done in either of two formats: paths or relationships. The path model in Figure 5 can be specified using one of the two formats below.
Paths
sysq -> q1 q2 q4 q5
infq -> q6 q7 q8 q9 q10
sysuse -> q11 q12
usersat -> q15 q16
sysout -> q17 q18 q19 q20
sysq -> sysuse usersat
infq -> sysuse usersat
usersat -> sysout
sysuse -> sysout usersat

Relationships
q1-q5 = sysq
q6-q10 = infq
q11 q12 = sysuse
q15 q16 = usersat
q17-q20 = sysout
sysuse = sysq infq
usersat = sysq infq sysuse
sysout = sysuse usersat

The command file is saved with an .SPL extension. LISREL has two types of user interfaces – drop-down menus and a graphical user interface (GUI). The GUI has 12 buttons; to run this command file, the user clicks the fifth button from the left, marked "L".
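Readers without access to LISREL can fit the same specification with open-source SEM tools. The minimal Python sketch below uses the semopy package, which accepts lavaan-style model syntax; the construct and indicator names mirror the SIMPLIS file above, while the file name survey_responses.csv is a hypothetical stand-in for the 674-response data set. This is an illustrative translation, not the chapter's original analysis.

import pandas as pd
import semopy

# Measurement model (=~) and structural model (~), mirroring the
# SIMPLIS "Relationships" block above.
model_desc = """
sysq    =~ q1 + q2 + q4 + q5
infq    =~ q6 + q7 + q8 + q9 + q10
sysuse  =~ q11 + q12
usersat =~ q15 + q16
sysout  =~ q17 + q18 + q19 + q20
sysuse  ~ sysq + infq
usersat ~ sysq + infq + sysuse
sysout  ~ sysuse + usersat
"""

data = pd.read_csv("survey_responses.csv")   # hypothetical raw-data file
model = semopy.Model(model_desc)
model.fit(data)                              # maximum likelihood by default
print(model.inspect())                       # estimates, std. errors, p-values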
Estimation and Validation

The LISREL Output File

The output file contains the following information:

• Covariance matrix
• LISREL estimates using maximum likelihood
• Structural equations
• Reduced form equations
• Covariance matrix of independent variables
• Covariance matrix of latent variables
• Goodness of fit statistics
Prior to examining the LISREL estimates, it makes sense to examine the goodness of fit statistics first. If the fit statistics do not indicate good fit, the researcher may modify the model. In this example, since we are testing a well-accepted model, modification is not necessary. Comparing our goodness of fit statistics in the appendix with suggested acceptable threshold values, most indices (RMSEA, SRMR, NFI, GFI, CFI, and NNFI) clearly meet their thresholds, only one (the relative χ2/df) falls short of the suggested value, and AGFI just passes the minimum criterion.
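This screening can be scripted once cut-offs are chosen. The cut-offs in the Python sketch below are commonly cited conventions rather than LISREL output or the chapter's exact criteria; they are our illustrative assumptions, and readers should consult sources such as Hair et al. (2010) for the thresholds they prefer.

# Commonly cited cut-offs for SEM fit indices (illustrative conventions only;
# exact criteria vary by author).
cutoffs = {
    "RMSEA":   ("max", 0.08),
    "SRMR":    ("max", 0.08),
    "chi2/df": ("max", 3.0),
    "NFI":     ("min", 0.90),
    "GFI":     ("min", 0.90),
    "CFI":     ("min", 0.90),
    "NNFI":    ("min", 0.90),
    "AGFI":    ("min", 0.80),
}

def check_fit(observed):
    # observed: e.g. {"RMSEA": ..., "CFI": ...}, filled in from the
    # fit statistics reported in the chapter's appendix.
    ok = {}
    for name, value in observed.items():
        kind, cut = cutoffs[name]
        ok[name] = (value <= cut) if kind == "max" else (value >= cut)
    return ok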
Measurement (Outer) Model and Assessing Its Validity

Because a latent variable can be measured only indirectly, through more than one observable variable, validity is an important issue in SEM. Validity is concerned with the extent to which each observed variable accurately defines its construct. Each measurement item on a survey instrument is assumed to reflect only one latent variable, and each item should relate to one construct better than to any other (Gefen & Straub, 2005). This property of the construct (uni-dimensionality) must be confirmed. There are two elements of construct (factorial) validity: convergent validity and discriminant validity (see Table 1). Convergent validity is defined as the extent to which indicators of a latent variable converge or share a high proportion of variance in common (Hair, Black, Babin, & Anderson, 2010). It is established when all indicator (observed) variables load highly on their assigned factors – .5 or higher, and ideally .7 or higher – and when each measurement item loads on its latent construct with an acceptable t-value, that is, one significant at the .05 α level or less. Unlike PLS-Graph output, LISREL output does not directly report factorial validity information such as composite reliability and AVE (average variance extracted), but the factor loadings output (the standardized solution of the basic model) can be used to produce it (see Figure 6 and Figure 7). The formula for AVE is Σλi²/n, where λi is the factor loading of each observed variable on its corresponding construct and n is the number of observed variables on that construct. Composite reliability (or construct reliability), another indicator of convergent validity, is computed as (Σλi)²/[(Σλi)² + Σei], where λi is the factor loading of each observed variable on its corresponding construct and ei is the error variance term for that observed variable, obtained as ei = 1 – λi².
Table 1. Convergent and discriminant validity and reliability of the measurement model (columns: variable, measurement item, factor loading)

System Quality (CR = 0.8723, AVE = 0.6320)
Q1  The system is always available  0.69
Q2  The system is user-friendly  0.85
Q4  The system has attractive features that appeal to users  0.84
Q5  The system provides high-speed information access  0.79

Information Quality (CR = 0.9473, AVE = 0.7826)
Q6  The system provides information that is exactly what you need  0.90
Q7  The system provides information that is relevant to learning  0.89
Q8  The system provides sufficient information  0.92
Q9  The system provides information that is easy to understand  0.89
Q10  The system provides up-to-date information  0.82

System Use (CR = 0.8461, AVE = 0.7341)
Q11  I frequently use the system  0.91
Q12  I depend upon the system  0.80

User Satisfaction (CR = 0.9173, AVE = 0.8473)
Q15  I think the system is very helpful  0.89
Q16  Overall, I am satisfied with the system  0.95

System Outcomes (CR = 0.9498, AVE = 0.8260)
Q17  The system has a positive impact on my learning  0.90
Q18  Overall, the performance of the system is good  0.95
Q19  Overall, the system is successful  0.95
Q20  The system is an important and valuable aid to me in the performance of my class work  0.83

CR is composite reliability; AVE is average variance extracted.
Discriminant validity is established when observed variables do not cross-load on two or more constructs; in other words, each observed variable loads highly on its theoretically assigned construct and not highly on the other constructs. Discriminant validity was assessed using two methods. First, the item loadings and cross-loadings of the constructs and measures were examined. If items load together and load highly (loading > 0.50) on their associated factors, they demonstrate discriminant validity. Individual reflective measures are considered reliable if they correlate more than 0.7 with the construct they intend to measure.
Table 2. Computing AVE and composite reliability for the system quality construct

Variable  Factor loading (FL)  FL²
q1  0.69  0.4761
q2  0.85  0.7225
q4  0.84  0.7056
q5  0.79  0.6241
Total variance: 2.5283
AVE = 2.5283 / 4 = 0.632075

Variable  FL  FL²  1 – FL²
q1  0.69  0.4761  0.5239
q2  0.85  0.7225  0.2775
q4  0.84  0.7056  0.2944
q5  0.79  0.6241  0.3759
Sum of loadings: 3.17; sum of error variances: 1.4717
Squared sum of loadings: 3.17² = 10.0489
CR = 10.0489 / (10.0489 + 1.4717) = 0.872254917
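Table 2's spreadsheet arithmetic can be reproduced in a few lines of Python; the loadings below are those reported for the system quality construct in Table 1, and this snippet is ours, offered only to make the formulas concrete.

# AVE and composite reliability for the system quality construct,
# reproducing the computation in Table 2.
loadings = [0.69, 0.85, 0.84, 0.79]          # q1, q2, q4, q5

ave = sum(l**2 for l in loadings) / len(loadings)
error_vars = [1 - l**2 for l in loadings]    # ei = 1 - loading^2
cr = sum(loadings)**2 / (sum(loadings)**2 + sum(error_vars))

print(round(ave, 4))   # 0.6321
print(round(cr, 4))    # 0.8723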
The second method of establishing discriminant validity is to compare the square root of the average variance extracted (AVE) for each construct with the correlations among the constructs. If the square root of each AVE is much larger than any correlation between that construct and the other latent variables, and is greater than .50 (Chin, 1998; Fornell & Larcker, 1981; Gefen & Straub, 2005), the validity of the measurement model is established. The intent of the AVE test is to verify that a construct correlates more strongly with its own measurement items than with other constructs (Gefen & Straub, 2005). In our model, the square root of each AVE is much larger than any correlation among the latent variables and well above the minimum threshold of .50. Overall, the measurement model results provided
94
strong support for the factorial, convergent, and discriminant validities and reliability of the measures used in the study.
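This Fornell-Larcker comparison is also easy to script. In the minimal Python sketch below, the AVE values come from Table 1, while phi stands for a latent-variable correlation matrix of the kind LISREL reports; phi and the function name are our own illustrative assumptions, not LISREL output.

import math

ave = {"sysq": 0.6320, "infq": 0.7826, "sysuse": 0.7341,
       "usersat": 0.8473, "sysout": 0.8260}   # AVE values from Table 1

def fornell_larcker_violations(ave, phi):
    # phi: symmetric dict of dicts of latent correlations, e.g. phi["sysq"]["infq"].
    # A pair violates the criterion when its correlation exceeds either sqrt(AVE).
    violations = []
    names = sorted(ave)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if phi[a][b] > min(math.sqrt(ave[a]), math.sqrt(ave[b])):
                violations.append((a, b, phi[a][b]))
    return violations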
Structural (Inner) Model Results

LISREL and other covariance structure analysis modeling approaches involve parameter estimation procedures. The goodness-of-fit statistics discussed earlier are global measures of fit between the sample covariance matrix and the covariance matrix implied by the model. These global measures do not provide any information about the statistical significance of the individual parameter estimates for the paths in the model; that significance is the second criterion for evaluating model fit.
Structural Equations

The term structural equations refers to the simultaneous equations in a model; such a model is also called a multiequation model. Structural equations are the equations that contain mediating variables.
Figure 6. Standardized solution of basic model
Figure 7. The measurement items and t-values on their latent constructs
Figure 8. Structural equations
A mediating variable functions as the dependent (response) variable in one equation and, at the same time, as an independent (predictor) variable in another equation. The structural equations output shows the direct and indirect effects of all exogenous and mediating variables, and also illustrates the effects of endogenous variables on one another. In our results, the use of the e-learning system is positively influenced by information quality but not significantly by system quality. User satisfaction is positively influenced by system quality, information quality, and system use. The perceived system outcomes are positively influenced by user satisfaction and system use.
Path Coefficients

To interpret LISREL output, one must understand path coefficients (path weights). A path coefficient is the standardized partial regression coefficient for an independent variable. In Figure 8, the estimated path coefficients appear before the asterisk (*) preceding each variable. To demonstrate how to interpret the LISREL output, we use the first line:

sysuse = 0.170*sysq + 0.413*infq, Errorvar. = 0.672, R² = 0.328
         (0.127)      (0.125)                 (0.0616)
          1.337        3.302                  10.909
E-learning system use (sysuse) is expected to increase by 0.17, on average, if the
students' perception of the quality of the e-learning system (sysq), measured by items such as "the system is user-friendly," increases by one unit on the 7-point Likert scale while the other variable (infq) remains fixed.
Standard Errors of the Estimate

The number in parentheses below each path coefficient is the standard error of the estimate – the standard deviation of the residuals (errors) for the regression model. Residual analysis tests whether the regression line fits the data well (Black, 2008). The residual of a regression/path model is defined as the difference between the observed value y and the predicted value ŷ. Because the sum of all residuals is always zero, measuring the variability of y requires first computing the sum of squares of error (SSE) and then, from it, the standard error of the estimate: divide SSE by the degrees of freedom of error for the model and take the square root of the result.

Residual = y – ŷ
SSE = Σ(y – ŷ)²
Se = √(SSE / (n – k – 1)) = √(Σ(y – ŷ)² / (n – k – 1))

where n = number of observations and k = number of independent variables.
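A small numerical sketch of these steps, with made-up y values purely for illustration:

import math

# Hypothetical observed and predicted values for a model with k predictors.
y     = [3.0, 4.0, 5.0, 4.0, 6.0, 5.0]
y_hat = [3.2, 3.8, 4.9, 4.3, 5.6, 5.2]
k = 2                                        # number of independent variables

residuals = [yi - yh for yi, yh in zip(y, y_hat)]
print(sum(residuals))                        # 0.0: residuals always sum to zero
sse = sum(r**2 for r in residuals)           # sum of squared errors
se  = math.sqrt(sse / (len(y) - k - 1))      # standard error of the estimate
print(round(se, 4))                          # 0.3559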
t-Values and Their Interpretation

The numbers below the standard errors of the estimate are the t-values. A t-value is the ratio obtained by dividing a path coefficient by its standard error. In Figure 8, the first t-value (1.337) is the ratio between the estimate (path coefficient) and its standard error (.17/.127 = 1.338); the slight difference from the 1.337 in Figure 8 is due to rounding. A high t-value indicates that the path coefficient is significantly different from zero. What is the threshold value of t? In structural equation modeling, the rule is to compare against a critical value. AMOS uses critical ratio (C.R.) values (Byrne, 2010); LISREL uses a t-value. However, this t must not be confused with the Student t distribution: in inferential statistics, when the population standard deviation (σ) is unknown and the population is normally distributed, the population mean is estimated using the Student t distribution developed by William S. Gosset. In SEM, including path analysis modeling, the t-value typically used is t > 1.96, although the critical value depends on the nature of the test (one-tailed vs. two-tailed hypothesis testing). Traditionally these critical values are called t-values, but they are actually z critical values – a historical artifact of the field (Eom, 2004). One-tailed vs. two-tailed hypothesis testing: depending on the nature of the test, the interpretation of t-values differs. One-tailed tests are directional, meaning that the outcome is expected to occur in only one direction, either positive or negative; the alternative hypothesis (Ha) uses either the greater-than (>) or the less-than (<) sign.
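The ratio itself is trivial to verify from the LISREL output shown earlier; this two-line check is our illustration:

# t-value for the sysq -> sysuse path, from the output in Figure 8.
estimate, std_error = 0.170, 0.127
t = estimate / std_error
print(round(t, 3))            # 1.339 (Figure 8 shows 1.337; rounding differs)
print(abs(t) > 1.96)          # False: the path is not significant at alpha = .05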
(p > .05, h = .04). Students assigned to the social presence only group did not post any messages coded at the exploration level of cognitive presence, preventing comparisons between this group and the other two experimental groups.
REPORTING RESULTS FROM THE COI STUDY TO INFORM CONSUMERS' DECISIONS

Statistical analyses should be reported in a manner that lets consumers of the research who have at least a conceptual understanding of the research methods and statistical techniques used in studies draw conclusions from experimental investigations (Bangert & Baumberger, 2005). Results for z tests included the z statistic, the probability of significance (Type I error rate), and the effect size h. The introduction to the results section reported the alpha level used to decide the significance of results; for this experiment, the alpha level was set at .05, and the introduction also provided the values for interpreting the magnitude of the effect sizes (h) reported for each comparison. The probability of significance for each comparison was reported as either p > .05 or p < .05. This convention was used rather than reporting the actual probability because the z statistics were calculated by hand and the z table was used to estimate whether the z value was
greater or less than the .05 alpha level specified for each comparison. For most other statistical comparisons, statistical computer programs such as SPSS will provide in their outputs the actual significance levels, or probabilities of committing a Type I error, eliminating the need to use the "greater than" or "less than" symbols. However, SPSS and other programs will report "p = .000" for highly significant results; when this is the case, the researcher should report the result as "p < .001".

4. … ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 15 conf:(0.88)
5. 2_7_SYLLABUS_ProjectsRelevancy=1 3_1_PLATFORM_Adequacy=YES 17 ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 15 conf:(0.88)
6. 2_6_SYLLABUS_RessourceSufficiency=3 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 15 ==> 1_5_NEEDS_InitialRequirements=3 13 conf:(0.87)
7. 1_5_NEEDS_InitialRequirements=1 25 ==> 3_1_PLATFORM_Adequacy=YES 21 conf:(0.84)
8. 1_5_NEEDS_InitialRequirements=1 5_8_INSTRUCTORS_MaterialsQuality=2 17 ==> 3_1_PLATFORM_Adequacy=YES 14 conf:(0.82)
9. 2_6_SYLLABUS_RessourceSufficiency=1 5_7_INSTRUCTORS_InteractivityTechniques=Open_questions 16 ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 13 conf:(0.81)
10. 2_7_SYLLABUS_ProjectsRelevancy=2 3_1_PLATFORM_Adequacy=YES 5_7_INSTRUCTORS_InteractivityTechniques=Open_questions 16 ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 13 conf:(0.81)
11. 1_5_NEEDS_InitialRequirements=2 3_1_PLATFORM_Adequacy=YES 20 ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 16 conf:(0.8)
12. 1_5_NEEDS_InitialRequirements=3 5_4_INSTRUCTORS_CommunicationMethod=Forums 5_8_INSTRUCTORS_MaterialsQuality=3 20 ==> 2_7_SYLLABUS_ProjectsRelevancy=2 16 conf:(0.8)
13. 1_5_NEEDS_InitialRequirements=3 2_7_SYLLABUS_ProjectsRelevancy=2 5_8_INSTRUCTORS_MaterialsQuality=3 20 ==> 5_4_INSTRUCTORS_CommunicationMethod=Forums 16 conf:(0.8)
Figure 15. Results of the first experiment using Apriori algorithm for association rules
Box 24.

2. 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 5_8_INSTRUCTORS_MaterialsQuality=3 19 ==> 1_5_NEEDS_InitialRequirements=3 15 conf:(0.79)
3. 3_1_PLATFORM_Adequacy=To_some_extent 5_8_INSTRUCTORS_MaterialsQuality=3 17 ==> 1_5_NEEDS_InitialRequirements=3 13 conf:(0.76)
4. 2_7_SYLLABUS_ProjectsRelevancy=2 5_4_INSTRUCTORS_CommunicationMethod=Forums 5_8_INSTRUCTORS_MaterialsQuality=3 22 ==> 1_5_NEEDS_InitialRequirements=3 16 conf:(0.73)
5. 5_4_INSTRUCTORS_CommunicationMethod=Forums 5_8_INSTRUCTORS_MaterialsQuality=3 29 ==> 1_5_NEEDS_InitialRequirements=3 20 conf:(0.69)
6. 2_7_SYLLABUS_ProjectsRelevancy=2 5_8_INSTRUCTORS_MaterialsQuality=3 30 ==> 1_5_NEEDS_InitialRequirements=3 20 conf:(0.67)
7. 3_1_PLATFORM_Adequacy=To_some_extent 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 23 ==> 1_5_NEEDS_InitialRequirements=3 15 conf:(0.65)
8. 5_8_INSTRUCTORS_MaterialsQuality=3 43 ==> 1_5_NEEDS_InitialRequirements=3 28 conf:(0.65)
9. 2_6_SYLLABUS_RessourceSufficiency=3 28 ==> 1_5_NEEDS_InitialRequirements=3 18 conf:(0.64)
10. 3_1_PLATFORM_Adequacy=To_some_extent 5_4_INSTRUCTORS_CommunicationMethod=Forums 24 ==> 1_5_NEEDS_InitialRequirements=3 15 conf:(0.63)
11. 2_8_SYLLABUS_WorkImpact=3 5_7_INSTRUCTORS_InteractivityTechniques=Feedback 21 ==> 1_5_NEEDS_InitialRequirements=3 13 conf:(0.62)
the initial requirements are satisfied, but not strongly so. The confidence factor for this rule is a good value, 0.87 (Box 24). The results of the experiments are presented in Figure 16.
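Readers who wish to reproduce this kind of rule mining outside Weka can do so in Python with the mlxtend package. The sketch below is a generic illustration of the technique, not the chapter's exact experiment: the file name survey_onehot.csv and the one-hot column encoding (one boolean column per attribute=value pair, echoing names such as 3_1_PLATFORM_Adequacy=YES) are our assumptions.

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# df: one row per student, one boolean column per attribute=value pair.
df = pd.read_csv("survey_onehot.csv").astype(bool)

frequent = apriori(df, min_support=0.15, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])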
FUTURE DEVELOPMENT

The described data collection, transformation, and analysis processes can be further refined in order to extract knowledge about how the master programme can be improved. The current data set can be extended in order to study student performance dynamics over the entire programme period. A study of student profile evolution can also be performed and used as the basis for predictions about student retention. Attributes describing the teachers' point of view on students' activity, the quality of their projects, and the quality of their questions and answers will be included in the next survey.
Figure 16. Results of the second experiment using Apriori algorithm for association rules
Knowing teachers' expectations and satisfaction regarding student activities makes it possible to identify the gaps between the expectations of different participants and to find their causes. The analysis results will be used to improve the initial assessment during the admission process. Additional analysis can be done in relation to the e-learning platform, in order to identify usage patterns in connection with discipline types or teachers' expectations. The reasons why some platform facilities are seldom or never used can also be discovered.
CONCLUSION

Data mining experiments reveal interesting patterns existing in the data. In the first cluster analysis experiment, two student profiles stood out. In the first (cluster 0), the students' expectations were less satisfied at the end of the programme, and these students already possessed knowledge of the project management domain at enrolment; this is why their performance class is higher than that of the other cluster (cluster 1) – they learn more easily. In the other cluster (cluster 1), the
satisfaction level is high. Knowledge accumulation and working with the online platform have offered a solid base for understanding the project management domain. The performance class is not as good as that of cluster 0, but it is still a good one. The statistical model in Figure 5 shows a positive correlation between the degree of fulfillment of trainees' needs and student performance. This finding is somewhat at odds with the results given by the data mining clustering: in cluster 0, students have better learning results while their expectations are less fulfilled. The data mining results can thus be seen as a way to improve the statistical model: they suggest another explanatory variable that should be taken into consideration – the initial level of knowledge. From the second cluster analysis experiment, it can be said that the faculty from which students graduated has a big influence on the e-learning process and on the students' performance. Students who graduated from the ECSI Faculty are more accustomed to the teachers' style and demands than those in the other cluster. So, although the students in the second cluster have more experience and the extra resources
are used more often, their performance class is only a good one. This finding is explained by the preliminary statistical analysis, which showed that the students who work (and have experience) consider that their co-workers do not support them in learning activities: professional experience is gained, while learning performance in the master programme is reduced. The third cluster analysis experiment revealed that students in the second cluster consider evaluation relevancy important, though not extremely so; they are involved in communication with teachers and colleagues, and they earn good grades on projects. Students in the first cluster, despite considering evaluation relevancy very important, are not as involved in the communication process and have a lower performance class. The attributes of the first cluster are also shown by the statistical analysis: most students consider that evaluations reflect their knowledge of a given topic. Again, the data mining analysis provided a more accurate view by profiling the students who do or do not believe in evaluation. The fourth cluster analysis experiment revealed that students who spend more time in front of the computer tend to be more active on the platform, which also leads to better performance. The fifth cluster analysis experiment revealed that, for students with higher expectations of the platform tools, the learning platform is not appropriately built; these students also participate occasionally in the open forums and perform better than those in cluster 1. An important result from the first classification experiment indicates that if the performance class for the first year is 3 (average over 8 and fewer than 2 failed exams) and the student thinks that he must be involved in any decision regarding the learning activity, then the performance class for the second year will also be 3 (again, an average over 8 and fewer than 2 failed exams).
This means there is a constant commitment to learning throughout the master programme. The second classification experiment confirms that students who graduated from the Economic Cybernetics, Statistics and Informatics (ECSI) Faculty (one of the most important faculties within the Bucharest Academy of Economic Studies) achieve very good performance. In many cases, when evaluation relevancy is very high, communication efficiency is quite high, and activity involvement is medium, performance is medium (average between 7 and 8). So, there is a strong connection between students' activity and their performance in the e-learning environment. Based on the obtained association rules and their confidence factors, when general association rules are generated, the following rule is the most important: if students consider that the online programme meets very well the requirements they had when they enrolled, and their favorite communication method is the platform forum, then the platform is considered highly appropriate. There are 17 instances for the first part and 16 for the second, which is why the confidence factor for this rule is 0.94. The study offers a detailed analysis of the factors that influence students' performance in online courses. We consider data mining techniques well suited to this analysis, as we had to process a considerable amount of data: for each student, we could evaluate attributes showing academic performance (grades) as well as personal perceptions of the factors influencing that performance.
KEY TERMS AND DEFINITIONS

Association Rule: An implication expression of the form X => Y, where X and Y are disjoint conjunctions of attribute-value pairs. The strength of association rules can be measured in terms of support and confidence: support determines how often a rule applies to a data set, and confidence determines how frequently the items in Y appear in transactions that contain X. The objective of association analysis is to find hidden relationships in large data sets.

Classification: The process of learning a function f that assigns a predefined class label y to each set of attributes X. The function f is known as the classification model. A classification model can serve as an explanatory tool to distinguish between instances of different classes; in this case, classification is considered descriptive modeling. A classification model can also be used to predict the class label of unknown instances; in this case, classification is considered predictive modeling. Classification techniques are best suited to predicting or describing data sets with binary or nominal attributes.

Clustering: A technique by which similar instances are grouped together. The instances grouped in the same cluster share a certain meaning, a certain utility, or both. Clusters capture the natural structure of the data, so the clustering process may be the starting point for other data handling processes such as summarization.

Data Mining: The process of extracting previously unknown, valid, and operational patterns/models from large collections of data. Essential to data mining is the discovery of patterns without previous hypotheses. Data mining does not aim to verify, confirm, or refute hypotheses, but instead to discover "unexpected" patterns, completely unknown at the time the data mining process takes place, which may even contradict intuitive perception. For this reason, the results are truly valuable.

Decision Tree: A diagrammatic representation of the possible outcomes and events used in decision analysis. A decision tree with a range of discrete (symbolic) class labels is called a classification tree, whereas a decision tree with a range of continuous (numeric) values is called a regression tree. Decision trees are attractive because they show clearly how a decision is reached and because they are easy to construct automatically from labeled instances. Two well-known programs for constructing decision trees are C4.5 and CART (Classification and Regression Trees).

Educational Data Mining: An emerging discipline concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods
to better understand students and the settings in which they learn.

E-Learning: A type of distance education in which the teaching-learning interaction is mediated by an environment built on new information and
communication technologies, in particular the Internet. The Internet is both the material environment and the communication channel between the actors involved.
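The support and confidence measures in the Association Rule definition above can be illustrated with a toy computation; the transactions here are invented purely for illustration.

# Toy illustration of support and confidence for a rule X => Y.
transactions = [
    {"forums", "platform_ok"},
    {"forums", "platform_ok"},
    {"forums"},
    {"platform_ok"},
    {"forums", "platform_ok"},
]

X, Y = {"forums"}, {"platform_ok"}
n_x  = sum(X <= t for t in transactions)          # transactions containing X
n_xy = sum((X | Y) <= t for t in transactions)    # containing both X and Y

support = n_xy / len(transactions)                # 3/5 = 0.6
confidence = n_xy / n_x                           # 3/4 = 0.75
print(support, confidence)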
APPENDIX Questionnaire 1. Trainee’s Needs (Motivation to Participate in an Online Education Program) 1.1. Why did you choose to follow this master programme? Mark the corresponding cell with X.
I work in projects and I want to improve professionally.
I intend to work in projects and I'm particularly interested in this domain.
I was convinced by friends that it is an easy programme.
I have time and I believe that training in any field is useful.
1.2. What made you enroll in the online MIP rather than the classical MIP? Mark the corresponding cell with X.
Lack of time (I have a full-time job.)
Legal reasons (I am enrolled in another master programme, which is not online.)
I need a diploma (I believe that online programmes are easier.)
Other reason (Please specify it.)
1.3. What do you think are the benefits of the online MIP over the classical MIP? Mark the corresponding cell with X.
I have access to information without being forced to go to class.
It gives me a community of practice.
Other advantages (Please specify them.)
1.4. Have you taken other online programmes? Mark the corresponding cell with X.
No
Yes (Please indicate the type of programme, its duration, and the satisfaction gained.)
1.5. Do you consider that the online MIP programme meets the requirements you had when you enrolled in it? (1 – very much, 5 – not at all) Mark the answer.
1 2 3 4 5
2. Syllabus & Training Providers

2.1. How important is it that the MIP programme is a certified programme? (1 – very important, 5 – not important at all) Mark the answer.
1 2 3 4 5

2.2. How important is it that the MIP programme is organized by a well-known, nationally certified organization? (1 – very important, 5 – not important at all) Mark the answer.
1 2 3 4 5

2.3. Do your favorite subjects have a higher degree of interactivity? Mark the corresponding cell with X.
Yes (interactivity helps me understand better).
No (I don't have time for interactivity).
It doesn't matter.
2.4. How important to you are a clear thematic content and clear requirements for a course? (1 – very important, 5 – not important at all) Mark the answer.
1 2 3 4 5

2.5. What format of course materials is most helpful? Mark the corresponding cell with X.
Word documents
PowerPoint slides
Electronic books (in html, chm, or any other format)
I don't care.
2.6. Do you think the resources provided for the course (course support, project models/case studies, templates) are sufficient to acquire the knowledge you need? (1 – fully sufficient, 5 – not sufficient at all) Mark the answer.
1 2 3 4 5

2.7. Do you consider the projects developed in the various disciplines relevant to the training? (1 – very relevant, 5 – not at all relevant) Mark the answer.
1 2 3 4 5

2.8. Does the theme of the master projects correspond to your work activities? (1 – very much, 5 – not at all) Mark the answer.
1 2 3 4 5

2.9. Do you consider that the projects have a proper weight in the final grade in the majority of the courses? (1 – very much, 5 – not at all) Mark the answer.
1 2 3 4 5

2.10. How important is the homework given during the course? (1 – very important, 5 – not important at all) Mark the answer.
1 2 3 4 5

2.11. Do you prefer ongoing assessment or summative assessment? Mark the corresponding cell with X.
Just the summative assessment
Just the ongoing assessment
Both of them
2.12. Do you think the results of the evaluations so far reflect your knowledge? (1 – very much, 5 – not at all) Mark the answer.
1 2 3 4 5
2.13. What kinds of evaluation methods should be used, and in what proportion? (1 – always, 2 – often, 3 – sometimes, 4 – never) Put each number in a different cell.
Projects
Peer review
Multiple choice tests
Others (Please specify them.)
2.14. What is your favorite discipline so far? Justify your answer.

2.15. Which discipline has caused you the most dissatisfaction? Justify your answer.
3. Organization and Technical Platform

3.1. Do you consider that the study platform used in the online MIP is appropriate? Mark the corresponding cell with X.
Yes
To some extent
No, because it is difficult to use from the technical point of view
No, because it doesn't have all the technical facilities I need

3.2. Would it be important to organize a technical training session prior to using the platform? (1 – very important, 5 – not important at all) Mark the answer.
1 2 3 4 5

3.3. How important are the online discussions with your colleagues (forums)? (1 – very important, 5 – not important at all) Mark the answer.
1 2 3 4 5

3.4. How often do you participate in online discussions? Mark the corresponding cell with X.
Often enough, because I consider them useful.
Not very often; I usually only read what the others say.
Now and then; it depends on the feedback received from my colleagues and instructors.
I have never participated; I consider them useless.
3.5. Do you consider that higher flexibility in an online platform would help you get better results? Mark the corresponding cell with X.
No, I prefer to have a predetermined syllabus.
No, I prefer the instructor to decide the educational activities.
Yes, I would like to be able to choose my homework deadlines.
Yes, I would like to be involved in any decision regarding the learning activity.
3.6. What are the elements that you like in the organization and platform of the online MIP programme?

3.7. What are the elements that you don't like in the organization and platform of the online MIP programme?

3.8. Do you consider the face-to-face meetings held in the MIP programme useful? (1 – very useful, 5 – not useful at all) Mark the answer.
1 2 3 4 5

3.9. Do you consider that the MIP programme should continue in its online version in the following years, too? Mark the corresponding cell with X.
Yes (please justify)
No (please justify)
I don't care.
4. Trainee’s Commitment 4.1. Do you think a more careful selection (possibly by examination) of the candidates in an online program is needed or anyone interested and who meets legal requirements (minimum level of preparation) should be allowed to register? Mark the corresponding cell with X. €€€€€€€€€€Anyone could register. An initial test is required (test online). An initial check is required (CV or other documents).
4.2. Which is the most important element for your performance in online classes? Mark the corresponding cell with X.
Quality of the education platform
The degree to which the programme gives me the knowledge that I need
The feedback from the instructors
Other factor (Please specify it.)
4.3. Have you independently searched for and used resources other than those from the courses to deepen your knowledge of a certain subject? (1 – very often, 5 – never) Mark the answer.
1 2 3 4 5

4.4. Have you applied the knowledge acquired in the MIP at your workplace? Mark the corresponding cell with X.
Yes, often enough
Very seldom
No, because what I learned has no utility at work.
No, because I do not have enough knowledge (the courses were too theoretical).
4.5. Do you prefer projects developed individually or in a team? Mark the corresponding cell with X.
I prefer projects developed individually, because communication in a team is difficult.
I prefer team work, because it develops my collaborative skills.
I prefer team work, because it is easier.
4.6. Do you think there is any connection between your involvement in online activities and your evaluation results? (1 – a very big connection, 5 – no connection) Mark the answer.
1 2 3 4 5
4.7. Do your colleagues at work support you in attending this master programme? (1 – very much, 5 – not at all) Mark the answer.
1 2 3 4 5
5. Instructors' Involvement

5.1. How important is the instructor's role in an online educational programme? (1 – very important, 5 – not important at all) Mark the answer.
1 2 3 4 5

5.2. What is the ideal instructor in an online class like? Mark the corresponding cell with X.
Involved (ready to offer answers, advice, explanations)
Not so involved (I don't really need an instructor)
It doesn't matter.
5.3. What do you think is the instructor's role in the MIP programme, and how important is it? (1 – very important, 5 – not important at all) Put a different mark in each cell.
The instructor/teacher facilitates/moderates communication in the virtual community.
The instructor/teacher monitors the master students' participation.
The instructor/teacher promotes collaborative learning.
The instructor/teacher offers support for learning activities (explanations, recommendations).
Other (Please specify it.)
5.4. What is your favorite method of communication with your instructor/teacher? Mark the corresponding cell with X.
On the platform, on forums
On the platform, using online meetings
Through e-mail
Face-to-face
5.5. How would you rate your communication with the teachers in terms of efficiency? (1 – very effective, 5 – not effective at all) Mark the answer.
1 2 3 4 5

5.6. How involved are you in the communication with your online teacher? (1 – very involved, 5 – not involved at all) Mark the answer.
1 2 3 4 5

5.7. What kinds of techniques should a teacher use to ensure good interactivity in online courses, and how often should the teacher use them? (1 – always, 2 – often, 3 – sometimes, 4 – never) Put each number in a different cell.
Feedback on the quality of learning
Creative and open questions
Team work
Others (Please specify them.)
5.8. How would you rate the quality of the materials provided by teachers in relation to their requirements and assessments? (1 – very good, 5 – unsatisfactory) Mark the answer.
1 2 3 4 5
Personal Comments

If you want to expand on any of the answers or to make some comments, please use the spaces below.

Proposals, suggestions:

Personal information:
Age: ____
Graduated faculty: _________________________________________________________________
Other completed training programmes related to project management (if any): __________________
_________________________________________________________________________________
Experience in project management (number of years): ____
Position in your organization: _______________________________________________________
Monthly income (below 1500 RON/month, over 1500 RON/month): ________________________
Daily activity in the virtual environment, including the MIP programme (average number of hours/day): _____________________________________________________________________________
Favorite activities in the virtual environment (list them): _____________________________________
_________________________________________________________________________________
Field of activity (IT, banking, commercial...): _____________________________________________
Other family members working in project management (yes/no): ____
Chapter 9
How to Design, Develop, and Deliver Successful E-Learning Initiatives Clyde Holsapple University of Kentucky, USA Anita Lee-Post University of Kentucky, USA
ABSTRACT

The purposes of this chapter are three-fold: (1) to present findings from investigating the success factors for designing, developing, and delivering e-learning initiatives; (2) to examine the applicability of Information Systems theories to the study of e-learning success; and (3) to demonstrate the usefulness of action research in furthering understanding of e-learning success. Inspired by issues and challenges experienced in developing an online course, a process approach for measuring and assessing e-learning success is advanced. This approach adopts an Information Systems perspective on e-learning success to address the question of how to guide the design, development, and delivery of successful e-learning initiatives. The validity and applicability of the process approach are demonstrated in empirical studies involving cycles of action research. Merits of this approach are discussed, and its contributions in paving the way for further research opportunities are presented.
INTRODUCTION

In the pursuit of teaching excellence, today's educators are confronted with the challenge of how to successfully tap into the transforming power of the
Internet to facilitate or enable learning. As such, a primary objective of this chapter is to present findings from investigating the success factors in designing, developing, and delivering e-learning initiatives. An e-learning success model is introduced to serve not only as a measure of quality assurance in e-learning, but also as a strategy
for ensuring future success in the development and assessment of e-learning. The e-learning success model draws its theoretical basis from a user-centered Information Systems development paradigm. Consequently, a secondary objective of this chapter is to examine the applicability of Information Systems theories to study e-learning success. The validity of our e-learning success model is tested using an action research methodology. An iterative process of diagnosing, action planning, action taking, evaluating, and learning is repeated in manageable cycles following a continuous improvement principle to identify and address barriers to successful e-learning. As a result, a third objective of this chapter is to demonstrate the usefulness of action research in furthering understanding of e-learning success.
BACKGROUND

According to the U.S. Department of Education (Parsad and Lewis, 2008), e-learning encompasses various distance education courses and programs, including online, hybrid/blended online, and other distance learning courses. The inclusion of hybrid/blended online courses as e-learning reflects the realization that learning can be extended beyond traditional in-class instruction through the mediation of learning technologies. Following this definition of e-learning, the U.S. Department of Education found that 96% of public 2-year and 86% of public 4-year institutions offered e-learning during the 2006-2007 academic year, with enrollments of 4,844,000 and 3,502,000 respectively. Of the 2,720 institutions that offered e-learning, only 2% did not use Internet-based technologies at all for instructional delivery. These statistics reinforce the prevalence of Internet-based e-learning in higher education. As a result, we define e-learning as follows: E-learning is a formal education process in which the student and instructor interact via
Internet-based technologies at different locations and times. Riley et al., in their 2002 report to Congress on distance education programs, summed up the merits of e-learning precisely in this way: "the Internet, with its potential to expand the reach of higher education dramatically, presents very promising prospects to increase access to higher education and to enrich academic activity" (Riley et al., 2002). Indeed, e-learning has often been touted as a means to revolutionize the traditional classroom lecture style of learning, in which knowledge is transmitted from teachers to students – the objectivist model of learning (Benbunan-Fich, 2002; Schank, 2001). Yet e-learning can do much more than transmit content. It also supports an alternative model of learning called constructivism, in which knowledge emerges through active learning: e-learning technologies are used for student-to-student(s) and instructor-to-student(s) interactions and group work, creating a rich learning environment that engages students in more active learning tasks such as problem solving, concept development, exploration, experimentation, and discovery (Nunes and McPherson, 2003; Hardaway and Will, 1997). Hence, the pedagogical paradigm shift in how students learn requires concerted efforts from both the education and information technology fields to collectively chart a course for effective learning in the Internet Age. This chapter lays out the first step towards the pursuit of excellence in e-learning, sharing the same long-term goals as envisioned by the 16-member Web-based Education Commission in their 2000 report (Web-based Education Commission, 2000):

1. To center learning around the student instead of the classroom
2. To focus on the strengths and needs of individual learners
3. To make lifelong learning a practical reality
4. To empower every student
5. To elevate each individual to new levels of intellectual capacity and skill
6. To bring learning to students instead of students to learning

The rest of the chapter is organized as follows. Past e-learning success research is reviewed first. Our approach to e-learning success is then introduced. Findings from empirical studies validating the proposed approach are presented next. The chapter concludes with a discussion of future directions for research and practice.
THE PURSUIT OF EXCELLENCE IN E-LEARNING

Past Research

Research on e-learning success is diverse, reflecting the complexity of the issue and the multi-disciplinary nature of the field. Education researchers tend to attribute e-learning success to its quality and thus focus on defining quality e-learning. The resulting studies are primarily guidelines or "best practices" for e-learning developed from case studies (Meyer, 2002; Byrne, 2002; Smith, 2001; Pittinsky & Chase, 2000; Lawhead et al., 1997). The most comprehensive guidelines are Pittinsky and Chase's (2000) 24 benchmarks in seven areas: institutional support, course development, teaching/learning, course structure, student support, faculty support, and evaluation and assessment. On the other hand, business researchers in general, and Information Systems researchers in particular, analyze e-learning success from a socio-technological system perspective, yielding empirical studies that explore a variety of factors and intervening variables that may affect the success of e-learning. As shown in Table 1, a majority of the studies evaluate e-learning success on a single measure such as learner satisfaction (Sun et al., 2008), course performance (Nemanich et al., 2009; Simmering et al., 2009; Santhanam et al., 2008; Hall et al., 2006), learning experience (Peltier et al., 2007; Williams et al., 2006), and system use (Sun et al., 2009; Davis & Wong, 2007; Saadé, 2007; van der Rhee et al., 2007; Saadé & Bahli, 2005). However, the factors affecting e-learning success are extensive, spanning learner characteristics (e.g., technology readiness, e-learning attitude, computer anxiety), instructor characteristics (e.g., e-learning attitude, teaching style, availability), and instructional design (e.g., learning model, content, interaction), as described in Piccoli et al.'s (2001) virtual learning environment framework.

Despite the volume of research studies on e-learning success, it is difficult to interrelate and unify the fragmented research activities undertaken in this area. Part of the problem lies in the ambiguity of the concept of e-learning success. Another problem is the seemingly vast array of factors impacting e-learning success. As a result, it is difficult to understand and isolate critical success factors of e-learning, as there is a lack of consensus about what constitutes success of e-learning. To attain greater benefit from these fragmented research efforts, there is a need to integrate and formulate a holistic and comprehensive model for evaluating e-learning initiatives.

Another concern with the current state of e-learning research is that e-learning researchers tend to rely on the literature of their own respective disciplines, with little cross-disciplinary sharing of ideas and findings. This issue was identified by Arbaugh et al. (2009) after they reviewed 182 articles on e-learning in business education from 2000 to 2008. Consequently, there is a need to forge collaborative research between the education and Information Systems fields.

A third limitation of these studies is that success measures are derived from assessing the results of the development effort only. There is also a need to broaden the viewpoint of learning success from a result to a process perspective.
Table 1. Empirical studies of e-learning in business education since 2005

Study | Independent factor(s) | Dependent variable(s)
Nemanich et al. (2009) | Student learning ability | Course performance
Simmering et al. (2009) | Computer self-efficacy | Course grade
Sun et al. (2009) | System functionalities meeting the needs for instructional presentation, student learning management, and interactions | Instructor's continued system usage
Johnson et al. (2008) | Computer self-efficacy; perceived usefulness | Course performance; course satisfaction; course instrumentality
Santhanam et al. (2008) | Instructional strategies that promote self-regulatory learning | Test score
Sun et al. (2008) | Learner's computer anxiety; instructor's e-learning attitude; course flexibility; course quality; perceived usefulness; perceived ease of use; assessment diversity | Learner satisfaction
Wan et al. (2008) | Student's virtual competence | Learning effectiveness; learner satisfaction
Arbaugh & Rau (2007) | Interaction | Student's perceived learning
Arbaugh & Rau (2007) | Course discipline; course structure | Student's satisfaction with course delivery medium
Davis & Wong (2007) | Perceived usefulness; flow experience | Learner's system use
Peltier et al. (2007) | Teaching quality as measured by student-to-student interactions, student-to-instructor interactions, instructor support and mentoring, lecture delivery quality, course content, and course structure | Learning experience as measured by amount of learning, course enjoyment, and recommending to others
Saadé (2007) | Perceived usefulness as measured by enhanced performance, enjoyment, and enhanced abilities | Learner's system use
van der Rhee et al. (2007) | Student's technology readiness | Student's system use
Arbaugh & Benbunan-Fich (2006) | Objectivist teaching practices with collaborative learning techniques | Student's perceived learning; student's satisfaction with course delivery medium
Eom et al. (2006) | Course structure; instructor feedback; self-motivation; learning style; interaction; instructor facilitation | Learner satisfaction
Eom et al. (2006) | Instructor feedback; learning style | Student's perceived learning
Hall et al. (2006) | Student's value belief; student's personality | Task performance
Williams et al. (2006) | Team work; group cohesion | Learning experience
Saadé & Bahli (2005) | Perceived usefulness as impacted by cognitive absorption | Learner's system use
The primary objective of this chapter is to address these three needs.
A Process Approach to E-Learning Success

We advance a process approach for measuring and assessing e-learning success from an Information Systems perspective. The process approach to e-learning success stresses the importance of evaluating success not only on an outcome basis, but throughout the entire process of designing, developing, and delivering e-learning initiatives. The Information Systems perspective on e-learning success acknowledges the shared interest of education and Information Systems research in fostering a deeper understanding of the effectiveness of Internet-based e-learning. The validity of applying Information Systems theories to study e-learning success stems from the recognition that both fields strive to better meet the needs of their constituencies by offering technological solutions. Furthermore, Information Systems researchers have been studying the factors that account for the success of Information Systems since the early 1980s. The theories and knowledge accumulated since then can be valuable in contributing to the understanding and pursuit of e-learning success. Consequently, we follow an Information Systems prototyping strategy to guide the design, development, and delivery of e-learning initiatives, and adapt an Information Systems success model to measure and evaluate the factors that influence the success of e-learning throughout the entire process of e-learning systems development.
An Information Systems Prototyping Strategy

Prototyping is an approach to developing Information Systems that involves a four-step interactive process between the user and developer (Naumann and Jenkins, 1982). The user's basic information requirements are identified in step 1 to allow a working prototype to be developed quickly in step 2. The user is given hands-on use of the prototype in step 3 to identify undesirable or missing features. The prototype is revised and enhanced in step 4 to gain the user's acceptance. The Information Systems literature recommends the use of prototyping for experimentation and learning before committing resources to develop a full-scale system, so as to clarify users' information requirements, allow developers to understand the user's environment, and promote understanding between the user and developer (Alavi, 1984; Janson and Smith, 1985). Hardgrave et al. (1999) found that the relationship between prototyping and system success (as indicated by user satisfaction) was contingent on five environmental factors: project innovativeness, system impact, user participation, number of users, and developer experience with prototyping.

Drawing on the similarities between e-learning and prototyping environments, prototyping is recommended as the strategy for developing a successful e-learning system. Specifically, a prototype e-learning module should be developed first with the intent of understanding students' learning needs and attitudes toward e-learning, so that related issues and problems can be identified and addressed adequately before launching full-scale development of the remaining modules of an online course.
An E-Learning Success Model

The prototype e-learning module is critical in deciphering students' learning needs and how those needs can be addressed in an e-learning environment. However, further investigation is required to evaluate whether or not the e-learning initiative is a success. It is for this purpose that we create an e-learning success model that specifies what a successful e-learning initiative should be and how to design, develop, and deliver such an initiative. Our e-learning success model, shown in Figure 1, is adapted from DeLone and McLean's Information Systems success model (DeLone and McLean, 2003).
Figure 1. E-learning success model

The model follows a process approach for measuring and assessing success in positing that the overall success of e-learning rests on the attainment of success at each of the three stages of e-learning systems development: instructional design, course delivery, and learning outcome analysis. Success of the instructional design stage is evaluated along three success dimensions: system quality, information quality, and service quality. Success of the course delivery stage is evaluated along two success dimensions: use and user satisfaction. Finally, success of the learning outcome analysis stage is evaluated along the net benefits dimension. The arrows shown in Figure 1 capture the interdependences among the three stages of development, as well as within the success dimensions of the course delivery stage. Essentially, success of the instructional design is vital to the success of course delivery, which, in turn, affects the success of learning outcomes. The success of
learning outcomes, on the other hand, has an impact on the success of subsequent course delivery, as indicated by the double arrow linking the course delivery and learning outcome stages. Moreover, a course is successfully delivered if users continue to use the e-learning systems because they are satisfied with them. Interdependences between the two success factors within the course delivery stage are depicted by the double arrow connecting use with user satisfaction. Our e-learning success model also includes metrics for each of the six success dimensions relevant to a typical e-learning environment. The system quality dimension measures the desirable characteristics of the course management system, such as Blackboard or WebCT. A quality system should be easy to use, user-friendly, stable, secure, fast, and responsive. The information quality dimension evaluates the course content on aspects of organization, presentation, length, clarity, usefulness, and currency. The service quality
dimension evaluates the essence of student-instructor interactions on attributes of promptness, responsiveness, fairness, competency, and availability. The use dimension measures the extent to which the course elements are used, including PowerPoint slides, audio clips, video clips, lecture scripts, discussion boards, case studies, illustrations, tutorials, and assignments. The user satisfaction dimension gauges students' perceptions of their e-learning experiences, including satisfaction, enjoyment, success, and approval. The net benefits dimension captures both positive (learning enhancement, empowerment, time savings, and academic success) and negative (lack of face-to-face contact, social isolation, quality concerns, and dependency on technology) aspects of e-learning. The metrics provide a scale against which each of the six success dimensions within the three stages of e-learning systems development can be measured by means of an instrument such as a survey. The assessment of the six success dimensions reveals deficiencies of the current e-learning systems and provides impetus for enhancement. It is this process of continuous assessment and improvement of the six success dimensions that leads to the attainment of excellence in e-learning.
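To make the measurement step concrete, the sketch below (ours, not the chapter's) shows one way survey ratings could be rolled up into the six dimension scores and screened for deficiencies. The item names, the item-to-dimension mapping, and the deficiency threshold are illustrative assumptions, not the instrument actually used in the studies.

```python
# Illustrative sketch: rolling 5-point survey items up into the six
# success dimension ratings. Item IDs, the mapping, and the threshold
# are hypothetical assumptions for illustration only.
from statistics import mean

DIMENSION_ITEMS = {
    "system quality":      ["sys_ease", "sys_stability", "sys_speed"],
    "information quality": ["info_organization", "info_clarity", "info_currency"],
    "service quality":     ["svc_promptness", "svc_fairness", "svc_availability"],
    "use":                 ["use_slides", "use_audio", "use_discussion"],
    "user satisfaction":   ["sat_overall", "sat_enjoyment"],
    "net benefits":        ["ben_time_savings", "ben_learning", "ben_control"],
}

def dimension_ratings(responses):
    """Average each dimension's items over all respondents.

    responses: list of dicts mapping item ID -> rating on a 5-point scale.
    Returns a dict mapping dimension name -> mean rating.
    """
    return {
        dim: mean(resp[item] for resp in responses for item in items)
        for dim, items in DIMENSION_ITEMS.items()
    }

def deficiencies(ratings, threshold=3.0):
    """Flag dimensions rated below the (assumed) acceptability threshold."""
    return [dim for dim, score in ratings.items() if score < threshold]
```

In the process approach, each flagged dimension would then feed the diagnosing phase of the next improvement cycle.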
Action Research Methodology

Our process approach to e-learning success is tested through practical implementation using an action research methodology. Action research was introduced by Kurt Lewin in the 1940s to study social psychology and social change at the University of Michigan's Research Center for Group Dynamics (Lewin, 1947). Lewin's work established the reputation of action research as a "science of practice" that is best suited for studying complex social systems by introducing changes into practice and observing the effects of these changes (Argyris et al., 1985). The fundamental contention of action research is that complex social systems cannot be reduced for meaningful study. As a result, the goal of action research is to understand the complex process rather than to prescribe a universal law (Baskerville, 1999). The complex nature of learning is apparent, as pointed out by Meyer (2002):

The problem with most research studies on learning is the difficulty of isolating factors so that their impact (if any) can be identified and understood, separate from the action of other factors in the environment. Unfortunately for researchers, learning is both complex and occurs in very rich environments. It is doubly difficult to unravel influences from the individual's personality, values, brain, background (family, school, friends, work), and of course, the educational environment (classroom, teacher, pedagogical choices, tools). (p. 24)

As a research methodology, action research differs from case studies in that an action researcher is an active participant who experiments with interventions that improve real-world practices. A researcher in a case study, on the other hand, is a passive observer of how practitioners solve real-world problems. Action research also differs from applied research in that its goal is not only to report the success of a solution to a problem, as in applied research, but also to evaluate the solution process so as to further the theoretical underpinning of practice (Chiasson et al., 2008; Avison et al., 1999; Robinson, 1993). The scientific rigor of action research lies in its ability to advance actionable explanations of real-world phenomena, thereby achieving both formative validity (the theory-building process is valid) and summative validity (the theory is able to withstand empirical testing) (Lee and Hubona, 2009). Consequently, we adopt an action research methodology to (1) understand barriers to successful e-learning, (2) improve e-learning practices, and (3) validate the new process approach to e-learning success.

Our action research methodology involves five iterative phases: diagnosing, action planning, action taking, evaluating, and learning (Susman and Evered, 1978).
Figure 2. The five phases of action research
The diagnosing phase identifies barriers to successful e-learning so that e-learning interventions to overcome these barriers can be developed in the action-planning phase. The action-taking phase then carries out the e-learning interventions. The evaluating phase examines the resulting impacts of the actions taken on e-learning success. The learning phase assimilates the lessons learned and experience gained towards a better understanding of e-learning success. These five phases of action research are shown in Figure 2.
EMPIRICAL STUDIES

The validity and applicability of our process approach to measuring and assessing e-learning success have been demonstrated in empirical studies involving cycles of action research (Holsapple and Lee-Post, 2006; Lee-Post, 2007; Lee-Post, 2009). In these studies, the first two cycles of
action research served to validate the importance of prototyping as a strategy for e-learning systems development. A prototype module on facility location was developed using animated PowerPoint lectures, audio clips, lecture scripts, and tutorials to present the course content. Students from a quantitative methods course then used this module in an e-learning system on Blackboard to learn the concepts of facility location decisions, key factors in location analysis, technical methods for location evaluation, and quantitative models for screening location alternatives. A course feedback survey was used at the end of both cycles to evaluate the success of the prototype module as well as students' attitudes towards e-learning. The remaining two cycles of action research were conducted to investigate the usefulness of the e-learning success model. The rest of the e-learning modules for the quantitative methods course were developed to allow us to offer the course entirely online. Students who were online-ready were admitted into the online section of the course. A course satisfaction survey was used at
the end of each cycle to measure all six success dimensions of the online course. Each success dimension was quantified as a numeric measure from the ratings obtained via the survey. A low rating for any success dimension signified a deficiency in that area, and efforts were then devoted to rectifying the deficiency. As cycles of action research were conducted to raise the ratings of the six success dimensions, the resulting lessons and experience converged towards a fuller understanding of the impediments to e-learning success and how these impediments could be addressed. Figure 3 shows an overview of the five phases within each of the four cycles of action research. A detailed description of each cycle is provided in the following sections.
First Cycle of Action Research

The first cycle of action research began upon the approval of a proposal to develop an online quantitative methods course in operations management for business undergraduates. In diagnosing the impediments to e-learning success at this early stage of the development process, the current understanding and practice of e-learning development were critically examined in light of Information Systems development theories. The theory-practice gap was found to be a lack of full understanding of students' learning needs and attitudes towards e-learning. As a result, prototyping was identified as the most appropriate strategy to ascertain students' learning needs and how e-learning could be used to meet those needs. In the action-planning phase, a prototype e-learning module on one topic of study was planned. By following the guiding principles laid out in Pennsylvania State University's report (IDE, 1998), an instructional design plan was created to ensure that the module's learning objectives, content, activities, outcome assessments, and instructional technologies were put together in such a way as to enhance learning. In addition,
a course feedback survey was to be designed to measure the success of the e-learning module.

In the action-taking phase, an e-learning module on facility location was developed. Students in the quantitative methods course learned about facility location asynchronously by accessing the e-learning module on Blackboard at any time within a 2-week period instead of meeting face-to-face in class. Content materials were presented using various media so that students could watch an animated PowerPoint show, listen to an audio clip, or read the lecture script at their own choosing. Learning activities included discussion questions intended for student-to-student interactions and case studies for real-world application of the materials learned. A course feedback survey was used to collect students' background information and their opinions towards e-learning upon the completion of the e-learning module. A copy of the feedback survey can be found in the Appendix.

Results of the survey were analyzed in the evaluation phase. Descriptive statistics computed from the quantitative responses indicated that our typical student had a fairly high GPA (3.7), a current B standing in class, a course load of four or more courses, and above-average digital literacy, but negative before and after opinions of e-learning (both the same 2.8 on a 5-point scale). Students preferred reading the lecture script and doing case studies over the discussion questions. Course materials were rated highest in terms of organization (3.7 on a 5-point scale). "Control when and where to learn" was the most valuable e-learning benefit (4.1 on a 5-point scale), followed by "Learn materials in less time" (3.3 on a 5-point scale). The qualitative responses indicated that students desired more control over content delivery (e.g., being able to download PowerPoint slides onto their own computers), more interaction (e.g., live chat), and more illustrative examples. In summary, the e-learning module was not well received, as evidenced by students' low rating (2.7 on a 5-point scale) on both the user satisfaction and net benefits dimensions.
Figure 3. Cycles of action research
Reflecting upon the experience of designing, developing, and delivering the e-learning module in the learning phase, the following merits of e-learning were supported: (1) autonomy – able to control when and where to learn; (2) efficiency – able to learn materials in less time; (3) flexibility – able to tailor learning to suit an individual learner's learning style; and (4) accessibility – able to extend learning beyond the traditional classroom setting. However, we were also faced with the challenge of a seemingly unsuccessful e-learning module. In particular, students were apprehensive that e-learning was not able to meet many of their learning needs, including "address questions and concerns", "voice opinion and viewpoints", and "stimulate interest in the subject", to name a few.
cycle. Students’ after opinion toward e-learning was 15% more favorable than their before opinion. With the exception of the use dimension, the information quality, user satisfaction, and net benefits dimensions all experienced statistically significant improvement over the first cycle. It was found that students’ attitudes towards e-learning could be positively influenced by informing them of the benefits of e-learning and giving them opportunities to experience those benefits first hand. Such positive attitude towards e-learning played an important role in defining the appropriate set of learning needs that e-learning could impact most. As such, a positive attitude towards e-learning should be viewed as an exogenous factor to the e-learning success model.
Third Cycle of Action Research
Subsequent cycles of action research began with designing the remaining topics of study of the quantitative methods course for online delivery. The e-learning prototype served as a blueprint for the online course design and development. To continue the success assessment of the online course, a course satisfaction survey was designed by expanding the course feedback survey to include questions that measured the remaining two success dimensions in the system design stage: system quality and service quality. To augment the satisfaction survey’s validity, an independently designed course evaluation survey administered by the institution’s Distance Learning Technology Center was also used to measure success of the online course along the six success dimensions. In the diagnosing phase of the third cycle, the realization that e-learning was not for everyone and the lack of opportunity to influence students’ e-learning attitude before they registered for the online course led us to conclude that the failure to discern a student’s e-learning attitude and needs would be an impediment to e-learning. Consequently, a screening process was planned in the action-planning phase to accept students into
the online course only if they were online-ready. A survey was designed to evaluate a student's online readiness on four dimensions: academic readiness, technical readiness, lifestyle readiness, and e-learning readiness. Students were considered online-ready if they had received at least a B in all of the course prerequisites and responded near the high end of the 5-point scale on questions related to technical, lifestyle, and e-learning readiness. In addition, a course expectation survey was designed to understand students' learning needs and experience. In the action-taking phase, students were asked to fill out four surveys at different times: (1) the online readiness survey before being allowed to enroll in the online course; (2) the course expectation survey at the beginning of the semester; (3) the course evaluation survey toward the end of the semester; and (4) the course satisfaction survey upon completion of the online course. Please refer to the Appendix for a copy of each of these four surveys. A database was created containing the following data about the students: (1) demographics; (2) online readiness; (3) learning needs and experiences; (4) opinions towards the online course's design, delivery, and impacts; and (5) learning outcomes.

In the evaluation phase, results from both the course satisfaction and course evaluation surveys were analyzed. To evaluate the success of the online course, an overall rating for each success dimension was computed by averaging all respondents' ratings on the survey items related to that dimension. Given that none of the six success dimensions had an average rating below 3 on a 4-point or 5-point scale, the online course was considered well received. In the learning phase, the favorable reception of the online course lent support to the use of the screening process to circumvent students' e-learning attitude concerns. In particular, students' online readiness was a prerequisite for e-learning success.
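The screening rule described above lends itself to a simple decision procedure. The sketch below encodes it as stated (at least a B in every prerequisite, near-top responses on the three readiness scales); treating "near the high end" as a mean of 4.0 or above is our assumption, and the example data are hypothetical.

```python
# Illustrative sketch of the online-readiness screen described above.
# Grades are letter grades for the course prerequisites; readiness
# scores are mean 5-point ratings per readiness dimension. The 4.0
# cutoff for "near the high end" is an assumption, not from the chapter.
PASSING_GRADES = {"A", "B"}   # at least a B in every prerequisite
READINESS_CUTOFF = 4.0        # assumed threshold on the 5-point scale

def is_online_ready(prereq_grades, readiness_scores):
    """Return True if a student passes the online-readiness screen.

    prereq_grades: dict of prerequisite course -> letter grade.
    readiness_scores: dict with mean ratings for the technical,
        lifestyle, and e-learning readiness scales.
    """
    academically_ready = all(g in PASSING_GRADES for g in prereq_grades.values())
    attitudinally_ready = all(
        readiness_scores[dim] >= READINESS_CUTOFF
        for dim in ("technical", "lifestyle", "e-learning")
    )
    return academically_ready and attitudinally_ready

# Example (hypothetical student): admitted to the online section.
print(is_online_ready(
    {"CS101": "A", "ACC202": "B", "ECO201": "B", "STA291": "A", "MA113": "B"},
    {"technical": 4.4, "lifestyle": 4.1, "e-learning": 4.6},
))
```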
Fourth Cycle of Action Research

The fourth cycle of action research began with the goal of raising the average ratings for all six success dimensions, in particular the use and net benefits dimensions, as they had received the lowest average ratings. In the diagnosing phase, a close examination of the survey items corresponding to the use construct revealed that discussion questions were rated as ineffective (2.89 on a 4-point scale) in contributing to students' understanding of the course content. This was confirmed by the fact that "Voice my opinion and concerns" was the lowest rated item corresponding to the net benefits construct. The current instructional design plan was then evaluated against the seven principles of effective teaching (Graham et al., 2001) to uncover the underlying reasons for the ineffective practices. The barrier to successful e-learning was thus identified as the unsatisfied need for interactions in learning. In the action-planning phase, learning interventions were designed to promote interactions in learning on three fronts: (1) student-to-content interactions, such as interactive spreadsheets and interactive problem-solving programs; (2) student-to-instructor interactions, such as virtual office hours and a 48-hour maximum email response time; and (3) student-to-student interactions, such as study groups and threaded discussions. The learning interaction enhancements were implemented in the action-taking phase. Results from both the course satisfaction and course evaluation surveys were analyzed in the evaluation phase. The average ratings for all six success dimensions were raised. It was learned that e-learning success is a process of continuous pursuit of excellence along the dimensions of system quality, information quality, service quality, use, user satisfaction, and net benefits.
Findings

The first two cycles of action research confirmed the utility of a prototyping strategy for e-learning systems development. Not only did the prototype e-learning module help identify "control where and when to learn" and "learn materials in less time" as the merits of e-learning from the students' perspective, it also served as a blueprint for the development of the rest of the course topics to ensure learning flexibility, efficiency, and autonomy. As such, the course materials for each topic were designed to be delivered in multiple media formats (animation, audio clips, lecture scripts, PowerPoint slides) to suit different students' learning styles: visual, aural, read/write, and kinesthetic (Fleming and Mills, 1992).

The subsequent cycles of action research, on the other hand, confirmed the utility of our e-learning success model. The model provides solid specifications of what a successful e-learning initiative should be and gives guidance on how to achieve success in e-learning undertakings. Essentially, a successful e-learning initiative is one rated highly by its users on six dimensions: system quality, information quality, service quality, use, user satisfaction, and net benefits. To ensure success in e-learning undertakings, a process approach should be followed to continuously improve the ratings of all six success dimensions. In other words, attempts should be made to raise the system, information, and service quality ratings in the design stage first, before proceeding to increase the use and user satisfaction ratings in the system delivery stage. The effectiveness of the design and delivery improvements is then monitored in the outcome analysis stage by following through with measures to raise the net benefits rating.

Our empirical studies provide examples of effective enhancements made in each of the three stages of e-learning systems development. Instructional design improvements can be as simple as correcting errors in the course materials promptly,
or adhering to a 48-hour maximum email response window. They can also involve a more extensive overhaul, such as incorporating interactive learning content and activities, or conducting virtual office hours via videoconferencing. The specific instructional design improvements are then followed through in the course delivery stage. For example, one way to raise the use and user satisfaction ratings through more responsive email is to include constructive comments and timely feedback in email responses to students' questions about the course content, directing them to revisit a particularly helpful course element. After specific instructional design and course delivery improvements are implemented, the associated net benefits can be monitored in the learning outcome analysis stage; for instance, improved class performance can affirm the net benefits of higher-quality instructor-student interactions.

Our research reveals that a major barrier to e-learning success is students' lack of an enthusiastic reception for e-learning. Indeed, as noted by Jones and Rainie (2002):

One important unresolved question is how much today's students will rely on online tools to advance their skills and polish their academic credentials. … Their current behaviors show them using the Internet as an educational tool supplementing traditional classroom education, and it may be difficult to convince them to abandon the traditional setting after they have the kinds of attention afforded them in the college classroom. (p. 19)

Students' indifferent attitudes toward e-learning led us to recognize that e-learning is not for everybody. As a result, to ensure student success in e-learning, we use an online readiness survey to identify those who are online-ready before we accept them into the online course. The purpose of the survey is to evaluate whether or not a student is online-ready based on four dimensions: academic preparedness, technical competence,
lifestyle aptitude, and e-learning preference. A student is online-ready if he or she scores highly on all four readiness dimensions.

Another impediment to successful e-learning was found to be the failure to satisfy students' learning needs, in particular their need for meaningful interactions in learning. The virtual nature of e-learning presents a real challenge in meeting such needs, especially ones that require a human touch and/or physical experience. Currently, we adopt a blended approach to address such needs. With the advance of Web 2.0 technologies, notably Web conferencing, wikis, virtual reality, and social networking, the need for physical presence to interact may be diminishing. More importantly, not only can these educational technologies be used to support student-to-content, student-to-instructor, and student-to-student interactions, but they can also be used to create communities of learners and collaborators, provide instant access to people and resources, and simulate real-world problem solving. Therefore, it is more important than ever for us as educators to be cognizant of developments in e-learning research.
FUTURE DIRECTIONS

Our approach to measuring and assessing e-learning success from an Information Systems perspective helps researchers and practitioners better understand what constitutes e-learning success. It also provides a holistic and relatively comprehensive model of how success in e-learning can be defined, evaluated, and promoted. However, there is still a need to broaden both the depth and breadth of e-learning research. In addition, inter-disciplinary research encompassing education and Information Systems needs to be forged to move e-learning research forward so that the educational promises of the Internet can be more fully realized.
The depth of e-learning research can be extended to investigate the longer-term impacts of e-learning beyond its immediate benefits of learning flexibility and efficiency. The role of technology in other aspects of learning improvement should be explored. As such, studies examining the cognitive and pedagogical implications of Internet technologies are greatly needed. Specifically, how to realize the full potential of the Internet to promote higher-order thinking and learning, elevate intellectual capacity and intelligence level, and support constructivist learning goals should be the next step in the e-learning research agenda.

The breadth of e-learning research can be extended to consider the perspectives of other stakeholders in e-learning. Our current student-centered focus helps steer research attention away from an emphasis on technology in understanding e-learning. Recognition should also be given to the roles that instructors and the institution play in making e-learning a success. Specifically, further studies are needed to help institutions make wise e-learning decisions regarding technology infrastructure investment, technology training and support, and e-learning developmental and pedagogical aids, to name a few. Studies are also needed on the extent to which it is possible to convert instructors who are e-learning skeptics into e-learning adopters, and how to do so.

Our current work exemplifies the mutual benefits of inter-disciplinary research. On the education research front, a deeper understanding of e-learning is fostered. On the Information Systems research front, a more comprehensive view of a successful deployment of information technology is garnered. Further collaboration and sharing of what is known from each research base are needed before progress can be made in broadening the depth and breadth of e-learning research.
REFERENCES

Alavi, M. (1984). An assessment of the prototyping approach to Information Systems development. Communications of the ACM, 27(6), 556–563. doi:10.1145/358080.358095

Arbaugh, J. B., & Benbunan-Fich, R. (2006). An investigation of epistemological and social dimensions of teaching in online learning environments. Academy of Management Learning & Education, 5, 435–447.

Arbaugh, J. B., Godfrey, M. R., Johnson, M., Pollack, B. L., Niendorf, B., & Wresch, W. (2009). Research in online and blended learning in the business disciplines: Key findings and possible future directions. The Internet and Higher Education, 12, 71–87. doi:10.1016/j.iheduc.2009.06.006

Arbaugh, J. B., & Rau, B. L. (2007). A study of disciplinary, structural, and behavioral effects on course outcomes in online MBA courses. Decision Sciences Journal of Innovative Education, 5(1), 65–94. doi:10.1111/j.1540-4609.2007.00128.x

Argyris, C., Putnam, R., & Smith, D. (1985). Action science: Concepts, methods and skills for research and intervention. San Francisco, CA: Jossey-Bass.

Avison, D., Lau, F., Myers, M., & Nielsen, P. A. (1999). Action research. Communications of the ACM, 42(1), 94–97. doi:10.1145/291469.291479

Baskerville, R. L. (1999). Investigating Information Systems with action research. Communications of the Association for Information Systems, 2(19).

Benbunan-Fich, R. (2002). Improving education and training with IT. Communications of the ACM, 45(6), 94–99. doi:10.1145/508448.508454

Byrne, R. (2002). Web-based learning versus traditional management development methods. Singapore Management Review, 24(2), 59–68.
Chiasson, M., Germonprez, M., & Mathiassen, L. (2008). Pluralist action research: A review of the Information Systems literature. Information Systems Journal, 19, 31–54. doi:10.1111/j.1365-2575.2008.00297.x

Davis, R., & Wong, D. (2007). Conceptualizing and measuring the optimal experience of the e-learning environment. Decision Sciences Journal of Innovative Education, 5(1), 97–126. doi:10.1111/j.1540-4609.2007.00129.x

DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of Information Systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30.

Eom, S. B., Ashill, N., & Wen, H. J. (2006). The determinants of students' perceived learning outcome and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–236. doi:10.1111/j.1540-4609.2006.00114.x

Fleming, N. D., & Mills, C. (1992). Not another inventory, rather a catalyst for reflection. To Improve the Academy, 11, 137–155.

Graham, C., Cagiltay, K., Lim, B. R., Craner, J., & Duffy, T. M. (2001). Seven principles of effective teaching: A practical lens for evaluating online courses. Technology Source, March/April.

Hall, D. J., Cegielski, C. G., & Wade, J. N. (2006). Theoretical value belief, cognitive ability, and personality as predictors of student performance in object-oriented programming environments. Decision Sciences Journal of Innovative Education, 4(2), 237–257. doi:10.1111/j.1540-4609.2006.00115.x

Hardaway, D., & Will, R. P. (1997). Digital multimedia offers key to educational reform. Communications of the ACM, 40(4), 90–96. doi:10.1145/248448.248463
Hardgrave, B. C., Wilson, R. L., & Eastman, K. (1999). Toward a contingency model for selecting an Information System prototyping strategy. Journal of Management Information Systems, 16(2), 113–136.

Holsapple, C. W., & Lee-Post, A. (2006). Defining, assessing, and promoting e-learning success: An Information Systems perspective. Decision Sciences Journal of Innovative Education, 4(1), 67–85. doi:10.1111/j.1540-4609.2006.00102.x

IDE. (1998). An emerging set of guiding principles and practices for the design and development of distance education. Innovations in Distance Education, Penn State University.

Janson, M. A., & Smith, L. D. (1985). Prototyping for systems development: A critical appraisal. Management Information Systems Quarterly, (December), 305–316. doi:10.2307/249231

Johnson, R. D., Hornik, S., & Salas, E. (2008). An empirical examination of factors contributing to the creation of successful e-learning environments. International Journal of Human-Computer Studies, 66(5), 359–369. doi:10.1016/j.ijhcs.2007.11.003

Jones, S., & Rainie, L. (2002). The Internet goes to college. Washington, D.C.: Pew Internet and American Life Project.

Lee, A. S., & Hubona, G. S. (2009). A scientific basis for rigor in Information Systems research. Management Information Systems Quarterly, 33(2), 237–262.

Lee-Post, A. (2007). Success factors in developing and delivering online courses in operations management. International Journal of Information and Operations Management Education, 2(2), 131–139. doi:10.1504/IJIOME.2007.015279

Lee-Post, A. (2009). E-learning success model: An Information Systems perspective. Electronic Journal of eLearning, 7(1), 61–70.
Lewin, K. (1947). Frontiers in group dynamics II. Human Relations, 1, 143–153. doi:10.1177/001872674700100201

Meyer, K. A. (2002). Quality in distance education: Focus on online learning. Hoboken, NJ: Wiley Periodicals, Inc.

Naumann, J. D., & Jenkins, A. M. (1982). Prototyping: The new paradigm for systems development. Management Information Systems Quarterly, (September), 29–44. doi:10.2307/248654

Nemanich, L., Banks, M., & Vera, D. (2009). Enhancing knowledge transfer in classroom versus online settings: The interplay among instructor, student, content, and context. Decision Sciences Journal of Innovative Education, 7(1), 123–148. doi:10.1111/j.1540-4609.2008.00208.x

Nunes, M. B., & McPherson, M. (2003). Constructivism vs objectivism: Where is the difference for designers of e-learning environments? Proceedings of the 3rd IEEE International Conference on Advanced Learning Technologies.

Parsad, B., & Lewis, L. (2008). Distance education at degree-granting postsecondary institutions: 2006–07 (NCES 2009–044). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.

Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The interdependence of the factors influencing the perceived quality of the online learning experience: A causal model. Journal of Marketing Education, 29(2), 140–153. doi:10.1177/0273475307302016

Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: A research framework and a preliminary assessment of effectiveness in basic IT skills training. Management Information Systems Quarterly, 25(4), 401–426. doi:10.2307/3250989
Pittinsky, M., & Chase, B. (2000). Quality on the line: Benchmarks for success in Internet-based distance education. Washington, D.C.: The Institute for Higher Education Policy, National Education Association.

Riley, R. W., Fritschler, A. L., & McLaughlin, M. A. (2002). Report to Congress on the distance education demonstration programs. Washington, D.C.: U.S. Department of Education, Office of Postsecondary Education, Policy, Planning, and Innovation.

Robinson, V. M. J. (1993). Current controversies in action research. Public Administration Quarterly, 17(3), 263–290.

Saadé, R. G. (2007). Dimensions of perceived usefulness: Towards enhanced assessment. Decision Sciences Journal of Innovative Education, 5(2), 289–310. doi:10.1111/j.1540-4609.2007.00142.x

Saadé, R. G., & Bahli, B. (2005). The impact of cognitive absorption on perceived usefulness and perceived ease of use in online learning: An extension of the technology acceptance model. Information & Management, 42, 317–327. doi:10.1016/j.im.2003.12.013

Santhanam, R., Sasidharan, S., & Webster, J. (2008). Using self-regulatory learning to enhance e-learning-based Information Technology training. Information Systems Research, 19(1), 26–47. doi:10.1287/isre.1070.0141

Schank, R. C. (2001). Revolutionizing the traditional classroom course. Communications of the ACM, 44(12), 21–24. doi:10.1145/501317.501330

Simmering, M. J., Posey, C., & Piccoli, G. (2009). Computer self-efficacy and motivation to learn in a self-directed online course. Decision Sciences Journal of Innovative Education, 7(1), 99–121. doi:10.1111/j.1540-4609.2008.00207.x

Smith, L. J. (2001). Content and delivery: A comparison and contrast of electronic and traditional MBA marketing planning courses. Journal of Marketing Education, 23(1), 35–44. doi:10.1177/0273475301231005

Sun, P. C., Cheng, H. K., & Finger, G. (2009). Critical functionalities of a successful e-learning system: An analysis from instructors' cognitive structure toward system usage. Decision Support Systems, 48, 293–302. doi:10.1016/j.dss.2009.08.007

Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical success factors influencing learner satisfaction. Computers & Education, 50, 1183–1202. doi:10.1016/j.compedu.2006.11.007

Teh, G. P. L. (1999). Assessing student perceptions of Internet-based online learning environment. International Journal of Instructional Media, 26(4), 397–402.

van der Rhee, B., Verma, R., Plaschka, G. R., & Kickul, J. R. (2007). Technology readiness, learning goals, and e-learning: Searching for synergy. Decision Sciences Journal of Innovative Education, 5(1), 127–149. doi:10.1111/j.1540-4609.2007.00130.x

Wan, Z., Wang, Y., & Haggerty, N. (2008). Why people benefit from e-learning differently: The effects of psychological processes on e-learning outcomes. Information & Management, 45(8), 513–521. doi:10.1016/j.im.2008.08.003

Web-based Education Commission. (2000). The power of the Internet for learning: Moving from promise to practice. Washington, D.C. Retrieved from http://www.hpcnet.org/webcommission
Williams, E. A., Duray, R., & Reddy, V. (2006). Teamwork orientation, group cohesiveness, and student learning: A study of the use of teams in online distance education. Journal of Management Education, 30, 592–616. doi:10.1177/1052562905276740
KEY TERMS AND DEFINITIONS

Action Research: An iterative process of diagnosing, action planning, action taking, evaluating, and learning used to study complex social systems.
Constructivist Learning: A model of learning in which knowledge emerges through active learning.
E-Learning: A formal education process in which the student and instructor interact via Internet-based technologies at different locations.
Hybrid/Blended Online Courses: Courses in which learning is extended beyond in-class instruction with the mediation of learning technologies.
Information Quality: The desirable characteristics of the course content.
Objectivist Learning: A model of learning in which knowledge is transferred from teachers to students.
Prototyping: An approach to developing Information Systems whereby a user is provided with hands-on use of a mock-up system for experimentation before a full-scale system is developed.
Service Quality: The desirable characteristics of student-instructor interactions.
System Quality: The desirable characteristics of a course management system such as Blackboard or WebCT.
APPENDIX A. THE COURSE FEEDBACK SURVEY

PART I. Student Information

1. What is your current GPA? 4
2. What is your current grade in this course? A B
3. How many courses are you taking this semester? 1 2 3
4. Have you participated in an Internet-based course before? [ ] Yes [ ] No
5. Have you used Blackboard before? [ ] Yes [ ] No
6. How often do you use the Internet in your coursework? Never Seldom Sometimes Usually Always
7. How do you rate your knowledge of using the computer? Poor-1 2 3 4 Excellent-5
8. How do you rate your knowledge of using the Internet? Poor-1 2 3 4 Excellent-5
9. What is your opinion towards distance-learning before taking this course? Negative-1 2 3 4 Positive-5
10. What is your opinion towards distance-learning after taking this course? Negative-1 2 3 4 Positive-5

PART II. Learning Evaluation

1. The following have been valuable to you in your learning experience (1 = Strongly Disagree, 5 = Strongly Agree):

PowerPoint slides: Not Used 1 2 3 4 5
Audio to accompany the slides: Not Used 1 2 3 4 5
Script to accompany the slides: Not Used 1 2 3 4 5
Discussion board questions: Not Used 1 2 3 4 5
Case studies: Not Used 1 2 3 4 5

2. Please evaluate the course material section of the course (1 = Strongly Disagree, 5 = Strongly Agree):

Materials are well organized: 1 2 3 4 5
Materials are effectively presented: 1 2 3 4 5
Materials are the right length: 1 2 3 4 5
Materials are clearly written: 1 2 3 4 5

3. Compared to the traditional classroom format, the web-based delivery of the course better enables you to (1 = Strongly Disagree, 5 = Strongly Agree):

Be actively involved in the learning process: 1 2 3 4 5
Address my questions and concerns: 1 2 3 4 5
Voice my opinion and viewpoints: 1 2 3 4 5
Understand the course materials: 1 2 3 4 5
Stimulate my interest in the subject: 1 2 3 4 5
Relate the subject matter to other areas: 1 2 3 4 5
Put effort into non-assessed work: 1 2 3 4 5
Control when and where to learn: 1 2 3 4 5
Learn the materials in less time: 1 2 3 4 5
Complete the assignments in less time: 1 2 3 4 5
Use written communication in learning: 1 2 3 4 5

4. How could the web-based delivery of the course be improved?
5. What do you like the most about the web-based format of the course?
6. What do you like the least about the web-based format of the course?
7. What elements of the subject have you found most difficult to master on the web?
8. How could the instructor make these subjects more easily understandable on the web?
9. Do you have any other comments, questions, or feedback?
APPENDIX B. THE ONLINE READINESS SURVEY

PART I. Student Information

1. What is your current GPA? 4
3. Are you an upper division business major? [ ] Yes [ ] No -- If No, did you file a request form in BE445? [ ] Yes [ ] No
4. What grade did you earn for the following prerequisites? [ ] CS101 or Microsoft Certification [ ] ACC202 [ ] ECO201 [ ] STA291 [ ] MA113 or MA123, 162
5. Have you participated in an Internet-based course before? [ ] Yes [ ] No
6. Your primary reason for taking the course online is ________________________________

PART II. Technical Readiness

These questions are designed to help you assess your readiness for participating in online courses, based on your assessment of your computer setup and technical literacy. On a scale of 1 to 5, rate yourself according to each of the following statements, 5 indicating the greatest agreement and 1 indicating the least agreement. Be as forthright as possible with your responses.

1. I routinely use Microsoft Office tools on a computer. 1 2 3 4 5
2. I know how to access the technical support desk. 1 2 3 4 5
3. My computer setup is sufficient for online learning. 1 2 3 4 5
4. I have the following pieces of software downloaded on my computer: Microsoft Office tools such as Word, Excel, and PowerPoint; Real Player Basic; Adobe Acrobat Reader; Macromedia Shockwave Player; Firefox or Internet Explorer. 1 2 3 4 5
5. I have access to a printer. 1 2 3 4 5
6. I have at least a 28.8 speed modem connection. 1 2 3 4 5
7. I have access to a dedicated network connection or to a telephone line that can be given over to Internet use for substantial periods of time, perhaps 45 minutes or so, at least 3 times a week. 1 2 3 4 5
8. I have access to a dedicated network connection or have a local or national Internet Service Provider/ISP (a company to whom you pay a monthly fee to obtain a connection through a phone line and modem to the Internet). 1 2 3 4 5

PART III. Lifestyle Readiness

These questions are designed to help you assess your readiness for participating in online courses, based on your assessment of your lifestyle readiness. On a scale of 1 to 5, rate yourself according to each of the following statements, 5 indicating the greatest agreement and 1 indicating the least agreement. Be as forthright as possible with your responses.

1. I have a private place in my home or at work near my computer, that is "mine," and that I can use for extended periods. 1 2 3 4 5
2. I can put together blocks of time that will be uninterrupted in which I can work on my online courses (a block of time might be something like 90 minutes up to two hours, several times a week, depending upon the number of course credits you are taking online). 1 2 3 4 5
3. I routinely communicate with persons by using electronic technologies such as e-mail and voice mail. 1 2 3 4 5
4. I have persons and/or resources nearby who will assist me with any technical problems I might have with my software applications as well as my computer hardware. 1 2 3 4 5
5. I value and/or need flexibility. For example, it is not convenient for me to come to campus two to three times a week to attend a place and time-based traditional class. 1 2 3 4 5

PART IV. Learning Preferences

These questions are designed to help you assess your readiness for participating in online courses, based on your assessment of how you learn best. On a scale of 1 to 5, rate yourself according to each of the six following statements, 5 indicating the greatest agreement and 1 indicating the least agreement. Be as forthright as possible with your responses.

1. When I am asked to use technologies that are new to me, such as a fax machine, voice mail, or a new piece of software, I am eager to try them. 1 2 3 4 5
2. I am a self-motivated, independent learner. 1 2 3 4 5
3. It is not necessary that I be in a traditional classroom environment in order to learn. 1 2 3 4 5
4. I am comfortable waiting for written feedback from the instructor regarding my performance, rather than receiving immediate verbal feedback. 1 2 3 4 5
5. I am proactive with tasks, tending to complete them well in advance of deadlines. 1 2 3 4 5
6. I communicate effectively and comfortably in writing. 1 2 3 4 5

APPENDIX C. THE COURSE EXPECTATION SURVEY

PART I. Student Information

1. What is your current GPA?